2303.13998
Traveling salesman problem with time slots: Asymptotic analysis and resolution algorithm
We develop an asymptotic approximation and bounds for the traveling salesman problem with time slots, i.e. when the time windows of the points to visit form a partition of a given time horizon. Although this problem is relevant in several delivery applications, operations research has paid little attention to it, in contrast to the extensively studied general formulation with time windows. Exploiting the specific structure of this problem allows us to solve efficiently instances that may be hard for algorithms designed for the traveling salesman problem with time windows. The asymptotic analysis of the traveling salesman problem with time slots is a step toward developing new approximations for the general problem with time windows; we discuss this case. We also provide a formulation of the asymptotic approximation under a worst-case demand distribution of the points to visit, based on the principle of maximum entropy. Computational results are given for benchmarks from the literature in which the time slots are randomly generated.
Omar Rifki, Thierry Garaix
2023-03-24T13:52:29Z
http://arxiv.org/abs/2303.13998v1
# Traveling salesman problem with time slots: Asymptotic analysis and resolution algorithm

###### Abstract

We develop an asymptotic approximation and bounds for the traveling salesman problem with time slots, _i.e._ when the time windows of the points to visit form a partition of a given time horizon. Although this problem is relevant in several delivery applications, operations research has paid little attention to it, in contrast to the extensively studied general formulation with time windows. Exploiting the specific structure of this problem allows us to solve efficiently instances that may be hard for algorithms designed for the traveling salesman problem with time windows. The asymptotic analysis of the traveling salesman problem with time slots is a step toward developing new approximations for the general problem with time windows; we discuss this case. We also provide a formulation of the asymptotic approximation under a worst-case demand distribution of the points to visit, based on the principle of maximum entropy. Computational results are given for benchmarks from the literature in which the time slots are randomly generated.

_Keywords:_ Asymptotic analysis, Traveling salesman problem, Time windows, Solving algorithm

## 1 Introduction

One of the main issues in transport logistics is computing optimal Hamiltonian cycles over a set of points, an NP-hard problem famously known as the Traveling Salesman Problem (TSP). When the number of points to visit increases, solving the problem in practice becomes harder, even with heuristic approaches. On the other hand, in a large number of situations, especially those involving strategic and tactical decisions about the design of routing problems, only the optimal tour length is needed, not the tour itself. Having a continuous approximation of the optimal tour length, that is, an easily computed closed-form formula, can be highly beneficial in those cases. Strategic and tactical decisions can then be made quickly without running any optimization. For instance, continuous approximations are used by postal services for districting and sizing territories, as in the case of the United States Postal Service, see [1, 2, 3]. The sizing and composition of the fleet is another tactical decision that can rely on tour length approximations, _e.g._ [4, 5]. In the location routing problem, a difficult problem combining two NP-hard problems (a facility location and a vehicle routing problem), the routing part of the solution can be estimated by an asymptotic approximation, _e.g._ [6]. Continuous approximations can also be useful in situations where the locations of the points to visit are not known in advance, as the formulas rely on probabilistic assumptions.

Logistics and freight distribution often involve time-related constraints, which restrict the visit of each point to a time interval. These time window constraints can be imposed either by authorities, to limit the access of freight vehicles to city centers [7] or to freight loading zones for instance, or by customers for delivery, commercial and transportation operations, or by patients in the case of medical transport. Their application range is wide. In general, time windows significantly reduce the efficiency of routes and increase the distances traveled by vehicles [8]. We are interested in a specific format of time windows: non-overlapping time windows which form a partition of a planning horizon. We term them time slots.
This structure has become increasingly common in logistics planning due to the boom of e-commerce services and on-demand businesses, such as online grocery stores [9]. In order to increase customer satisfaction on the one hand, and the flexibility of delivery operations on the company side on the other, these companies pre-arrange wide time slots for customers to choose from, such that the service is guaranteed for each customer in the chosen slot. The union of the time slots covers a large interval of the day, which offers the customer the ability to choose the most adequate slot. Having the same temporal structure to construct day-to-day tours is also easier for the supplier, who is not obliged to handle the cumbersome task of managing customers' time windows that may differ in structure on a daily basis. The main goal of this paper is to extend the continuous asymptotic approximation of the TSP to the TSP with time slots (TSP-TS). With time slot constraints, the maximum tour length is obviously bounded by the span of the time windows. However, an approximation function makes it possible to capture how the tour length evolves with the number of customers and to assess the feasibility of the tour. In this context, the main difficulty is to simultaneously manage the geographical and the temporal distributions. The contributions of this paper are as follows:

* Proposal of an asymptotic approximation, feasibility conditions, and asymptotic bounds for the TSP-TS in the case of uniform temporal and spatial distributions;
* Extension of this approximation to the worst-case temporal and spatial demands of customers;
* Proposal of an exact solving approach for the TSP-TS;
* Generation of a benchmark dataset for the TSP-TS.

The remainder of the article is organized as follows. Section 2 presents a brief literature review on the topic of continuous approximations in routing problems. Section 3 states the preliminaries of the study, and Section 4 introduces the proposed asymptotic approximation and bounds accounting for the time windows. The worst-case demand in terms of space and time is treated in Section 5. The solving approach is given in Section 6, while Section 7 provides computational results. The paper is concluded thereafter.

## 2 Literature review

The approximation of routing problems is grounded on the famous theorem of Beardwood, Halton, and Hammersley (BHH), published in 1959 [10]. This result states that when the points to visit are randomly distributed on a compact area and their number goes to infinity, the optimal tour length, normalized by the square root of the number of points, approaches a constant value. Noting that the BHH formula underestimates tour lengths in elongated areas even for a larger number of points, Daganzo [11] proposed a strip strategy method, which computes optimized tour lengths in those types of areas. The two models of [10, 11] gave rise to several extensions accounting for variants of transportation problems and accommodating the shape of the area. Chien [12], through a regression model, accounted for a rectangular area in the TSP approximation by including the area of the smallest rectangle enclosing all points and the average distance to the depot. Kwon [13], via regression as well as neural networks, improved the TSP approximation by including the length-to-width ratio of the rectangle and a shape factor. The first approximations of the capacitated vehicle routing problem (CVRP) were proposed by Webb [14].
Eilon _et al._ proposed a similar formula to the BHH for the CVRP accounting for the distances between the depot and the customers and for the shape of the support area. An intuitive approximation was equally proposed by Daganzo [15] for the CVRP. BHH formula has also led to the development of solving heuristics such as 'Partition' [16]. For a review of the overall extensions deriving from the continuous asymptotic approximation of the TSP, see [17] and [18]. Continuous approximations of the routing with time-windows concern mainly the vehicle routing problem (VRP). Daganzo [19, 20] has developed a model wherein the day is divided into time periods and customers into rectangles. Using a cluster-first route-second method, he obtained an approximation for the total distance traveled by all vehicles under these considerations. Figliozzi [21] tested several VRP approximations, and proposed a probabilistic modeling of the approximation such that the number of routes for a given number of time windows is derived probabilistically. Nicola _et al._[22] proposed regression-based approximations for the TSP, the CVRP with time windows, the multi-region multi-depot pickup and delivery problem accounting for the time windows, distances, customer demands and capacities of the vehicles. Using similar assumptions to [19, 20], Carlson and Behzoodi [23] studied the worst-case time window distribution in terms of routing costs, and found that it corresponds to a concentrated demand on a single time period when the number of customers is low, or to a uniform distribution over the time for a large number of customers. Although VRP asymptotic approximations are intuitive and simple to use, they are mainly grounded on empirical evidence as opposed to the analytical derivation of the BHH theorem. There has been several recent applications, whether for districting [24], location problems [25], fleet sizing [5] or accounting for pickups and deliveries [26]. Our model is based for the time windows considerations on similar assumptions to [19, 20, 23] in the sense of taking non-overlapping intervals as time windows. However, we differ from the previous accounts by pursuing a theoretical derivation of the asymptotic approximations from the BHH formula. ## 3 Preliminaries In this section, we present the formulation of the routing problem, the assumptions to generate random instances, and some previously obtained asymptotic approximations, starting from the famous BHH formula. ### The TSP with Time Windows (TSP-TW) The set of points to visit is denoted \(\mathcal{P}\), and is a finite set \(|\mathcal{P}|<\infty\). The depot is denoted by the point \(0\), and \(\mathcal{P}_{all}\) denotes the set of all points, _i.e._\(\mathcal{P}_{all}=\mathcal{P}\cup\{0\}\). For each point \(i\in\mathcal{P}_{all}\), we denote \(b_{i}\), \(f_{i}\), and \(s_{i}\) the earliest starting time, the latest finishing time associated with \(i\), and the service time respectively. For each couple of points \(i,j\in\mathcal{P}_{all}\), \(d_{ij}\) and \(c_{ij}\) denote respectively the travel duration between \(i\) and \(j\), and the cost of traversing the arc \((i,j)\). The specifications of the costs \(c_{ij}\), and the times \(d_{ij}\) are discussed in the assumption section. The goal is to find an order of visit that minimizes the total tour duration of the vehicle starting from the depot. For a couple of points \(i,j\in\mathcal{P}_{all}\), let \(x_{ij}\) be a binary variable equal to one if and only if \(j\) is visited after \(i\). 
For all \(i\in\mathcal{P}_{all}\), let \(t_{i}\) be the start service time at \(i\). Note that the vehicle traveling from \(i\) to \(j\) can wait in case of early arrival at \(j\), _i.e._, \(t_{i}+s_{i}+d_{ij}<b_{j}\) is allowed. The parameter \(t_{0}\) represents the start time from the depot at the beginning of the tour. A formulation of the problem is as follows:

\[\min\ \sum_{i,j\in\mathcal{P}_{all}}c_{ij}x_{ij} \tag{1}\]
\[\sum_{j\in\mathcal{P}_{all}\setminus\{i\}}x_{ij}=1\qquad\forall i\in\mathcal{P}_{all} \tag{2}\]
\[\sum_{j\in\mathcal{P}_{all}\setminus\{i\}}x_{ji}=1\qquad\forall i\in\mathcal{P}_{all} \tag{3}\]
\[t_{j}\geq t_{i}+s_{i}+d_{ij}+M(x_{ij}-1)\qquad\forall i\in\mathcal{P}_{all},\ \forall j\in\mathcal{P} \tag{4}\]
\[f_{0}\geq t_{i}+s_{i}+d_{i0}\qquad\forall i\in\mathcal{P} \tag{5}\]
\[b_{i}\leq t_{i}\leq f_{i}-s_{i}\qquad\forall i\in\mathcal{P}_{all} \tag{6}\]
\[t_{i}\geq 0\qquad\forall i\in\mathcal{P}_{all} \tag{7}\]
\[x_{ij}\in\{0,1\}\qquad\forall i,j\in\mathcal{P}_{all} \tag{8}\]

The objective (1) minimizes the total cost of the tour. Constraints (2) and (3) are the flow conservation constraints, ensuring that each point is visited exactly once. Constraints (4) and (5) track the arrival times, where \(M\) is a very large number that can take the value \(f_{0}\). Constraints (6) ensure that the service times satisfy the time window constraints. Constraints (7) and (8) represent the bounding and binary restrictions on the decision variables.

### The TSP with Time Slots (TSP-TS)

The TSP with Time Slots is a special case of the TSP-TW. Time slots are defined as a partition of the time horizon \([0,h]\) of the problem. Introducing a horizon \(h\) in the above formulation comes down to setting \(f_{0}=h\). Time slots are non-overlapping time windows, with the characteristic that several points can be assigned to the same time slot. Thus, new parameters are needed for the description: the number of time slots \(m\), and the time slot lengths \((l_{k})_{1\leq k\leq m}\). We use the abbreviation TSP-\(m\)TS to refer to this problem. The TSP with Identical Time Slots (TSP-ITS) is a variant of the TSP-TS in which all time slots have the same length, _i.e._ \(l_{k}=h/m\). We use the abbreviation TSP-\(m\)ITS for this special case. The TSP-TS is a relevant model in several applications, such as home deliveries or technician visits, which are offered in morning, afternoon or evening slots. The study of the TSP-TS in this paper is also motivated by its asymptotic properties and its closeness to the TSP-TW.

### Assumptions

In this subsection, we discuss the assumptions on the characteristics of the input data of the addressed problem.

**Assumptions for the TSP-TW.** The problem input parameters used to randomly generate instances are: the size \(n\) of \(\mathcal{P}_{all}\), the time horizon \(h\), and the side \(a\) of the area where the points \(\mathcal{P}_{all}\) are located. Given these parameters, the assumptions to generate a TSP-TW instance are:

(i) The points \(\mathcal{P}_{all}\) are uniformly distributed on the _Euclidean_ plane \(\mathbb{R}^{2}\) within a square area \(\mathcal{R}\), _i.e._, \(\mathcal{R}=\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq a\ \text{and}\ 0\leq y\leq a\}\). The random variables associated with these \(n\) points, \(X_{0},\ldots,X_{n-1}\), are supposed to be independent and identically distributed (i.i.d.). \(X_{0}\) is the random variable corresponding to the depot.

(ii) The cost \(c_{ij}\) is taken equal to the travel duration \(d_{ij}\), and the stop durations are taken to be null, \(s_{i}=0\ (\forall i\in\mathcal{P}_{all})\).
(iii) The duration \(d_{ij}\) is the _Euclidean_ distance between \(i\) and \(j\) (_i.e._, we assume that the vehicles have a constant speed of one unit of space per unit of time)1.

Footnote 1: The given approximations are valid for any cost function, provided it is proportional to the _Euclidean_ distance metric.

(iv) For optimal TSP tours in the asymptotic domain, we consider the return arc to the depot to be no different from any other arc of the tour in terms of travel duration.

The last assumption is useful for constructing the asymptotic approximation using the BHH formula. No specific assumption is associated with the time windows \([b_{i},f_{i}]\), except for the depot: \(b_{0}=0\) and \(f_{0}=h\).

**Assumptions for the TSP-TS.** The assumptions of the TSP-\(m\)TS are the same as those of the TSP-TW, with the following additional assumptions:

(v) The time slots \(A_{k}=[B_{k},F_{k}]\), \(k\in\{1,\ldots,m\}\), are a partition of the time horizon \([0,h]\), with \(B_{1}=0\), \(F_{m}=h\) and \(F_{k}=B_{k+1}\) \(\forall\,1\leq k<m\). For ease of exposition of the results, we limit our study to lengths \(l_{k}=F_{k}-B_{k}\) that are integer divisors of \(n\); this choice later allows us to consider an integer number of nodes to visit during each time slot.

(vi) The points \(\mathcal{P}\) are assigned to the time slots \((A_{k})\) at random in a proportional and i.i.d. fashion. That is, the probability of assigning a point to the time slot \(A_{k}\) is \(p_{k}=|A_{k}|/h=l_{k}/h\). This assumption is equivalent to drawing a point uniformly at random in the time horizon \([0,h]\) and assigning it to the corresponding time slot.

The assumptions of the TSP-\(m\)ITS are the same as those of the TSP-\(m\)TS, with the following additional assumption:

(vii) Each time slot \(k\in\{1,\ldots,m\}\) begins at time \(B_{k}=(k-1)\times h/m\) and ends at time \(F_{k}=k\times h/m\).

The uniform distributions of hypotheses (i) and (vi) are relaxed in Section 5 by considering realistic worst-case distributions.

### Approximations in the literature

The main results obtained for the TSP, the VRP and the TSP-\(m\)ITS problems are discussed below.

**TSP asymptotic approximation.** The Beardwood-Halton-Hammersley (BHH) theorem characterizes the optimal tour length when the number of points to visit on a compact area goes to infinity [10]: the length, normalized by the square root of the number of points, approaches a constant value. The theorem below is stated for a planar region and any spatial probability distribution of the points \(\mathcal{P}_{all}\).

**Theorem 1** (BHH).: _For a set of \(n\) random variables \(\{X_{0},...,X_{n-1}\}\ (0<n<\infty)\) independently and identically distributed on a compact support \(\mathcal{R}\subset\mathbb{R}^{2}\), the length \(L_{n}^{sp}\) of a shortest Hamiltonian path linking the \(X_{i}\) satisfies_

\[\frac{L_{n}^{sp}}{\sqrt{n}}\xrightarrow[n\to\infty]{}\beta^{sp}\int_{\mathbb{R}^{2}}\sqrt{\overline{f}(x)}\;dx,\]

_where \(\overline{f}(\cdot)\) is the absolutely continuous part of the probability density function \(f(\cdot)\) of the variables \(X_{i}\), and \(\beta^{sp}\) is a constant._

Since the arc closing a Hamiltonian path into a tour is asymptotically negligible, the same limit holds for the length \(L_{n}^{tsp}\) of the optimal tour, with a constant denoted \(\beta^{tsp}\). Under the uniform probability distribution of \(\{X_{0},...,X_{n-1}\}\), the BHH formula becomes

\[\frac{L_{n}^{tsp}}{\sqrt{n}}\xrightarrow[n\to\infty]{}\beta^{tsp}\sqrt{|\mathcal{R}|}.\]

Thus, \(L_{n}^{tsp}=\beta^{tsp}\sqrt{n\,|\mathcal{R}|}+\mathcal{O}(\sqrt{n})\), where \(|\mathcal{R}|\) is the surface of the planar area \(\mathcal{R}\) over which the points are distributed.
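As a quick illustration of how this formula can be used, the following Python sketch compares \(\beta^{tsp}\sqrt{n\,|\mathcal{R}|}\) with a heuristic tour length on uniformly generated points. The value \(\beta^{tsp}=0.7124\) is only an illustrative constant (the paper relies on the \(n\)-dependent estimates of Table 1), and the nearest-neighbour plus 2-opt tour is a rough stand-in for the optimal tours computed with Concorde in the paper.

```python
import math
import random

def bhh_tsp_length(n, area, beta=0.7124):
    """BHH approximation of the optimal tour length for n uniform points
    on a region of surface `area`: L ~ beta * sqrt(n * area)."""
    return beta * math.sqrt(n * area)

def heuristic_tour_length(points):
    """Nearest-neighbour tour improved by 2-opt; a rough stand-in for the
    optimal tour length (not the paper's Concorde-based computation)."""
    def d(p, q):
        return math.dist(p, q)
    unvisited = points[1:]
    tour = [points[0]]
    while unvisited:                                   # nearest-neighbour construction
        nxt = min(unvisited, key=lambda p: d(tour[-1], p))
        unvisited.remove(nxt)
        tour.append(nxt)
    improved = True
    while improved:                                    # 2-opt improvement
        improved = False
        for i in range(len(tour) - 2):
            for j in range(i + 2, len(tour) - (0 if i else 1)):
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % len(tour)]
                if d(a, c) + d(b, e) < d(a, b) + d(c, e) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return sum(d(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

if __name__ == "__main__":
    random.seed(0)
    a, n = 1000.0, 300                                 # illustrative square side and point count
    pts = [(random.uniform(0, a), random.uniform(0, a)) for _ in range(n)]
    print(f"BHH approximation: {bhh_tsp_length(n, a * a):.1f}")
    print(f"heuristic tour:    {heuristic_tour_length(pts):.1f}")
```

For moderate \(n\) the heuristic tour will typically exceed the BHH value, consistent with the underestimation discussed below for small \(n\) and elongated areas.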
**Computation of \(\beta^{tsp}\).** The constant \(\beta^{tsp}\) does not depend on the number of points \(n\) in the infinite domain. However, for smaller values of \(n\), several estimates of \(\beta^{tsp}\) are given in the literature. According to Arlotto and Steele [27], \(\beta^{tsp}\) varies in the interval \(0.62499\leq\beta^{tsp}\leq 0.91996\). Stein [28] uses the estimate \(\beta^{tsp}=0.765\), while Applegate _et al._ [29] and Lei _et al._ [24] give practical estimates, which depend on the number \(n\), as shown in Table 1. It is worth noting that the \(\beta^{tsp}\) values proposed in the literature have been obtained by solving instances on square surfaces, like those of Table 1. In theory, \(\beta^{tsp}\) does not depend on the shape of the support area, but in practice the impact of the shape can be significant. As noted by [11, 12, 13], the BHH formula does not behave well when the base surface becomes elongated. We show this fact in the following two figures by giving some computational results on rectangles of varying aspect ratios \(\alpha\), _i.e._ the ratio of the width to the height of the rectangle. The values of \(\beta^{tsp}\) used for the TSP approximations in the figures are those of Table 1, depending on the number \(n\) of points. Figure 1 shows the gaps of the TSP approximation to the optimal tour length for several instances of \(n=1{,}000\) points uniformly distributed on rectangles with various \(\alpha\) and the same surface. The tours are computed with the Concorde solver [30]. The TSP approximation becomes less accurate for higher values of the aspect ratio \(\alpha\) on the same surface. Additionally, when the number of points to visit \(n\) is small, the impact of \(\alpha\) on the BHH formula is even greater. Figure 2 shows the quality gaps of the TSP approximation for several values of \(n\). For \(n=10\) and \(20\), the average of the absolute gaps is equal to respectively \(8.6\%\) and \(6\%\) on a square surface, and to \(46.81\%\) and \(38.12\%\) on a rectangle surface with \(\alpha=10\). The BHH approximation greatly underestimates the optimal tour length for large \(\alpha\) and small \(n\). When applying a TSP asymptotic approximation on an elongated surface, it can be useful to first derive a set of \(\beta^{tsp}\) values that better account for the shape of the working surface.

\begin{table} \begin{tabular}{|c|c|} \hline \(n\) & \(\beta^{tsp}\) \\ \hline \hline [MISSING_PAGE_POST] \end{tabular} \end{table} Table 1: Empirical estimates of \(\beta^{tsp}\) given by [24] for \(n\leq 90\), and by [29] otherwise.

Figure 1: The distributions of the gaps of \(L_{n}^{tsp}\) to the optimal tour lengths (\(=(L_{n}^{tsp}/cost_{n}-1)\times 100\), where \(cost_{n}\) is the optimal tour length) for instances of \(n=1000\) points uniformly distributed on rectangles with several aspect ratios \(\alpha\). All rectangles have the same surface area \(10^{6}\). For each \(\alpha\in\{1,5,10,100,1000\}\), 50 TSP instances are solved.

**The TSP-\(m\)ITS.** Below are a satisfiability condition and an asymptotic approximation for the TSP-\(m\)ITS case, previously presented in [31]. The total number of points, including the depot, is equal to \(n\).
We call an TSP-\(m\)ITS random generation model, a mechanism generating instances of the problem TSP-\(m\)ITS satisfying the related assumptions (see Section 3.1.2): **Satisfiability condition of TSP-\(m\)ITS:** For the TSP-\(m\)ITS random generation model, the satisfiability condition of feasible tours linking the realisations of \(\{X_{0},...,X_{n-1}\}\) under the _Euclidean_ metric satisfies on average \[n\times m=\frac{h^{2}}{|\mathcal{R}|\ (\beta^{tsp})^{2}}. \tag{9}\] **Proposition 1** (**Approximation TSP-\(m\)ITS.**).: _For the TSP-\(m\)ITS random generation model, the asymptotic length \(L_{n}^{tsp-mits}\) of the shortest tour linking \(\{X_{0},...,X_{n-1}\}\) under the Euclidean metric if feasible is equal to_ \[L_{n}^{tsp-mits}=\beta^{tsp}\ \sqrt{n\ m\ |\mathcal{R}|}+\mathcal{O}\Big{(} \sqrt{n\ m}\Big{)}. \tag{10}\] The idea behind equation (9) is to impose the condition that each time slot length is enough for visiting the points affected to this time slot. The approximation (10) is obtained by considering that each time slot infers a TSP tour, such that the BHH theorem could be used. Both arguments will be revisited in Section 4.1 for the general case of the TSP-\(m\)TS. ## 4 Asymptotic approximations and bounds In this section, we provide some asymptotic approximations and bounds for the optimal tours of the TSP-\(m\)TS, and the TSP-TW. Table 2 summarizes the obtained results. ### Asymptotic approximation of the TSP-\(m\)TS In the following a satisfiability condition and an asymptotic approximation for the TSP-\(m\)TS case. We provide also a bound linking the approximation of TSP-\(m\)TS and that of TSP-\(m\)ITS. The time slots \((A_{k})_{1\leq k\leq m}\) are considered inputs \begin{table} \begin{tabular}{|c||c|c|c|} \hline TW & \multicolumn{1}{c|}{satisf.} & approx. & \multicolumn{1}{c|}{bound} \\ constraints & & & \\ \hline \(\emptyset\) & - & Theorem 1 & \\ \hline ITS & Eq(9) & Prop. 1 & \\ \hline TS & Eq(11) & Prop. 2 & Prop. 3 \\ \hline TW & & Prop. 4 & \\ \hline \end{tabular} \end{table} Table 2: Results obtained for the satisfiability condition (satisf.), the asymptotic approximation (approx.), and bounds (bound), for the TSP with different time windows constraints. Equation Eq(9), Prop. 1, and Theorem 1 are stated in the preliminaries. Figure 2: The distributions of the gaps of \(L_{n}^{tsp}\) to the optimal tour lengths of instances of \(n\) points uniformly distributed on rectangles with \(\alpha=1\) (a square) and \(\alpha=10\). All the rectangles have the same surface area of \(10^{6}\). For each \(n\in\{10,20,30,40,50,100,1000,2000\}\), 50 TSP instances are solved. of the problem. **Satisfiability condition of TSP-\(m\)TS**: For the TSP-\(m\)TS random generation model with the time slots \((A_{k})_{1\leq k\leq m}\), the satisfiability condition of feasible tours linking the realisations of \(X_{0},...X_{n-1}\) under the _Euclidean_ metric satisfies on average \[l_{min}=\frac{(\beta^{tsp})^{2}\ |\mathcal{R}|}{h}\times n, \tag{11}\] where \(l_{min}=\min_{1\leq k\leq m}|A_{k}|\). In order to derive the condition (11), we adopt the following approach. As the distribution of points across time slots is proportional to their length, the expected number \(E(n_{k})\) of points visited in each time slot \(A_{k}\) is equal to \[E(n_{k})=\frac{n\times l_{k}}{h}.\] To treat all time slots equally, we have considered \(n\) points (instead of \(n-1\)) in this distribution of points. 
In the TSP and the TSP-TS context, the depot does not carry a special significance, except that it is the first point where the tour starts. In terms of space, the depot is treated like the other points (assumption (i)). In the asymptotic domain, a one-point difference in the set of points to visit has a negligible impact. In order to have at least one feasible tour of the realizations of \(X_{i}\), the duration of the shortest (Hamiltonian) path linking the points of each time slot and bridging to the surrounding slots must be at most equal to the length of the time slot. Figure 3 shows a representation of crossing the points of a time slot \(A_{i}\). The feasibility condition is given by \(b_{i}+c_{i}+e_{i}\leq l_{i}\) for all \(A_{i}\). We approximate the travel time \(b_{i}+c_{i}+e_{i}\) in the asymptotic domain by a BHH formula. This formula allows us to approximate any path of \(E(n_{k})\) arcs by a tour of \(E(n_{k})\) points, since its formulation does not require knowing the specific positions of the points to visit. As an additional precaution, we use assumption (iv), which allows us to treat the return arc of an optimal tour in the asymptotic domain like any other arc of the tour. In this configuration, the portions of the bridging arcs \(B_{i}\) and \(C_{i}\) within the time slot \(A_{i}\) (see Figure 3) are treated as one arc. Then, we suppose asymptotically that

\[L_{E(n_{1})}^{tsp}\leq l_{1},\ L_{E(n_{2})}^{tsp}\leq l_{2},...,\ L_{E(n_{m})}^{tsp}\leq l_{m}. \tag{12}\]

Let \(C=\beta^{tsp}\sqrt{|\mathcal{R}|/h}\). We have

\[(12)\ \Longleftrightarrow\ C\times\sqrt{n}\leq\sqrt{l_{k}}\quad\forall k\in\{1,2,...,m\}\ \Longrightarrow\ C\times\sqrt{n}\leq\sqrt{l_{min}}.\]

According to the BHH formula, the length of the tours of the time slots evolves as \(\sqrt{n}\). The satisfiability of a TSP-TS tour of \(n\) points is determined by the intersection between the function \(C\times\sqrt{n}\), calibrated for the instance parameters (\(|\mathcal{R}|\), \(h\)), and the value \(\sqrt{l_{min}}\) associated with the smallest time slot length. Figure 4 shows that the satisfiable number of customers varies with \(\sqrt{l_{min}}\): \(n_{1}\) for \(l_{min}=l_{1}\), and \(n_{2}\) for \(l_{min}=l_{2}\). Thus, the number of customers \(n\) that can be served asymptotically is bounded above by a quantity determined by \(l_{min}\), the horizon \(h\), and the area \(|\mathcal{R}|\), _i.e._,

\[n\leq n_{max}=\frac{l_{min}}{C^{2}}=\frac{l_{min}\times h}{(\beta^{tsp})^{2}\ |\mathcal{R}|}.\]

Figure 3: A representation of crossing the points of a given time slot \(A_{i}\). \(B_{i}\) and \(C_{i}\) represent the bridging arcs of \(A_{i}\), while \(b_{i}\) (resp. \(c_{i}\)) is the duration of crossing \(B_{i}\) (resp. \(C_{i}\)) within the interval of the time slot. \(e_{i}\) is the duration of the shortest path linking the points assigned to the time slot. For the first time slot, \(B_{i}\) does not exist and \(b_{i}=0\). For the last time slot, \(C_{i}\) is the arc returning to the depot and \(c_{i}\) is its full duration.

Similarly, if \(n\) is fixed, the lower bound on the minimum length of the time slots is given asymptotically by

\[\frac{n\times(\beta^{tsp})^{2}\times|\mathcal{R}|}{h}\leq l_{min}.\]

The equality case, that is, equation (11), defines the limiting curve of the satisfiable region of tours linking the realizations of \(X_{i}\).
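A minimal sketch of how this satisfiability condition could be checked in practice is given below; the \(\beta^{tsp}\) value and the instance parameters are illustrative only.

```python
import math

def expected_points_per_slot(n, slot_lengths, h):
    """E(n_k) = n * l_k / h under the proportional assignment assumption (vi)."""
    return [n * l / h for l in slot_lengths]

def is_asymptotically_feasible(n, slot_lengths, h, area, beta=0.7124):
    """Check condition (12): beta*sqrt(E(n_k)*|R|) <= l_k for every time slot."""
    expected = expected_points_per_slot(n, slot_lengths, h)
    return all(beta * math.sqrt(e_nk * area) <= l for e_nk, l in zip(expected, slot_lengths))

def n_max(slot_lengths, h, area, beta=0.7124):
    """Largest n satisfiable on average: n_max = l_min * h / (beta^2 * |R|)."""
    return min(slot_lengths) * h / (beta ** 2 * area)

if __name__ == "__main__":
    h, area = 1000.0, 250.0 ** 2        # illustrative horizon and square surface
    slots = [200.0, 300.0, 500.0]       # a partition of [0, h]
    print(n_max(slots, h, area))        # upper bound on the number of customers
    print(is_asymptotically_feasible(60, slots, h, area))
```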
Since the distribution of points to visit on time windows follows an i.i.d uniform distribution, then for a given time slot \(A_{k}\), the assignment of a point \(i\) can be seen as a Bernoulli trial with a probability \(p_{k}=l_{k}/h\). The random variable \(n_{k}\) of the number of points of a time slot follows the binomial distribution \(\mathcal{B}(n,p_{k})\). Under the central limit theorem, \(n_{k}\) converges asymptotically to a normal distribution. Being a symmetric distribution, half of the realizations of \(n_{k}\) are over \(E(n_{k})\), and the other half is below for each time slot \(k\). The equation (11) is thus given on average following this argument. **Proposition 2** (**Approximation of TSP-\(m\)Ts)**.: _For the TSP-\(m\)TS random generation model with the time slots \((A_{k})_{1\leq k\leq m}\), the asymptotic length \(L_{n}^{\text{tsp}-mts}\) of the shortest tour linking \(X_{0},....X_{n-1}\) under the Euclidean metric if feasible is equal to_ \[L_{n}^{tsp-mts}=L_{n}^{tsp}\sum_{k=1}^{m}\sqrt{\frac{l_{k}}{h}}=\beta^{tsp}\; \sqrt{\frac{n\;|\mathcal{R}|}{h}}\sum_{k=1}^{m}\sqrt{l_{k}}, \tag{13}\] _where \(l_{k}=|A_{k}|\), \(\forall k\in\{1,\ldots m\}\)._ Proof.: Since time slots are non-overlapping intervals, the optimal tour linking the realizations of \(X_{i}\) in the asymptotic domain can be seen as a summation of optimal TSP tours, one for each time slot \(A_{k}\) with a total of points of \(E(n_{k})=n\times l_{k}/h\) of the time slot and \(l_{k}=|A_{k}|\). The bridge arc at the end of each time slot \(A_{k}\) is considered to be equivalent to the arc closing the optimal Hamiltonian path of points of \(A_{k}\) in order to be a circuit. Since the BHH theorem does not require knowing the specific positions of points to visit in addition to the assumption (iv), we could make use of this assumption. Thus, we have \[L_{n}^{tsp-mts}=\sum_{k=1}^{m}L_{E(n_{k})}^{tsp}=\beta^{tsp}\;\sqrt{\frac{n}{h }|\mathcal{R}|}\sum_{k=1}^{m}\sqrt{l_{k}}=L_{n}^{tsp}\sum_{k=1}^{m}\sqrt{ \frac{l_{k}}{h}}.\] In case of only one-time slot equal to the time horizon \([0,h]\), the TSP-\(m\)TS approximation corresponds to the TSP formula \(L_{n}^{tsp-1ts}=L_{n}^{tsp}\). In case of identical time slots, _i.e._\(l_{k}=h/m\), the satisfiability condition (11) and the approximation (13) correspond respectively to (9) and (10). **Proposition 3** (**Bound of TSP-\(m\)Ts)**.: _For the TSP-\(m\)TS random generation model with the time slots \((A_{k})_{1\leq k\leq m}\), the asymptotic length \(L_{n}^{tsp-mts}\) of the the tour linking \(X_{0},...X_{n-1}\) under the Euclidean metric in the case of feasibility satisfies_ \[L_{n}^{tsp}\leq L_{n}^{tsp-mts}\leq L_{n}^{tsp-mits}. \tag{14}\] Proof.: Given the lengths of the time slots \((l_{k})_{k}\), the lower bound of (14) is obtained by using the following inequality \[\sqrt{\sum_{k=1}^{m}l_{k}}\leq\sum_{k=1}^{m}\sqrt{l_{k}} \tag{15}\] which can be proved by recursion. 
Using (13) and \(h=\sum_{k=1}^{m}l_{k}\), we have

\[(15)\Longrightarrow\beta^{tsp}\sqrt{\frac{n}{h}|\mathcal{R}|}\sqrt{\sum_{k=1}^{m}l_{k}}\leq L_{n}^{tsp-mts}\Longrightarrow\beta^{tsp}\sqrt{n\,|\mathcal{R}|}=L_{n}^{tsp}\leq L_{n}^{tsp-mts}.\]

The upper bound of (14) is obtained by using the generalized mean inequality, specifically the inequality between the arithmetic and quadratic means, which gives

\[\sum_{k=1}^{m}\sqrt{l_{k}}\leq\sqrt{m\sum_{k=1}^{m}l_{k}}=\sqrt{m\times h}.\]

From (10), the upper bound is then equal to

\[\beta^{tsp}\sqrt{\frac{n}{h}|\mathcal{R}|}\sqrt{m\times h}=L_{n}^{tsp-mits}.\]

### The TSP-TW

In this subsection, we provide a general bound on the TSP-TW asymptotic approximation, based on an induced TSP-TS. We first define the time-window-induced time slots that are used in the induced TSP-TS problem.

**Definition 4.1** (TW induced TS).: _Given the time windows \([b_{i},f_{i}]\) of the set of points \(\mathcal{P}\) and a time horizon \(h\), we define the induced time slots as follows:_

1. _Sort all begin and end TW bounds of_ \(\mathcal{P}\) _in ascending order: from the set_ \(\mathcal{S}_{0}=\{b_{1},f_{1},...,b_{n-1},f_{n-1}\}\) _we obtain the set_ \(\mathcal{S}_{1}=\{c_{0},c_{1},...,c_{2n-1}\}\)_, where_ \(c_{0}=0\) _and_ \(c_{2n-1}=h\)_._
2. _Define the time slots according to the_ \(\mathcal{S}_{1}\) _order, such that_ \(A_{k}=[c_{k-1},c_{k}]\)_,_ \(k\in\{1,..,2n-1\}\)_. The_ \((2n-1)\) _time slots_ \((A_{k})\) _constitute a partition of the time horizon._

The total number of time slots is \(2n-1\). This number can be decreased by one for each \(c_{k-1}=c_{k}\), \(1\leq k\leq 2n-1\), which represents an empty time slot. It can be further decreased by removing time slots with no client assigned to them. In fact, each client with a time window \([b_{i},f_{i}]\) can be served in one of the time slots \(A_{k}\), where \(j+1\leq k\leq j+l\), with \(c_{j}=b_{i}\) and \(c_{j+l}=f_{i}\) (\(l\geq 1\)) the elements corresponding to \(b_{i}\) and \(f_{i}\) in the ordered set \(\mathcal{S}_{1}\). Let us denote the final number of time slots by \(m^{*}=2n-1-m_{1}-m_{2}\), where \(m_{1}\) is the number of duplicated \(c_{k}\), and \(m_{2}\) is the number of time slots with no client.

**Proposition 4** (Bound of TSP-TW).: _For the TSP-TW random generation model, we have_

\[L_{n}^{tsp}\leq L_{n}^{tsp-tw}\leq L_{n}^{tsp-m^{*}its}, \tag{16}\]

_where \(m^{*}=2n-1-m_{1}-m_{2}\), with \(m_{1}\) the number of duplicated time window bounds of \(\mathcal{P}\), and \(m_{2}\) the number of time slots with no client assigned to them._

Proof.: The lower bound can be easily derived. Given a random instance \(p\) of the TSP-TW, let \(r\) be the optimal route of the corresponding TSP, _i.e._ the same problem configuration as the TSP-TW but without the time windows, with duration \(d_{r}\). For all feasible tours \(r^{\prime}\) of \(p\), we have by definition \(d_{r}\leq d_{r^{\prime}}\). Then, asymptotically, \(L_{n}^{tsp}\leq L_{n}^{tsp-tw}\). For the upper bound, we consider the induced TSP-\(m^{*}\)TS problem \(p^{\prime}\) of \(p\), which corresponds to the problem with the same settings as \(p\) except for the time windows, which are taken to be the time slots of Definition 4.1. The approximation of the TSP-\(m^{*}\)TS does not require knowing the exact assignment of points to the time slots (see Proposition 2), in the same way that the BHH formula does not require knowing the exact spatial locations of \(\mathcal{P}_{all}\).
Suppose \(r^{*}\) is the optimal solution of the TSP-TW instance \(p\). This solution can be reached for one specific assignment of \(\mathcal{P}\) to the induced time slots. In fact, the problem \(p^{\prime}\) is more constrained than \(p\), as each point is assigned to a time slot contained in its original time window. Thus, we have

\[L_{n}^{tsp-tw}\leq L_{n}^{tsp-m^{*}ts}. \tag{17}\]

Across all time slot configurations, the worst case is attained for identical time slots, according to Proposition 3:

\[L_{n}^{tsp-tw}\leq L_{n}^{tsp-m^{*}its}.\]

Concerning problem satisfiability, if \(p\) is not feasible, then \(p^{\prime}\) is not feasible either, as it is more constrained: \(L_{n}^{tsp-tw}=+\infty\Longrightarrow L_{n}^{tsp-m^{*}its}=+\infty\). 

The inequality (16) implies

\[1\leq\frac{L_{n}^{tsp-tw}}{L_{n}^{tsp}}\leq\frac{L_{n}^{tsp-m^{*}its}}{L_{n}^{tsp}}\leq\frac{L_{n}^{tsp-(2n-1)its}}{L_{n}^{tsp}}=\mathcal{O}(\sqrt{n}),\]

meaning that the ratio of the upper bound to the TSP approximation has a slow growth rate. However, the upper bounds (16) and (17) are generally not tight, as the number of points assigned to each time slot can be quite small, and asymptotic approximations are unsuitable when the number of points is small. Conversely, when the points \(\mathcal{P}\) have large time windows and are densely located in time, the upper bounds can be of better quality.

**Remark.** To provide an approximation of the TSP-TW, one possibility is to relax some bounds of the time windows so that the TW-induced time slots become large enough to encompass several points, more than in the strict initial configuration. By doing so, this approximation is no longer an upper bound, and is rather an estimation of the TSP-TW asymptotic approximation. Which time window bounds of \(\mathcal{P}\) to relax is still an open question. The estimation of the asymptotic approximation can be given by \(L_{n}^{tsp-tw}\approx L_{n}^{tsp-mts}=\beta^{tsp}\sqrt{\frac{n\,|\mathcal{R}|}{h}}\sum_{k=1}^{m}\sqrt{l_{k}}\), where \((l_{k})_{1\leq k\leq m}\) are the lengths of the relaxed induced time slots of the TSP-TW problem.

## 5 The worst-case demand

In this section, we relax the uniform distribution hypothesis of the instances for a more realistic setting. We first explain the motivations behind the choice of the distribution, then provide the expression of these distributions for both time and space. Finally, the satisfiability condition and the asymptotic approximation of the TSP-\(m\)ITS are given under those distributions.

### Motivations and the choice of the distribution

The approximations of the previous section rely on uniform distributions, which assign equal probabilities to customers' requests, whether in terms of space or time (distribution over the time slots). This is a rather generic assumption. The distribution of customers' demand is known to vary considerably during the day, with low demand during the noon period for instance. Variations are also spatial, as city centers and high-density metropolitan areas attract more demand. If some features of the demand distribution are known, which is generally the case for companies and services using historical data, including this additional information in the choice of the temporal and spatial demand distributions helps improve the asymptotic approximations of the routing problems.
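Before relaxing the uniform assumptions, the following small sketch illustrates how the induced time slots of Definition 4.1 and the resulting Proposition 4 upper bound \(L_{n}^{tsp-m^{*}its}=\beta^{tsp}\sqrt{n\,m^{*}\,|\mathcal{R}|}\) could be computed. The helper names, the \(\beta^{tsp}\) value and the example time windows are assumptions made for the illustration only.

```python
import math

BETA = 0.7124  # illustrative beta^tsp estimate; the paper uses n-dependent values (Table 1)

def induced_time_slots(windows, h):
    """Definition 4.1: build the TW-induced time slots from the client time
    windows [b_i, f_i], dropping duplicated bounds (empty slots) and slots
    that no client can be assigned to; returns the slots and their number m*."""
    bounds = sorted({0.0, h, *[b for b, _ in windows], *[f for _, f in windows]})
    slots = [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)
             if bounds[k + 1] > bounds[k]]
    # keep only slots contained in at least one client's time window
    slots = [(s, e) for (s, e) in slots if any(b <= s and e <= f for b, f in windows)]
    return slots, len(slots)

def tsp_tw_upper_bound(n, area, m_star, beta=BETA):
    """Proposition 4: L^{tsp-tw} <= beta * sqrt(n * m* * |R|), n including the depot."""
    return beta * math.sqrt(n * m_star * area)

if __name__ == "__main__":
    h, area = 100.0, 50.0 ** 2
    windows = [(0.0, 40.0), (20.0, 60.0), (20.0, 60.0), (55.0, 100.0)]  # client TWs
    slots, m_star = induced_time_slots(windows, h)
    print(slots, m_star, tsp_tw_upper_bound(len(windows) + 1, area, m_star))
```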
The actual demand distribution (for which the routing problem is solved) is not completely known in advance, although some partial statistical features could be discovered. To face this underlying uncertainty, predictive models can be used to construct the global demand distribution, as done in [32]. Another approach follows the distributionally robust optimization, in which optimization is performed against the worst-case realization of the unknown distribution [33]. This approach falls under the paradigm of robust optimization [34], which becomes popular for optimizations under uncertainty as it constructs computationally tractable robust counterparts of problems. Several statistical methods exist to estimate the worst-case distribution: the maximum likelihood estimator (MLE) [35], the principle of maximum entropy (ME) [36], and the minimum Hellinger distance estimator [37]. Carlson and Behzoodi [23] have generated a worst-case spatial demand distribution for capacitated VRP by maximizing the upper bound of the VRP tour length given by Haimovich and Rinnoy Kan [38]. We opted for the maximum entropy distribution due to its intuitive nature, and its higher probabilistic properties compared to the MLE as smoothing techniques are not needed2. Footnote 2: The MLE and the ME approaches are equivalent for the estimation of natural discrete distributions given a number of moments [39]. The entropy introduced by Shannon [40] is a measure of the degree of disorder or uncertainty in a random variable. A low entropy indicates that the system representing the random variable is organized, can be easily predicted, and needs fewer bits to be represented. At the opposite, a high entropy system tends to be highly dispersed, difficult to predict, and needs considerable amounts of information in order to be stocked. In the event of unknown demand, we could argue that the worst case corresponds to a notoriously difficult to predict scenario in terms of customers' appearance in both time and space. The element of surprise during the transportation operation has to be at its maximum level. Subsequently, the requests of customers would tend to follow and be induced by a geographical and temporal distribution with high entropy. In this respect, maximum entropy distributions on the studied area \(\mathcal{R}\) and for time windows consideration are a suitable choice for worst case demand distributions. The estimation method of ME distribution is based on maximizing of the entropy measure under a given number of first moments. If no moment constraint is imposed, the distribution with the highest entropy corresponds to the uniform distribution. We constrain our study to the first two moments, and to the study of TSP-ITS case. ### Worst-case spatial demand The spatial demand distribution \(f(.)\) is a continuous two dimensional distribution for which the mean \(\mu\) and the covariance matrix \(\Sigma\succ 0\) are given. \(f(.)\) is defined on the square area \(\mathcal{R}=[0,a]\times[0,a]\). The optimization problem aimed to find \(f(.)\) is as follow \[\max_{f(.)} H(f)=-\int_{\mathcal{R}}f(x,y)\ \log f(x,y)\ dx\ dy,\] (18) s.t. 
\[\mu=\int_{\mathcal{R}}\binom{x}{y}\,f(x,y)\ dx\ dy \tag{19}\]
\[\Sigma+\mu\mu^{T}=\int_{\mathcal{R}}\binom{x}{y}\binom{x}{y}^{T}\,f(x,y)\ dx\ dy \tag{20}\]
\[\int_{\mathcal{R}}f(x,y)\ dx\ dy=1 \tag{21}\]
\[f(x,y)\geq 0\qquad\forall(x,y)\in\mathcal{R}, \tag{22}\]

where (18) is the entropy function, (19) is the constraint on the first moment \(\mu\) of \(f(.)\), and (20) is the constraint on the second moment of \(f(.)\), the covariance matrix \(\Sigma\). The probability distribution constraints are given by (21) and (22). Applying the Lagrange multiplier method to this problem leads to the following exponential density function,

\[f(x,y)=\exp\Big\{\nu-1+\lambda^{T}\binom{x}{y}+\binom{x}{y}^{T}Q_{f}\binom{x}{y}\Big\}, \tag{23}\]

where \(\nu\in\mathbb{R}\), \(\lambda\in\mathbb{R}^{2}\), and \(Q_{f}\succeq 0\) are Lagrange multipliers, associated respectively with constraints (21), (19) and (20). These multipliers can be determined by solving the equations (24)-(26), corresponding to the constraints (19)-(21) in which the function \(f(.)\) is replaced by its expression (23):

\[\mu=\int_{\mathcal{R}}\binom{x}{y}\,e^{\nu-1+\lambda^{T}\binom{x}{y}+\binom{x}{y}^{T}Q_{f}\binom{x}{y}}\ dx\ dy \tag{24}\]
\[\Sigma+\mu\mu^{T}=\int_{\mathcal{R}}\binom{x}{y}\binom{x}{y}^{T}\,e^{\nu-1+\lambda^{T}\binom{x}{y}+\binom{x}{y}^{T}Q_{f}\binom{x}{y}}\ dx\ dy \tag{25}\]
\[\int_{\mathcal{R}}e^{\lambda^{T}\binom{x}{y}+\binom{x}{y}^{T}Q_{f}\binom{x}{y}}\ dx\ dy=e^{1-\nu} \tag{26}\]

The maximum entropy distribution (23) exists and is unique in the one-dimensional case, see [41]. When the support of the distribution is the whole plane \(\mathbb{R}^{2}\), the ME distribution corresponds to the bivariate normal distribution [36].

### Worst-case temporal demand

The ME distribution for the temporal demand follows the same idea as its spatial counterpart, except that the temporal density function \(g(.)\) is discrete and univariate. We split the time horizon into equal-size time slots, which represent the periods of the day. The worst-case computation here concerns the position of the time slot; this is why we omit the case of time slots with various lengths. The support of the distribution \(g(.)\) is \(\mathcal{S}=\{0,1,..,m\}\), which incorporates the \(m\) time slots and the case of not choosing any time slot, _i.e._ the case \(0\). We let \(g_{i}=g(i),\ \forall i=0,..,m\). The problem to optimize is the discrete analogue of (18)-(22): the entropy of \(g(.)\) is maximized subject to its moment constraints and to \(\sum_{i=0}^{m}g_{i}=1\), \(g_{i}\geq 0\).

### Worst-case asymptotic approximations

Depending on the moment values for the spatial \(f(.)\) and the temporal \(g(.)\) demand distributions, four combinations of the worst-case distribution can be given, as shown in Table 3. Subsequently, four approximations can be stated for the tour length of the TSP-\(m\)ITS case.
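The multipliers of (23) have no simple closed form on a bounded square, but they can be approximated numerically. The sketch below discretizes \(\mathcal{R}\) and minimizes the convex dual of (18)-(22) (the log-partition function minus the inner product with the target moments) using SciPy. It is a rough illustration under assumed moment values: it does not enforce \(Q_{f}\succeq 0\), and the grid resolution and optimizer are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def fit_max_entropy_density(a, mu, second_moment, grid=100):
    """Fit f(x,y) proportional to exp(lam^T z + z^T Q z) on [0,a]^2 by minimizing
    the dual of (18)-(22): match the first moment mu and the matrix Sigma + mu mu^T."""
    xs = (np.arange(grid) + 0.5) * a / grid
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    Z = np.stack([X.ravel(), Y.ravel()], axis=1)
    cell = (a / grid) ** 2
    # sufficient statistics x, y, x^2, y^2, xy and their target expectations
    T = np.stack([Z[:, 0], Z[:, 1], Z[:, 0] ** 2, Z[:, 1] ** 2, Z[:, 0] * Z[:, 1]], axis=1)
    target = np.array([mu[0], mu[1], second_moment[0, 0],
                       second_moment[1, 1], second_moment[0, 1]])

    def dual(theta):
        s = T @ theta
        m = s.max()
        log_partition = m + np.log(np.sum(np.exp(s - m)) * cell)
        return log_partition - theta @ target          # convex in theta

    theta = minimize(dual, np.zeros(5), method="BFGS").x
    s = T @ theta
    f = np.exp(s - s.max())
    f /= f.sum() * cell                                # normalized density values on the grid
    return f, cell

if __name__ == "__main__":
    a = 1.0                                            # unit square, for conditioning
    mu = np.array([0.35, 0.60])                        # illustrative first moment
    sigma = np.diag([0.04, 0.025])                     # illustrative covariance
    f, cell = fit_max_entropy_density(a, mu, sigma + np.outer(mu, mu))
    # integral of sqrt(f), the term replacing sqrt(|R|) in the BHH formula (Theorem 1)
    print(np.sum(np.sqrt(f)) * cell)
```

The last printed quantity is what enters the worst-case BHH-type formulas of the next subsection in place of \(\sqrt{|\mathcal{R}|}\).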
The equivalent version of the satisfiability condition is given as follows. **Satisfiability TSP-\(m\)ITS under ME** For the TSP-\(m\)ITS random generation model, wherein the demand distributions f(.) and g(.) follow the ME principle, the satisfiability condition of feasible tours linking the realisations of \(X_{0},...X_{n-1}\) under the _Euclidean_ metric satisfies on average, \begin{tabular}{c|c|c|} f(.) \(\backslash\) g(.) & None & \(\mu_{g}\) \\ \hline None & \(n\times m=\) & \(n\times f_{1}(\mu_{g},m)=\) \\ & \(h^{2}/\big{(}(\beta^{\mathit{step}})^{2}\mid\mathcal{R}\big{)}\) & \(h^{2}/\big{(}(\beta^{\mathit{step}})^{2}\mid\mathcal{R}\big{)}\) \\ \hline \(\mu_{f}\) and \(\Sigma_{f}\) & \(n\times m=\) & \(n\times f_{1}(\mu_{g},m)=\) \\ & \(h^{2}/\big{(}\beta^{\mathit{step}}\;F(\mu_{f},\Sigma_{f})\big{)}^{2}\) & \(h^{2}/\big{(}\beta^{\mathit{step}}\;F(\mu_{f},\Sigma_{f})\big{)}^{2}\) \\ \hline \end{tabular} such that, \(\mu_{g}\in[|1,k|]\), and \[F(\mu_{f},\Sigma_{f})=\int_{\mathcal{R}}e^{(\nu-1+\lambda^{T}\big{(}\frac{\pi }{y}\big{)}+\big{(}\frac{\pi}{y}\big{)}^{T}Q_{f}(\frac{\pi}{y}))/2}dxdy,\] where \(\nu\in\mathbb{R}\), \(\lambda\in\mathbb{R}^{2}\) and \(Q_{f}\succeq 0\) satisfy (24), (25) and (26), and \[f_{1}(\mu_{g},m)=m^{2}\times\binom{m}{\mu_{g}}\big{(}\frac{\mu_{g}}{m}\big{)}^ {\mu_{g}}\big{(}1-\frac{\mu_{g}}{m}\big{)}^{m-\mu_{g}}.\] \begin{table} \begin{tabular}{c|c|c|} f(.) \(\backslash\) g(.) & moment constraints: None & \(\mu_{g}=m\;p_{b}\) \\ \hline None & uniform distr. f(.) & uniform distr. f(.) \\ & uniform distr. g(.) & binomial distr. g(.) \\ \hline \(\mu_{f}\) and \(\Sigma_{f}\) & exponential distr. f(.) & exponential distr. f(.) \\ & uniform distr. g(.) & binomial distr. g(.) \\ \hline \end{tabular} \end{table} Table 3: Four worst-case distribution, where the first moment of \(g(.)\) is \(\mu_{g}=m\;p_{b}\), with \(p_{b}\) is the probability of successes of the binomial distribution, and \(\mu_{f}\) and \(\Sigma_{f}\) are respectively the first and second moment of \(f(.)\). A similar analysis to the satisfiability condition (11) of the uniform case is used here, tacking into account the new expression of the building block of the TSP asymptotic approximation modified under ME. When the constraint \(\mu_{f}\), \(\Sigma_{f}\), and \(\mu_{g}\) are imposed, the BHH formula for the \(k\)-th time slot becomes \[\frac{L_{E(n_{k})}^{tsp}}{\sqrt{E(n_{k})}}\xrightarrow[n\to\infty]{}\beta^{tsp} \ F(\mu_{f},\Sigma_{f}),\] where \(E(n_{k})\) is the expected number of points assigned to the \(k\)-th time slot. Let \(Prob(k;m;p_{b})\) be the probability mass function of the binomial distribution assigning a point to a time slot \(k\), such that \(p_{b}=\mu_{g}/m\) is the probability of success. We have \[E(n_{k})=n\times Prob(k;m;p_{b})=n\times\binom{m}{k}(\frac{\mu_{g}}{m})^{k} \big{(}1-\frac{\mu_{g}}{m}\big{)}^{m-k}.\] \(E(n_{k})\) is maximized for \(k=p_{b}\ m=\mu_{g}\). The equality to obtain the satisfiability condition is the following \[L_{E(n_{k})}^{tsp}=\beta^{tsp}\ F(\mu_{f},\Sigma_{f})\sqrt{n\ Prob(\mu_{g};m; \frac{\mu_{g}}{m})}=h/m.\] In case \(\mu_{g}\) is not imposed, then \(E(n_{k})=n/m\). We can solve for this value. In case \(\mu_{f}\), \(\Sigma_{f}\) are not imposed, then \(F(\mu_{f},\Sigma_{f})=\sqrt{|\mathcal{R}|}\). **Proposition 5** (**Approximation TSP-\(m\)ITS under ME**).: _For the TSP-\(m\)ITS random generation model, wherein the distributions of f(.) and g(.) 
follow the ME principle, the asymptotic optimal length \(L_{n-1}^{tsp-mits}\) of the tour linking \(X_{0},...X_{n-1}\) under the Euclidean metric if feasible is equal to_ \begin{tabular}{c|c|c|} _f(.) \(\backslash\) g(.)_ & _None_ & \(\mu_{g}\) \\ \hline _None_ & \(\beta^{tsp}\sqrt{n\ m\ |\mathcal{R}|}\) & \(\beta^{tsp}\ \sqrt{n\ f_{2}(\mu_{g},m)\ |\mathcal{R}|}\) \\ \hline \(\mu_{f}\) _and_ \(\Sigma_{f}\) & \(\beta^{tsp}\sqrt{n\ m\ }F(\mu_{f},\Sigma_{f})\) & \(\beta^{tsp}\ \sqrt{n\ f_{2}(\mu_{g},m)}\ F(\mu_{f},\Sigma_{f})\) \\ \hline \end{tabular} _such that, \(\mu_{g}\in[[1,k]]\), \(n/Q\in\mathbb{N}\), \(l\times Q\geq n\), and_ \[F(\mu_{f},\Sigma_{f})=\int_{\mathcal{R}}e^{(\nu-1+\lambda^{T}\big{(}\frac{x}{y }\big{)}+\big{(}\frac{x}{y}\big{)}^{T}Q_{f}\big{(}\frac{x}{y}\big{)})/2}dxdy,\] _where \(\nu\in\mathbb{R}\), \(\lambda\in\mathbb{R}^{2}\) and \(Q_{f}\succeq 0\) satisfy (24), (25) and (26), and_ \[f_{2}(\mu_{g},m)=\sum_{k=1}^{m}\sqrt{\binom{m}{k}\big{(}\frac{\mu_{g}}{m} \big{)}^{k}\big{(}1-\frac{\mu_{g}}{m}\big{)}^{m-k}}.\] Proof.: Similar proof to that of proposition 2. When the constraint \(\mu_{f}\), \(\Sigma_{f}\) and \(\mu_{g}\) are imposed, the approximation is equal to the following summation \[L_{n}^{tsp-mits} =\sum_{k=1}^{m}L_{E(n_{k})-1}^{tsp}=\beta^{tsp}\ F(\mu_{f},\Sigma _{f})\sum_{k=1}^{m}\sqrt{n\ Prob(k;m;\frac{\mu_{g}}{m})}\] \[=\beta^{tsp}\ F(\mu_{f},\Sigma_{f})\sqrt{n}\sum_{k=1}^{m}\sqrt{ \binom{m}{k}\big{(}\frac{\mu_{g}}{m}\big{)}^{k}\big{(}1-\frac{\mu_{g}}{m} \big{)}^{m-k}},\] where \(E(n_{k})\) is the expected number of points assigned to the \(k\)-th time slot, and \(Prob(k;m;\mu_{g}/m)\) is the probability mass function of the binomial distribution assigning a point to visit to the \(k\)-th time slot, where \(p_{b}=\mu_{g}/m\) is the probability of success. Notice that through arithmetic mean - quadratic mean (AM-QM) inequality, we have \(f_{2}(\mu_{g},m)\leq\sqrt{m}\), meaning that the TSP-\(m\)ITS approximation when the mean \(\mu_{g}\) is imposed is always lesser or equal to that for uniform temporal distribution. ## 6 Solving approach Solving approaches of the TSP-TW are mostly designed to take advantage of the tightness of the time windows, _e.g._[43]. They do not perform efficiently in case the time window constraints are quite large when for instance equaling half or a third of the time horizon. The most efficient relaxations of the TSP-TW, namely \(t\)-Tour and \(ng\)-Tour relaxations, take advantage of the tightness of the time windows in order to calculate higher lower bounds that are used in the branch-and-price approaches. Since the time windows of the TSP-TS have a specific structure of non-overlapping, we can take advantage of this property in order to propose an adapted approach to solve the problem to optimality. In our approach, we first construct an acyclic directed graph, which uses a TSP solver to obtain the lengths of Hamiltonian paths inside each time slot. In the second step, we run a shortest path algorithm on this graph to generate the optimal tour for the instance. In this way, we could exploit the non-overlapping property of time windows. The construction of the graph of a TSP-TS instance is given by Algorithm 1. It outputs a directed acyclic graph (DAG), with one unique source and one unique sink vertex. Both correspond to the depot of the input instance. 
The travel cost from \(i\) to \(j\) of an arc in the graph, termed Dist[i,j], corresponds to the original instance cost \(c_{ij}\) if the points have different time slots, _i.e._\(TS(j)=TS(i)+1\). In case the points \(i\) and \(j\) belong to the same time slot, _i.e._\(TS(j)=TS(i)\), Dist[i,j] corresponds to the shortest Hamiltonian path length going from \(i\) to \(j\) and passing through all the remaining points of the time slot. Arcs from \(i\) to \(j\) such as \(TS(i)\neq TS(j)\) and \(TS(j)\neq TS(i)+1\) are not considered in our graph as they correspond to unfeasible paths. The function \(Idx(v)\) tracks the index of a vertex \(v\) in the graph in the original instance. An example DAG graph is given in Figure 5. Figure 5: An example of the DAG of a TSP-TS instance of \(m=2\) time slots. The red (resp. blue) color represents points of the first (resp. second) time slot. \(D\) is the depot. The shortest path computation in the DAG \(G\) of the instance to output the optimal tour length is described in Algorithm 2. It finds the shortest path from the source of the graph, \(source(G)\), to its sink, \(sink(G)\), which constitutes a Hamiltonian circuit starting and ending at the depot. Each sub-path \(N\) has the 4 attributes: the last vertex visited, \(LastVertex\), the optimal path length from \(source(G)\) to this vertex, \(PathLength\), the arrival time to this vertex, \(ArriveTime\), and a lower bound value \(Binf\). The main data structure of Algorithm 2 is a list of list of sub-paths, indexed by \(LastVertex\) and sorted by increasing \(PathLength\). It is denoted \(Q\). The algorithm includes also a mechanism of dominance and cutting by lower bounds, which allows us to not explore all dominated sub-paths and their extensions. A sub-path \(M\) with \(LastVertex(M)=j\) is said to be dominated in case it exists a sub-path \(L\) in \(Q\) such that \(LastVertex(L)=j\), \(PathLength(L)\leq PathLength(M)\) and \(ArriveTime(L)\leq ArriveTime(M)\). The lower bound of a sub-path \(M\), \(Binf(M)\), is calculated by completing \(PathLength(M)\) with the lowest cost value from \(LastVertex\) to \(sink(G)\), without connectivity considerations. To do so, a minimal distance between each successive graph levels has to be previously computed. The minimal distance between a level \(p\) and its next level \(p+1\) is given by the function \(\text{Dist}_{min}[\text{p ; p+1}]=\min_{NV(k)=p,\;NV(m)=p+1}\text{Dist}[\text{k ; m}]\). The following functions are also used in Algorithm 2: * \(NV(v)\): the level of the vertex \(v\) in the DAG \(G\). The maximal level of \(G\) is denoted \(NV_{max}=\max_{v\in V}NV(v)\), which corresponds to the level of \(sink(G)\). * TS(v): the time slot of the vertex \(v\) in \(G\). * StartTime(ts): the start time of a time slot ts. * EndTime(ts): the end time of a time slot ts * GreedyPath(G): the greedy growing path from \(source(G)\) to \(sink(G)\). ## 7 Computational results In this section, we report experimental results that examine the quality of the proposed asymptotic approximation on a number of benchmarks of the TSP-TW altered in this occasion for time slots considerations. ### Benchmark generation To generate instances of the TSP-TS, we make use of the benchmarks of the TSP-TW listed in Table 4. The table encompasses benchmark datasets that are often used in the literature for evaluating solving algorithms of the TSP-TW3. 
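As an illustration of the Section 6 approach, here is a simplified, self-contained sketch of the same idea: the shortest Hamiltonian paths within each time slot are enumerated by brute force (the paper uses a TSP solver), and a plain label-dominance rule replaces the lower-bound pruning of Algorithm 2, so it is only suitable for small instances. All names and instance values are illustrative, not the authors' implementation.

```python
import math
from itertools import permutations

def dist(p, q):
    return math.dist(p, q)

def slot_path_costs(points):
    """Shortest Hamiltonian path cost inside one time slot, for every (entry, exit)
    pair of point indices, by brute force."""
    n = len(points)
    if n == 1:
        return {(0, 0): 0.0}
    best = {}
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            middle = [k for k in range(n) if k not in (i, j)]
            for perm in permutations(middle):
                seq = [i, *perm, j]
                cost = sum(dist(points[a], points[b]) for a, b in zip(seq, seq[1:]))
                if cost < best.get((i, j), math.inf):
                    best[(i, j)] = cost
    return best

def solve_tsp_ts(depot, slots, slot_windows):
    """Minimum travel cost of a tour serving each slot's points within its window.
    `slots` is one point list per time slot (in chronological order) and
    `slot_windows` the corresponding [B_k, F_k] intervals; waiting is allowed."""
    labels = {None: [(0.0, 0.0)]}          # last visited point -> non-dominated (cost, time)
    positions = {None: depot}
    for pts, (b_k, f_k) in zip(slots, slot_windows):
        paths = slot_path_costs(pts)
        new_labels = {}
        for prev, labs in labels.items():
            for (i, j), inner in paths.items():
                enter = dist(positions[prev], pts[i])
                for cost, t in labs:
                    start = max(t + enter, b_k)        # wait if arriving early
                    finish = start + inner
                    if finish > f_k:                   # the whole slot must fit in [B_k, F_k]
                        continue
                    cand = (cost + enter + inner, finish)
                    pool = new_labels.setdefault(j, [])
                    if not any(c <= cand[0] and tt <= cand[1] for c, tt in pool):
                        pool.append(cand)              # keep only non-dominated labels
        labels = new_labels
        positions = {j: pts[j] for j in labels}
    h = slot_windows[-1][1]                            # the tour must return to the depot by h
    feasible = [cost + dist(positions[j], depot)
                for j, labs in labels.items() for cost, t in labs
                if t + dist(positions[j], depot) <= h]
    return min(feasible) if feasible else math.inf

if __name__ == "__main__":
    depot = (0.0, 0.0)
    slots = [[(10, 5), (12, 14), (4, 9)], [(20, 3), (18, 11)]]
    windows = [(0.0, 60.0), (60.0, 120.0)]
    print(solve_tsp_ts(depot, slots, windows))
```

The design mirrors the non-overlapping structure exploited by the paper: once the within-slot Hamiltonian path costs are known, the remaining decisions reduce to choosing an entry and exit point per slot, which is exactly a shortest-path computation over a layered DAG.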
The column \(n-1\) designates the number of customers to visit, and \(w\) is the maximum width used in the generation of the lengths of the time windows. Our goal is to take the physical distribution of the points to visit and the depot from these instances, and change their time windows configuration to time slots. Footnote 3: These datasets can be downloaded from the websites: [https://myweb.uiowa.edu/bthoa/tsptwbenchmarkdatasets.htm](https://myweb.uiowa.edu/bthoa/tsptwbenchmarkdatasets.htm), and [http://lopez-ibanez.eu/tsptw-instances](http://lopez-ibanez.eu/tsptw-instances) [Accessed 2022-05-29]. To generate time slots, we rely on the line segment partitioning procedures of [51], especially the method of the repulsion between the partitioning points. Given a time horizon \(h\), to produce \(m\) time slots, \(m-1\) points have to be placed in the interval \([0,h]\). The repulsion method is based on randomly uniformly generating more points than needed, precisely \(p\times m-1\) points, with \(p>1\), and then retaining the points \(p\times l\) where \(l=1,..,m-1\). The points represent the bounds of the time slots, in addition to \(0\) and \(h\). Figure 6 shows an example of this distribution. The constant \(\beta^{tsp}\) is taken to be the estimates of Lei _et al._[24] for values of \(n\leq 90\), and to the estimates of Applegate _et al._ for \(n\geq 100\)[29], as shown in Table 1. The paper results are presented for DU and GD benchmarks, since they have the same base surface. We take the time horizon to be \(15\) times the diameter of the square surface, _i.e._\(h=1060.66\). For each \(n\in\{21,41,61,81,101\}\), five instances of DU and GD are chosen at random, thus a total of 25 initial instances. For each \(m\in\{1,2,3,4,5,6,7,8,9,10\}\), we generate \(5\) time slot partitions \(ts\): Identical time slots (\(ts=1\)), and time slots using the repulsion based partitioning method with \(p=20,50,100,150\) (\(\leq ts\leq 5\)). For each \(n\in\{21,41,61,81,101\}\), we generate \(15\) distributions of the \((n-1)\) clients on \([0,h]\), \(5\) uniformly (mode zero), \(5\) with one mode, and \(5\) with two modes. For one mode, we use one Normal distribution: \(\mathcal{N}(\mu=h/2,\sigma=h/4)\), and for two modes, we use a mixture of two Normal distributions: \(\mathcal{N}(\mu=h/4,\sigma=h/4)\) and \(\mathcal{N}(\mu=3\,h/4,\sigma=h/4)\). The uniform distribution allows to assign clients to time slots proportionally to their lengths. The distributions with mode(s) try to replicate realistic cases of repartitions of clients. Urban traffic flow have often two distinct peaks in terms of traffic volumes, occurring at the morning and evening. In summary, for each initial TSP-TW instance, we generate 25 different time slot configuration, given for \(1\leq m\leq 10\), \(1\leq ts\leq 5\), and the \(15\) random distributions of clients on the time slots. A total of \(18.750\) instances are generated for benchmarking. The experiments are performed on an _Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz_ processor with 32 GB RAM memory machine. ### Performance measures To evaluate the accuracy of the feasibility condition, we report type I (false positive - FP) and type II (false negative - FN) errors for the following null hypothesis \(H_{0}\): the TSP-TS instance is feasible, by examining the condition (11). Type I and type II errors are essential concepts used in the interpretation of the results in statistical hypothesis testing. 
### Performance measures To evaluate the accuracy of the feasibility condition, we report type I (false positive - FP) and type II (false negative - FN) errors for the following null hypothesis \(H_{0}\): the TSP-TS instance is feasible, by examining the condition (11). Type I and type II errors are essential concepts in the interpretation of results in statistical hypothesis testing. In our case, the most serious error is the type II error, for which the instance is predicted to be feasible while it is in fact infeasible. To examine the quality of the proposed approximations, we use a quality gap to the actual tour lengths. This allows us to obtain a precise assessment of how close the asymptotic approximations are to the optimal tour lengths, under the assumptions of _Euclidean_ distances and a uniform random generation of the locations of the points to visit and the depot. Figure 6: An instance of time slot distribution for identical time slots (\(ts=1\)), and using the repulsion-based partitioning method [51] for \(p=20,50,100,150\) (\(ts\in[2,5]\)). The quality gap for an instance of TSP-TS is equal to \[G_{n}^{tsp-ts}=(L_{n}^{tsp-ts}-T_{n}^{tsp-ts})\times 100/T_{n}^{tsp-ts},\] where \(L_{n}^{tsp-ts}\) and \(T_{n}^{tsp-ts}\) are respectively the asymptotic approximation and the actual optimal tour length of the instance. The absolute gaps \(|G_{n}^{tsp-ts}|\) are also reported. In addition to \(L_{n}^{tsp-ts}\) of Proposition 2, called here the distributional approximation, we compute an experimental approximation of the TSP-TS, called the sampling approximation, which takes into account the actual assignment of the points to visit to the time slots. In other terms, this approximation is equal to \[L_{n}^{tsp-ts-sample}=\beta^{tsp}\sqrt{|\mathcal{R}|}(\sqrt{(1+n_{1})}+\sum_{k =2}^{m}\sqrt{n_{k}}), \tag{32}\] where \(n_{k}\) is the actual number of points assigned to time slot \(A_{k}\), \(1\leq k\leq m\). The quality gap of this approximation is denoted \(G_{n}^{tsp-ts-sample}\). The actual distribution \(n_{k}\) can equally be used to derive a feasibility condition. An instance is feasible if \[\beta^{tsp}\sqrt{|\mathcal{R}|n_{k}}\leq l_{k}=|A_{k}|,\ \ \forall k\in[|1,m|]. \tag{33}\] ### Quality of the approximation In this subsection, we examine the quality of the approximations \(L_{n}^{tsp-ts}\) and \(L_{n}^{tsp-ts-sample}\). The results of Sections 7.3.1 and 7.3.2 are given for the uniform temporal distribution of clients. Section 7.3.3 discusses the impact of the modes of the temporal distribution. #### 7.3.1 Impact of the number of points \(n\) Figure 7 displays the absolute gaps of the proposed approximations as a function of the number of clients \(n\). Knowing the actual distribution of clients on the time slots consistently leads to a lower gap than using the distributional assumption: \(|G_{n}^{tsp-ts-sample}|\leq|G_{n}^{tsp-ts}|\). However, the difference between both gaps becomes quite small when the number \(n\) gets larger. For \(n=(81,101)\), we have an average absolute gap of \(|G_{n}^{tsp-ts}|=(5.68,5.07\%)\) and of \(|G_{n}^{tsp-ts-sample}|=(4.98,4.06\%)\). Both formulas are good quality approximations of the tour lengths when \(n\) is large. Figure 7: Average of the absolute gaps to the optimal solutions of the distr. approximation \(L_{n}^{tsp-ts}\) and the sampling approximation \(L_{n}^{tsp-ts-sample}\) for varying values of \(n\). #### 7.3.2 Impact of the time slots configuration \((m,ts)\) The approximations' gaps for \(n\in\{81,101\}\) are shown in Figure 8, when varying the time slot configuration: \(1\leq m\leq 6\) and \(1\leq ts\leq 5\). Figure 8: Distribution of gaps \(G_{n}^{tsp-ts-sample}\) and \(G_{n}^{tsp-ts}\), for \(n\in\{81,101\}\) and \(m\in\{1,2,3,4,5,6\}\) as a function of \(ts\). Some observations could be made:
* The experimental approximation \(L_{n}^{tsp-ts-sample}\) is consistently better than the one based on the distributional assumption, \(L_{n}^{tsp-ts}\), except when \(m=1\) and for the cases \((m,ts=1)\), independently of the value of \(m\), for which both approximations are equivalent. * The gaps \(G_{n}^{tsp-ts-sample}\) and \(G_{n}^{tsp-ts}\) are smaller for identical time slots (\(ts=1\)) than for the configurations with \(ts>1\). Among the configurations with \(ts>1\), there is no clear order in terms of the gaps. For instance, when \(m=2\), the approximations' gaps get larger for increasing values of \(ts\) (increasing repulsion parameter \(p\)), while for \(m=4\), the configurations with \(ts=2\) have the largest gaps. * Most of the gap values of Figure 8 are lower than \(10\%\). The medians of all the gaps \((G_{n}^{tsp-ts},G_{n}^{tsp-ts-sample})\) of the figure are equal to \((4.47,4.13\%)\). Some time slot configurations, such as \((m=2,ts=4)\), \((m=3,ts=2)\), \((m=3,ts=5)\), have gaps overall larger than \(10\%\), but the median of the distribution of the gaps is always lower than \(20\%\). * The effect of the number of time slots \(m\) on the distributions of the gaps is negligible. #### 7.3.3 Impact of the distribution of points Figure 9 shows the distribution of gaps as a function of the temporal modes. For identical time slots (\(ts=1\)), the difference between the modes is unnoticeable, except for a higher number of time slots \(m\). For the remaining configurations of time slots (\(ts>1\)), the one-mode case, which corresponds to one Normal distribution of points, tends to have higher gaps than the two-mode and the uniform distributions. This observation is more pronounced for a large \(m\). The reason behind this discrepancy is the number of time slots with a low allocation of clients, for which the related BHH formula behaves poorly; this number is larger for one mode. For instance, for \(m=6\) and \(n\in\{81,101\}\), the number of time slots with fewer than 10 customers in our benchmark is equal to \((505,710,560)\) for the modes \((0,1,2)\). For \(n=101\), the average absolute gap of \((|G_{n}^{tsp-ts}|,|G_{n}^{tsp-ts-sample}|)\) is equal to \((5.07,4.06\%)\) for mode 0, \((10.38,5.46\%)\) for mode 1, and \((6.18,4.49\%)\) for mode 2, thus an average of \((7.56,4.76\%)\). Figure 9: Distribution of gaps \(G_{n}^{tsp-ts-sample}\) and \(G_{n}^{tsp-ts}\), for \(n=101\), \(m\in\{2,3,6\}\) (rows) and \(ts\in\{1,2,4\}\) (columns), as a function of the temporal distribution modes. ### Feasibility In this subsection, we examine the accuracy of the feasibility condition, based either on the distributional assumption, _i.e._ the expression (11), or on the actual temporal distribution of the points to visit, _i.e._ the expression (33). #### 7.4.1 Impact of the number of points \(n\) Figure 10 shows the percentage of type I and type II errors as a function of the number of points \(n\), for all instances with a uniform temporal distribution of customers. For an instance to be infeasible, a single time slot is sufficient: it suffices that the points of this time slot cannot be served within its length. Because of this sensitivity, relying solely on the distributional assumption, _i.e._ (11), to detect infeasible instances leads to a high percentage of false negatives. Knowing the actual distribution of points on the time slots is highly beneficial in this case.
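To make the two feasibility tests compared here concrete, the sketch below evaluates the sampling approximation (32), the quality gap, and the per-slot feasibility condition (33) from the actual allocation \(n_k\) of clients to slots; the distributional test (11) is obtained by replacing \(n_k\) with its expected value. Variable names and the example values of \(\beta^{tsp}\) and \(|\mathcal{R}|\) are illustrative assumptions.

```python
import math

def sampling_approximation(beta_tsp, area, n_k):
    """Sampling approximation (32): the depot is counted with the first slot."""
    terms = [math.sqrt(1 + n_k[0])] + [math.sqrt(nk) for nk in n_k[1:]]
    return beta_tsp * math.sqrt(area) * sum(terms)

def feasible(beta_tsp, area, n_k, slot_lengths):
    """Feasibility condition (33): each slot must be long enough to hold the
    BHH-type estimate of the tour length restricted to that slot."""
    return all(beta_tsp * math.sqrt(area * nk) <= lk
               for nk, lk in zip(n_k, slot_lengths))

def quality_gap(approx, optimal):
    """Relative gap (in %) of an approximation to the optimal tour length."""
    return (approx - optimal) * 100.0 / optimal

# toy example: 3 slots, 100 clients (illustrative values only)
beta_tsp, area = 0.7124, 50.0 ** 2
n_k, slot_lengths = [40, 35, 25], [400.0, 350.0, 310.66]
L_sample = sampling_approximation(beta_tsp, area, n_k)
print(L_sample, feasible(beta_tsp, area, n_k, slot_lengths))
```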
The percentage of FN for \(n=61,81,101\) is respectively equal to \(94.16,91.02,91.24\%\) when relying on the distributional formula, and equal to \(2.04,2.27,1.05\%\) when the actual distribution is known. The BHH formula is not accurate when \(n\) is low, which could explain the high percentage of false positive errors when using the actual distribution, especially since the formula is applied to each time slot separately. However, this percentage decreases for large \(n\), thus making the distributional information on the points to visit very useful to know beforehand. Figure 10: Averages of false positive (FP) and false negative (FN) error percentages among all instances for varying \(n\), \(2\leq m\leq 10\) and \(1\leq ts\leq 5\). #### 7.4.2 Impact of the time slots configuration \((m,ts)\) When \(n\) is large, the influence of the time slots configuration \((m,ts)\) on feasibility is negligible, except for the false positive error when relying on the actual temporal distribution. Figure 11 shows this impact when \(n\in\{81,101\}\). As \(m\) increases, the chance of making this error becomes higher, since infeasibility can be determined by a single time slot, as noted before. If the time slots are identical in size (\(ts=1\)), the percentage of this error is smaller than for the other repartitions \(ts\in\{2,3,4\}\). The percentage of all the other errors is close to zero, except for the false negative percentage when relying on the distributional assumption, which is consistently high. Figure 11: Averages of FP and FN errors for varying \(m\) (left plot) and varying \(ts\) (right plot), for \(n\in\{81,101\}\). ### The TSP-TW preliminary study In this section, we give some preliminary results for approximating the length of TSP-TW instances, in order to support the discussion of Section 4.2 with tangible data. As said before, the upper bounds (16) and (17) give poor approximations for the TSP-TW when the time windows are tight, which is the case for the great majority of the benchmark instances in the literature. Table 5 shows the gap of the upper bound (17) for the original 25 TSP-TW instances used in this experimentation (see Section 7.1). The average gap is around \(514\%\). To examine the performance of the upper bound (17) on instances with large time windows densely located in time, we generate new TSP-TW instances from the original \(25\) instances. The length of the time windows \(w\) is taken to be equal to either a third or a quarter of the time horizon of the instance. The location in time of the central point of the time windows is governed by a Normal distribution: \(\mathcal{N}(\mu=h/2,\sigma=h/8)\). For each original instance and time window length \(w\), \(5\) new instances are created. The averages of the absolute gaps of the upper bound (17) are as follows: \((25.26,19.34,37.59,46.67,53.33\%)\) for \(n=(21,41,61,81,101)\), which are much better than those of Table 5. In this calculation, we did not account for time slots shorter than \(3\%\) of the time horizon, which can be neglected due to their size. Table 5: Gaps of the upper bound (17) on the original 25 TSP-TW instances (dataset, \(n-1\), \(\sqrt{|\mathcal{R}|}\), optimal solution Sol, approximation apx, gap); mean gap: \(514.04\%\). The best-known solutions of the DU and GD benchmark instances are respectively given by [45] and [52].
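The sketch below reproduces the generation of these large-window TSP-TW instances as we read it from the description above: window centers drawn from \(\mathcal{N}(h/2,h/8)\), a common width \(w\in\{h/3,h/4\}\), and windows clipped to the horizon. The clipping rule and the function names are our own assumptions.

```python
import numpy as np

def large_window_instance(h, n_customers, width_fraction, rng):
    """Time windows of width w = h * width_fraction, with centers ~ N(h/2, h/8),
    clipped to the horizon [0, h]."""
    w = h * width_fraction
    centers = rng.normal(h / 2, h / 8, size=n_customers)
    early = np.clip(centers - w / 2, 0.0, h)
    late = np.clip(centers + w / 2, 0.0, h)
    return np.column_stack((early, late))

rng = np.random.default_rng(1)
windows = large_window_instance(h=1060.66, n_customers=100, width_fraction=1 / 3, rng=rng)
```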
## 8 Conclusions We propose an asymptotic approximation of the TSP-TS, derived from the well-known TSP approximation, for two variants with identical and different time slot lengths. By means of several numerical experiments, we show that the proposed approximations provide good estimators of the tour length and of the instance feasibility, even for a limited number of customers, independently of the depot location. We also show that the direct extension of our approximation does not provide a good approximation for the general case of the TSP-TW. We report a gap of \(4.76\%\) and \(7.56\%\) for the approximation depending on whether the temporal distribution of customers on the time slots is known in advance or not4. Dealing with both spatial and temporal distributions of customers under the TSP-TW assumptions requires more investigation. Footnote 4: These values are obtained for \(100\) customers and across different temporal modes. The BHH approximation is given for any spatial distribution of the customers visited by the TSP. As the TSP-TS adds a temporal dimension to the TSP, we analyzed the impact of the spatial and temporal distributions of customer demands through a worst-case study based on the maximum entropy principle. We show that our asymptotic approximation can be adapted to some worst-case hypotheses. We think that our preliminary results on the feasibility of the TSP-TS are valuable material to consider in the case of multiple vehicles, and also for the general case of overlapping time slots, which constitutes the next step in our investigation. In addition, the results presented on the TSP-TW can be exploited to compute better approximations and bounds. The final objective of this line of research is to propose an accurate closed-form formula to approximate the Capacitated-VRP and the VRP-TW.
2310.02775
High order numerical methods based on quadratic spline collocation method and averaged L1 scheme for the variable-order time fractional mobile/immobile diffusion equation
In this paper, we consider the variable-order time fractional mobile/immobile diffusion (TF-MID) equation in two-dimensional spatial domain, where the fractional order $\alpha(t)$ satisfies $0<\alpha_{*}\leq \alpha(t)\leq \alpha^{*}<1$. We combine the quadratic spline collocation (QSC) method and the $L1^+$ formula to propose a QSC-$L1^+$ scheme. It can be proved that, the QSC-$L1^+$ scheme is unconditionally stable and convergent with $\mathcal{O}(\tau^{\min{\{3-\alpha^*-\alpha(0),2\}}} + \Delta x^{2}+\Delta y^{2})$, where $\tau$, $\Delta x$ and $\Delta y$ are the temporal and spatial step sizes, respectively. With some proper assumptions on $\alpha(t)$, the QSC-$L1^+$ scheme has second temporal convergence order even on the uniform mesh, without any restrictions on the solution of the equation. We further construct a novel alternating direction implicit (ADI) framework to develop an ADI-QSC-$L1^+$ scheme, which has the same unconditionally stability and convergence orders. In addition, a fast implementation for the ADI-QSC-$L1^+$ scheme based on the exponential-sum-approximation (ESA) technique is proposed. Moreover, we also introduce the optimal QSC method to improve the spatial convergence to fourth-order. Numerical experiments are attached to support the theoretical analysis, and to demonstrate the effectiveness of the proposed schemes.
Xiao Ye, Jun Liu, Bingyin Zhang, Hongfei Fu, Yue Liu
2023-10-04T12:45:20Z
http://arxiv.org/abs/2310.02775v1
High order numerical methods based on quadratic spline collocation method and averaged L1 scheme for the variable-order time fractional mobile/immobile diffusion equation ###### Abstract In this paper, we consider the variable-order time fractional mobile/immobile diffusion (TF-MID) equation in a two-dimensional spatial domain, where the fractional order \(\alpha(t)\) satisfies \(0<\alpha_{*}\leq\alpha(t)\leq\alpha^{*}<1\). We combine the quadratic spline collocation (QSC) method and the \(L1^{+}\) formula to propose a QSC-\(L1^{+}\) scheme. It can be proved that the QSC-\(L1^{+}\) scheme is unconditionally stable and convergent with \(\mathcal{O}(\tau^{\min\{3-\alpha^{*}-\alpha(0),2\}}+\Delta x^{2}+\Delta y^{2})\), where \(\tau\), \(\Delta x\) and \(\Delta y\) are the temporal and spatial step sizes, respectively. With some proper assumptions on \(\alpha(t)\), the QSC-\(L1^{+}\) scheme has second temporal convergence order even on the uniform mesh, without any restrictions on the solution of the equation. We further construct a novel alternating direction implicit (ADI) framework to develop an ADI-QSC-\(L1^{+}\) scheme, which has the same unconditional stability and convergence orders. In addition, a fast implementation for the ADI-QSC-\(L1^{+}\) scheme based on the exponential-sum-approximation (ESA) technique is proposed. Moreover, we also introduce the optimal QSC method to improve the spatial convergence to fourth-order. Numerical experiments are attached to support the theoretical analysis, and to demonstrate the effectiveness of the proposed schemes. keywords: variable-order TF-MID equations, quadratic spline collocation method, \(L1^{+}\) formula, stability, convergence, acceleration techniques ## 1 Introduction Over the past several decades, fractional partial differential equations (FPDEs) have attracted more and more attention as a tool for modeling various physical phenomena with memory or hereditary properties, such as damping laws, diffusion processes and viscoelastic behavior [11; 27; 29]. Recent studies showed that variable-order fractional PDEs are even more powerful tools for modeling many multiphysics phenomena, where the properties of the materials or systems evolve with time [26; 35]. As an important class of variable-order FPDEs, the variable-order time fractional mobile/immobile diffusion (TF-MID) equation describes the transport characteristics of particles in the fluid, and provides a more realistic model for solute diffusion transport in heterogeneous porous media [39; 44]. In recent years, various numerical methods for FPDEs have been proposed, such as finite difference methods [5; 16; 28; 37], finite element methods [8; 13], finite volume methods [9; 19; 34], spectral methods [6; 24], the quadratic spline collocation (QSC) method [2; 20; 22] and so on. The QSC method gives an approximation to the solution of the original differential equations in the quadratic spline space. Since the QSC method employs smoother basis functions, and needs fewer degrees of freedom than some classical methods for the same number of grid points, it results in algebraic systems of relatively smaller scale. The QSC method, as well as its optimal version, has been widely applied to various kinds of integer-order PDEs [3; 4; 10]. For time FPDEs, piecewise-interpolation-based numerical methods are one of the main strategies for discretization. Sun and Wu [37] derived the \(L1\) scheme for the fractional diffusion-wave equation.
Lin and Xu [25] constructed a stable \(L1\) scheme for time fractional diffusion equation. Li et al. [18] used the linearized \(L1\)-Galerkin finite element method to solve the multidimensional nonlinear time-fractional Schrodinger equation. Alikhanov [1] proposed the \(L_{2-1\sigma}\) formula for Caputo fractional derivative to achieve second convergence order. Lv and Xu [17] proposed the \(L2\) scheme based on parabolic interpolation, to achieve high-order accuracy. Quan and Wang [31] established the energy stability of high-order \(L2\)-type schemes for time fractional phase-field equations. Shen et al. [36] developed \(L1^{+}\) scheme for constant order Caputo fractional derivative on suitably graded meshes to achieve second-order convergence in time. Ji et al. [12] employed \(L1^{+}\) scheme to solve the time-fractional molecular beam epitaxial models with constant order Caputo fractional derivative. The \(L1^{+}\) scheme can achieve second-order convergence for functions with enough regularity just based on the piecewise linear interpolation. Variable-order fractional differential operators, like their constant-order counterparts, are nonlocal and weakly singular. But the construction of numerical discretization is more complicated, and the numerical analysis is more difficult. Zeng et al. [40] proposed spectral collocation methods for variable-order fractional advection-diffusion equation. Zheng and Wang [46] proposed an \(L1\) scheme for a hidden-memory variable-order space-time fractional diffusion equation. Du et al. [7] developed a temporal second-order finite difference scheme for the variable-order time-fractional wave equation. We proposed and analyzed a first-order numerical method based on the classical \(L1\) scheme, for variable-order TF-MID equation with variable diffusive coefficients [21]. For multi-dimensional problems, the alternating direction implicit (ADI) method is an efficient solution strategy, and it can divide the multi-dimensional problem into a series of independent one-dimensional problems. ADI methods have also been widely used for variants of FPDEs. Ran and Zhang [33] proposed compact ADI difference schemes to solve a class of spatial fractional nonlinear damped wave equations in two space dimensions. Qiu et al. [32] presented the ADI Galerkin finite element method to solve the distributed-order time-fractional mobile/immobile equation in two dimensions. We proposed the QSC method in the ADI framework for two-dimensional space fractional diffusion equation [20]. The historical dependence of the time fractional operators results in the computational complexity \(\mathcal{O}(n^{2})\), with \(n\) the number of the time levels, which is much more expensive than that of the integer-order operator. In order to reduce computational cost, many methods have been proposed to accelerate the evaluation of the fractional derivatives. Jiang et al. [14] employed sum-of-exponentials (SOE) technique with the \(L1\) scheme for constant-order time fractional derivatives, which reduced the computational cost to \(\mathcal{O}(n\log^{2}n)\). Liao et al. [12] applied the SOE technique to speed up the evaluation of the \(L1^{+}\) formula for constant-order fractional derivatives. For variable-order FPDEs, Zhang et al. [41] approached the singular kernel in Caputo fractional derivatives by the exponential-sum-approximation (ESA) technique, which reduced the computational cost to \(\mathcal{O}(n\log^{2}n)\). 
Based on the ESA technique, they developed a fast temporal second-order scheme with the \(L2\)-\(1_{\sigma}\) formula in [42]. In this paper, we first combine the \(L1^{+}\) formula in time discretization with the QSC method in space discretization to propose the QSC-\(L1^{+}\) scheme, for solving the variable-order TF-MID equation in a two-dimensional space domain. Such a scheme is a one-step method, and easy to implement. We will prove that the scheme is unconditionally stable and convergent with the order \(\mathcal{O}(\tau^{\min\{3-\alpha^{*}-\alpha(0),2\}}+\Delta x^{2}+\Delta y^{2})\). Then, we design a novel ADI framework to produce an ADI-QSC-\(L1^{+}\) scheme, where the error caused by alternating directions is much smaller than that caused by the \(L1^{+}\) formula. Numerical tests show that the ADI-QSC-\(L1^{+}\) scheme preserves almost the same observed error as the QSC-\(L1^{+}\) scheme. Furthermore, a fast computation based on the ESA technique with properly chosen parameters for the \(L1^{+}\) formula of the variable-order differential operator is constructed, which leads to the ADI-QSC-\(FL1^{+}\) scheme, and it can reduce the computational cost and the memory requirement effectively. In addition, we employ the optimal QSC method in space by introducing proper perturbations to get the optimal ADI-QSC-\(FL1^{+}\) scheme with fourth-order convergence in space, which yields a numerical solution with the desired accuracy on much coarser spatial meshes. The outline of this paper is as follows. In Section 2, we propose the QSC-\(L1^{+}\) scheme for the variable-order TF-MID equation. Then the unconditional stability and convergence are proved in Section 3. In Section 4, we introduce the ADI framework to develop the ADI-QSC-\(L1^{+}\) scheme, and analyze the unconditional stability and convergence. In Section 5, we respectively consider the fast implementation based on the ESA technique along the time direction and the optimal QSC method in the space domain, to reduce the computational cost. Numerical experiments are presented in Section 6, to verify the theoretical results of the proposed schemes. Finally, conclusions are given in Section 7. Throughout this paper, we use \(C_{i}\) and \(Q_{i}\) to denote positive constants which are independent of the temporal and spatial step sizes. ## 2 Variable-order TF-MID equation and the QSC-\(L1^{+}\) scheme In this section, we consider the numerical solution of the following two-dimensional variable-order TF-MID equation, which is used to model anomalously diffusive transport [44]. \[u_{t}(x,y,t)+{}_{0}^{C}D_{t}^{\alpha(t)}u(x,y,t)=\kappa\mathcal{L}u(x,y,t)+f(x,y,t),\quad(x,y,t)\in\Omega\times(0,T], \tag{2.1}\] subject to the initial condition \[u(x,y,0)=u^{0}(x,y),\quad(x,y)\in\bar{\Omega}=\Omega\cup\partial\Omega, \tag{2.2}\] and the boundary condition \[u(x,y,t)=0,\quad(x,y,t)\in\partial\Omega\times(0,T], \tag{2.3}\] where the constant \(\kappa>0\) is the diffusion coefficient, \(f\) is a given source function, and \(u^{0}\) is the initial function. \(\Omega=(x_{L},x_{R})\times(y_{L},y_{R})\) is a rectangular domain, and \(\partial\Omega\) is the boundary.
\(\mathcal{L}\) is the spatial elliptic operator with \[\mathcal{L}u=\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}.\] \([0,T]\) is the time interval, and \(\alpha(t)\in C[0,T]\) is the variable time fractional order which satisfies the following conditions \[0<\alpha_{*}\leq\alpha(t)\leq\alpha^{*}<1,\ t\in[0,T],\ \ \lim_{t\to 0+}(\alpha(t)-\alpha(0))\ln t\ \text{exists}. \tag{2.4}\] The variable-order Caputo fractional differential operator \({}^{C}_{0}D^{\alpha(t)}_{t}\) is usually used to describe the subdiffusive transport of a large amount of solute particles, and is defined as \[{}^{C}_{0}D^{\alpha(t)}_{t}u(x,y,t):=\int_{0}^{t}\omega_{1-\alpha(t)}(t-s)\partial_{s}u(x,y,s)ds,\] where \[\omega_{1-\beta}(t):=\frac{t^{-\beta}}{\Gamma(1-\beta)}. \tag{2.5}\] The term \(u_{t}(x,y,t)\) in (2.1) describes the Fickian diffusive transport of the remaining portion of the total solute mass. The function \(u(x,y,t)\) is to be determined. For illustrating the singularity of the solution at the initial time, one can define the weighted Banach space involving time \(C^{m}_{\mu}((0,T];\mathcal{X})\) with \(m\geq 2,0\leq\mu<1\) [43], \[\begin{split}& C^{m}_{\mu}((0,T];\mathcal{X}):=\left\{v\in C^{1}([0,T];\mathcal{X}):\|v\|_{C^{m}_{\mu}((0,T];\mathcal{X})}<\infty\right\},\\ &\|v\|_{C^{m}_{\mu}((0,T];\mathcal{X})}:=\|v\|_{C^{1}([0,T];\mathcal{X})}+\sum_{l=2}^{m}\sup_{t\in(0,T]}t^{l-1-\mu}\left\|\frac{\partial^{l}v}{\partial t^{l}}\right\|_{\mathcal{X}}.\end{split}\] In addition, the eigenfunctions \(\{\varphi_{i}\}_{i=1}^{\infty}\) of the Sturm-Liouville problem \[-\mathcal{L}\varphi_{i}(x,y)=\lambda_{i}\varphi_{i}(x,y),\ (x,y)\in\Omega;\quad\varphi_{i}(x,y)=0,\ (x,y)\in\partial\Omega\] form an orthonormal basis in \(L^{2}(\Omega)\). The eigenvalues \(\{\lambda_{i}\}_{i=1}^{\infty}\) are positive and nondecreasing, and tend to \(\infty\) as \(i\to\infty\). By the theory of sectorial operators, we can define the fractional Sobolev space \[\tilde{H}^{\gamma}(\Omega):=\left\{v\in L^{2}(\Omega):|v|_{\tilde{H}^{\gamma}}^{2}:=\sum_{i=1}^{\infty}\lambda_{i}^{\gamma}\left(v,\varphi_{i}\right)^{2}<\infty\right\},\] with the norm \(\|v\|_{\tilde{H}^{\gamma}}:=\left(\|v\|_{L^{2}}^{2}+|v|_{\tilde{H}^{\gamma}}^{2}\right)^{1/2}\). Furthermore, \(\tilde{H}^{\gamma}(\Omega)\) is a subspace of the fractional Sobolev space \(H^{\gamma}(\Omega)\) that can be characterized by [38] \[\tilde{H}^{\gamma}(\Omega)=\left\{v\in H^{\gamma}(\Omega):\mathcal{L}^{s}v(x,y)=0,(x,y)\in\partial\Omega,\,s<\gamma/2\right\},\] and the norms \(|v|_{\tilde{H}^{\gamma}}\) and \(|v|_{H^{\gamma}}\) are equivalent in \(\tilde{H}^{\gamma}\). With conditions (2.4) and suitable assumptions on the data, the following important lemma ensures the regularity and well-posedness of the solution of model (2.1)-(2.3).
**Lemma 2.1** ([39, 45]).: _If condition (2.4) holds and \(u^{0}\in\tilde{H}^{\gamma+2}\), \(f\in H^{d}\left([0,T];\tilde{H}^{\gamma}(\Omega)\right)\) for \(\gamma>1/2\) and \(d>1/2\), then the variable-order TF-MID model (2.1)-(2.3) has a unique solution \(u\in C^{1}\left([0,T];\tilde{H}^{\gamma}(\Omega)\right)\) and_ \[\begin{split}&\|u\|_{C\left([0,T];\tilde{H}^{\gamma}(\Omega)\right)}\leq Q\left(\left\|u^{0}\right\|_{\tilde{H}^{\gamma+2}(\Omega)}+\|f\|_{H^{d}\left([0,T];\tilde{H}^{\gamma}(\Omega)\right)}\right),\\ &\|u\|_{C^{1}\left([0,T];\tilde{H}^{\gamma}(\Omega)\right)}\leq Q\left(\left\|u^{0}\right\|_{\tilde{H}^{\gamma+2}(\Omega)}+\|f\|_{H^{d}\left([0,T];\tilde{H}^{\gamma}(\Omega)\right)}\right)\end{split} \tag{2.6}\] _for \(0\leq s\leq\gamma\). Here \(Q=Q\left(\alpha^{*},\|\alpha\|_{C[0,T]},T,d\right)\)._ _Moreover, suppose that \(u_{0}\in\tilde{H}^{\gamma+6}\), \(f\in H^{1}\left([0,T];\tilde{H}^{s+4}\right)\cap H^{2}\left([0,T];\tilde{H}^{s+2}\right)\cap H^{3}\left([0,T];\tilde{H}^{s}\right)\) for \(s\geq 0\), \(\alpha\in C^{2}[0,T]\), and (2.4) holds. If \(\alpha(0)>0\), we have \(u\in C^{3}\left((0,T];\tilde{H}^{\gamma}(0,L)\right)\cap C_{1-\alpha(0)}^{3}\left((0,T];\tilde{H}^{\gamma}(0,L)\right)\) and_ \[\|u\|_{C_{1-\alpha(0)}^{3}\left((0,T];\tilde{H}^{\gamma}(0,L)\right)}\leq C_{0}\left(\|u_{0}\|_{\tilde{H}^{\gamma+6}(0,L)}+\|f\|_{H^{1}\left(\tilde{H}^{s+4}\right)}+\|f\|_{H^{2}\left(\tilde{H}^{s+2}\right)}+\|f\|_{H^{3}\left(\tilde{H}^{s}\right)}\right). \tag{2.7}\] Next, we will consider the \(L1^{+}\) discretization in time and the QSC discretization in space. ### The \(L1^{+}\) formula for time discretization Given a positive integer \(N\), we define a uniform partition on the time interval \([0,T]\) by \(t_{n}=n\tau\), for \(n=0,1,...,N\) with \(\tau=\frac{T}{N}\). For a function \(v(t)\), the piecewise linear interpolant on the temporal mesh is denoted by \(\Pi v(t)\), and we define \(\theta v\left(t\right)=v\left(t\right)-\Pi v\left(t\right)\). By Taylor's expansion with integral remainder, we can obtain \[\theta v\left(t\right)=\int_{t_{n-1}}^{t}(t-s)\partial_{s}^{2}v\left(s\right)\mathrm{d}s-\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\left(t-t_{n-1}\right)\left(t_{n}-s\right)\partial_{s}^{2}v\left(s\right)\mathrm{d}s,\quad t_{n-1}\leq t\leq t_{n},\ 1\leq n\leq N.\] Based on Lemma 2.1, we have \(\left\|\frac{\partial^{2}v}{\partial t^{2}}\right\|_{\mathcal{X}}\leq C_{1}t^{-\alpha(0)}\). Therefore, it can be verified that \[|\theta v\left(t\right)|\leq C_{2}\tau\left(t_{n}^{1-\alpha(0)}-t_{n-1}^{1-\alpha(0)}\right),\quad t_{n-1}\leq t\leq t_{n},\ 1\leq n\leq N. \tag{2.8}\] We denote by \(v^{n}\) the approximation of \(v(t)\) at time instant \(t=t_{n}\). Let \(\mathfrak{T}=\{v^{n},\ n=0,1,\cdots,N\}\) be a temporal grid function space; we define \[\delta_{t}v^{n-\frac{1}{2}}=\frac{v^{n}-v^{n-1}}{\tau}\quad\text{and}\quad v^{n-\frac{1}{2}}=\frac{v^{n}+v^{n-1}}{2}.\] Then, we take the mean value of \({}_{0}^{C}D_{t}^{\alpha(t)}v(t)\) over \([t_{n-1},t_{n}]\) and approximate the variable order by \(\tilde{\alpha}_{n}\), that is \[\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}{}_{0}^{C}D_{t}^{\alpha(t)}v(t)dt=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}{}_{0}^{C}D_{t}^{\tilde{\alpha}_{n}}v(t)dt+r_{1,n}, \tag{2.9}\] where \(\tilde{\alpha}_{n}:=\alpha_{n-\frac{1}{2}}\), for \(n=1,2,\cdots,N\). Based on the trapezoidal formula, we can verify that \(r_{1,n}=\mathcal{O}\left(\tau^{2}\right)\). For completeness, we give a detailed proof in Appendix A.
Moreover, we have \[\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}{}_{0}^{C}D_{t}^{\tilde{\alpha}_{n}}v(t)dt\] \[=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\int_{0}^{t}\omega_{1-\tilde{\alpha}_{n}}(t-s)\partial_{s}\Pi v(s)dsdt+r_{2,n}\] \[=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\sum_{k=1}^{n-1}\int_{t_{k-1}}^{t_{k}}\frac{(t-s)^{-\tilde{\alpha}_{n}}}{\Gamma(1-\tilde{\alpha}_{n})}\cdot\frac{v^{k}-v^{k-1}}{\tau}dsdt+\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\int_{t_{n-1}}^{t}\frac{(t-s)^{-\tilde{\alpha}_{n}}}{\Gamma(1-\tilde{\alpha}_{n})}\cdot\frac{v^{n}-v^{n-1}}{\tau}dsdt+r_{2,n}. \tag{2.10}\] The local truncation error \(r_{2,n}=\mathcal{O}\left(\tau^{2}t_{n}^{-\tilde{\alpha}_{n}-\alpha(0)}\right)\), based on the truncation error analysis in [12; 36; 47]. The detailed proof will be given in Appendix B. By some fundamental calculations, the integration in (2.9) can be expressed as \[\begin{split}\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}{}^{C}_{0}D_{t}^{\alpha(t)}v(t)dt&=\sum_{k=1}^{n}a_{n-k+1}^{(n)}\left(v^{k}-v^{k-1}\right)+r_{1,n}+r_{2,n}\\ &:=\bar{\delta}_{t}^{\bar{\alpha}_{n}}v^{n-\frac{1}{2}}+r_{1,n}+r_{2,n},\quad n=1,2,\cdots,N,\end{split} \tag{2.11}\] where \[a_{n-k+1}^{(n)}=\frac{1}{\tau^{2}}\int_{t_{n-1}}^{t_{n}}\int_{t_{k-1}}^{\min\{t,t_{k}\}}\omega_{1-\tilde{\alpha}_{n}}(t-s)dsdt,\quad k=1,2,\cdots,n. \tag{2.12}\] The discretization (2.11) for the variable-order Caputo time fractional derivative \({}^{C}_{0}D_{t}^{\alpha(t)}v(t)\), with the coefficients (2.12), is called the \(L1^{+}\) formula. The coefficients \(a_{n-k+1}^{(n)}\), for \(k=1,2,\cdots,n\), satisfy the following lemma, which will be used in the numerical analysis below. **Lemma 2.2** ([12; 36]).: _At time instant \(t=t_{n}\), the coefficients \(\left\{a_{n-k+1}^{(n)},k=1,2,\cdots,n\right\}\) of the \(L1^{+}\) scheme satisfy_ \[a_{2}^{(n)}>a_{3}^{(n)}>\cdots>a_{n}^{(n)}>0.\]
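The coefficients (2.12) admit a simple closed form once the double integral is evaluated through \(\omega_{3-\tilde{\alpha}_{n}}\) (this is the computation carried out in the proofs of Section 3). The sketch below, with function names of our own choosing, evaluates them and numerically checks the monotonicity stated in Lemma 2.2.

```python
from math import gamma

def l1p_coefficients(alpha_n, tau, n):
    """Coefficients a^{(n)}_{j}, j = n-k+1 = 1..n, of the L1+ formula (2.12),
    obtained from second differences of omega_{3-alpha}(t) = t^{2-alpha}/Gamma(3-alpha)."""
    c = tau ** (-alpha_n) / gamma(3.0 - alpha_n)
    a = [c]                                    # a_1^{(n)}
    for j in range(2, n + 1):                  # a_j^{(n)} for j >= 2
        a.append(c * (j ** (2 - alpha_n) - 2 * (j - 1) ** (2 - alpha_n)
                      + (j - 2) ** (2 - alpha_n)))
    return a                                   # a[j-1] = a_j^{(n)}

a = l1p_coefficients(alpha_n=0.6, tau=0.01, n=50)
# Lemma 2.2: a_2^{(n)} > a_3^{(n)} > ... > a_n^{(n)} > 0
assert all(a[j] > a[j + 1] > 0 for j in range(1, len(a) - 1))
```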
### The QSC method for space discretization Let \(M_{x}\) and \(M_{y}\) be two positive integers. We define the uniform spatial partitions of \([x_{L},x_{R}]\) and \([y_{L},y_{R}]\) as \[\triangle_{x}:=\{x_{L}=x_{0}<x_{1}<\ldots<x_{M_{x}}=x_{R}\},\quad\triangle_{y}:=\{y_{L}=y_{0}<y_{1}<\ldots<y_{M_{y}}=y_{R}\},\] respectively, with corresponding mesh sizes \(\Delta x=\frac{x_{R}-x_{L}}{M_{x}}\) and \(\Delta y=\frac{y_{R}-y_{L}}{M_{y}}\). Furthermore, let \(\triangle:=\triangle_{x}\times\triangle_{y}\) be the mesh partition of \(\bar{\Omega}\). Define the quadratic spline spaces along each spatial direction as \[\mathcal{V}_{x} :=\left\{v\in\mathcal{C}^{1}\left(x_{L},x_{R}\right),\;\;v|_{[x_{i-1},x_{i}]}\in\mathbf{P}_{2}(\triangle_{x}),i=1,2,\ldots,M_{x}\right\},\] \[\mathcal{V}_{y} :=\left\{v\in\mathcal{C}^{1}\left(y_{L},y_{R}\right),\;\;v|_{[y_{j-1},y_{j}]}\in\mathbf{P}_{2}(\triangle_{y}),j=1,2,\ldots,M_{y}\right\},\] where \(\mathbf{P}_{2}(\cdot)\) represents the set of quadratic polynomials in a single variable. Besides, let \[\mathcal{V}_{x}^{0}:=\left\{v\in\mathcal{V}_{x},\;v\left(x_{L}\right)=v\left(x_{R}\right)=0\right\},\quad\mathcal{V}_{y}^{0}:=\left\{v\in\mathcal{V}_{y},\;v\left(y_{L}\right)=v\left(y_{R}\right)=0\right\},\] and denote by \(\mathcal{V}^{0}:=\mathcal{V}_{x}^{0}\otimes\mathcal{V}_{y}^{0}\) the space of piecewise biquadratic polynomials with respect to the spatial partition \(\triangle\), which satisfy the homogeneous Dirichlet boundary conditions (2.3). Now we consider the basis functions for the space \(\mathcal{V}^{0}\). First, let \[\phi(x)=\frac{1}{2}\begin{cases}x^{2},&0\leq x\leq 1,\\ -2(x-1)^{2}+2(x-1)+1,&1\leq x\leq 2,\\ (3-x)^{2},&2\leq x\leq 3,\\ 0,&\text{elsewhere}.\end{cases}\] We define the quadratic B-splines \[\phi_{j}(x)=\phi\left(\frac{x-x_{L}}{\Delta x}-j+2\right),\quad j=0,1,\cdots,M_{x}+1, \tag{2.13}\] and choose \(\left\{\phi_{j}(x),\ j=0,\cdots,M_{x}+1\right\}\) as the basis functions of \(\mathcal{V}_{x}^{0}\). Similarly, we can define the basis functions \(\left\{\phi_{j}(y),\ j=0,\cdots,M_{y}+1\right\}\) for \(\mathcal{V}_{y}^{0}\) by replacing the variable \(x\) with the variable \(y\). Then, the basis functions of \(\mathcal{V}^{0}\) can be defined as the tensor product of the basis functions for the spaces \(\mathcal{V}_{x}^{0}\) and \(\mathcal{V}_{y}^{0}\). Thus, the quadratic spline solution \(u_{h}^{n}\in\mathcal{V}^{0}\) of the model (2.1) can be represented as \[u_{h}^{n}(x,y)=\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}c_{ij}^{n}\phi_{i}(x)\phi_{j}(y),\quad n=1,\cdots,N, \tag{2.14}\] where the coefficients \(c_{ij}^{n}\) are degrees of freedom (DOFs). In order to determine the DOFs, we define the midpoints of \(\triangle\) as \[\xi=\left\{\left(\xi_{i}^{x},\xi_{j}^{y}\right),\ i=1,2,\ldots,M_{x};\ j=1,2,\cdots,M_{y}\right\},\] where \(\left\{\xi_{i}^{x}=\frac{1}{2}\left(x_{i-1}+x_{i}\right),\ i=1,2,\cdots,M_{x}\right\}\) and \(\left\{\xi_{j}^{y}=\frac{1}{2}(y_{j-1}+y_{j}),\ j=1,2,\cdots,M_{y}\right\}\). Denote \(\partial\xi=\left\{\left(\xi_{i}^{x},\xi_{j}^{y}\right),\ i\in\left\{0,M_{x}+1\right\}\ \text{or}\ \ j\in\left\{0,M_{y}+1\right\}\right\}\), where \(\xi_{0}^{x}=x_{0}\), \(\xi_{M_{x}+1}^{x}=x_{M_{x}}\) and \(\xi_{0}^{y}=y_{0}\), \(\xi_{M_{y}+1}^{y}=y_{M_{y}}\) are boundary points of \(\triangle_{x}\) and \(\triangle_{y}\), respectively. Thus, we choose \(\bar{\xi}=\xi\cup\partial\xi\) as the collocation points. For convenience, we also define the index sets \(\Lambda=\left\{(i,j),\ (\xi_{i}^{x},\xi_{j}^{y})\in\xi\right\}\), \(\partial\Lambda=\left\{(i,j),\ (\xi_{i}^{x},\xi_{j}^{y})\in\partial\xi\right\}\) and \(\bar{\Lambda}=\Lambda\cup\partial\Lambda\). Based on the definition (2.13), we can get the following lemma by some fundamental calculations. **Lemma 2.3** ([23]).: _(I). For the basis function \(\phi_{0}(x)\), we have_ \[\phi_{0}\left(\xi_{i}^{x}\right)=\frac{1}{8}\begin{cases}4,&i=0,\\ 1,&i=1,\\ 0,&else,\end{cases}\quad\phi_{0}^{\prime\prime}\left(\xi_{i}^{x}\right)=\frac{1}{\Delta x^{2}}\begin{cases}1,&i=1,\\ 0,&i=2,3,\ldots,M_{x}+1.\end{cases}\] _(II). For the basis functions \(\phi_{j}(x)\) with \(j=1,\ldots,M_{x}\), we have_ \[\phi_{j}\left(\xi_{0}^{x}\right)=\frac{1}{8}\begin{cases}4,&j=1,\\ 0,&else,\end{cases}\quad\phi_{j}\left(\xi_{M_{x}+1}^{x}\right)=\frac{1}{8}\begin{cases}4,&j=M_{x},\\ 0,&else,\end{cases}\] _and for \(i=1,\ldots,M_{x}\),_ \[\phi_{j}\left(\xi_{i}^{x}\right)=\frac{1}{8}\begin{cases}1,&|i-j|=1,\\ 6,&i=j,\\ 0,&else,\end{cases}\quad\phi_{j}^{\prime\prime}\left(\xi_{i}^{x}\right)=\frac{1}{\Delta x^{2}}\begin{cases}1,&|i-j|=1,\\ -2,&i=j,\\ 0,&else.\end{cases}\] _(III). For the basis function \(\phi_{M_{x}+1}(x)\), we have_ \[\phi_{M_{x}+1}\left(\xi_{i}^{x}\right)=\frac{1}{8}\begin{cases}4,&i=M_{x}+1,\\ 1,&i=M_{x},\\ 0,&else,\end{cases}\quad\phi_{M_{x}+1}^{\prime\prime}\left(\xi_{i}^{x}\right)=\frac{1}{\Delta x^{2}}\begin{cases}1,&i=M_{x},\\ 0,&i=0,1,\ldots,M_{x}-1.\end{cases}\] The properties in Lemma 2.3 also hold for the basis functions \(\left\{\phi_{j}(y),\ j=0,\cdots,M_{y}+1\right\}\).
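As a quick sanity check of the values collected in Lemma 2.3, the short sketch below evaluates the quadratic B-splines (2.13) and their second derivatives at the collocation points; the grid size and the helper names are our own choices for the illustration.

```python
import numpy as np

xL, xR, Mx = 0.0, 1.0, 8
dx = (xR - xL) / Mx

def phi(x):
    """Reference quadratic B-spline on [0, 3]."""
    x = np.asarray(x, dtype=float)
    out = np.where((0 <= x) & (x <= 1), 0.5 * x**2, 0.0)
    out = np.where((1 < x) & (x <= 2), 0.5 * (-2 * (x - 1)**2 + 2 * (x - 1) + 1), out)
    return np.where((2 < x) & (x <= 3), 0.5 * (3 - x)**2, out)

def phi_dd(x):
    """Its second derivative: 1 on [0,1], -2 on (1,2], 1 on (2,3], 0 elsewhere."""
    x = np.asarray(x, dtype=float)
    out = np.where((0 <= x) & (x <= 1), 1.0, 0.0)
    out = np.where((1 < x) & (x <= 2), -2.0, out)
    return np.where((2 < x) & (x <= 3), 1.0, out)

def phi_j(j, x):                  # basis function (2.13) on the partition of [xL, xR]
    return phi((np.asarray(x) - xL) / dx - j + 2)

# collocation points: the two boundary points and the Mx midpoints
xi = np.concatenate(([xL], xL + (np.arange(1, Mx + 1) - 0.5) * dx, [xR]))
j = 4                             # an interior basis function
print(8 * phi_j(j, xi))                        # pattern ..., 1, 6, 1, ... of Lemma 2.3 (II)
print(phi_dd((xi - xL) / dx - j + 2))          # equals dx^2 * phi_j''(xi): ..., 1, -2, 1, ...
```

Running it prints the rows \((0,\dots,1,6,1,\dots,0)/8\) and \((0,\dots,1,-2,1,\dots,0)/\Delta x^{2}\) that assemble the collocation operators used next.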
Taking the collocation points into (2.14), together with Lemma 2.3, we can get for \(k=0,\cdots,M_{x}+1,\ l=0,\cdots,M_{y}+1\) that \[u_{h}^{n}\left(\xi_{k}^{x},\xi_{l}^{y}\right)=\sum_{i=\max\{k-1,0\}}^{\min\{k+1,M_{x}+1\}}\sum_{j=\max\{l-1,0\}}^{\min\{l+1,M_{y}+1\}}c_{ij}^{n}\phi_{i}\left(\xi_{k}^{x}\right)\phi_{j}\left(\xi_{l}^{y}\right):=\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{kl}^{n},\] \[\frac{\partial^{2}}{\partial x^{2}}u_{h}^{n}\left(\xi_{k}^{x},\xi_{l}^{y}\right)=\sum_{i=\max\{k-1,0\}}^{\min\{k+1,M_{x}+1\}}\sum_{j=\max\{l-1,0\}}^{\min\{l+1,M_{y}+1\}}c_{ij}^{n}\phi_{i}^{\prime\prime}\left(\xi_{k}^{x}\right)\phi_{j}\left(\xi_{l}^{y}\right):=\mathbf{\eta}_{x}\mathbf{\theta}_{y}c_{kl}^{n},\] \[\frac{\partial^{2}}{\partial y^{2}}u_{h}^{n}\left(\xi_{k}^{x},\xi_{l}^{y}\right)=\sum_{i=\max\{k-1,0\}}^{\min\{k+1,M_{x}+1\}}\sum_{j=\max\{l-1,0\}}^{\min\{l+1,M_{y}+1\}}c_{ij}^{n}\phi_{i}\left(\xi_{k}^{x}\right)\phi_{j}^{\prime\prime}\left(\xi_{l}^{y}\right):=\mathbf{\eta}_{y}\mathbf{\theta}_{x}c_{kl}^{n},\] where the operators \(\mathbf{\theta}_{x}\) and \(\mathbf{\eta}_{x}\) are defined as \[\mathbf{\theta}_{x}c_{k,l}^{n}=\frac{1}{8}\begin{cases}4c_{0,l}^{n}+4c_{1,l}^{n},&k=0,\\ c_{k-1,l}^{n}+6c_{k,l}^{n}+c_{k+1,l}^{n},&k=1,2,\cdots,M_{x},\\ 4c_{M_{x},l}^{n}+4c_{M_{x}+1,l}^{n},&k=M_{x}+1,\end{cases} \tag{2.15}\] \[\mathbf{\eta}_{x}c_{k,l}^{n}=\frac{1}{\Delta x^{2}}\begin{cases}0,&k=0,M_{x}+1,\\ (c_{k-1,l}^{n}-2c_{k,l}^{n}+c_{k+1,l}^{n}),&k=1,2,\cdots,M_{x}.\end{cases} \tag{2.16}\] Moreover, we define \[\mathbf{\vartheta}_{x}c_{k,l}^{n}=\frac{1}{\Delta x}\left(c_{k,l}^{n}-c_{k-1,l}^{n}\right),\quad k=1,2,\cdots,M_{x}+1, \tag{2.17}\] then we have \[\mathbf{\eta}_{x}c_{k,l}^{n}=\frac{1}{\Delta x}\left(\mathbf{\vartheta}_{x}c_{k+1,l}^{n}-\mathbf{\vartheta}_{x}c_{k,l}^{n}\right),\quad k=1,2,\cdots,M_{x}.\] In addition, the operators \(\mathbf{\theta}_{y}\), \(\mathbf{\eta}_{y}\) and \(\mathbf{\vartheta}_{y}\) are defined along the \(y\) direction, and have similar expressions to \(\mathbf{\theta}_{x}\), \(\mathbf{\eta}_{x}\) and \(\mathbf{\vartheta}_{x}\), just with \(\Delta x\) replaced by \(\Delta y\). Next, we will consider the full discretization scheme based on the above approximations. ### The QSC-L1\({}^{+}\) scheme We average model (2.1) over the time subinterval \([t_{n-1},t_{n}]\) to get \[\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}u_{t}(x,y,t)dt+\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}{}^{C}_{0}D_{t}^{\alpha(t)}u(x,y,t)dt=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\kappa\mathcal{L}u(x,y,t)dt+\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}f(x,y,t)dt. \tag{2.18}\] It can be verified for the first term on the left hand side of (2.18) that \[\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}u_{t}(x,y,t)dt=\frac{u^{n}(x,y)-u^{n-1}(x,y)}{\tau}=\delta_{t}u^{n-\frac{1}{2}}(x,y). \tag{2.19}\]
For the first term on the right hand side of (2.18), we have \[\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\kappa\mathcal{L}u(x,y,t)dt=\kappa\mathcal{L}u^{n-\frac{1}{2}}(x,y)+r_{3,n}, \tag{2.20}\] where \[r_{3,n} =\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\frac{\kappa}{2}\mathcal{L}[u\left(x,y,t\right)-u^{n-\frac{1}{2}}(x,y)]dt\] \[=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\frac{\kappa}{2}\mathcal{L}[u\left(x,y,t\right)-\Pi u(x,y,t)]dt=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\frac{\kappa}{2}\mathcal{L}\theta u\left(x,y,t\right)dt,\] and satisfies \[|r_{3,n}|\leq\frac{\kappa}{2\tau}\int_{t_{n-1}}^{t_{n}}|\mathcal{L}\theta u\left(x,y,t\right)|dt\leq C_{3}\tau\left(t_{n}^{1-\alpha(0)}-t_{n-1}^{1-\alpha(0)}\right).\] Similarly, for the second term on the right hand side of (2.18), we have \[\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}f(x,y,t)dt=f^{n-\frac{1}{2}}(x,y)+r_{4,n}, \tag{2.21}\] where \[r_{4,n}=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\frac{1}{2}f_{tt}\left(x,y,\rho_{1}\right)\left(t-t_{n}\right)\left(t-t_{n-1}\right)dt,\quad\rho_{1}\in(t_{n-1},t_{n}),\] and satisfies \[|r_{4,n}|\leq\frac{C_{4}}{2\tau}\int_{t_{n-1}}^{t_{n}}\left|\left(t-t_{n}\right)\left(t-t_{n-1}\right)\right|dt=\mathcal{O}\left(\tau^{2}\right),\] where \(C_{4}\) is a bound of \(f_{tt}(x,y,t)\). Based on equations (2.19)-(2.21), together with the discretization (2.11), equation (2.18) can be rewritten as \[\delta_{t}u^{n-\frac{1}{2}}(x,y)+\bar{\delta}_{t}^{\bar{\alpha}_{n}}u^{n-\frac{1}{2}}(x,y)=\kappa\mathcal{L}u^{n-\frac{1}{2}}(x,y)+f^{n-\frac{1}{2}}(x,y)+R^{n}, \tag{2.22}\] where \[R^{n}=r_{1,n}+r_{2,n}+r_{3,n}+r_{4,n}=\mathcal{O}\left(\tau^{2}t_{n}^{-\bar{\alpha}_{n}-\alpha(0)}+\tau\left(t_{n}^{1-\alpha(0)}-t_{n-1}^{1-\alpha(0)}\right)+\tau^{2}\right). \tag{2.23}\] In order to find the numerical solution of model (2.1) in the quadratic spline space, we take \(u_{h}^{n}(x,y)\) with the form (2.14) into (2.22) and drop the truncation errors, which leads to \[\begin{split}&\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}\delta_{t}c_{ij}^{n-\frac{1}{2}}\phi_{i}(x)\phi_{j}(y)+\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}\bar{\delta}_{t}^{\bar{\alpha}_{n}}c_{ij}^{n-\frac{1}{2}}\phi_{i}(x)\phi_{j}(y)\\ &=\kappa\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}c_{ij}^{n-\frac{1}{2}}\left[\phi_{i}^{\prime\prime}(x)\phi_{j}(y)+\phi_{i}(x)\phi_{j}^{\prime\prime}(y)\right]+f^{n-\frac{1}{2}}(x,y),\quad(x,y)\in\Omega,\ 1\leq n\leq N,\end{split} \tag{2.24}\] with the initial condition \[\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}c_{ij}^{0}\phi_{i}(x)\phi_{j}(y)=u^{0}(x,y),\quad(x,y)\in\bar{\Omega}, \tag{2.25}\] and the boundary condition \[\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}c_{ij}^{n}\phi_{i}(x)\phi_{j}(y)=0,\quad(x,y)\in\partial\Omega,\ 1\leq n\leq N. \tag{2.26}\] Now taking the collocation points \((\xi_{i}^{x},\xi_{j}^{y})\) for \((i,j)\in\bar{\Lambda}\) into (2.24)-(2.26), respectively, we directly get the QSC-\(L1^{+}\) scheme, \[\mathbf{\delta}_{t}\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{n-\frac{1}{2}}+\bar{\mathbf{\delta}}_{t}^{\bar{\alpha}_{n}}\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{n-\frac{1}{2}}=\kappa(\mathbf{\eta}_{x}\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{\theta}_{x})c_{ij}^{n-\frac{1}{2}}+f_{ij}^{n-\frac{1}{2}},\quad(i,j)\in\bar{\Lambda},\quad 1\leq n\leq N, \tag{2.27}\] with the initial condition \[\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{0}=u_{ij}^{0},\quad(i,j)\in\bar{\Lambda}, \tag{2.28}\] and the boundary condition \[\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{n}=0,\quad(i,j)\in\partial\Lambda,\quad 1\leq n\leq N.
\tag{2.29}\] Next, we will analyze the stability and convergence of the scheme. ## 3 Numerical analysis of the QSC-\(L1^{+}\) scheme Before the numerical analysis, we need some definitions of the inner products and norms. Define \(\mathcal{M}_{h}=\{u,\ u=\{u_{i,j},\ (i,j)\in\bar{\Lambda}\}\}\) as the spatial grid function space with respect to the partition \(\triangle\), and \(\hat{\mathcal{M}}_{h}=\{u\in\mathcal{M}_{h},\ u_{ij}=0\ \text{for}\ (i,j)\in\partial\Lambda\}\). For any \(u,v\in\hat{\mathcal{M}}_{h}\), we define the discrete inner product \[(u,v):=\Delta x\Delta y\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}u _{ij}v_{ij},\quad(\mathbf{\vartheta}_{x}u,\mathbf{\vartheta}_{x}v):=\Delta x\Delta y \sum_{i=1}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}\left(\mathbf{\vartheta}_{x}u_{ij}\right) \left(\mathbf{\vartheta}_{x}v_{ij}\right),\] \[\left(\mathbf{\vartheta}_{y}u,\mathbf{\vartheta}_{y}v\right):=\Delta x \Delta y\sum_{i=0}^{M_{x}+1}\sum_{j=1}^{M_{y}+1}\left(\mathbf{\vartheta}_{y}u_{ij} \right)\left(\mathbf{\vartheta}_{y}v_{ij}\right).\] Thus, the corresponding discrete norms can be obtained as \[\|u\|:=\ \sqrt{(u,u)},\quad|u|_{1x}:=\sqrt{\left(\mathbf{\vartheta}_{x}u,\mathbf{ \vartheta}_{x}u\right)}\,\quad|u|_{1y}:=\ \sqrt{\left(\mathbf{\vartheta}_{y}u,\mathbf{ \vartheta}_{y}u\right)}\.\] Recalling that \[\bar{\mathbf{\delta}}_{t}^{\bar{\alpha}_{n}}v^{n-\frac{1}{2}}=\sum_{k=1}^{n}a_{n-k +1}^{(n)}\left(v^{k}-v^{k-1}\right),\] we can reformulate scheme (2.27) as \[(1+\tau a_{1}^{(n)})\left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{n}-\mathbf{\theta }_{x}\mathbf{\theta}_{y}c_{ij}^{n-1}\right)+\tau\sum_{k=1}^{n-1}a_{n-k+1}^{(n)} \left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{k}-\mathbf{\theta}_{x}\mathbf{\theta}_{y}c _{ij}^{k-1}\right)=\tau\kappa(\mathbf{\eta}_{x}\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{ \theta}_{x})c_{ij}^{n-\frac{1}{2}}+\tau f_{ij}^{n-\frac{1}{2}}, \tag{3.1}\] for \((i,j)\in\bar{\Lambda}\) and \(1\leq n\leq N\). For convenience, we define the coefficients of (3.1) uniformly as \[\begin{cases}b_{1}^{(n)}=1+\tau a_{1}^{(n)},\\ b_{n-k+1}^{(n)}=\tau a_{n-k+1}^{(n)},\quad k=1,\ 2,\ \cdots,\ n-1,\end{cases}\] then the QSC-\(L1^{+}\) scheme (2.27) can be further rewritten as \[\sum_{k=1}^{n}b_{n-k+1}^{(n)}\left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{k}- \mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{k-1}\right)=\tau\kappa\left(\mathbf{\eta}_{x }\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{\theta}_{x}\right)c_{ij}^{n-\frac{1}{2}}+ \tau f_{ij}^{n-\frac{1}{2}},\quad(i,j)\in\bar{\Lambda},\ 1\leq n\leq N, \tag{3.2}\] Based on Lemma 2.2, the new coefficients \(\left\{b_{n-k+1}^{(n)},k=1,2,\cdots,n\right\}\) in scheme (3.2) satisfy the following lemma. 
**Lemma 3.1**: _At time instant \(t=t_{n}\), if \(\tau\leq 1\), the coefficients \(\left\{b_{n-k+1}^{(n)},k=1,2,\cdots,n\right\}\) in scheme (3.2) satisfy_ \[b_{1}^{(n)}>b_{2}^{(n)}>\cdots>b_{n}^{(n)}>0.\] **Proof.** According to the definition of \(a_{n-k+1}^{(n)}\), we have \[a_{1}^{(n)}=\frac{1}{\tau^{2}}\int_{t_{n-1}}^{t_{n}}\int_{t_{n-1}}^{t}\omega_{1 -\tilde{\alpha}_{n}}(t-s)dsdt=\frac{1}{\tau^{2}}\omega_{3-\tilde{\alpha}_{n}} (\tau)=\frac{\tau^{-\tilde{\alpha}_{n}}}{\Gamma\left(3-\tilde{\alpha}_{n} \right)},\] and \[a_{2}^{(n)}=\frac{1}{\tau^{2}}\int_{t_{n-1}}^{t_{n}}\int_{t_{n-2}}^{t_{n-1}} \omega_{1-\tilde{\alpha}_{n}}(t-s)dsdt=\frac{\tau^{-\tilde{\alpha}_{n}}}{ \Gamma\left(3-\tilde{\alpha}_{n}\right)}\left(2^{2-\tilde{\alpha}_{n}}-2 \right).\] Thus, we have \[b_{1}^{(n)}-b_{2}^{(n)}=1+\tau(a_{1}^{(n)}-a_{2}^{(n)})=1+\frac{\tau^{1- \tilde{\alpha}_{n}}}{\Gamma\left(3-\tilde{\alpha}_{n}\right)}\left(3-2^{2- \tilde{\alpha}_{n}}\right).\] When \(\tilde{\alpha}_{n}\in\left[2-\frac{ln3}{ln2},1\right)\), we can get \(3-2^{2-\tilde{\alpha}_{n}}\geq 0\), which verifies that \(b_{1}^{(n)}>b_{2}^{(n)}\). When \(\tilde{\alpha}_{n}\in\left(0,2-\frac{ln3}{ln2}\right)\), we have \(-1<3-2^{2-\tilde{\alpha}_{n}}<0\). Since \(\Gamma\left(3-\tilde{\alpha}_{n}\right)\geq 1\), we have \(\frac{\tau^{1-\tilde{\alpha}_{n}}}{\Gamma\left(3-\tilde{\alpha}_{n}\right)}\leq 1\), with \(\tau\leq 1\), which leads to \(b_{1}^{(n)}>b_{2}^{(n)}\). The monotonicity of series \(\left\{b_{k}^{(n)},\ k=2,3,\cdots,n\right\}\) can be found in Lemma 2.2. Lemma 3.1 means that all the coefficients of the QSC-\(L1^{+}\) scheme (3.2) are monotonic, and this property plays an important role in the following numerical analysis. ### Auxiliary lemmas To proceed with the analysis of stability, we need some auxiliary lemmas. We first investigate some properties of the coefficients in the QSC-\(L1^{+}\) scheme (3.2), which are exhibited in the following lemmas. **Lemma 3.2**: _Suppose that \(\alpha^{\prime}(t)\leq 0\), and \(\alpha^{\prime}(t)\) is uniformly bounded for \(0\leq t\leq T\), then for any fixed \(n\) with \(2\leq n\leq N\), we have_ \[b_{n-k}^{(n)}\leq\left(1+C_{5}\tau\right)b_{n-k}^{(n-1)},\quad k=1,\cdots,n-1,\] where \(C_{5}\) is a positive constant. **Proof.** The proof is generally divided into two parts. In the first part, we consider the case \(k=1,\cdots,n-2\), and in the second part, we consider the case \(k=n-1\). 
(I) For \(k=1,\cdots,n-2\), we have \(t_{k+1}\leq t_{n-1}\), and \[\begin{split} b_{n-k}^{(n)}&=\tau a_{n-k}^{(n)}=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\int_{t_{k}}^{t_{k+1}}\omega_{1-\tilde{\alpha}_{n}}(t-s)dsdt\\ &=\frac{1}{\tau}\left[\omega_{3-\tilde{\alpha}_{n}}\left(t_{n}-t_{k}\right)-\omega_{3-\tilde{\alpha}_{n}}\left(t_{n-1}-t_{k}\right)-\omega_{3-\tilde{\alpha}_{n}}\left(t_{n}-t_{k+1}\right)+\omega_{3-\tilde{\alpha}_{n}}\left(t_{n-1}-t_{k+1}\right)\right]\\ &=\frac{\tau^{2-\tilde{\alpha}_{n}}}{\tau\Gamma(3-\tilde{\alpha}_{n})}\left[(n-k)^{2-\tilde{\alpha}_{n}}-2(n-k-1)^{2-\tilde{\alpha}_{n}}+(n-k-2)^{2-\tilde{\alpha}_{n}}\right].\end{split} \tag{3.3}\] Then the quotient of \(b_{n-k}^{(n)}\) and \(b_{n-k}^{(n-1)}\) can be simplified as \[\frac{b_{n-k}^{(n)}}{b_{n-k}^{(n-1)}}=\frac{\Gamma\left(3-\tilde{\alpha}_{n-1}\right)}{\Gamma\left(3-\tilde{\alpha}_{n}\right)}\cdot\frac{t_{n-k}^{2-\tilde{\alpha}_{n}}-2t_{n-k-1}^{2-\tilde{\alpha}_{n}}+t_{n-k-2}^{2-\tilde{\alpha}_{n}}}{t_{n-k}^{2-\tilde{\alpha}_{n-1}}-2t_{n-k-1}^{2-\tilde{\alpha}_{n-1}}+t_{n-k-2}^{2-\tilde{\alpha}_{n-1}}}:=A^{(n)}\cdot B^{(n)}.\] We first investigate the quantity \(A^{(n)}\). Notice the fact that \(0<\alpha(t)\leq\alpha^{*}<1\), and \(\Gamma(x)\) is an increasing and differentiable function on the interval \([3-\alpha^{*},3]\). Denote \(\Gamma_{*}=\min_{3-\alpha^{*}\leq s\leq 3}|\Gamma(s)|\) and \(\Gamma_{*}^{\prime}=\max_{3-\alpha^{*}\leq s\leq 3}|\Gamma^{\prime}(s)|\). Using a Taylor expansion, we have \[A^{(n)}=\frac{\Gamma\left(3-\tilde{\alpha}_{n}\right)+\Gamma^{\prime}\left(\xi_{n}\right)\left(\tilde{\alpha}_{n}-\tilde{\alpha}_{n-1}\right)}{\Gamma\left(3-\tilde{\alpha}_{n}\right)}=1+\frac{\Gamma^{\prime}\left(\xi_{n}\right)}{\Gamma\left(3-\tilde{\alpha}_{n}\right)}\cdot\alpha^{\prime}(\eta_{n})\cdot\tau\leq 1+\tau\left\|\alpha^{\prime}\right\|_{\infty}\cdot\frac{\Gamma_{*}^{\prime}}{\Gamma_{*}}\leq 1+C_{6}\tau,\] where \(\xi_{n}\in\left(3-\tilde{\alpha}_{n-1},3-\tilde{\alpha}_{n}\right)\) and \(\eta_{n}\in\left(t_{n-1},t_{n}\right)\).
Next, for the quantity \(B^{(n)}\), it can be verified that \[\begin{split} B^{(n)}&=\frac{\left[t_{n-k}^{2-\tilde{\alpha}_{n}}-t_{n-k-1}^{2-\tilde{\alpha}_{n}}\right]-\left[t_{n-k-1}^{2-\tilde{\alpha}_{n}}-t_{n-k-2}^{2-\tilde{\alpha}_{n}}\right]}{\left[t_{n-k}^{2-\tilde{\alpha}_{n-1}}-t_{n-k-1}^{2-\tilde{\alpha}_{n-1}}\right]-\left[t_{n-k-1}^{2-\tilde{\alpha}_{n-1}}-t_{n-k-2}^{2-\tilde{\alpha}_{n-1}}\right]}\\ &=\frac{\left(2-\tilde{\alpha}_{n}\right)\left(1-\tilde{\alpha}_{n}\right)}{\left(2-\tilde{\alpha}_{n-1}\right)\left(1-\tilde{\alpha}_{n-1}\right)}\cdot\frac{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n}}dsdx}{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n-1}}dsdx}:=B_{1}^{(n)}\cdot B_{2}^{(n)}.\end{split} \tag{3.4}\] For the term \(B_{1}^{(n)}\) in (3.4), since \(\alpha^{\prime}(t)\) is bounded and \(0<\alpha(t)\leq\alpha^{*}<1\), we can obtain the following estimate \[\begin{split} B_{1}^{(n)}&=\frac{\left(2-\tilde{\alpha}_{n}\right)\left(1-\tilde{\alpha}_{n}\right)}{\left(2-\tilde{\alpha}_{n-1}\right)\left(1-\tilde{\alpha}_{n-1}\right)}=\left(1+\frac{\tilde{\alpha}_{n-1}-\tilde{\alpha}_{n}}{2-\tilde{\alpha}_{n-1}}\right)\cdot\left(1+\frac{\tilde{\alpha}_{n-1}-\tilde{\alpha}_{n}}{1-\tilde{\alpha}_{n-1}}\right)\\ &\leq\left(1+\frac{\tau\left\|\alpha^{\prime}\right\|_{\infty}}{2-\tilde{\alpha}_{n-1}}\right)\cdot\left(1+\frac{\tau\left\|\alpha^{\prime}\right\|_{\infty}}{1-\tilde{\alpha}_{n-1}}\right)\leq 1+C_{7}\tau.\end{split} \tag{3.5}\] For the term \(B_{2}^{(n)}\) in (3.4), we define a continuous auxiliary function \[h_{1}(z)=s^{-z},\quad s>0.\] Next, we discuss \(B_{2}^{(n)}\) separately according to the value of \(s\). If \(s\leq 1\), since the function \(h_{1}(z)\) is increasing and \(\alpha(t)\) is decreasing, we can get \(h_{1}(\tilde{\alpha}_{n})\leq h_{1}(\tilde{\alpha}_{n-1})\). Thus, we have \[B_{2}^{(n)}=\frac{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n}}dsdx}{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n-1}}dsdx}\leq\frac{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n-1}}dsdx}{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n-1}}dsdx}=1. \tag{3.6}\] If \(s>1\), we take the derivative of \(h_{1}(z)\), \[h_{1}^{\prime}(z)=-\ln s\cdot s^{-z}.\] Since \(h_{1}^{\prime}(z)\) is increasing when \(z>0\), we can get \[h_{1}\left(\tilde{\alpha}_{n}\right)-h_{1}\left(\tilde{\alpha}_{n-1}\right)=h_{1}^{\prime}(\gamma_{n})\left(\tilde{\alpha}_{n}-\tilde{\alpha}_{n-1}\right)=\ln s\cdot\left(\tilde{\alpha}_{n-1}-\tilde{\alpha}_{n}\right)s^{-\gamma_{n}}\leq\ln s\cdot\left\|\alpha^{\prime}\right\|_{\infty}\cdot\tau\cdot s^{-\tilde{\alpha}_{n}}, \tag{3.7}\] where \(\gamma_{n}\in(\tilde{\alpha}_{n},\tilde{\alpha}_{n-1})\). Based on (3.7), we have \[B_{2}^{(n)} =\frac{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}\left(s^{-\tilde{\alpha}_{n}}-s^{-\tilde{\alpha}_{n-1}}\right)dsdx+\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n-1}}dsdx}{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n-1}}dsdx}\] \[\leq 1+\frac{\ln T\cdot\|\alpha^{\prime}\|_{\infty}\cdot\tau\cdot\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n}}dsdx}{\int_{t_{n-k-1}}^{t_{n-k}}\int_{x-\tau}^{x}s^{-\tilde{\alpha}_{n-1}}dsdx}=1+C_{8}\tau B_{2}^{(n)},\] which leads to \[B_{2}^{(n)}\leq\frac{1}{1-C_{8}\tau}\leq 1+C_{9}\tau.
\tag{3.8}\] Combining (3.5), (3.6) and (3.8), we can obtain \[B^{(n)}\leq 1+C_{10}\tau.\] (II) For \(k=n-1\), we aim to prove \(a_{1}^{(n)}\leq\left(1+C_{5}\tau\right)a_{1}^{(n-1)}\), that is \[\frac{a_{1}^{(n)}}{a_{1}^{(n-1)}}=\tau^{\tilde{\alpha}_{n-1}-\tilde{\alpha}_{n}}\cdot\frac{\Gamma\left(3-\tilde{\alpha}_{n-1}\right)}{\Gamma\left(3-\tilde{\alpha}_{n}\right)}\leq A^{(n)}\leq 1+C_{6}\tau,\] which leads to \(b_{1}^{(n)}\leq\left(1+C_{5}\tau\right)b_{1}^{(n-1)}\) by the definition of \(b_{1}^{(n)}\), and the proof is completed. **Lemma 3.3**.: _For \(n\geq 2\), there exists a positive constant \(C_{11}\) such that_ \[a_{n}^{(n)}\geq\frac{T^{-\tilde{\alpha}_{n}}}{\Gamma(1-\tilde{\alpha}_{n})}\geq C_{11}.\] **Proof.** For \(n\geq 2\), we have \(t_{n-1}\geq t_{1}\). By the definition of \(a_{n}^{(n)}\), \[a_{n}^{(n)}=\frac{1}{\tau^{2}}\int_{t_{n-1}}^{t_{n}}\int_{t_{0}}^{t_{1}}\omega_{1-\tilde{\alpha}_{n}}(t-s)dsdt.\] Then, by the monotonicity of \(\omega_{1-\tilde{\alpha}_{n}}(t)\), we have \[a_{n}^{(n)}\geq\frac{1}{\tau^{2}}\int_{t_{n-1}}^{t_{n}}\omega_{1-\tilde{\alpha}_{n}}(t)\tau dt=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\omega_{1-\tilde{\alpha}_{n}}(t)dt\geq\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\frac{T^{-\tilde{\alpha}_{n}}}{\Gamma\left(1-\tilde{\alpha}_{n}\right)}dt=\frac{T^{-\tilde{\alpha}_{n}}}{\Gamma\left(1-\tilde{\alpha}_{n}\right)}.\] If \(T\leq 1\), we have \(T^{-\tilde{\alpha}_{n}}\geq T^{-\alpha_{*}}\). Conversely, if \(T>1\), we can get \(T^{-\tilde{\alpha}_{n}}\geq T^{-\alpha^{*}}\). Thus, we can see that \[a_{n}^{(n)}\geq\frac{\min\{T^{-\alpha_{*}},T^{-\alpha^{*}}\}}{\Gamma\left(1-\alpha^{*}\right)}.\] The proof is completed. **Lemma 3.4**.: _For \(0<\tilde{\alpha}_{k}\leq\alpha^{*}<1\), we have_ \[\sum_{k=1}^{n}b_{k}^{(k)}\leq C_{12},\] _where \(C_{12}\) is a positive constant._ **Proof.** We will prove the lemma in two steps. In the first step, we estimate \(b_{k}^{(k)}\) individually for \(k=1,2,\cdots,n\). In the second step, we consider the summation of \(b_{k}^{(k)}\) for \(k\) from \(1\) to \(n\). _Step \(1\)_. According to the definition of \(b_{k}^{(k)}\), we have \[b_{1}^{(1)}=1+\tau a_{1}^{(1)}=1+\frac{\tau^{1-\tilde{\alpha}_{1}}}{\Gamma\left(3-\tilde{\alpha}_{1}\right)},\] and \[b_{k}^{(k)}=\frac{1}{\tau\Gamma\left(3-\tilde{\alpha}_{k}\right)}\left[\left(t_{k}^{2-\tilde{\alpha}_{k}}-t_{k-1}^{2-\tilde{\alpha}_{k}}\right)-\left(t_{k-1}^{2-\tilde{\alpha}_{k}}-t_{k-2}^{2-\tilde{\alpha}_{k}}\right)\right]\leq\frac{t_{k-1}^{1-\tilde{\alpha}_{k}}-t_{k-2}^{1-\tilde{\alpha}_{k}}}{\Gamma\left(2-\tilde{\alpha}_{k}\right)},\quad 2\leq k\leq n.\] In particular, when \(k=2\), we can directly get the estimate \(b_{2}^{(2)}\leq\frac{t_{1}^{1-\tilde{\alpha}_{2}}}{\Gamma\left(2-\tilde{\alpha}_{2}\right)}\). When \(k\geq 3\), we have \[b_{k}^{(k)}\leq\frac{\tau}{\Gamma\left(1-\tilde{\alpha}_{k}\right)}\cdot t_{k-2}^{-\tilde{\alpha}_{k}},\quad 3\leq k\leq n.\] _Step \(2\)_.
Based on the fact \(\Gamma\left(1-\tilde{\alpha}_{k}\right)>1\), we summate \(b_{k}^{(k)}\) for \(k\) from \(1\) to \(n\), \[\begin{split}\sum_{k=1}^{n}b_{k}^{(k)}&\leq 1+\frac{ \tau^{1-\tilde{\alpha}_{1}}}{\Gamma\left(3-\tilde{\alpha}_{1}\right)}+\frac{t_ {1}^{1-\tilde{\alpha}_{2}}}{\Gamma\left(2-\tilde{\alpha}_{2}\right)}+\tau\sum _{k=3}^{n}\frac{t_{k-2}^{-\tilde{\alpha}_{k}}}{\Gamma\left(1-\tilde{\alpha}_{ k}\right)}\\ &\leq 1+\frac{\tau^{1-\tilde{\alpha}_{1}}}{\Gamma\left(3-\tilde{\alpha }_{1}\right)}+\frac{\tau^{1-\tilde{\alpha}_{2}}}{\Gamma\left(2-\tilde{\alpha }_{2}\right)}+\tau\sum_{k=1}^{n-2}t_{k}^{-\tilde{\alpha}_{k+2}}.\end{split} \tag{3.9}\] Next, we discuss the summation depending on the value of \(t_{n-2}\). (I) If \(t_{n-2}\leq 1\), then \(t_{k}^{-\tilde{\alpha}_{k+2}}\leq t_{k}^{-\alpha^{*}}\). The last term of (3.9) can be estimated as \[\tau\sum_{k=1}^{n-2}t_{k}^{-\tilde{\alpha}_{k+2}}\leq\tau\sum_{k=1}^{n-2}t_{k} ^{-\alpha^{*}}=\tau^{1-\alpha^{*}}\sum_{k=1}^{n-2}k^{-\alpha^{*}}\leq\tau^{1- \alpha^{*}}\int_{0}^{n-2}s^{-\alpha^{*}}ds=\frac{t_{n-2}^{1-\alpha^{*}}}{1- \alpha^{*}}\leq C_{12}. \tag{3.10}\] The other terms in (3.9) are also bounded. (II) If \(t_{n-2}>1\), then there exists an integer \(k^{*}\) such that \(t_{k}\leq 1\) for \(1\leq k\leq k^{*}\), and \(t_{k}>1\) for \(k^{*}+1\leq k\leq n\). The summation of \(b_{k}^{(k)}\) for \(k\) from \(1\) to \(k^{*}\) is similar to (3.10), that is, \(\tau\sum_{k=1}^{k^{*}}t_{k}^{-\tilde{\alpha}_{k+2}}\leq\frac{t_{k}^{1-\alpha^{* }}}{1-\alpha^{*}}\). Then we have \[\begin{split}\sum_{k=1}^{n}b_{k}^{(k)}&\leq 1+\frac{\tau^{1- \tilde{\alpha}_{1}}}{\Gamma\left(3-\tilde{\alpha}_{1}\right)}+\frac{\tau^{1- \tilde{\alpha}_{2}}}{\Gamma\left(2-\tilde{\alpha}_{2}\right)}+\tau\left(\sum_{ k=1}^{k^{*}}t_{k}^{-\tilde{\alpha}_{k+2}}+\sum_{k=k^{*}+1}^{n-2}t_{k}^{- \tilde{\alpha}_{k+2}}\right)\\ &\leq 1+\frac{\tau^{1-\tilde{\alpha}_{1}}}{\Gamma\left(3-\tilde{\alpha}_{ 1}\right)}+\frac{\tau^{1-\tilde{\alpha}_{2}}}{\Gamma\left(2-\tilde{\alpha}_{2} \right)}+\frac{t_{k^{*}}^{1-\alpha^{*}}}{1-\alpha^{*}}+t_{n-k^{*}-2}\leq C_{1 2}.\end{split}\] The proof of Lemma 3.4 is completed. \(\blacksquare\) In addition, the following several lemmas on the properties of the operators defined above are necessary in the stability analysis. **Lemma 3.5**.: _For the operator \(\theta_{x}\) defined in (2.15), there exists an operator \(\zeta_{x}\) satisfying \(\theta_{x}=\zeta_{x}^{2}\). Similarly, there exists an operator \(\zeta_{y}\) satisfying \(\theta_{y}=\zeta_{y}^{2}\)._ **Proof.** We only prove the result for \(\theta_{x}\), and the result for \(\theta_{y}\) can be obtained similarly. According to Lemma 2.3, the matrix representation of the operator \(\theta_{x}\) for one-dimensional case is \[\mathbf{Q}=\frac{1}{8}\begin{pmatrix}4&4&&&&\mathbf{0}\\ 1&6&1&&\\ &\ddots&\ddots&\ddots&\\ &&1&6&1\\ \mathbf{0}&&&4&4\end{pmatrix}_{(M_{x}+2)}\quad. \tag{3.11}\] We let \(\mathbf{S}=diag\{2,1,\cdots,1,2\}\) with sizes \(M_{x}+2\), and we define the matrix \(\mathbf{A}\) as \[\mathbf{A}:=\mathbf{S}^{-1}\mathbf{Q}\mathbf{S}=\frac{1}{8}\begin{pmatrix}4&2&&&&\mathbf{0}\\ 2&6&1&&&&\\ &1&6&1&&\\ &&\ddots&\ddots&\ddots&&\\ &&&1&6&1&\\ &&&&1&6&2\\ \mathbf{0}&&&&2&4\end{pmatrix}_{(M_{x}+2)}\quad,\] which is a symmetric and positive definite matrix. There exists a unique symmetric positive definite matrix \(\mathbf{B}\) such that \(\mathbf{A}=\mathbf{B}^{2}\). 
Thus, we have \(\mathbf{Q}=\mathbf{S}\mathbf{B}^{2}\mathbf{S}^{-1}=\left(\mathbf{S}\mathbf{B}\mathbf{S}^{-1}\right)^{2}\). Accordingly, there exist an operator \(\zeta_{x}\) satisfying \(\theta_{x}=\zeta_{x}^{2}\). Similarly, there is an operator \(\zeta_{y}\) satisfying \(\theta_{y}=\zeta_{y}^{2}\). \(\blacksquare\) **Lemma 3.6**.: _For any \(v\in\overset{\circ}{\mathcal{M}}_{h}\), we have_ \[\frac{3}{16}\|v\|^{2} \leq\left\|\zeta_{x}v\right\|^{2}=\left(\zeta_{x}v,\zeta_{x}v \right)=\left(\theta_{x}v,v\right)\leq\|v\|^{2},\] \[\frac{3}{16}\|v\|^{2} \leq\left\|\zeta_{y}v\right\|^{2}=\left(\zeta_{y}v,\zeta_{y}v \right)=\left(\theta_{y}v,v\right)\leq\|v\|^{2}.\] **Proof.** We only consider the first estimate due to the similarity of them. Based on the definition of \(\theta_{x}\) in (2.15), we can get \[\begin{split}\left(\theta_{x}v,v\right)&=\Delta x \Delta y\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{x}+1}\left(\theta_{x}v_{ij}\right) \left(v_{ij}\right)\\ &=\Delta x\Delta y\sum_{j=0}^{M_{x}+1}\left(\frac{1}{8}v_{0j}v_{1 j}+\frac{1}{4}\sum_{i=1}^{M_{x}-1}v_{ij}v_{i+1,j}+\frac{3}{4}\sum_{i=1}^{M_{x}}v_{ ij}^{2}+\frac{1}{8}v_{M_{x}-1,j}v_{M_{x},j}\right).\end{split} \tag{3.12}\] We first use the inequality \(2ab\leq a^{2}+b^{2}\) in equality (3.12) to obtain \[\begin{split}\left(\theta_{x}v,v\right)&\leq\Delta x \Delta y\sum_{j=0}^{M_{x}+1}\left(\frac{1}{16}v_{0j}^{2}+\frac{15}{16}v_{1j}^{ 2}+\sum_{i=2}^{M_{x}-1}v_{ij}^{2}+\frac{15}{16}v_{M_{x},j}^{2}+\frac{1}{16}v_{ M_{x}+1,j}^{2}\right)\\ &\leq\Delta x\Delta y\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}v_{ ij}^{2}=\|v\|^{2}.\end{split}\] Then using the inequality \(2ab\geq-a^{2}-b^{2}\) in equality (3.12), together with \(\theta_{x}v_{0j}=\theta_{x}v_{M_{x}+1,j}=0\) for \(j=0,1,\cdots,M_{y}+1\), we can get \[(\theta_{x}v,v) \geq\Delta x\Delta y\sum_{j=0}^{M_{x+1}}\Big{(}(\theta_{x}v_{0j}) (v_{0j})-\frac{1}{16}v_{0j}^{2}+\frac{9}{16}v_{1j}^{2}+\frac{1}{2}\sum_{i=2}^{ M_{x-1}}v_{ij}^{2}+\frac{9}{16}v_{M_{x},j}^{2}-\frac{1}{16}v_{M_{x}+1,j}^{2}\] \[\quad+(\theta_{x}v_{M_{x}+1,j})(v_{M_{x}+1,j})\Big{)}\] \[\geq\Delta x\Delta y\sum_{j=0}^{M_{x}+1}\left(\frac{3}{16}v_{0j}^ {2}+\frac{5}{16}v_{1j}^{2}+\frac{1}{2}\sum_{i=2}^{M_{x}-1}v_{ij}^{2}+\frac{5}{ 16}v_{M_{x},j}^{2}+\frac{3}{16}v_{M_{x}+1,j}^{2}\right)\] \[\geq\frac{3}{16}\Delta x\Delta y\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M _{x}+1}v_{ij}^{2}=\frac{3}{16}\|v\|^{2}.\] The proof of the first estimate is completed, and the second result can be obtained similarly. 
\(\blacksquare\) **Lemma 3.7** ([1]).: _If \(b_{1}^{(n)}>b_{2}^{(n)}>\cdots>b_{n}^{(n)}>0\), \(n=1,2,\cdots,N\), then for any quadratic spline solution \(u_{h}\in\mathcal{V}^{0}\), the following estimate holds,_ \[\sum_{k=1}^{n}b_{n-k+1}^{(n)}\left(\theta_{x}\theta_{y}c^{k}-\theta_{x}\theta_{ y}c^{k-1},\theta_{x}\theta_{y}c^{n}\right)\geq\frac{1}{2}\left[\sum_{k=1}^{n}b_{n -k+1}^{(n)}\left(\left\|\theta_{x}\theta_{y}c^{k}\right\|^{2}-\left\|\theta_{ x}\theta_{y}c^{k-1}\right\|^{2}\right)\right],\] _where \(c^{k}=\{c_{ij}^{k},\ (i,j)\in\bar{\Lambda}\}\), for \(k=1,2,\cdots,n\), are the DOFs in the expression (2.14) of \(u_{h}^{k}\)._ **Lemma 3.8**.: _For the QSC-L\(1^{+}\) scheme (2.27)-(2.29) for model (2.1)-(2.3), we have the following estimate,_ \[\left((\eta_{x}\theta_{y}+\eta_{y}\theta_{x})c^{n-\frac{1}{2}},\theta_{x} \theta_{y}c^{n}\right)\leq-\frac{1}{4}\left(\left|\zeta_{x}\theta_{y}c^{n} \right|_{1x}^{2}+\left|\zeta_{y}\theta_{x}c^{n}\right|_{1y}^{2}\right)+\frac{1 }{4}\left(\left|\zeta_{x}\theta_{y}c^{n-1}\right|_{1x}^{2}+\left|\zeta_{y} \theta_{x}c^{n-1}\right|_{1y}^{2}\right). \tag{3.13}\] **Proof.** Recalling the notation \(c^{n-\frac{1}{2}}=\frac{1}{2}\left(c^{n}+c^{n-1}\right)\), the left hand side of (3.13) can be separated as \[\left((\eta_{x}\theta_{y}+\eta_{y}\theta_{x})c^{n-\frac{1}{2}}, \theta_{x}\theta_{y}c^{n}\right) \tag{3.14}\] \[=\frac{1}{2}\left(\eta_{x}\theta_{y}c^{n-1},\theta_{x}\theta_{y} c^{n}\right)+\frac{1}{2}\left(\eta_{x}\theta_{y}c^{n},\theta_{x}\theta_{y}c^{n} \right)+\frac{1}{2}\left(\eta_{y}\theta_{x}c^{n-1},\theta_{y}\theta_{x}c^{n} \right)+\frac{1}{2}\left(\eta_{y}\theta_{x}c^{n},\theta_{y}\theta_{x}c^{n}\right)\] \[:=\sum_{i}^{4}P_{i}.\] Since \(P_{1}\) and \(P_{2}\) are similar as \(P_{3}\) and \(P_{4}\), respectively, and we just investigate \(P_{1}\) and \(P_{2}\). For simplicity, we turn to the one-dimensional case for the term \(P_{1}\). 
Together with \(\mathbf{\theta}_{x}c_{0}^{n}=\mathbf{\theta}_{x}c_{M_{x}+1}^{n}=0\), we have
\[\begin{split} P_{1}&=\frac{1}{2}\Delta x\sum_{i=0}^{M_{x}+1}\left(\mathbf{\eta}_{x}c_{i}^{n-1}\right)\left(\mathbf{\theta}_{x}c_{i}^{n}\right)\\ &=\frac{1}{2}\sum_{i=1}^{M_{x}}\left(\mathbf{\vartheta}_{x}c_{i+1}^{n-1}-\mathbf{\vartheta}_{x}c_{i}^{n-1}\right)\left(\mathbf{\theta}_{x}c_{i}^{n}\right)\\ &=-\frac{1}{2}\left(\mathbf{\vartheta}_{x}c_{1}^{n-1}\right)\left(\mathbf{\theta}_{x}c_{1}^{n}\right)-\frac{1}{2}\sum_{i=2}^{M_{x}}\left(\mathbf{\vartheta}_{x}c_{i}^{n-1}\right)\left(\mathbf{\theta}_{x}c_{i}^{n}-\mathbf{\theta}_{x}c_{i-1}^{n}\right)+\frac{1}{2}\left(\mathbf{\vartheta}_{x}c_{M_{x}+1}^{n-1}\right)\left(\mathbf{\theta}_{x}c_{M_{x}}^{n}\right)\\ &=-\frac{\Delta x}{2}\sum_{i=1}^{M_{x}+1}\left(\mathbf{\vartheta}_{x}c_{i}^{n-1}\right)\left(\mathbf{\vartheta}_{x}\mathbf{\theta}_{x}c_{i}^{n}\right)=-\frac{1}{2}\left(\mathbf{\vartheta}_{x}c^{n-1},\mathbf{\vartheta}_{x}\mathbf{\theta}_{x}c^{n}\right).\end{split}\]
Thus, we can obtain the following result for the two-dimensional case with \(c\Leftarrow\mathbf{\theta}_{y}c\) and Lemma 3.5,
\[P_{1}=-\frac{1}{2}\left(\mathbf{\vartheta}_{x}\mathbf{\theta}_{y}c^{n-1},\mathbf{\vartheta}_{x}\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right)=-\frac{1}{2}\left(\mathbf{\vartheta}_{x}\mathbf{\theta}_{y}c^{n-1},\mathbf{\vartheta}_{x}\zeta_{x}^{2}\mathbf{\theta}_{y}c^{n}\right)=-\frac{1}{2}\left(\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n-1},\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n}\right).\]
Using the same routine as for \(P_{1}\), we can get
\[P_{2}=-\frac{1}{2}\left(\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n},\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n}\right)=-\frac{1}{2}\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}.\]
Furthermore, with the equality \(2ab=(a+b)^{2}-a^{2}-b^{2}\), we obtain
\[\begin{split} P_{1}+P_{2}&=-\frac{1}{2}\left(\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n-1},\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n}\right)-\frac{1}{2}\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}\\ &=-\frac{1}{2}\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}-\frac{1}{4}\left(\left\|\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n}+\mathbf{\vartheta}_{x}\zeta_{x}\mathbf{\theta}_{y}c^{n-1}\right\|^{2}-\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}-\left|\zeta_{x}\mathbf{\theta}_{y}c^{n-1}\right|_{1x}^{2}\right)\\ &\leq-\frac{1}{4}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}-\left|\zeta_{x}\mathbf{\theta}_{y}c^{n-1}\right|_{1x}^{2}\right).\end{split}\]
The terms \(P_{3}\) and \(P_{4}\) in (3.14) admit similar estimates. Therefore, we can get the following estimate
\[\sum_{i=1}^{4}P_{i}\leq-\frac{1}{4}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n}\right|_{1y}^{2}\right)+\frac{1}{4}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n-1}\right|_{1x}^{2}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n-1}\right|_{1y}^{2}\right).\]
The proof of Lemma 3.8 is completed. At last, we need the discrete Gronwall inequality. **Lemma 3.9** ([30]).: _Let \(v\), \(w\in\mathfrak{T}\) be nonnegative temporal grid functions, and let \(C_{13}\) be a nonnegative constant. If \(v^{n}\leq(1+\tau C_{13})v^{n-1}+\tau w^{n-1}\) for \(1\leq n\leq N\), then_
\[v^{n}\leq e^{C_{13}n\tau}\left[v^{0}+\tau\sum_{l=0}^{n-1}w^{l}\right],\quad n=1,2,\cdots,N.\]
With all the lemmas above, we next consider the stability of the QSC-\(L1^{+}\) scheme.
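Before doing so, we note that Lemma 3.9 can be checked numerically. The following minimal Python sketch is an illustration only, not part of the analysis: the sequence \(w\), the constant \(C_{13}\) and the step \(\tau\) are arbitrary hypothetical choices.

```python
import numpy as np

# Minimal numerical check of the discrete Gronwall inequality (Lemma 3.9).
tau, C13, N = 1e-2, 2.0, 100
rng = np.random.default_rng(1)
w = rng.random(N)                       # nonnegative temporal grid function w^0,...,w^{N-1}
v = np.empty(N + 1)
v[0] = 0.5                              # nonnegative initial value v^0
for n in range(1, N + 1):               # recursion v^n <= (1 + tau*C13) v^{n-1} + tau*w^{n-1}
    v[n] = (1 + tau * C13) * v[n - 1] + tau * w[n - 1]

n_idx = np.arange(1, N + 1)
bound = np.exp(C13 * n_idx * tau) * (v[0] + tau * np.cumsum(w))  # e^{C13*n*tau}(v^0 + tau*sum w^l)
print(np.all(v[1:] <= bound + 1e-14))   # True: the iterates stay below the Gronwall bound
```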
### The stability of the QSC-\(L1^{+}\) scheme **Theorem 3.1**.: _Assume that \(\alpha^{\prime}(t)\leq 0\), and suppose that \(c^{n}=\{c^{n}_{ij},\ (i,j)\in\bar{\Lambda},0\leq n\leq N\}\) is the solution of the QSC-\(L1^{+}\) scheme (3.2). Then we have_ \[\begin{split}&\left\|\theta_{x}\mathbf{\theta}_{y}c^{n}\right\|^{2}+ \frac{3\tau\kappa}{32}\left(\left|\theta_{y}c^{n}\right|^{2}_{1x}+\left|\theta _{x}c^{n}\right|^{2}_{1y}\right)\\ &\leq C_{14}\left[\left\|\theta_{x}\mathbf{\theta}_{y}c^{0}\right\|^{ 2}+\frac{\tau\kappa}{2}\left(\left|\theta_{y}c^{0}\right|^{2}_{1x}+\left|\theta _{x}c^{0}\right|^{2}_{1y}\right)\right]+C_{15}\tau\sum_{k=1}^{n}\left\|f^{k- \frac{1}{2}}\right\|^{2}.\end{split} \tag{3.15}\] **Proof.** Multiplying both sides of equation (3.2) by \(2\Delta x\Delta y\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}_{ij}\), and summing up for \(i\) from \(0\) to \(M_{x}+1\) and for \(j\) from \(0\) to \(M_{y}+1\), we can get \[\sum_{k=1}^{n}b^{(n)}_{n-k+1}\left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{k}-\mathbf{ \theta}_{x}\mathbf{\theta}_{y}c^{k-1},2\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right) =\tau\kappa\left((\mathbf{\eta}_{x}\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{ \theta}_{x})c^{n-\frac{1}{2}},2\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right)+ \tau\left(f^{n-\frac{1}{2}},2\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right). \tag{3.16}\] For the summation term on the left hand side of (3.16), we have by Lemma 3.7 that \[\sum_{k=1}^{n}b^{(n)}_{n-k+1}\left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{k}-\mathbf{ \theta}_{x}\mathbf{\theta}_{y}c^{k-1},2\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right) \geq\sum_{k=1}^{n}b^{(n)}_{n-k+1}\left(\left\|\theta_{x}\mathbf{\theta}_{y}c^{k} \right\|^{2}-\left\|\theta_{x}\mathbf{\theta}_{y}c^{k-1}\right\|^{2}\right).\] For the first term on the right hand side of (3.16), we have by Lemma 3.5 and Lemma 3.8 that \[\tau\kappa\left((\mathbf{\eta}_{x}\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{ \theta}_{x})c^{n-\frac{1}{2}},2\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right)\] \[\leq-\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n} \right|^{2}_{1x}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n}\right|^{2}_{1y}\right)+ \frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n-1}\right|^{2}_{1x }+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n-1}\right|^{2}_{1y}\right).\] Then, we can obtain \[\sum_{k=1}^{n}b^{(n)}_{n-k+1}\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y }c^{k}\right\|^{2}+\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{ n}\right|^{2}_{1x}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n}\right|^{2}_{1y}\right)\] \[\leq\sum_{k=1}^{n-1}b^{(n)}_{n-k}\left\|\mathbf{\theta}_{x}\mathbf{\theta }_{y}c^{k}\right\|^{2}+\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y }c^{n-1}\right|^{2}_{1x}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n-1}\right|^{2}_{1y} \right)+b^{(n)}_{n}\left\|\theta_{x}\mathbf{\theta}_{y}c^{0}\right\|^{2}+2\tau \left(f^{n-\frac{1}{2}},\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right).\] Combining with Lemma 3.2, we obtain \[\begin{split}&\sum_{k=1}^{n}b^{(n)}_{n-k+1}\left\|\mathbf{\theta}_{x} \mathbf{\theta}_{y}c^{k}\right\|^{2}+\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{ \theta}_{y}c^{n}\right|^{2}_{1x}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n}\right|^{2 }_{1y}\right)\\ &\leq(1+C_{5}\tau)\sum_{k=1}^{n-1}b^{(n-1)}_{n-k}\left\|\mathbf{ \theta}_{x}\mathbf{\theta}_{y}c^{k}\right\|^{2}+\frac{\tau\kappa}{2}\left(\left| 
\zeta_{x}\mathbf{\theta}_{y}c^{n-1}\right|^{2}_{1x}+\left|\zeta_{y}\mathbf{\theta}_{x}c ^{n-1}\right|^{2}_{1y}\right)\\ &\quad+b^{(n)}_{n}\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{0} \right\|^{2}+2\tau\left(f^{n-\frac{1}{2}},\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n} \right).\end{split} \tag{3.17}\] Denote \[G^{0}=\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{0}\right|^{2}_ {1x}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{0}\right|^{2}_{1y}\right),\] \[G^{n}=\sum_{k=1}^{n}b_{n-k+1}^{(n)}\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{k} \right\|^{2}+\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n} \right|_{1x}^{2}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{n}\right|_{1y}^{2}\right), \quad 1\leq n\leq N,\] inequality (3.17) can be simplified as \[G^{n}\leq(1+C_{5}\tau)\,G^{n-1}+b_{n}^{(n)}\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{ y}c^{0}\right\|^{2}+2\tau\left(f^{n-\frac{1}{2}},\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{ n}\right).\] It is easy to proof the theorem when \(n=1\). Then, for \(n\geq 2\), applying Lemma 3.9 to deduce that \[G^{n}\leq e^{C_{5}\pi\tau}\left[G^{0}+\sum_{k=1}^{n}b_{k}^{(k)}\left\|\mathbf{ \theta}_{x}\mathbf{\theta}_{y}c^{0}\right\|^{2}+2\tau\sum_{k=1}^{n}\left(f^{k- \frac{1}{2}},\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{k}\right)\right],\quad 2\leq n\leq N. \tag{3.18}\] According to the definition of \(\left\{b_{n-k+1}^{(n)},\ k=1,2,\cdots,n\right\}\), we have \(b_{1}^{(n)}=1+\tau a_{1}^{(n)}>1+C_{11}\tau\), and \(b_{n-k+1}^{(n)}>C_{11}\tau\), for \(k=1,2,\cdots,n-1\), by Lemma 3.3. Then \(G^{n}\) has the following lower bound \[G^{n}\geq\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right\|^{2}+\tau C_{11} \sum_{k=1}^{n}\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{k}\right\|^{2}+\frac{ \tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}+\left| \zeta_{y}\mathbf{\theta}_{x}c^{n}\right|_{1y}^{2}\right). \tag{3.19}\] We combine estimates (3.18) and (3.19) to conclude that for \(1\leq n\leq N\) \[\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right\|^{2}+C_{11}\tau \sum_{k=1}^{n}\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{k}\right\|^{2}+\frac{ \tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}+\left| \zeta_{y}\mathbf{\theta}_{x}c^{n}\right|_{1y}^{2}\right) \tag{3.20}\] \[\leq e^{C_{5}T}\left[\frac{\tau\kappa}{2}\left(\left|\zeta_{x} \mathbf{\theta}_{y}c^{0}\right|_{1x}^{2}+\left|\zeta_{y}\mathbf{\theta}_{x}c^{0} \right|_{y}^{2}\right)+\sum_{k=1}^{n}b_{k}^{(k)}\left\|\mathbf{\theta}_{x}\mathbf{ \theta}_{y}c^{0}\right\|^{2}\right]+2e^{C_{5}T}\tau\sum_{k=1}^{n}\left(f^{k- \frac{1}{2}},\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{k}\right).\] For the third term on the left hand side and the first term on the right hand side of (3.20), we have from Lemma 3.6 that \[\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}+ \left|\zeta_{y}\mathbf{\theta}_{x}c^{n}\right|_{1y}^{2}\right)\geq\frac{3\tau\kappa }{32}\left(\left|\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}+\left|\theta_{x}c^{n} \right|_{1y}^{2}\right), \tag{3.21}\] and \[\frac{\tau\kappa}{2}\left(\left|\zeta_{x}\mathbf{\theta}_{y}c^{0}\right|_{1x}^{2}+ \left|\zeta_{y}\mathbf{\theta}_{x}c^{0}\right|_{1y}^{2}\right)\leq\frac{\tau\kappa }{2}\left(\left|\mathbf{\theta}_{y}c^{0}\right|_{1x}^{2}+\left|\mathbf{\theta}_{x}c^{0} \right|_{1y}^{2}\right). 
\tag{3.22}\] For the last term in (3.20), recalling the inequality \(ab\leq\ \varepsilon a^{2}+(1/4\varepsilon)b^{2}\), we can obtain \[2e^{C_{5}T}\tau\sum_{k=1}^{n}\left(f^{k-\frac{1}{2}},\mathbf{\theta}_{x}\mathbf{ \theta}_{y}c^{k}\right)\leq C_{11}\tau\sum_{k=1}^{n}\left\|\mathbf{\theta}_{x} \mathbf{\theta}_{y}c^{k}\right\|^{2}+\frac{\tau e^{2C_{5}T}}{C_{11}}\sum_{k=1}^{n} \left\|f^{k-\frac{1}{2}}\right\|^{2}. \tag{3.23}\] Then we can deduce from (3.20) - (3.23) and Lemma 3.4 that \[\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{n}\right\|^{2}+\frac{3\tau \kappa}{32}\left(\left|\mathbf{\theta}_{y}c^{n}\right|_{1x}^{2}+\left|\mathbf{\theta}_{ x}c^{n}\right|_{1y}^{2}\right)\] \[\leq C_{16}e^{C_{5}T}\left[\left\|\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^ {0}\right\|^{2}\right.\left.+\frac{\tau\kappa}{2}\left(\left|\mathbf{\theta}_{y}c^{0 }\right|_{1x}^{2}+\left|\mathbf{\theta}_{x}c^{0}\right|_{1y}^{2}\right)\right]+ \frac{\tau e^{2C_{5}T}}{C_{11}}\sum_{k=1}^{n}\left\|f^{k-\frac{1}{2}}\right\|^{2}.\] The proof of Theorem 3.1 is completed. ### Convergence of the QSC-\(L1^{+}\) scheme Based on the stability, we will investigate the convergence of the QSC-\(L1^{+}\) scheme in this subsection. For a function \(w(x,y)\in C^{4}(\bar{\Omega})\), we let \(\mathcal{I}w(x,y)\in\mathcal{V}^{0}\) be the quadratic spline interpolation of \(w(x,y)\), such that \[\mathcal{I}w\left(\xi_{i}^{x},\xi_{j}^{y}\right)=w\left(\xi_{i}^{x},\xi_{j}^{ y}\right),\quad(i,j)\in\bar{\Lambda}, \tag{3.24}\] where \(\left(\xi_{i}^{x},\xi_{j}^{y}\right)\) for \((i,j)\in\bar{\Lambda}\) are the collocation points described in Section 2. Let \(\|\cdot\|_{c}\) denote the maximum norm over all the collocation points, _i.e._, \[\|w\|_{c}=\max_{(i,j)\in\bar{\Lambda}}|w(\xi_{i}^{x},\xi_{j}^{y})|.\] Then it follows from the conclusions in [3; 15] that the interpolation error \((\mathcal{I}w-w)\) satisfies \[\|(\mathcal{I}w-w)_{xx}\|_{c} \leq\frac{\Delta x^{2}}{12}\|w^{(4)}\|_{\infty}+\mathcal{O}( \Delta x^{3}), \tag{3.25}\] \[\|(\mathcal{I}w-w)_{yy}\|_{c} \leq\frac{\Delta y^{2}}{12}\|w^{(4)}\|_{\infty}+\mathcal{O}( \Delta y^{3}).\] We denote by \[u^{n}=\left\{u^{n}(\xi_{i}^{x},\xi_{j}^{y}),\;(i,j)\in\bar{\Lambda}\right\} \quad\text{and}\quad u_{h}^{n}=\left\{u_{h}^{n}(\xi_{i}^{x},\xi_{j}^{y}),\;(i, j)\in\bar{\Lambda}\right\}\] the true solution of the problem (2.1)-(2.3) and the quadratic spline collocation solution of the the QSC-\(L1^{+}\) scheme (2.24)-(2.25), respectively, at the collocation points, where \(u_{h}^{n}(x,y)\) has the expression (2.14). Then, we have the following conclusion. 
**Theorem 3.2**.: _For \(0<\alpha_{*}\leq\alpha(t)\leq\alpha^{*}<1\), there exists a positive constant \(C_{17}\), such that_
\[\left\|u^{n}-u_{h}^{n}\right\|\leq C_{17}\left(\tau^{\min\left\{3-\alpha^{*}-\alpha(0),2\right\}}+\Delta x^{2}+\Delta y^{2}\right),\quad 1\leq n\leq N.\]
**Proof.** According to equation (2.22), the interpolation functions \(\mathcal{I}u^{n}(x,y)\), for \(n=1,2,\cdots,N\), satisfy the following equation
\[\delta_{t}\mathcal{I}u^{n-\frac{1}{2}}(x,y)+\bar{\delta}_{t}^{\bar{\alpha}_{n}}\mathcal{I}u^{n-\frac{1}{2}}(x,y)=\kappa\left[\mathcal{I}u_{xx}^{n-\frac{1}{2}}(x,y)+\mathcal{I}u_{yy}^{n-\frac{1}{2}}(x,y)\right]+f^{n-\frac{1}{2}}(x,y)+g^{n-\frac{1}{2}}(x,y), \tag{3.26}\]
where
\[\begin{split} g^{n-\frac{1}{2}}(x,y)=&\ \delta_{t}(\mathcal{I}u-u)^{n-\frac{1}{2}}(x,y)+\bar{\delta}_{t}^{\bar{\alpha}_{n}}(\mathcal{I}u-u)^{n-\frac{1}{2}}(x,y)\\ &-\kappa\left[(\mathcal{I}u-u)_{xx}^{n-\frac{1}{2}}(x,y)+(\mathcal{I}u-u)_{yy}^{n-\frac{1}{2}}(x,y)\right]+R^{n}.\end{split} \tag{3.27}\]
We take the collocation points \((\xi_{i}^{x},\xi_{j}^{y})\) for \((i,j)\in\bar{\Lambda}\) into (3.26) and (3.27), and they can be rewritten as
\[\begin{split}&\sum_{k=1}^{n}b_{n-k+1}^{(n)}\left[\mathcal{I}u^{k}\left(\xi_{i}^{x},\xi_{j}^{y}\right)-\mathcal{I}u^{k-1}\left(\xi_{i}^{x},\xi_{j}^{y}\right)\right]\\ &=\tau\kappa\left[\mathcal{I}u_{xx}^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right)+\mathcal{I}u_{yy}^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right)\right]+\tau f^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right)+\tau g^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right),\end{split} \tag{3.28}\]
where
\[g^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right)=-\kappa\left[\left(\mathcal{I}u-u\right)_{xx}^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right)+\left(\mathcal{I}u-u\right)_{yy}^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right)\right]+R^{n},\quad(i,j)\in\bar{\Lambda},\]
\(R^{n}\) is defined in (2.23), and can be bounded by \(\mathcal{O}\left(\tau^{2}t_{n}^{-\bar{\alpha}_{n}-\alpha(0)}+\tau\left(t_{n}^{1-\alpha(0)}-t_{n-1}^{1-\alpha(0)}\right)+\tau^{2}\right)\). Since \(\mathcal{I}u^{n}(x,y)\in\mathcal{V}^{0}\), it is reasonable to suppose that \(\mathcal{I}u^{n}(x,y)\) can be expressed in the form
\[\mathcal{I}u^{n}(x,y)=\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}d_{ij}^{n}\phi_{i}(x)\phi_{j}(y),\]
where \(d_{ij}^{n}\) are the DOFs corresponding to \(\mathcal{I}u^{n}(x,y)\). Then equation (3.28) can be rewritten as
\[\sum_{k=1}^{n}b_{n-k+1}^{(n)}\left(\theta_{x}\theta_{y}d_{ij}^{k}-\theta_{x}\theta_{y}d_{ij}^{k-1}\right)=\tau\kappa\left(\eta_{x}\theta_{y}+\eta_{y}\theta_{x}\right)d_{ij}^{n-\frac{1}{2}}+\tau f_{ij}^{n-\frac{1}{2}}+\tau g_{ij}^{n-\frac{1}{2}},\quad(i,j)\in\bar{\Lambda}, \tag{3.29}\]
where \(g_{ij}^{n-\frac{1}{2}}=g^{n-\frac{1}{2}}\left(\xi_{i}^{x},\xi_{j}^{y}\right)\).
Denote \(e^{n}=d^{n}-c^{n}\), we substitute (3.2) from (3.29) to obtain \[\sum_{k=1}^{n}b_{n-k+1}^{(n)}\left(\theta_{x}\theta_{y}e_{ij}^{k}-\theta_{x} \theta_{y}e_{ij}^{k-1}\right)=\tau\kappa\left(\eta_{x}\theta_{y}+\eta_{y} \theta_{x}\right)e_{ij}^{n-\frac{1}{2}}+\tau g_{ij}^{n-\frac{1}{2}},\quad(i,j )\in\bar{\Lambda},\ 1\leq n\leq N.\] Applying Theorem 3.1, together with \(e^{0}=0\), we can get \[\begin{split}&\left\|\theta_{x}\theta_{y}e^{n}\right\|^{2}+\frac{3 \tau\kappa}{32}\left(\left|\theta_{y}e^{n}\right|_{1x}^{2}+\left|\theta_{x}e^{ n}\right|_{1y}^{2}\right)\\ &\leq C_{14}\left[\left\|\theta_{x}\theta_{y}e^{0}\right\|^{2}+ \frac{\tau\kappa}{2}\left(\left|\theta_{y}e^{0}\right|_{1x}^{2}+\left|\theta_{ x}e^{0}\right|_{1y}^{2}\right)\right]+C_{14}\tau\sum_{k=1}^{n}\left\|g^{k-\frac{1}{2}} \right\|^{2}\leq C_{18}\tau\sum_{k=1}^{n}\left\|g^{k-\frac{1}{2}}\right\|_{c}^ {2},\end{split} \tag{3.30}\] where \(C_{18}=(x_{R}-x_{L})(y_{R}-y_{L})C_{14}\). Based on (2.23) and (3.25), we see \[\begin{split} C_{18}\tau\sum_{k=1}^{n}\left\|g^{k-\frac{1}{2}} \right\|_{c}^{2}\leq& C_{18}\tau\sum_{k=1}^{n}\left[\kappa\left( \Delta x^{2}+\Delta y^{2}\right)+\mathcal{O}\left(\tau^{2}t_{k}^{-\bar{\alpha }_{k}-\alpha(0)}+\tau\left(t_{k}^{1-\alpha(0)}-t_{k-1}^{1-\alpha(0)}\right)+ \tau^{2}\right)\right]^{2}\\ \leq& C_{19}\left[\kappa T\left(\Delta x^{2}+\Delta y ^{2}\right)+\tau\sum_{k=1}^{n}\left(\tau^{2}t_{k}^{-\bar{\alpha}_{k}-\alpha( 0)}+\tau\left(t_{k}^{1-\alpha(0)}-t_{k-1}^{1-\alpha(0)}\right)+\tau^{2}\right) \right]^{2}\\ \leq& C_{19}\left[\kappa T\left(\Delta x^{2}+\Delta y ^{2}\right)+\tau^{2}t_{n}^{1-\alpha(0)}+\tau\sum_{k=1}^{n}\left(\tau^{2}t_{k}^ {-\bar{\alpha}_{k}-\alpha(0)}+\tau^{2}\right)\right]^{2}.\end{split} \tag{3.31}\] Next, we give further discussions according to the value of \(t_{n}\). (I) If \(t_{n}\leq 1\), we have \(t_{k}^{-\bar{\alpha}_{k}-\alpha(0)}\leq t_{k}^{-\alpha^{*}-\alpha(0)}\) for \(k\leq n\). Therefore, we have \[\begin{split}\tau\sum_{k=1}^{n}\left(\tau^{2}t_{k}^{-\bar{\alpha} _{k}-\alpha(0)}\right)&\leq\tau^{2}\left(\tau\sum_{k=1}^{n}t_{k}^ {-\alpha^{*}-\alpha(0)}\right)\leq\tau^{3-\alpha^{*}-\alpha(0)}+\tau^{2}\int_ {t_{1}}^{t_{n}}t^{-\alpha^{*}-\alpha(0)}dt\\ &=\frac{t_{n}^{1-\alpha^{*}-\alpha(0)}}{1-\alpha^{*}-\alpha(0)} \tau^{2}-\frac{\alpha^{*}+\alpha(0)}{1-\alpha^{*}-\alpha(0)}\tau^{3-\alpha^{*}- \alpha(0)}.\end{split} \tag{3.32}\] (II) If \(t_{n}>1\), then there exists an integer \(k^{*}\), such that \(t_{k}\leq 1\) for \(k\leq k^{*}\) and \(t_{k}>1\) for \(k^{*}+1\leq k\leq n\). 
With \(t_{k}^{-\bar{\alpha}_{k}-\alpha(0)}\leq t_{k}^{-\alpha_{*}-\alpha(0)}\) for \(k^{*}+1\leq k\leq n\), we can obtain \[\begin{split}\tau\sum_{k=1}^{n}\left(\tau^{2}t_{k}^{-\alpha_{k}} \right)&\leq\tau^{2}\left(\tau\sum_{k=1}^{k^{*}}t_{k}^{-\alpha^ {*}-\alpha(0)}\right)+\tau^{2}\left(\sum_{k=k^{*}+1}^{n}t_{k}^{-\alpha_{*}- \alpha(0)}\right)\\ &=\frac{t_{k^{*}}^{1-\alpha^{*}-\alpha(0)}}{1-\alpha^{*}-\alpha( 0)}\tau^{2}-\frac{\alpha^{*}+\alpha(0)}{1-\alpha^{*}-\alpha(0)}\tau^{3-\alpha ^{*}-\alpha(0)}+\frac{t_{n}^{1-\alpha_{*}-\alpha(0)}-t_{k^{*}}^{1-\alpha_{*}- \alpha(0)}}{1-\alpha_{*}-\alpha(0)}\tau^{2}.\end{split} \tag{3.33}\] Thus, we can substitute (3.32) and (3.33) into (3.31) to obtain \[\begin{split} C_{18}\tau\sum_{k=1}^{n}\left\|g^{k-\frac{1}{2}} \right\|_{c}^{2}\leq& C_{20}\left(\tau^{\min\left\{3-\alpha^{*}- \alpha(0),2\right\}}+\Delta x^{2}+\Delta y^{2}\right)^{2}.\end{split} \tag{3.34}\] Based on (3.30) and (3.34), we can have \[\left\|\theta_{x}\theta_{y}e^{n}\right\|^{2}+\frac{3\tau\kappa}{32}\left( \left|\theta_{y}e^{n}\right|_{1x}^{2}+\left|\theta_{x}e^{n}\right|_{1y}^{2} \right)\leq C_{20}\left(\tau^{\min\left\{3-\alpha^{*}-\alpha(0),2\right\}}+ \Delta x^{2}+\Delta y^{2}\right)^{2}. \tag{3.35}\] Since \(\left(\mathcal{I}u-u\right)^{n}\left(\xi_{i}^{x},\xi_{j}^{y}\right)=0\) for \((i,j)\in\bar{\Lambda}\), we can get \[\left\|u^{n}-u_{h}^{n}\right\|=\left\|\mathcal{I}u^{n}-u_{h}^{n}\right\|=\left\| \theta_{x}\theta_{y}e^{n}\right\|.\] This together with the estimation (3.35) complete the proof. \(\blacksquare\) It can be seen that, if \(\alpha(t)\) satisfies the condition \(\alpha^{*}+\alpha(0)<1\), we can get \(3-\alpha^{*}-\alpha(0)>2\). Thus, the following corollary can be obtained directly from Theorem 3.2. **Corollary 3.1**.: _If the fractional order \(\alpha(t)\) satisfies \(0<\alpha_{*}\leq\alpha(t)\leq\alpha^{*}<1\) and \(\alpha^{*}+\alpha(0)<1\), there exists a positive constant \(C_{21}\), such that_ \[\left\|u^{n}-u_{h}^{n}\right\|\leq C_{21}\left(\tau^{2}+\Delta x^{2}+\Delta y ^{2}\right),\quad 1\leq n\leq N.\] **Remark 3.1**.: _If the variable fractional order \(\alpha(t)\) satisfies the condition \(\alpha^{*}+\alpha(0)=1\), the estimations (3.32)-(3.33) in the proof of Theorem 3.2 need to be modified slightly, and the resulting convergence order is \(\mathcal{O}(\tau^{2}|\ln\tau|+\Delta x^{2}+\Delta y^{2})\), which is consistent with the result in Ref. [47]._ **Remark 3.2**.: _We can see from the proof of Theorem 3.2 that, the truncation error \(\mathcal{O}(\tau^{3-\alpha^{*}-\alpha(0)})\) is from the error estimation near the initial time. We will investigate the behavior of the numerical solution near the initial time point and the final time point, respectively, in Example 6.1 in Section 6. It will be seen that, the convergence order near the initial time behaves indeed as we have estimated, and the convergence order at the final time point behaves well._ **Remark 3.3**.: _If the solution of model (2.1)-(2.3) has better regularity, such as \(u\in C^{2}[0,T]\), the QSC-\(L1^{+}\) scheme can achieve the second temporal convergence order, without the restriction \(\alpha^{*}+\alpha(0)<1\), which is consistent with the example confirmation in Ref. [12]._ Next we will take the QSC-\(L1^{+}\) scheme into the ADI framework for model (2.1)-(2.3), defined in the two-dimensional space domain. ## 4 The ADI-QSC-L1+ scheme It is known that the computational cost for multi-dimensional FPDEs is usually expensive. 
The ADI method is able to change the solution of the multi-dimensional problem to the solutions of a series of one-dimensional subproblems, and the computational cost can be efficiently reduced. In this section, we will investigate the QSC-\(L1^{+}\) scheme in the ADI framework. We reformulate the QSC-\(L1^{+}\) scheme (2.27) as \[\begin{split}&\theta_{x}\theta_{y}c_{ij}^{n}-\gamma_{n}\Big{(} \boldsymbol{\eta}_{x}\theta_{y}+\boldsymbol{\eta}_{y}\theta_{x}\Big{)}c_{ij}^ {n}\\ &=\boldsymbol{\theta}_{x}\boldsymbol{\theta}_{y}c_{ij}^{n-1}+ \gamma_{n}\Big{(}\boldsymbol{\eta}_{x}\boldsymbol{\theta}_{y}+\boldsymbol{ \eta}_{y}\boldsymbol{\theta}_{x}\Big{)}c_{ij}^{n-1}-\frac{2\gamma_{n}}{\kappa }\sum_{k=1}^{n-1}a_{n-k+1}^{(n)}\Big{(}\boldsymbol{\theta}_{x}\boldsymbol{ \theta}_{y}c_{ij}^{k}-\boldsymbol{\theta}_{x}\boldsymbol{\theta}_{y}c_{ij}^{k -1}\Big{)}+\frac{2\gamma_{n}}{\kappa}f_{ij}^{n-\frac{1}{2}},\end{split} \tag{4.1}\] for \((i,j)\in\bar{\Lambda}\) and \(1\leq n\leq N\), where \(\gamma_{n}=\frac{\tau_{K}}{2(1+\tau a_{1}^{(n)})}=\mathcal{O}(\tau)\). In order to implement the ADI method, we need to add a small perturbation, which depends on time levels. (I) For \(n=1\), we define \[Q_{ij}^{1}=\gamma_{1}^{2}\boldsymbol{\eta}_{x}\boldsymbol{\eta}_{y}\left(c_{ij }^{1}-c_{ij}^{0}\right)=\frac{\tau^{3}\kappa^{2}}{4\left(1+\tau a_{1}^{(1)} \right)^{2}}\boldsymbol{\eta}_{x}\boldsymbol{\eta}_{y}\boldsymbol{\delta}_{t} c_{ij}^{\frac{1}{2}},\quad(i,j)\in\bar{\Lambda}.\] Adding the term \(Q_{ij}^{1}\) on the left side of (4.1), we can get \[(\boldsymbol{\theta}_{x}-\gamma_{1}\boldsymbol{\eta}_{x})(\boldsymbol{\theta }_{y}-\gamma_{1}\boldsymbol{\eta}_{y})\tilde{c}_{ij}^{1}=(\boldsymbol{\theta} _{x}+\gamma_{1}\boldsymbol{\eta}_{x})(\boldsymbol{\theta}_{y}+\gamma_{1} \boldsymbol{\eta}_{y})c_{ij}^{0}+\frac{2\gamma_{1}}{\kappa}f_{ij}^{\frac{1}{2} },\quad(i,j)\in\bar{\Lambda}. \tag{4.2}\] Therefore, the ADI-QSC-\(L1^{+}\) scheme for \(n=1\) is implemented as following two steps: _Step_ 1. For each \(0\leq j\leq M_{y}+1\), solve the following one-dimensional linear systems in the \(x\) direction for \(\tilde{c}_{ij}^{1,*}\) \[(\boldsymbol{\theta}_{x}-\gamma_{1}\boldsymbol{\eta}_{x})\tilde{c}_{ij}^{1,*}= (\boldsymbol{\theta}_{x}+\gamma_{1}\boldsymbol{\eta}_{x})(\boldsymbol{\theta} _{y}+\gamma_{1}\boldsymbol{\eta}_{y})c_{ij}^{0}+\frac{2\gamma_{1}}{\kappa}f_{ ij}^{\frac{1}{2}},\quad 0\leq i\leq M_{x}+1. \tag{4.3}\] _Step_ 2. For each \(0\leq i\leq M_{x}+1\), solve the following one-dimensional linear systems in the \(y\) direction for \(\tilde{c}_{ij}^{1}\) \[(\boldsymbol{\theta}_{y}-\gamma_{1}\boldsymbol{\eta}_{y})\tilde{c}_{ij}^{1}= \tilde{c}_{ij}^{1,*},\quad 0\leq j\leq M_{y}+1. \tag{4.4}\] (II) For the case \(n\geq 2\), if we add the similar perturbation \(Q_{ij}^{n}=\gamma_{n}^{2}\boldsymbol{\eta}_{x}\boldsymbol{\eta}_{y}\big{(}c_{ ij}^{n}-c_{ij}^{n-1}\big{)}=\mathcal{O}(\tau^{3})\) as \(Q_{ij}^{1}\) in the case (I), which is equivalent to add \(\mathcal{O}(\tau^{2})\) term to the truncation error of the QSC-\(L1^{+}\) scheme. Although adding such a small perturbation will not change the convergence order, some fundamental numerical tests show that the observation error increase obviously. Thus, in this part, we aim to seek for a special small perturbation which has higher order. 
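Before turning to that perturbation, the two-sweep structure of _Step 1_–_Step 2_ in (4.3)–(4.4) can be illustrated with the following Python sketch. It is only a sketch of the sweep pattern: the matrices `Tx`, `Ty` and the right-hand side are random dense stand-ins for the tridiagonal operators \(\mathbf{\theta}_{x}-\gamma_{n}\mathbf{\eta}_{x}\) and \(\mathbf{\theta}_{y}-\gamma_{n}\mathbf{\eta}_{y}\), for which each solve would in practice be a cheap tridiagonal (Thomas-algorithm) solve.

```python
import numpy as np

def adi_sweep(Tx, Ty, rhs):
    """One ADI step for the factored system (Tx acting in x)(Ty acting in y) c = rhs.

    Step 1 solves along x for every index j; Step 2 solves along y for every index i.
    Tx, Ty are dense stand-ins for the banded operators theta - gamma*eta."""
    c_star = np.linalg.solve(Tx, rhs)        # Step 1: Tx @ c_star[:, j] = rhs[:, j] for all j
    c_new = np.linalg.solve(Ty, c_star.T).T  # Step 2: Ty @ c_new[i, :] = c_star[i, :] for all i
    return c_new

# hypothetical sizes and operators, only to illustrate the sweep structure
Mx, My, gamma = 8, 8, 0.01
rng = np.random.default_rng(0)
Tx = np.eye(Mx + 2) + gamma * rng.standard_normal((Mx + 2, Mx + 2))
Ty = np.eye(My + 2) + gamma * rng.standard_normal((My + 2, My + 2))
rhs = rng.standard_normal((Mx + 2, My + 2))
c = adi_sweep(Tx, Ty, rhs)
print(np.allclose(Tx @ c @ Ty.T, rhs))       # the factored operator applied to c reproduces rhs
```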
We consider the term \[Q_{ij}^{n}=\gamma_{n}^{2}\boldsymbol{\eta}_{x}\boldsymbol{\eta}_{y}\left(c_{ij }^{n}-2c_{ij}^{n-1}+c_{ij}^{n-2}\right)=\frac{\tau^{2}\kappa^{2}}{4\left(1+ \tau a_{1}^{(n)}\right)^{2}}\boldsymbol{\eta}_{x}\boldsymbol{\eta}_{y}\left(c _{ij}^{n}-2c_{ij}^{n-1}+c_{ij}^{n-2}\right),\quad(i,j)\in\bar{\Lambda}.\] It can be verified that \[\begin{split}&\gamma_{n}^{2}\frac{\partial^{4}}{\partial x^{2} \partial y^{2}}\left[u(x,y,t_{n})-2u(x,y,t_{n-1})+u(x,y,t_{n-2})\right]\\ &=\frac{\tau^{2}\gamma_{n}^{2}\partial^{6}}{\partial x^{2} \partial y^{2}\partial t^{2}}u(x,y,t_{n-\frac{1}{2}})-\frac{\tau^{3}\gamma_{n} ^{2}}{2}\cdot\frac{\partial^{7}}{\partial x^{2}\partial y^{2}\partial t^{3}}u (x,y,\rho_{2}),\quad\rho_{2}\in(t_{n-1},t_{n})\end{split} \tag{4.5}\] Recalling the regularity assumption of the solution, we have \(\left\|\frac{\partial^{3}}{\partial\alpha^{3}}u(\rho_{2})\right\|_{\mathcal{X}} \leq C_{22}t^{-\alpha(0)-1}\) for \(t_{2}\leq t\leq t_{N}\). We can see that both the two terms on the right hand side of (4.5) is equivalent to \(\mathcal{O}\!\left(\tau^{4-\alpha(0)}\right)\) when \(t\) is near the initial time point. Adding the term \(Q_{ij}^{n}\) on the left side of (4.1), we can get for \(n\geq 2\) that \[\begin{split}&\left(\mathbf{\theta}_{x}-\gamma_{n}\mathbf{\eta}_{x}\right) \left(\mathbf{\theta}_{y}-\gamma_{n}\mathbf{\eta}_{y}\right)\tilde{c}_{ij}^{n}\\ &=\mathbf{\theta}_{x}\mathbf{\theta}_{y}\tilde{c}_{ij}^{n-1}+\gamma_{n} \left(\mathbf{\eta}_{x}\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{\theta}_{x}\right)\tilde{c }_{ij}^{n-1}+2\gamma_{n}^{2}\mathbf{\eta}_{x}\mathbf{\eta}_{y}\tilde{c}_{ij}^{n-1}- \gamma_{n}^{2}\mathbf{\eta}_{x}\mathbf{\eta}_{y}\tilde{c}_{ij}^{n-2}\\ &\quad-\frac{2\gamma_{n}}{\kappa}\sum_{k=1}^{n-1}a_{n-k+1}^{(n)} \left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}\tilde{c}_{ij}^{k}-\mathbf{\theta}_{x}\mathbf{ \theta}_{y}\tilde{c}_{ij}^{k-1}\right)+\frac{2\gamma_{n}}{\kappa}f_{ij}^{n- \frac{1}{2}},\quad(i,j)\in\bar{\Lambda}.\end{split} \tag{4.6}\] Compared with (4.1), scheme (4.6) has two extra terms, which is of the order \(\mathcal{O}\!\left(\tau^{3-\alpha(0)}\right)\) near the initial time. The implementation of the ADI-QSC-\(L1^{+}\) scheme for \(n\geq 2\) is similar to the case (I). Some numerical experiments show that the ADI-QSC-\(L1^{+}\) scheme (4.6) preserves almost the same observation error as the QSC-\(L1^{+}\) scheme, but the CPU time is effectively reduced in the ADI framework. Next, we present the stability and convergence of the ADI-QSC-\(L1^{+}\) scheme (4.2) and (4.6). **Theorem 4.1**.: _The ADI-QSC-\(L1^{+}\) scheme (4.2) and (4.6) are unconditionally stable. Moreover, we denote by \(u^{n}=\left\{u^{n}(\xi_{i}^{x},\xi_{j}^{y}),\ (i,j)\in\bar{\Lambda}\right\}\) the true solution of the problem (2.1)-(2.3) and \(u_{h}^{n}=\left\{u_{h}^{n}(\xi_{i}^{x},\xi_{j}^{y}),\ (i,j)\in\bar{\Lambda}\right\}\) the numerical solution by the ADI-QSC-\(L1^{+}\) scheme (4.2) and (4.6) at the collocation points. Then, there exist a constant \(C_{23}\) such that_ \[\left\|u^{n}-u_{h}^{n}\right\|\leq C_{23}\left(\tau^{\min\left\{3-\alpha^{*}- \alpha(0),2\right\}}+\Delta x^{2}+\Delta y^{2}\right),\quad n=1,2,\ldots,N.\] **Proof.** The stability of the ADI-QSC-\(L1^{+}\) scheme can be proved by a similar routine in the proof of Theorem 3.1. 
In fact, the ADI-QSC-\(L1^{+}\) scheme can be expressed in the equivalent form \[b_{1}^{(1)}\!\left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{1}-\mathbf{\theta}_{x}\bm {\theta}_{y}c_{ij}^{0}\right)+\frac{\tau^{3}\kappa^{2}}{4b_{1}^{(1)}}\mathbf{\eta} _{x}\mathbf{\eta}_{y}\delta_{t}c_{ij}^{\frac{1}{2}}=\tau\kappa\!\left(\mathbf{\eta}_{x }\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{\theta}_{x}\right)\!c_{ij}^{\frac{1}{2}}+ \tau f_{ij}^{\frac{1}{2}} \tag{4.7}\] for \(n=1\), and \[\sum_{k=1}^{n}b_{n-k+1}^{(n)}\!\left(\mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{k}- \mathbf{\theta}_{x}\mathbf{\theta}_{y}c_{ij}^{k-1}\right)+\frac{\tau^{2}\kappa^{2}}{4b _{1}^{(n)}}\mathbf{\eta}_{x}\mathbf{\eta}_{y}\!\left(c_{ij}^{n}-2c_{ij}^{n-1}+c_{ij}^{ n-2}\right)=\tau\kappa\!\left(\mathbf{\eta}_{x}\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{ \theta}_{x}\right)\!c_{ij}^{n-\frac{1}{2}}+\tau f_{ij}^{n-\frac{1}{2}} \tag{4.8}\] for \(n\geq 2\). Next, we give further discussion based on the different value of \(n\). (I) For \(n=1\), equation (4.7) has the same form as equation (3.2) with \(n=1\), excepting the second term on the left hand side of (4.7). We use the technique in Lemma 3.8 to deal with this term as \[\begin{split}\left(\mathbf{\eta}_{x}\mathbf{\eta}_{y}\delta_{t}c^{\frac{1 }{2}},\mathbf{\theta}_{x}\mathbf{\theta}_{y}c^{1}\right)=&\!\left(\mathbf{ \delta}_{t}\mathbf{\vartheta}_{x}\mathbf{\vartheta}_{y}c^{\frac{1}{2}},\mathbf{\theta}_{x} \mathbf{\theta}_{y}\mathbf{\vartheta}_{x}\mathbf{\vartheta}_{y}c^{1}\right)\\ =&\frac{1}{\tau}\!\left(\mathbf{\zeta}_{x}\mathbf{\zeta}_{y} \mathbf{\vartheta}_{x}\mathbf{\vartheta}_{y}c^{1},\mathbf{\zeta}_{x}\mathbf{\zeta}_{y}\mathbf{ \vartheta}_{x}\mathbf{\vartheta}_{y}c^{1}\right)-\frac{1}{\tau}\!\left(\mathbf{\zeta}_{ x}\mathbf{\zeta}_{y}\mathbf{\vartheta}_{x}\mathbf{\vartheta}_{y}c^{0},\mathbf{\zeta}_{x}\mathbf{\zeta}_{y} \mathbf{\vartheta}_{x}\mathbf{\vartheta}_{y}c^{1}\right)\!,\end{split}\] where \(\mathbf{\zeta}_{x}^{2}=\mathbf{\theta}_{x}\) and \(\mathbf{\zeta}_{y}^{2}=\mathbf{\theta}_{y}\). 
Then using the relation \(2ab=a^{2}+b^{2}-(a-b)^{2}\), we can get the following estimate \[\left(\eta_{x}\eta_{y}\delta_{t}c^{\frac{1}{2}},\theta_{x}\theta_{y}c ^{1}\right)\] \[=\frac{1}{\tau}\left\|\zeta_{x}\zeta_{y}\boldsymbol{\vartheta}_{x} \boldsymbol{\vartheta}_{y}c^{1}\right\|^{2}+\frac{1}{2\tau}\left[\left\|\zeta_{ x}\zeta_{y}\boldsymbol{\vartheta}_{x}\boldsymbol{\vartheta}_{y}c^{1}-\zeta_{x} \zeta_{y}\boldsymbol{\vartheta}_{x}\boldsymbol{\vartheta}_{y}c^{0}\right\|^{2 }-\left\|\zeta_{x}\zeta_{y}\boldsymbol{\vartheta}_{x}\boldsymbol{\vartheta}_{y}c ^{1}\right\|^{2}-\left\|\zeta_{x}\zeta_{y}\boldsymbol{\vartheta}_{x} \boldsymbol{\vartheta}_{y}c^{0}\right\|^{2}\right]\] \[\geq\frac{1}{2\tau}\left\|\zeta_{x}\zeta_{y}\boldsymbol{\vartheta} _{x}\boldsymbol{\vartheta}_{y}c^{1}\right\|^{2}-\frac{1}{2\tau}\left\|\zeta_{ x}\zeta_{y}\boldsymbol{\vartheta}_{x}\boldsymbol{\vartheta}_{y}c^{0}\right\|^{2}.\] Then, we use the similar technique in Theorem 3.1 and Theorem 3.2 to get \[\left\|u^{1}-u_{h}^{1}\right\|^{2}\leq C_{24}\left(\tau^{\min\left\{3-\alpha^ {*}-\alpha(0),2\right\}}+\Delta x^{2}+\Delta y^{2}\right)^{2}.\] (II) For \(n\geq 2\), we can rewrite (4.8) as \[\sum_{k=1}^{n}b_{n-k+1}^{(n)}\left(\theta_{x}\theta_{y}c_{ij}^{k}-\theta_{x} \theta_{y}c_{ij}^{k-1}\right)=\tau\kappa\left(\eta_{x}\boldsymbol{\theta}_{y}+ \boldsymbol{\eta}_{y}\boldsymbol{\theta}_{x}\right)c_{ij}^{n-\frac{1}{2}}+ \tau s_{ij}^{n-\frac{1}{2}}, \tag{4.9}\] where \[s_{ij}^{n-\frac{1}{2}}=f_{ij}^{n-\frac{1}{2}}-\frac{\tau\kappa^{2}}{4b_{1}^{(n )}}\eta_{x}\eta_{y}\left(c_{ij}^{n}-2c_{ij}^{n-1}+c_{ij}^{n-2}\right). \tag{4.10}\] Recalling that the second term on the right hand of (4.10) is of the order \(\mathcal{O}(\tau^{3-\alpha(0)})\), we use the similar routine in Theorem 3.1 and Theorem 3.2 to get \[\left\|u^{n}-u_{h}^{n}\right\|^{2}\leq C_{25}\tau\sum_{k=1}^{n}\left\|s^{k- \frac{1}{2}}\right\|_{c}^{2}\leq C_{26}\left(\tau^{\min\left\{3-\alpha^{*}- \alpha(0),2\right\}}+\Delta x^{2}+\Delta y^{2}\right)^{2}.\] Combining the case (I) and (II), the proof of theorem is completed. \(\blacksquare\) ## 5 Acceleration techniques Computational efficiency of the numerical schemes for FPDEs has always been concerned. In this section, we consider two kinds of techniques to accelerate the implementation of numerical schemes. One is the fast computation based on the ESA technique along the time direction, the other is the optimal QSC method from the view of space direction. ### Fast computation in time direction We can see that the time discretization of Caputo fractional differential operator involves the numerical solutions at all previous time levels, the computation is extremely expensive for long-time simulations. In order to reduce the computational cost, we employ ESA technique to accelerate the evaluation of the \(L1^{+}\) scheme for variable-order FPDEs. The main purpose is to approximate the singular kernel \(t^{-\beta}\) of the Caputo fractional differential operator on the interval \([\tau,T]\) efficiently. 
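The flavor of this kernel approximation can be seen in the following Python sketch, which approximates \(x^{-\alpha}\) on \([\tau,T]\) by a sum of exponentials obtained from the exponential change of variable \(u=e^{rh}\) in \(\Gamma(\alpha)x^{-\alpha}=\int_{0}^{\infty}u^{\alpha-1}e^{-xu}\,du\). The step \(h\) and the truncation range used here are illustrative choices, not the constants prescribed in Lemma 5.1 below.

```python
import numpy as np
from math import gamma

def soe_kernel(alpha, x, h=0.1, r_min=-300, r_max=120):
    """Sum-of-exponentials approximation x**(-alpha) ~ sum_r w_r * exp(-lambda_r * x),
    with lambda_r = exp(r*h) and w_r = h*exp(alpha*r*h)/Gamma(alpha)."""
    r = np.arange(r_min, r_max + 1)
    lam = np.exp(r * h)                           # quadrature exponents
    w = h * np.exp(alpha * r * h) / gamma(alpha)  # quadrature weights
    return np.exp(-np.outer(x, lam)) @ w

alpha, tau, T = 0.5, 1e-3, 1.0
x = np.linspace(tau, T, 2000)
rel_err = np.abs(soe_kernel(alpha, x) - x ** (-alpha)) * x ** alpha
print(rel_err.max())   # small uniform relative error on [tau, T]; it improves as h decreases
```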
**Lemma 5.1** ([41]).: _At any time instant \(t_{n}\), for \(\tilde{\alpha}_{n}\in[\alpha_{*},\alpha^{*}]\subset(0,1)\), \(t\in[t_{n-1},t_{n}]\), \(s\in[0,t_{n-2}]\) and the expected accuracy \(0<\epsilon\leq 1/e\), if we choose constants \(h\), \(\overline{N}\) and \(\underline{N}\) as_
\[h=\frac{2\pi}{\log 3+\alpha^{*}\log(\cos 1)^{-1}+\log\epsilon^{-1}},\quad\underline{N}=\left\lceil\frac{1}{h}\frac{1}{\alpha_{*}}\left(\log\epsilon+\log\Gamma(1+\alpha^{*})\right)\right\rceil,\]
\[\overline{N}=\left\lfloor\frac{1}{h}\left(\log\frac{T}{\Delta t}+\log\log\epsilon^{-1}+\log\alpha_{*}+2^{-1}\right)\right\rfloor,\]
_then the quantity \(\left(\frac{t-s}{T}\right)^{-\tilde{\alpha}_{n}}\) can be approximated by_
\[\left|\left(\frac{t-s}{T}\right)^{-\tilde{\alpha}_{n}}-\sum_{r=\underline{N}+1}^{\overline{N}}\varpi^{(n,r)}e^{-\lambda^{(r)}\frac{t-s}{T}}\right|\leq\left(\frac{t-s}{T}\right)^{-\tilde{\alpha}_{n}}\epsilon,\]
_where the quadrature exponents and weights are given by_
\[\lambda^{(r)}=e^{rh},\quad\varpi^{(n,r)}=\frac{he^{\tilde{\alpha}_{n}rh}}{\Gamma\left(\tilde{\alpha}_{n}\right)}.\]
Now for any \(v\in\mathfrak{T}\), the non-local term \(\bar{\delta}_{t}^{\tilde{\alpha}_{n}}v(t_{n-\frac{1}{2}})\) with \(n\geq 3\) defined in (2.11) for the QSC-\(L1^{+}\) scheme can be decomposed as
\[\begin{split}\bar{\delta}_{t}^{\tilde{\alpha}_{n}}v\left(t_{n-\frac{1}{2}}\right)&=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\int_{0}^{t_{n-2}}\partial_{t}\Pi v(s)\;\omega_{1-\tilde{\alpha}_{n}}(t-s)dsdt+\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\int_{t_{n-2}}^{t}\partial_{t}\Pi v(s)\;\omega_{1-\tilde{\alpha}_{n}}(t-s)dsdt\\ &:=I_{\tau}^{t_{0},t_{n-2}}\left(t_{n-\frac{1}{2}}\right)+I_{\tau}^{t_{n-2},t}\left(t_{n-\frac{1}{2}}\right).\end{split} \tag{5.1}\]
First, for the local term in (5.1), it can be computed directly as
\[I_{\tau}^{t_{n-2},t}\left(t_{n-\frac{1}{2}}\right)=\sum_{k=n-1}^{n}a_{n-k+1}^{(n)}\left(v^{k}-v^{k-1}\right), \tag{5.2}\]
where the coefficients \(a_{1}^{(n)}\) and \(a_{2}^{(n)}\) can be found in (2.12). Second, based on the definition of \(\omega_{1-\beta}(t)\) for the singular kernel, we get the non-local term as
\[I_{\tau}^{t_{0},t_{n-2}}\left(t_{n-\frac{1}{2}}\right)=\frac{T^{-\tilde{\alpha}_{n}}}{\tau\Gamma(1-\tilde{\alpha}_{n})}\int_{t_{n-1}}^{t_{n}}\int_{0}^{t_{n-2}}\partial_{t}\Pi v(s)\left(\frac{t-s}{T}\right)^{-\tilde{\alpha}_{n}}dsdt.\]
According to Lemma 5.1, the term \(\left(\frac{t-s}{T}\right)^{-\tilde{\alpha}_{n}}\) in the integral can be approximated.
Then we have
\[\begin{split} I_{\tau}^{t_{0},t_{n-2}}\left(t_{n-\frac{1}{2}}\right)&\approx\frac{T^{-\tilde{\alpha}_{n}}}{\tau\Gamma(1-\tilde{\alpha}_{n})}\int_{t_{n-1}}^{t_{n}}\int_{0}^{t_{n-2}}\partial_{t}\Pi v(s)\sum_{r=\underline{N}+1}^{\overline{N}}\varpi^{(n,r)}e^{-\lambda^{(r)}\frac{t-s}{T}}dsdt\\ &=\frac{T^{-\tilde{\alpha}_{n}}}{\tau\Gamma(1-\tilde{\alpha}_{n})}\int_{t_{n-1}}^{t_{n}}\int_{0}^{t_{n-2}}\partial_{t}\Pi v(s)\sum_{r=\underline{N}+1}^{\overline{N}}\varpi^{(n,r)}e^{-\lambda^{(r)}\frac{t-t_{n-2}}{T}}e^{-\lambda^{(r)}\frac{t_{n-2}-s}{T}}dsdt\\ &:=\frac{T^{-\tilde{\alpha}_{n}}}{\tau\Gamma(1-\tilde{\alpha}_{n})}\sum_{r=\underline{N}+1}^{\overline{N}}\varpi^{(n,r)}b^{(n,r)}V^{(n,r)},\end{split} \tag{5.3}\]
where
\[b^{(n,r)}=\int_{t_{n-1}}^{t_{n}}e^{-\lambda^{(r)}\frac{t-t_{n-2}}{T}}dt,\quad V^{(n,r)}=\int_{0}^{t_{n-2}}\partial_{t}\Pi v(s)e^{-\lambda^{(r)}\frac{t_{n-2}-s}{T}}ds,\quad r=\underline{N}+1,\underline{N}+2,\cdots,\overline{N}.\]
We note that \(V^{(2,r)}=0\) and that \(V^{(n,r)}\) can be computed recursively by
\[\begin{split} V^{(n,r)}&=\int_{0}^{t_{n-3}}\partial_{t}\Pi v(s)e^{-\lambda^{(r)}\frac{t_{n-2}-s}{T}}ds+\int_{t_{n-3}}^{t_{n-2}}\partial_{t}\Pi v(s)e^{-\lambda^{(r)}\frac{t_{n-2}-s}{T}}ds\\ &=e^{-\lambda^{(r)}\frac{\tau}{T}}V^{(n-1,r)}+\frac{T}{\lambda^{(r)}\tau}\left(1-e^{-\lambda^{(r)}\frac{\tau}{T}}\right)\left(v^{n-2}-v^{n-3}\right).\end{split} \tag{5.4}\]
Substituting (5.2) and (5.3) into (5.1), we obtain the following fast computational version of the \(L1^{+}\) formula
\[\bar{\partial}_{t}^{\tilde{\alpha}_{n}}v\left(t_{n-\frac{1}{2}}\right)=\sum_{k=n-1}^{n}a_{n-k+1}^{(n)}\left(v^{k}-v^{k-1}\right)+\frac{T^{-\tilde{\alpha}_{n}}}{\tau\Gamma(1-\tilde{\alpha}_{n})}\sum_{r=\underline{N}+1}^{\overline{N}}\varpi^{(n,r)}b^{(n,r)}V^{(n,r)}. \tag{5.5}\]
We use the fast evaluation (5.5) of the variable-order fractional operator in place of the \(L1^{+}\) formula in the ADI-QSC-\(L1^{+}\) scheme (4.6), which results in an improved numerical scheme, called the ADI-QSC-F\(L1^{+}\) scheme. In fact, when \(n=1\) and \(2\), we still employ (4.2) and (4.6) to simulate model (2.1). When \(n\geq 3\), the ADI-QSC-F\(L1^{+}\) scheme can be implemented as follows: _Step 1_. For each \(0\leq j\leq M_{y}+1\), we solve the following one-dimensional linear systems in the \(x\) direction for \(\tilde{c}_{ij}^{n,*}\)
\[\begin{split}&\left(\mathbf{\theta}_{x}-\gamma_{n}\mathbf{\eta}_{x}\right)\tilde{c}_{ij}^{n,*}\\ =&\left(1-\frac{b_{2}^{(n)}}{b_{1}^{(n)}}\right)\mathbf{\theta}_{x}\mathbf{\theta}_{y}\tilde{c}_{ij}^{n-1}+\frac{b_{2}^{(n)}}{b_{1}^{(n)}}\mathbf{\theta}_{x}\mathbf{\theta}_{y}\tilde{c}_{ij}^{n-2}+\gamma_{n}(\mathbf{\eta}_{x}\mathbf{\theta}_{y}+\mathbf{\eta}_{y}\mathbf{\theta}_{x})\tilde{c}_{ij}^{n-1}+2\gamma_{n}^{2}\mathbf{\eta}_{x}\mathbf{\eta}_{y}\tilde{c}_{ij}^{n-1}-\gamma_{n}^{2}\mathbf{\eta}_{x}\mathbf{\eta}_{y}\tilde{c}_{ij}^{n-2}\\ &-\frac{2\gamma_{n}T^{-\tilde{\alpha}_{n}}}{\tau\kappa\Gamma(1-\tilde{\alpha}_{n})}\sum_{r=\underline{N}+1}^{\overline{N}}\varpi^{(n,r)}b^{(n,r)}\mathbf{\theta}_{x}\mathbf{\theta}_{y}\tilde{V}_{ij}^{(n,r)}+\frac{2\gamma_{n}}{\kappa}f_{ij}^{n-\frac{1}{2}},\quad 0\leq i\leq M_{x}+1,\end{split} \tag{5.6}\]
where
\[\tilde{V}^{(n,r)}=e^{-\lambda^{(r)}\frac{\tau}{T}}\tilde{V}^{(n-1,r)}+\frac{T}{\lambda^{(r)}\tau}\left(1-e^{-\lambda^{(r)}\frac{\tau}{T}}\right)\left(c^{n-2}-c^{n-3}\right),\]
which can be computed similarly to (5.4). _Step 2_.
For each \(0\leq i\leq M_{x}+1\), solving the following one-dimensional linear systems in the \(y\) direction for \(\tilde{c}_{ij}^{n}\) \[(\mathbf{\theta}_{y}-\gamma_{n}\mathbf{\eta}_{y})\tilde{c}_{ij}^{n}=\tilde{c}_{ij}^{n,* },\quad 0\leq j\leq M_{y}+1. \tag{5.7}\] With the expected accuracy \(\epsilon\leq\mathcal{O}(\tau^{\min\{3-\alpha^{*}-\alpha(0),2\}})\), the fast scheme requires \(\mathcal{O}(n\log^{2}n)\) computational cost to approximate the variable-order Caputo fractional derivative. Therefore, for the ADI-QSC-F\(L1^{+}\) scheme (5.6)-(5.7) for model (2.1), the computational cost is reduced from \(\mathcal{O}(M_{x}M_{y}N^{2})\) to \(\mathcal{O}(M_{x}M_{y}N\log^{2}N)\). Moreover, with the fast evaluation scheme (5.5), the storage requirement is reduced from for \(\mathcal{O}(M_{x}M_{y}N)\) of the ADI-QSC-\(L1^{+}\) scheme to \(\mathcal{O}(M_{x}M_{y}\log^{2}N)\) for the ADI-QSC-F\(L1^{+}\) scheme. ### Optimal QSC method In this subsection, we consider the acceleration in space domain. In fact, the standard QSC method can be improved by introducing high order perturbations, which leads to the optimal QSC method with fourth-order spatial convergence order. Therefore, we assume the solution of model (2.1)-(2.3) satisfies \(u(x,y,\cdot)\in C^{6}(\bar{\Omega})\). Therefore, we introduce the perturbation \(\mathcal{P}_{Sx}\) as \[\mathcal{P}_{Sx}\tilde{c}_{ij}=\frac{1}{24\Delta x^{2}}\begin{cases}0,&i=0,\\ -11\tilde{c}_{1j}+16\tilde{c}_{2j}-14\tilde{c}_{3j}+6\tilde{c}_{4j}-\tilde{c} _{5j},&i=1,\\ -5\tilde{c}_{1j}+6\tilde{c}_{2j}-4\tilde{c}_{3j}+\tilde{c}_{4j},&i=2,\\ \tilde{c}_{i-2,j}-4\tilde{c}_{i-1,j}+6\tilde{c}_{ij}-4\tilde{c}_{i+1,j}+\tilde{ c}_{i+2,j},&i=3,\ldots,M_{x}-2,\\ -5\tilde{c}_{M_{x},j}+6\tilde{c}_{M_{x}-1,j}-4\tilde{c}_{M_{x}-2,j}+\tilde{c} _{M_{x}-3,j},&i=M_{x}-1,\\ -11\tilde{c}_{M_{x},j}+16\tilde{c}_{M_{x}-1,j}-14\tilde{c}_{M_{x}-2,j}+6\tilde {c}_{M_{x}-3,j}-\tilde{c}_{M_{x}-4,j},&i=M_{x},\\ 0,&i=M_{x}+1,\end{cases}\] and the perturbation \(\mathcal{P}_{Sy}\) can be defined similarly. Specially, the derivation of the expressions of \(\mathcal{P}_{Sx}\) and \(\mathcal{P}_{Sy}\) can be found in [4; 10] for detail. We only need to take the perturbations \(\mathcal{P}_{Sx}\) and \(\mathcal{P}_{Sy}\) together with \(\mathbf{\eta}_{x}\) and \(\mathbf{\eta}_{y}\), respectively, for improving the accuracy of spatial approximation. Taking the optimal QSC framework into the ADI-QSC-\(L1^{+}\) scheme, we can get the optimal ADI-QSC-\(L1^{+}\) scheme. Without loss of generality, we consider the optimal ADI-QSC-\(FL1^{+}\) scheme with fast computation directly as follows. 
(I) For \(n=1\),
\[\begin{split}&\Big{[}\mathbf{\theta}_{x}-\gamma_{1}(\mathbf{\eta}_{x}+\mathcal{P}_{Sx})\Big{]}\Big{[}\mathbf{\theta}_{y}-\gamma_{1}(\mathbf{\eta}_{y}+\mathcal{P}_{Sy})\Big{]}\tilde{c}_{ij}^{1}\\ &=\Big{[}\mathbf{\theta}_{x}+\gamma_{1}(\mathbf{\eta}_{x}+\mathcal{P}_{Sx})\Big{]}\Big{[}\mathbf{\theta}_{y}+\gamma_{1}(\mathbf{\eta}_{y}+\mathcal{P}_{Sy})\Big{]}c_{ij}^{0}+\frac{2\gamma_{1}}{\kappa}f_{ij}^{\frac{1}{2}},\quad(i,j)\in\bar{\Lambda}.\end{split} \tag{5.8}\]
(II) For \(n=2\), the scheme takes the same form as (4.6), with \(\mathbf{\eta}_{x}\) and \(\mathbf{\eta}_{y}\) replaced by \(\mathbf{\eta}_{x}+\mathcal{P}_{Sx}\) and \(\mathbf{\eta}_{y}+\mathcal{P}_{Sy}\), respectively.
(III) For \(n\geq 3\),
\[\begin{split}&\Big{[}\mathbf{\theta}_{x}-\gamma_{n}(\mathbf{\eta}_{x}+\mathcal{P}_{Sx})\Big{]}\Big{[}\mathbf{\theta}_{y}-\gamma_{n}(\mathbf{\eta}_{y}+\mathcal{P}_{Sy})\Big{]}\tilde{c}_{ij}^{n}\\ &=\theta_{x}\mathbf{\theta}_{y}\tilde{c}_{ij}^{n-1}+\gamma_{n}\Big{[}(\mathbf{\eta}_{x}+\mathcal{P}_{Sx})\mathbf{\theta}_{y}+(\mathbf{\eta}_{y}+\mathcal{P}_{Sy})\mathbf{\theta}_{x}\Big{]}\tilde{c}_{ij}^{n-1}\\ &\quad+2\gamma_{n}^{2}(\mathbf{\eta}_{x}+\mathcal{P}_{Sx})(\mathbf{\eta}_{y}+\mathcal{P}_{Sy})\tilde{c}_{ij}^{n-1}-\gamma_{n}^{2}(\mathbf{\eta}_{x}+\mathcal{P}_{Sx})(\mathbf{\eta}_{y}+\mathcal{P}_{Sy})\tilde{c}_{ij}^{n-2}\\ &\quad-\frac{2\gamma_{n}T^{-\tilde{\alpha}_{n}}}{\tau\kappa\Gamma(1-\tilde{\alpha}_{n})}\sum_{r=\underline{N}+1}^{\overline{N}}\varpi^{(n,r)}b^{(n,r)}\mathbf{\theta}_{x}\mathbf{\theta}_{y}\tilde{V}_{ij}^{(n,r)}+\frac{2\gamma_{n}}{\kappa}f_{ij}^{n-\frac{1}{2}},\ (i,j)\in\bar{\Lambda}.\end{split} \tag{5.10}\]
The optimal ADI-QSC-F\(L1^{+}\) scheme can achieve fourth-order accuracy in space, which means that a desired accuracy can be reached with far fewer mesh grid points.

## 6 Numerical Experiments

In this section, we present numerical experiments to support the accuracy and efficiency of the schemes developed in this paper. All schemes are programmed in Matlab R2018b, and implemented on a Windows server with Intel(R) Xeon(R) E5-2650 CPU @ 2.30 GHz. We consider model (2.1)-(2.3) in the space domain \(\Omega=(0,1)\times(0,1)\) and time interval \([0,1]\), and choose four different variable time fractional orders
\[\alpha_{0}(t)=0.45-0.3t,\quad\alpha_{1}(t)=0.4+0.5(1-t)-\frac{1}{4\pi}\left[\sin(2\pi(1-t))\right],\]
\[\alpha_{2}(t)=0.8-0.5(1-t),\quad\alpha_{3}(t)=\left|3(t-0.5)^{2}-0.2\right|+0.3.\]
**Example 6.1.** We choose the diffusivity coefficient \(\kappa=1\), the initial data and the source function as
\[u^{0}(x,y)=\sin x\sin y,\quad f\left(x,y,t\right)=0.\]
The true solution of model (2.1)-(2.3) is unknown. We first fix the values of \(M_{x}=M_{y}\) large enough to investigate the temporal errors and convergence orders. We compute numerical solutions \(u_{h}^{n}(x,y)\) on a coarse time mesh with size \(\tau\), and then refine the time mesh to size \(\tau/2\). The resulting errors in the discrete \(L_{2}\)-norm at time \(t_{n}\) can be calculated on the coarse mesh as
\[Err^{2}:=\Delta x\Delta y\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}\left|u_{h}^{n}(\xi_{i}^{x},\xi_{j}^{y})-u_{h}^{2n}(\xi_{i}^{x},\xi_{j}^{y})\right|^{2}.\]
We show the observed errors and the temporal convergence orders at a time instant near the initial time and at the final time \(t=T\), respectively, in Table 1 and Table 2. We can see that, near the initial time, the fractional index \(\alpha_{0}(t)\) satisfies the assumptions in Corollary 3.1 and the local temporal convergence order is preserved.
But the fractional indices \(\alpha_{1}(t)\) and \(\alpha_{3}(t)\) do not satisfy the assumptions, and the local temporal convergence orders are about \(O(\tau^{3-\alpha^{*}-\alpha(0)})\). On the other hand, the singularity near the initial time hardly affects the convergence order at the final time point, which reaches a satisfactory second order. Next, we choose the value of \(N\) large enough. Using a similar routine with coarse and fine space meshes, we obtain the errors and convergence orders in Table 3, which fit the theoretical spatial convergence orders well. **Example 6.2.** We choose the diffusivity coefficient \(\kappa=1\), the initial data and the source function as
\[u^{0}(x,y)=\sin(\pi x)\sin(\pi y),\]
\[f\left(x,y,t\right)=\left[3t^{2}+\frac{\Gamma(4)}{\Gamma(4-\alpha(t))}t^{3-\alpha(t)}+2\pi^{2}\left(1+t^{3}\right)\right]\sin(\pi x)\sin(\pi y),\]
such that the true solution of model (2.1)-(2.3) is
\[u(x,y,t)=\left(1+t^{3}\right)\sin(\pi x)\sin(\pi y).\]
The error is measured in the discrete \(L_{2}\)-norm as
\[Err^{2}:=\Delta x\Delta y\sum_{i=0}^{M_{x}+1}\sum_{j=0}^{M_{y}+1}\left|u_{h}^{n}(\xi_{i}^{x},\xi_{j}^{y})-u^{n}(\xi_{i}^{x},\xi_{j}^{y})\right|^{2}.\]
We first fix \(N=2^{11}\); the observed errors and spatial convergence orders of the QSC-\(L1^{+}\) scheme, the ADI-QSC-\(L1^{+}\) scheme and the ADI-QSC-\(F\!L1^{+}\) scheme are shown in Table 4. Then, we fix \(M_{x}=M_{y}=2^{11}\), and the temporal convergence orders are shown in Table 5. It can be seen that all three schemes have second-order convergence in both space and time, which confirms the theoretical convergence orders in Remark 3.3. Due to its higher convergence order, the optimal ADI-QSC-\(L1^{+}\) scheme is considered separately. We fix \(N=2^{17}\) to observe the spatial convergence orders in Table 6, and fix \(M_{x}=M_{y}=2^{5}\) to get the temporal convergence orders in Table 7. The results show that the optimal ADI-QSC-\(L1^{+}\) scheme has fourth-order convergence in space, which is consistent with the theoretical results.
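The "Order" columns reported in the tables are the usual estimated orders, \(\log_{2}\) of the ratio of errors on successively halved meshes. The short Python sketch below reproduces this computation and the predicted temporal order of Theorem 3.2; the sample error values are quoted from Table 5 purely as an illustration.

```python
import numpy as np

def observed_orders(errors):
    """Estimated convergence orders log2(E_coarse / E_fine) for errors on successively halved meshes."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

def predicted_temporal_order(alpha_star, alpha_at_0):
    """Temporal order min{3 - alpha^* - alpha(0), 2} predicted by Theorem 3.2."""
    return min(3.0 - alpha_star - alpha_at_0, 2.0)

# QSC-L1+ errors for alpha_1(t) taken from Table 5 (tau = 2^-4, ..., 2^-7)
print(observed_orders([1.42e-3, 3.54e-4, 8.86e-5, 2.22e-5]))  # approximately [2.00, 2.00, 2.00]
# alpha_0(t) = 0.45 - 0.3 t is decreasing, so alpha^* = alpha(0) = 0.45 and Corollary 3.1 applies
print(predicted_temporal_order(0.45, 0.45))                    # 2.0
```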
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{QSC-\(L1^{+}\)} & \multicolumn{3}{c}{ADI-QSC-\(L1^{+}\)} & ADI-QSC-\(FL1^{+}\) \\ \hline & \(\Delta x=\Delta y\) & \(Err\) & \(Order\) & \(Err\) & \(Order\) & \(Err\) & \(Order\) \\ \hline \hline \(2^{-4}\) & 1.42e-03 & — & 1.42e-03 & — & 1.42e-03 & — \\ \(\alpha_{1}(t)\) & \(2^{-5}\) & 3.55e-04 & 2.00 & 3.55e-04 & 2.00 & 3.55e-04 & 2.00 \\ & \(2^{-6}\) & 8.89e-05 & 1.99 & 8.89e-05 & 1.99 & 8.88e-05 & 1.99 \\ & \(2^{-7}\) & 2.23e-05 & 1.99 & 2.23e-05 & 1.99 & 2.22e-05 & 2.00 \\ \hline \(\alpha_{2}(t)\) & \(2^{-4}\) & 1.41e-03 & — & 1.41e-03 & — & 1.41e-03 & — \\ & \(2^{-5}\) & 3.53e-04 & 1.99 & 3.53e-04 & 1.99 & 3.53e-04 & 1.99 \\ & \(2^{-6}\) & 8.84e-05 & 1.99 & 8.84e-05 & 1.99 & 8.84e-05 & 1.99 \\ & \(2^{-7}\) & 2.22e-05 & 1.99 & 2.22e-05 & 1.99 & 2.22e-05 & 1.99 \\ \hline \hline & \multicolumn{3}{c}{\(\approx 2.00\)} & \multicolumn{3}{c}{\(\approx 2.00\)} & \multicolumn{3}{c}{\(\approx 2.00\)} \\ \hline \end{tabular} \end{table} Table 4: Errors and spatial convergence orders of proposed schemes for Example 6.2, with \(N=2^{11}\) \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{QSC-\(L1^{+}\)} & \multicolumn{3}{c}{ADI-QSC-\(L1^{+}\)} & ADI-QSC-\(FL1^{+}\) \\ \hline & \(\tau\) & \(Err\) & \(Order\) & \(Err\) & \(Order\) & \(Err\) & \(Order\) \\ \hline \hline & \(2^{-4}\) & 1.42e-03 & — & 2.11e-03 & — & 2.02e-03 & — \\ \(\alpha_{1}(t)\) & \(2^{-5}\) & 3.54e-04 & 2.00 & 4.46e-04 & 2.24 & 4.26e-04 & 2.24 \\ & \(2^{-6}\) & 8.86e-05 & 1.99 & 1.00e-04 & 2.15 & 9.65e-05 & 2.14 \\ & \(2^{-7}\) & 2.22e-05 & 1.99 & 2.37e-05 & 2.07 & 2.28e-05 & 2.08 \\ \hline \(\alpha_{2}(t)\) & \(2^{-4}\) & 1.40e-03 & — & 1.95e-03 & — & 1.94e-03 & — \\ & \(2^{-5}\) & 3.51e-04 & 1.99 & 4.24e-04 & 2.20 & 4.22e-04 & 2.20 \\ & \(2^{-6}\) & 8.80e-05 & 1.99 & 9.76e-05 & 2.11 & 9.73e-05 & 2.11 \\ & \(2^{-7}\) & 2.21e-05 & 1.99 & 2.33e-05 & 2.06 & 2.33e-05 & 2.06 \\ \hline & \multicolumn{3}{c}{\(\approx 2.00\)} & \multicolumn{3}{c}{\(\approx 2.00\)} & \multicolumn{3}{c}{\(\approx 2.00\)} \\ \hline \end{tabular} \end{table} Table 5: Errors and temporal convergence orders of proposed schemes for Example 6.2, with \(M_{x}=M_{y}=2^{11}\) ADI-QSC-\(FL1^{+}\) scheme. For the first three schemes, the observation errors and CPU time can be found in Table 8. We can see that, the ADI method and the ESA technique can improve the efficiency greatly. For example, for \(M_{x}=M_{y}=N=2^{11}\) in the case of \(\alpha_{2}(t)\), ADI method can reduce the CPU time from 100197 seconds to 58852 seconds, and the fast evaluation can further reduce the CPU time to 26782 seconds, while almost preserving the same accuracy. For the last two optimal QSC based schemes, the observation errors and CPU time can be found in Table 9. It can be seen that, the fourth spatial convergence order by the optimal QSC method allows us to employ much more sparse meshes which will save enormous computational cost, though higher smooth assumption for the solution is required. ## 7 Conclusions In this paper, we develop the QSC-\(L1^{+}\) scheme for the variable-order TF-MID equation in two dimensional space domain. The scheme is proved to be unconditionally stable and convergent with accuracy \(\mathcal{O}(\tau^{\min\{3-\alpha^{*}-\alpha(0),2\}}+\Delta x^{2}+\Delta y^{2})\), for proper assumptions on \(\alpha(t)\). 
Based on the QSC-\(L1^{+}\) scheme, we design a novel ADI framework to obtain the ADI-QSC-\(L1^{+}\) scheme, and we also analyze its unconditional stability and convergence. The numerical experiments show that the results fit the theoretical analysis well, even when \(\alpha(t)\) does not satisfy the restrictions. Then we employ the fast evaluation based on the ESA technique to obtain the ADI-QSC-\(FL1^{+}\) scheme, which greatly reduces the computational cost. Furthermore, the optimal QSC method is also applied to obtain the optimal ADI-QSC-\(FL1^{+}\) scheme; the numerical results show that the higher-order schemes lead to much better computational efficiency.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline\hline
 & & \multicolumn{2}{c}{QSC-\(L1^{+}\)} & \multicolumn{2}{c}{ADI-QSC-\(L1^{+}\)} & \multicolumn{2}{c}{ADI-QSC-\(FL1^{+}\)} \\
 & \(\tau=\Delta x=\Delta y\) & \(Err\) & \(Time(Sec.)\) & \(Err\) & \(Time(Sec.)\) & \(Err\) & \(Time(Sec.)\) \\
\hline
\(\alpha_{1}(t)\) & \(2^{-8}\) & 2.22e-05 & 72.4 & 2.26e-05 & 9.3 & 2.20e-05 & 13.2 \\
 & \(2^{-9}\) & 5.54e-06 & 719.1 & 5.59e-06 & 215.8 & 5.47e-06 & 202.8 \\
 & \(2^{-10}\) & 1.39e-06 & 7449.6 & 1.39e-06 & 3145.4 & 1.30e-06 & 1930.1 \\
 & \(2^{-11}\) & 3.46e-07 & 98969 & 3.47e-07 & 59383 & 2.29e-07 & 22452 \\
\hline
\(\alpha_{2}(t)\) & \(2^{-8}\) & 2.20e-05 & 73.8 & 2.24e-05 & 18.7 & 2.24e-05 & 15.4 \\
 & \(2^{-9}\) & 5.51e-06 & 735.7 & 5.55e-06 & 212.4 & 5.55e-06 & 237.6 \\
 & \(2^{-10}\) & 1.38e-06 & 7579.7 & 1.38e-06 & 2962.3 & 1.38e-06 & 2298.2 \\
 & \(2^{-11}\) & 3.45e-07 & 100197 & 3.46e-07 & 58852 & 3.45e-07 & 26782 \\
\hline
\(\alpha_{3}(t)\) & \(2^{-8}\) & 2.21e-05 & 74.8 & 2.24e-05 & 9.4 & 2.24e-05 & 16.1 \\
 & \(2^{-9}\) & 5.52e-06 & 730.5 & 5.56e-06 & 209.5 & 5.53e-06 & 237.3 \\
 & \(2^{-10}\) & 1.38e-06 & 7544.0 & 1.39e-06 & 3036.6 & 1.37e-06 & 2322.8 \\
 & \(2^{-11}\) & 3.45e-07 & 100328 & 3.46e-07 & 59004 & 3.24e-07 & 27161 \\
\hline\hline
\end{tabular}
\end{table}
Table 8: Errors and CPU time of the proposed schemes for Example 6.3.

Table 7: Errors and temporal convergence orders of the optimal ADI-QSC-\(L1^{+}\) scheme for Example 6.2, with \(M_{x}=M_{y}=2^{5}\).
**Funding** The work of J. Liu was supported in part by the Shandong Provincial Natural Science Foundation (Nos. ZR2021MA020, ZR2020MA039), the Fundamental Research Funds for the Central Universities (Nos. 22CX03016A, 20CX05011A), and the Major Scientific and Technological Projects of CNPC under Grant (No. ZD2019-184-001). The work of H. Fu was supported in part by the National Natural Science Foundation of China (Nos. 11971482, 12131014), the Fundamental Research Funds for the Central Universities (No. 202264006), and by the OUC Scientific Research Program for Young Talented Professionals.

**Data availability** Enquiries about data availability should be directed to the authors.

## Declarations

**Conflict of interest** The authors declare that they have no conflict of interest.

## Appendix A Estimate of \(r_{1,n}\)

Based on the definition of \(r_{1,n}\), we have
\[r_{1,n}=\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\int_{0}^{t}\left[\omega_{1-\alpha(t)}(t-s)-\omega_{1-\tilde{a}_{n}}(t-s)\right]\partial_{s}v(s)\,ds\,dt.\]
We can obtain from Lemma 2.1 that
\[\left|r_{1,n}\right| \leq Q_{0}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\int_{0}^{t}\left[\omega_{1-\alpha(t)}(t-s)-\omega_{1-\tilde{a}_{n}}(t-s)\right]dsdt\right|\]
\[=Q_{0}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\left[\frac{t^{1-\alpha(t)}}{\Gamma\left(2-\alpha(t)\right)}-\frac{t^{1-\tilde{a}_{n}}}{\Gamma\left(2-\tilde{\alpha}_{n}\right)}\right]dt\right|\]
\[=Q_{0}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\frac{\Gamma\left(2-\tilde{\alpha}_{n}\right)t^{1-\alpha(t)}-\Gamma(2-\alpha(t))t^{1-\tilde{a}_{n}}}{\Gamma(2-\alpha(t))\Gamma\left(2-\tilde{\alpha}_{n}\right)}dt\right|\]
\[\leq Q_{0}Q_{1}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\Gamma\left(2-\tilde{\alpha}_{n}\right)\left(t^{1-\alpha(t)}-t^{1-\tilde{a}_{n}}\right)dt\right|+Q_{0}Q_{1}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}t^{1-\tilde{a}_{n}}\left[\Gamma\left(2-\tilde{\alpha}_{n}\right)-\Gamma(2-\alpha(t))\right]dt\right|,\]
where \(\Gamma(x)\) is bounded when \(x\in(1,2)\).
By Taylor's expansion, we can get
\[t^{1-\alpha(t)}=t^{1-\tilde{a}_{n}}-t^{1-\tilde{a}_{n}}(\ln t)\alpha^{\prime}(t_{n-\frac{1}{2}})\left(t-t_{n-\frac{1}{2}}\right)+\frac{1}{2}\left[t^{1-\alpha(\eta_{1})}(\ln t)^{2}\left(\alpha^{\prime}\left(\eta_{1}\right)\right)^{2}-t^{1-\alpha(\eta_{1})}(\ln t)\alpha^{\prime\prime}(\eta_{1})\right]\left(t-t_{n-\frac{1}{2}}\right)^{2},\]
and
\[\Gamma(2-\alpha(t))=\Gamma\left(2-\tilde{\alpha}_{n}\right)-\Gamma^{\prime}\left(2-\tilde{\alpha}_{n}\right)\alpha^{\prime}(t_{n-\frac{1}{2}})\left(t-t_{n-\frac{1}{2}}\right)+\frac{1}{2}\left[-\Gamma^{\prime\prime}\left(2-\alpha\left(\eta_{2}\right)\right)\left(\alpha^{\prime}\left(\eta_{2}\right)\right)^{2}+\Gamma^{\prime}\left(2-\alpha\left(\eta_{2}\right)\right)\alpha^{\prime\prime}\left(\eta_{2}\right)\right]\left(t-t_{n-\frac{1}{2}}\right)^{2},\]
where \(\eta_{1}\) and \(\eta_{2}\) are both between \(t\) and \(t_{n-\frac{1}{2}}\). Thus, we have
\[\left|r_{1,n}\right| \leq Q_{0}Q_{1}Q_{2}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\left[t^{1-\alpha(t)}-t^{1-\tilde{a}_{n}}\right]dt\right|+Q_{0}Q_{1}Q_{3}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\left[\Gamma(2-\alpha(t))-\Gamma\left(2-\tilde{\alpha}_{n}\right)\right]dt\right|\]
\[\leq Q_{0}Q_{1}Q_{2}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\left[Q_{4}\left(t-t_{n-\frac{1}{2}}\right)+Q_{5}\left(t-t_{n-\frac{1}{2}}\right)^{2}\right]dt\right|+Q_{0}Q_{1}Q_{3}\left|\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\left[Q_{6}\left(t-t_{n-\frac{1}{2}}\right)+Q_{7}\left(t-t_{n-\frac{1}{2}}\right)^{2}\right]dt\right|.\]
Since \(\int_{t_{n-1}}^{t_{n}}\left(t-t_{n-\frac{1}{2}}\right)dt=0\) and \(\int_{t_{n-1}}^{t_{n}}\left(t-t_{n-\frac{1}{2}}\right)^{2}dt=\frac{\tau^{3}}{12}\), we can get \(\left|r_{1,n}\right|=O\left(\tau^{2}\right)\).

## Appendix B Estimate of \(r_{2,n}\)

Similar to the routine in Refs. [36, 47], we can get the following proof. By exchanging the order of integration and using integration by parts on each subinterval, we can get
\[r_{2,n} =\frac{1}{\tau}\left[\int_{0}^{t_{n}}\int_{0}^{t}\omega_{1-\tilde{a}_{n}}(t-s)\partial_{s}\theta v(s)\,ds\,dt-\int_{0}^{t_{n-1}}\int_{0}^{t}\omega_{1-\tilde{a}_{n}}(t-s)\partial_{s}\theta v(s)\,ds\,dt\right] \tag{B.1}\]
\[=\frac{1}{\tau}\int_{t_{0}}^{t_{n}}\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\theta v\left(s\right)ds-\frac{1}{\tau}\int_{t_{0}}^{t_{n-1}}\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)\theta v\left(s\right)ds.\]
(I) For \(1\leq n\leq 3\), we have from (2.8) and (B.1) that
\[\left|r_{2,n}\right| \leq\frac{1}{\tau}\sum_{k=1}^{n-1}\int_{t_{k-1}}^{t_{k}}\left[\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)-\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\right]\left|\theta v\left(s\right)\right|ds+\frac{1}{\tau}\int_{t_{n-1}}^{t_{n}}\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\left|\theta v\left(s\right)\right|ds \tag{B.2}\]
\[\leq Q_{8}\sum_{k=1}^{n-1}\left(t_{k}^{1-\alpha(0)}-t_{k-1}^{1-\alpha(0)}\right)\tau^{1-\tilde{a}_{n}}+Q_{9}\left(t_{n}^{1-\alpha(0)}-t_{n-1}^{1-\alpha(0)}\right)\tau^{1-\tilde{a}_{n}}=Q_{10}t_{n}^{1-\alpha(0)}\tau^{1-\tilde{a}_{n}}\leq Q_{11}t_{n}^{-\tilde{a}_{n}-\alpha(0)}\tau^{2}.\]
(II) For \(n\geq 4\), we set \(n_{0}=\left\lceil\frac{n}{2}\right\rceil\) so that \(\frac{n}{2}\leq n_{0}\leq\frac{n}{2}+1\) and \(n\geq n_{0}+2\).
According to (B.1), we now split \(r_{2,n}=r_{2,n}^{1}+r_{2,n}^{2}+r_{2,n}^{3}\), where
\[r_{2,n}^{1} =\frac{1}{\tau}\sum_{k=1}^{n_{0}}\int_{t_{k-1}}^{t_{k}}\left[\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)-\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)\right]\theta v\left(s\right)ds,\]
\[r_{2,n}^{2} =\frac{1}{\tau}\int_{t_{n_{0}}}^{t_{n_{0}+1}}\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\theta v\left(s\right)ds, \tag{B.3}\]
\[r_{2,n}^{3} =\frac{1}{\tau}\sum_{k=n_{0}+1}^{n-1}\left[\int_{t_{k}}^{t_{k+1}}\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\theta v\left(s\right)ds-\int_{t_{k-1}}^{t_{k}}\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)\theta v\left(s\right)ds\right].\]
Since \(0\leq\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)-\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\leq Q_{12}\tau\left(t_{n-1}-t_{n_{0}}\right)^{-\tilde{a}_{n}-1}\leq Q_{13}\tau t_{n}^{-\tilde{a}_{n}-1}\) for \(t_{0}\leq s\leq t_{n_{0}}\), we can get from (2.8) that
\[\left|r_{2,n}^{1}\right|\leq Q_{14}\tau^{2}t_{n}^{-\tilde{a}_{n}-1}\sum_{k=1}^{n_{0}}\left(t_{k}^{1-\alpha\left(0\right)}-t_{k-1}^{1-\alpha\left(0\right)}\right)=Q_{14}\tau^{2}t_{n}^{-\tilde{a}_{n}-1}t_{n_{0}}^{1-\alpha\left(0\right)}\leq Q_{15}t_{n}^{-\tilde{a}_{n}-\alpha\left(0\right)}\tau^{2}. \tag{B.4}\]
Similarly, we can obtain
\[\left|r_{2,n}^{2}\right|\leq Q_{16}\tau\left(t_{n_{0}+1}^{1-\alpha\left(0\right)}-t_{n_{0}}^{1-\alpha\left(0\right)}\right)\left(t_{n}-t_{n_{0}+1}\right)^{-\tilde{a}_{n}}\leq Q_{17}\tau^{2}t_{n_{0}}^{-\alpha\left(0\right)}t_{n}^{-\tilde{a}_{n}}\leq Q_{18}t_{n}^{-\tilde{a}_{n}-\alpha\left(0\right)}\tau^{2}. \tag{B.5}\]
Noting that
\[\int_{t_{k}}^{t_{k+1}}\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\left(s-t_{k}\right)\left(s-t_{k+1}\right)ds=\int_{t_{k-1}}^{t_{k}}\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)\left(s-t_{k-1}\right)\left(s-t_{k}\right)ds,\]
we can rewrite \(r_{2,n}^{3}\) as \(r_{2,n}^{3}=\frac{1}{\tau}\sum_{k=n_{0}+1}^{n-1}\left(\eta_{n}^{k}-\widetilde{\eta}_{n}^{k}\right)\), where
\[\eta_{n}^{k} =\int_{t_{k}}^{t_{k+1}}\omega_{1-\tilde{a}_{n}}\left(t_{n}-s\right)\left[\theta v\left(s\right)-\frac{\partial_{t}^{2}v\left(t_{k}\right)}{2}\left(s-t_{k}\right)\left(s-t_{k+1}\right)\right]ds,\]
\[\widetilde{\eta}_{n}^{k} =\int_{t_{k-1}}^{t_{k}}\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)\left[\theta v\left(s\right)-\frac{\partial_{t}^{2}v\left(t_{k}\right)}{2}\left(s-t_{k-1}\right)\left(s-t_{k}\right)\right]ds.\]
When \(n_{0}+1\leq k\leq n-1\) and \(t_{k}\leq s\leq t_{k+1}\), \(\theta v\left(s\right)=\frac{\partial_{t}^{2}v\left(\hat{\rho}_{k}\right)}{2}\left(s-t_{k}\right)\left(s-t_{k+1}\right)\), where \(\hat{\rho}_{k}\in\left(t_{k},t_{k+1}\right)\).
Thus, there exists \(\hat{\zeta}_{k}\in\left(t_{k},\hat{\rho}_{k}\right)\) such that \[\left|\eta_{n}^{k}\right|\leq\frac{1}{2}\int_{t_{k}}^{t_{k+1}}\omega_{1-\tilde {a}_{n}}\left(t_{n}-s\right)\left|\partial_{t}^{3}v\left(\hat{\zeta}_{k}\right) \right|\left(\hat{\rho}_{k}-t_{k}\right)\left(s-t_{k}\right)ds\leq Q_{19}\tau ^{3}t_{k}^{-1-\alpha\left(0\right)}\int_{t_{k}}^{t_{k+1}}\omega_{1-\tilde{a}_ {n}}\left(t_{n}-s\right)ds.\] Similarly, we have \[\left|\tilde{\eta}_{n}^{k}\right|\leq Q_{20}\tau^{3}t_{k-1}^{-1-\alpha\left(0 \right)}\int_{t_{k-1}}^{t_{k}}\omega_{1-\tilde{a}_{n}}\left(t_{n-1}-s\right)ds, \quad n_{0}+1\leq k\leq n-1.\] Based on that \(t_{k}\geq t_{k-1}\geq t_{n_{0}}\geq\frac{1}{2}t_{n}\) when \(n_{0}+1\leq k\leq n-1\), we can get \[\left|r_{2,n}^{3}\right|\leq\frac{1}{\tau}\sum_{k=n_{0}+1}^{n-1}\left|\eta_{n}^{k }-\tilde{\eta}_{n}^{k}\right|\leq Q_{21}\tau^{2}t_{n}^{-1-\alpha\left(0\right)} \left[\left(t_{n}-t_{n_{0}+1}\right)^{1-\tilde{a}_{n}}+\left(t_{n-1}-t_{n_{0}} \right)^{1-\tilde{a}_{n}}\right]\leq Q_{22}t_{n}^{-\tilde{a}_{n}-\alpha\left(0 \right)}\tau^{2}.\] (B.6) By (B.4), (B.5) and (B.6), we obtain \(\left|r_{2,n}\right|\leq Q_{23}t_{n}^{-\tilde{a}_{n}-\alpha\left(0\right)}\tau^ {2}\) for \(n\geq 4\). The proof is completed.
2310.15209
DeepOrientation: convolutional neural network for fringe pattern orientation map estimation
Fringe pattern based measurement techniques are the state-of-the-art in full-field optical metrology. They are crucial both in macroscale, e.g., fringe projection profilometry, and microscale, e.g., label-free quantitative phase microscopy. Accurate estimation of the local fringe orientation map can significantly facilitate the measurement process on various ways, e.g., fringe filtering (denoising), fringe pattern boundary padding, fringe skeletoning (contouring/following/tracking), local fringe spatial frequency (fringe period) estimation and fringe pattern phase demodulation. Considering all of that the accurate, robust and preferably automatic estimation of local fringe orientation map is of high importance. In this paper we propose novel numerical solution for local fringe orientation map estimation based on convolutional neural network and deep learning called DeepOrientation. Numerical simulations and experimental results corroborate the effectiveness of the proposed DeepOrientation comparing it with the representative of the classical approach to orientation estimation called combined plane fitting/gradient method. The example proving the effectiveness of DeepOrientation in fringe pattern analysis, which we present in this paper is the application of DeepOrientation for guiding the phase demodulation process in Hilbert spiral transform. In particular, living HeLa cells quantitative phase imaging outcomes verify the method as an important asset in label-free microscopy.
Maria Cywinska, Mikolaj Rogalski, Filip Brzeski, Krzysztof Patorski, Maciej Trusiak
2023-10-23T14:36:03Z
http://arxiv.org/abs/2310.15209v1
# DeepOrientation: convolutional neural network for fringe pattern orientation map estimation ###### Abstract Fringe pattern based measurement techniques are the state-of-the-art in full-field optical metrology. They are crucial both in macroscale, e.g., fringe projection profilometry, and microscale, e.g., label-free quantitative phase microscopy. Accurate estimation of the local fringe orientation map can significantly facilitate the measurement process on various ways, e.g., fringe filtering (denoising), fringe pattern boundary padding, fringe skeletoning (contouring/following/tracking), local fringe spatial frequency (fringe period) estimation and fringe pattern phase demodulation. Considering all of that the accurate, robust and preferably automatic estimation of local fringe orientation map is of high importance. In this paper we propose novel numerical solution for local fringe orientation map estimation based on convolutional neural network and deep learning called DeepOrientation. Numerical simulations and experimental results corroborate the effectiveness of the proposed DeepOrientation comparing it with the representative of the classical approach to orientation estimation called combined plane fitting/gradient method. The example proving the effectiveness of DeepOrientation in fringe pattern analysis, which we present in this paper is the application of DeepOrientation for guiding the phase demodulation process in Hilbert spiral transform. In particular, living HeLa cells quantitative phase imaging outcomes verify the method as an important asset in label-free microscopy. Phase measurements; Fringe orientation map; Fringe direction map; Convolutional neural network; Supervised learning; Full-field optical measurements; Spatially self-similar patterns; Hilbert spiral transform; Phase demodulation ## 1 Introduction The full-field optical measurement techniques, such as interferometry [1-3], holographic microscopy [4-6], fringe projection [7,8] or moire technique [9], are considered to be highly accurate, non-invasive and fast ones. In all mentioned techniques the measurement result is received in the form of a fringe pattern (interferogram/hologram/moiregram), where the phase function (or less frequently amplitude function) stores information about studied specimen. For that reason, the whole process resulting in information retrieval from recorded fringe pattern can be divided into two steps: opto-electronic measurement leading to capturing the fringe pattern and numerical processing leading to the fringe pattern phase map calculation. In general, recorded fringe pattern can be described as: \[I(x,y)=a(x,y)+b(x,y)cos\big{(}\varphi(x,y)\big{)}+n(x,y), \tag{1}\] where \(a(x,y)\) describes background intensity, \(n(x,y)\) represents noise, \(b(x,y)\) and \(\varphi(x,y)\) denote amplitude and phase modulation (measurand), respectively. There are generally two main classes of algorithms enabling phase map demodulation, i.e., multi- and single-frame methods. The first one is known as the most accurate, but difficult to apply in the case of studying transient events or performing measurement in an unstable environment, as generally large number of frames is needed (3+). Because of that the development of single-frame algorithms is needed and important. The Fourier transform (FT) method [10] is a well-known representative of such a technique but it has limitations in terms of the carrier spatial frequency and global spectrum filtering. 
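To make the classical single-frame reference point concrete, the sketch below illustrates the basic Fourier-transform demodulation idea for carrier (off-axis) fringe patterns: isolate one spectral sideband, recenter it, and take the angle of the inverse transform. It is a generic, simplified illustration of the FT method [10]; the sideband filter shape, the carrier-peak search and all function and parameter names are assumptions rather than the implementation used in the cited works.

```python
import numpy as np

def ft_demodulate(I, sideband_radius=0.1):
    """Single-frame Fourier-transform phase demodulation of a carrier fringe pattern.

    I : 2D fringe pattern with a spatial carrier (off-axis case).
    sideband_radius : radius of the circular sideband filter in normalized
                      frequency units (an illustrative, data-dependent choice).
    """
    ny, nx = I.shape
    F = np.fft.fftshift(np.fft.fft2(I - I.mean()))

    # Normalized frequency grids, centered to match the shifted spectrum.
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")

    # Locate the strongest sideband away from the residual background lobe.
    mag = np.abs(F)
    mag[np.hypot(fx, fy) < 0.02] = 0.0
    cy, cx = np.unravel_index(np.argmax(mag), mag.shape)

    # Keep one sideband and shift it to the spectrum center (removes the carrier).
    mask = np.hypot(fx - fx[cy, cx], fy - fy[cy, cx]) < sideband_radius
    sideband = np.where(mask, F, 0.0)
    sideband = np.roll(sideband, (ny // 2 - cy, nx // 2 - cx), axis=(0, 1))

    analytic = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(analytic)  # wrapped phase; still requires unwrapping
```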
The FT localized relatives, such as the windowed Fourier transform (WFT) [11], continuous wavelet transform (CWT) [12] and empirical wavelet transform [13], or other approaches including spatial carrier phase-shifting (SCPS) [14], and regularized phase tracking [15], are generally very capable but require a set of parameters to be fixed. They can be computationally and algorithmically demanding, and exhibit characteristic errors (e.g., the CWT method introduces errors in areas of strong phase gradients correctable for an especially tailored numerical scheme). Other solutions escaping so-called off-axis interferogram regime are Kramers-Kronig relation [16], Riesz transform approach [17, 18], Hilbert Phase Microscopy [19-21] or two-frame Hilbert transform approach [22]. The approaches based on Hilbert spiral transform (HST) [23-25] enable the single-frame phase analysis in the widest range of fringe pattern carrier frequencies, however they do need the fringe orientation map for guiding the phase demodulation process. It is to be highlighted that the fringe orientation map is essential in various fringe processing and analysis tasks, where it enables or greatly enhances the calculations. The examples are: fringe filtering (denoising) [26-43], fringe pattern boundary padding [41, 44], fringe skeletoning (contouring/following/tracking) [27, 29, 32, 33, 36, 37, 39, 40, 44, 45, 46], local fringe spatial frequency (fringe period) estimation [30, 34, 47, 48] and fringe pattern phase demodulation [28, 30, 32, 36, 38, 47, 48]. To be precise we would like to introduce the concept of local fringe direction (LFD) map and explain the difference between local direction and orientation maps. The LFD map (\(\beta(x,y)\)) stores the information about the azimuth of vector locally normal to fringes as well as its direction (e.g., up or down for vertical azimuth). It is a modulo \(2\pi\) indicator, therefore. The LFD map cannot be calculated in the straightforward way from recorded pattern as carrier fringes with opposite directions visually are the same. The quantity, which we can calculate directly from the fringe pattern is called fringes orientation (FO) [60] and it is a modulo \(\pi\) indicator. It stores the information only about the azimuth of the vector locally normal to fringes. To move from the fringes orientation to fringes direction one needs to apply the unwrapping procedure (with the use of phase unwrapping algorithms [61]). The difference between the phase unwrapping and fringe orientation unwrapping procedures is the need of multiplying by 2 the modulo \(\pi\) steps, dividing the resultant unwrapped map by 2 and bringing it down to the range of LFD map, i.e., modulo \(2\pi\). From the definition, in which \(\beta(x,y)\) is the map of angles between vector locally normal to fringes and x axis, fringes orientation can be estimated as arctangent of the orthogonal spatial derivatives of phase function: \[\tan\left(\beta(x,y)\right)=\frac{\partial\varphi(x,y)}{\partial x}\big{/} \frac{\partial\varphi(x,y)}{\partial y},\quad 0\leq\beta(x,y)<2\pi, \tag{2}\] \[FO(x,y)=arctan\left(\frac{\partial\varphi(x,y)}{\partial x}\big{/}\frac{ \partial\varphi(x,y)}{\partial y}\right),0\leq FO(x,y)<\pi. 
\tag{3}\] At this point it can be clearly seen that the local fringe direction map estimation is not an easy task since (1) it requires two-steps calculations and (2) the phase function needed for precise orientation calculation is encoded in the fringe pattern in the argument of cosine function, and simply it is not directly accessible in experimental reality. For that reason the orientation map cannot be calculated from the definition in the measurement reality. Instead of estimating the orthogonal spatial derivatives of phase function one can estimate the intensity gradients of the recorded fringe pattern. In the case of prefiltered fringe pattern (with uniform background, contrast and minimized noise) the intensity gradient vector has the same direction as phase gradient vector. That way the orientation map can be calculated directly from the orthogonal derivatives of the fringe pattern intensities, which is a working principle of gradient methods [39, 45, 57, 62]. Another solution called plane fit method [31] is based on the fitting a plane polynomial (within a given window) to the gray levels of local fringes. The zero-direction derivative of the fitted plane is defined as the local fringe orientation (FO). The combined method uses both the plane-fit algorithm and gradient method [36]. Firstly the local phase gradients are approximated by plane-fitting to fringes and then those gradients are used to estimate FO. Nevertheless, the use of gradient and plane-fit algorithms requires careful adjusting of calculation window size, which is connected with the trade-off between the noise resistance (gained in the case of big window size) and higher resolution (achieved for small window size). In order to determine the local fringes orientation spin filters [26,28,29,32,33] and binary sign-maps [27,29] may be also used. Since in the experimental reality we are always dealing with the presence of noise some regularized methods [30,41,49,49,50,51,52] were proposed to smooth the estimated orientation maps. Other exemplifying approaches to the local fringes orientation map estimation are connected with the use of 2D energy operators [58], accumulate differences [34], Fourier transform [42], Windowed Fourier Transform [57], Principal Component Analysis [46,56] and two frame methods, e.g., optical flow [63]. However, currently proposed methods do not provide a satisfactory robustness of the fringe orientation estimation and may struggle when applying to more complex fringes (with higher local orientation variability and intensity noise). The results provided by the classical approaches strongly depend on the choice of the specific algorithm parameters. To address these issues, we propose a new, fast and robust method for fringe orientation map estimation based on convolutional neural network (CNN) called DeepOrientation. The neural networks are highly capable numerical tools for finding the relationship between their input and output signals, even though this relationship is complicated or even impossible to define analytically [64]. Additionally, the convolution is a basic operation to describe imaging process, so the CNN is an obvious choice for the task developed in this paper. CNNs were already successfully adapted in the fringe pattern analysis at different stages, i.e., conducting fringe pattern filtration [65-68], defining the optimal window for Fourier transform approach [69-71], performing phase extraction [72-76], phase unwrapping [77-82] and local fringe density map estimation [83]. 
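As a concrete baseline for the gradient-type estimators discussed above, a bare-bones orientation estimator in the spirit of Eq. (3) can be written in a few lines: the intensity gradients of a prefiltered, normalized fringe pattern are averaged through their doubled-angle components, with the Gaussian width playing the role of the window-size trade-off mentioned earlier. This is only an illustrative sketch (names and the smoothing choice are assumptions), not the CPFG implementation of [36].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_from_gradients(I, sigma=3.0):
    """Local fringe orientation map (modulo pi) from intensity gradients.

    I     : prefiltered, normalized fringe pattern (background and noise removed).
    sigma : std of the Gaussian averaging window; larger values trade
            resolution for noise robustness, as discussed in the text.
    """
    gy, gx = np.gradient(I.astype(float))

    # Average the doubled-angle components so that gradients differing by pi
    # (opposite sides of a fringe) reinforce instead of cancelling out.
    s2 = gaussian_filter(2.0 * gx * gy, sigma)       # ~ |g|^2 * sin(2*angle)
    c2 = gaussian_filter(gx ** 2 - gy ** 2, sigma)   # ~ |g|^2 * cos(2*angle)

    fo = 0.5 * np.arctan2(s2, c2)    # azimuth of the vector locally normal to fringes
    return np.mod(fo, np.pi)         # FO map, modulo pi as in Eq. (3)
```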
Inspired by their success we decided to apply CNN to the FO map estimation. In the literature there is a neural network-based solution for fringe pattern orientation estimation [84], but it is specialized to the electronic speckle pattern interferometry (ESPI) fringe patterns. The construction of the output definition of the neural network training dataset determines that the maximum achievable accuracy is the one of the gradient method [39,62] with denoising. Considering that CNN itself is approaching the output labels with some level of error the limit defined by denoised version of gradient method not only cannot be surpassed but also reached. Since in our approach the output will be defined using the definition of the FO map from known simulated phase function the proposed DeepOrientation is a standalone and versatile solution. Additionally, in our approach input data size is preserved by DeepOrientation architecture so FO map is estimated in every pixel without reducing the analysis resolution. The paper is structured as follows. Section 2 introduces the issue of determining fringe orientation using convolutional neural network. Section 3 contains numerical evaluation of the proposed novel neural network-based technique for the local fringe pattern orientation estimation using experimental and simulated data comparing it with the combined plane-fit/gradient method (CPFG) [36]. Section 4 contains the application of DeepOrientation to HST-based fringe pattern phase estimation comparing the obtained results with the reference TPS-based phase maps. Section 5 concludes the paper. ## 2 DeepOrientation-based fringe orientation map estimation Facing the numerical task of transforming data input into the sought output, the solution may be found by analytic definition of the searched relationship. Naturally, this approach is connected with the full understanding of analyzed data and is mathematically solid. On the other hand, in many cases the straightforward definition of the relationship between data input and sought output may not be easy or even possible. As in the case of FO map estimation the simple definition of the relationship between the input intensity of the fringe pattern and the output orientation map is not possible since the fringe orientation by definition can be calculated from orthogonal derivatives of phase function and phase function is hidden in the intensity distribution of fringe pattern. Deep learning approach opens new possibilities for the development of algorithms solving the numerical problems one can encounter during scientific research. Deep neural networks during the supervised learning process can be taught to map the searched relationship without the need of its analytical definition. The relationship itself is defined by neural network layers parameters and algorithmic solution resolved that way works as a "black box". We can put new, unseen before by the network data instances and receive the corresponding outputs without the need of manually defining any parameter values, which is a meaningful advancement over majority of classical analytical methods. Nevertheless, because of this "black box" property neural network-based solutions raised legitimate concerns among the metrology community to use them to directly define the measurement output. For that reason, in our work, we are highlighting the use of neural network not to fully replace the mathematically sound phase estimation solutions (e.g., via HST method) but to support them. 
The example which is going to be discussed in this paper is the use of DeepOrientation to support the HST technique. Even if there could be some neural network-based artifacts introduced within the retrieved FO map they should not jeopardize the final HST-based phase demodulation result, as shown in our previous studies [85]. ### Definition of the training dataset DeepOrientation network training is performed using especially tailored, simulated dataset. We decided to simulate training dataset with the uniform background modulation and without any intensity noise. That assumption was made based on the existence of robust fringe pattern filtering (denoising and detrending) algorithms [86, 87, 24, 88, 89]. Therefore, in experimental reality, well-filtered fringe patterns may be obtained. In general, the local fringe direction map is more interesting (and informative) for fringe pattern analysis and for that reason its direct estimation by neural network may seem like the most attractive solution. Nevertheless, in the case of carrier fringe pattern the fringe with the direction difference equal to \(\pi\) visually appear the same, which would be confusing for the convolutional neural network during the learning process. The process of DeepOrientation training dataset preparation is presented in Fig. 1. Using the known simulated phase function the fringe orientation map matching the simulated input fringe pattern may be calculated by the definition from orthogonal derivatives of simulated phase function (Eq. 3). The important aspect to mention at this point is the fact that in some applications (e.g., HST phase demodulation) FO map in the form of modulo \(\pi\) needs to be further unwrapped to its modulo \(2\pi\) form - local fringe direction map. To be able to correctly perform the unwrapping procedure the step value equal to \(\pi\) must be preserved. The CNN due to the multiple convolution operations performed one after another will blur out the crucial discontinuity lines in fringe orientation map. This effect can be slightly minimized but never fully eradicated. For that reason, FO map cannot be set directly as the DeepOrientation output, because it would make the unwrapping to local fringe direction map impossible. Now the first idea, which may come to mind is to use the known phase function orthogonal derivatives as the DeepOrientation training data output. The approach although seems very attractive is a troublesome one for the neural network learning process, because of the evenness of the cosine function. With the change of sign of the phase function the signs of its orthogonal derivatives also change while the cosines of both phase functions visually are the same. For that reason, the interpretation of the data would be confusing for neural network. Instead, another idea was formulated. The orientation angle in any point of fringe orientation map can be described in the complex form using vectorial notation. The troublesome discontinuities of the fringe orientation map can be removed by encoding it in the abovementioned way - in the form of two 2D matrixes of cosine and sine functions of the orientation angle. Since the local fringe orientation (FO) map is the modulo \(\pi\) indicator thus in order to use the full periodicity of sine and cosine functions the doubled fringe orientation map was encoded in their argument: \[FO(x,y)=\frac{arg(\cos(2FO(x,y))+i+\sin{(2FO(x,y))})}{2}. \tag{4}\] Thus, two maps of \(\cos(2FO)\) and \(\sin(2FO)\) define the neural network output. 
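In code, this output encoding and its inversion via Eq. (4) amount to a few lines; the minimal numpy sketch below uses illustrative names.

```python
import numpy as np

def encode_orientation(fo):
    """Network targets: sin(2*FO) and cos(2*FO) maps, which are continuous
    across the modulo-pi discontinuities of the raw orientation map."""
    return np.sin(2.0 * fo), np.cos(2.0 * fo)

def decode_orientation(sin2fo, cos2fo):
    """Recover FO in [0, pi) from the two predicted channels, as in Eq. (4)."""
    return np.mod(np.angle(cos2fo + 1j * sin2fo) / 2.0, np.pi)

# Round-trip check on random orientations in [0, pi).
fo = np.random.rand(8, 8) * np.pi
s2, c2 = encode_orientation(fo)
assert np.allclose(decode_orientation(s2, c2), fo)
```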
DeepOrientation inputs (I(x,y), see exemplary fringe patterns in Fig. 1) were generated as in (Eq. 5): \[I(x,y)=\cos(\varphi_{obj}(x,y)+\varphi_{carrier}(x,y))\,, \tag{5}\] where \(\varphi_{obj}(x,y)\) is the object phase function simulated as a sum of dozens (up to 50) 2D Gaussian kernels, each one with random standard deviation and \((x,y)\) location, \(\varphi_{carrier}(x,y)\) is the factor that generates carrier fringes with random orientation (\(\theta\)) and period (\(T\)): \[\varphi_{carrier}(x,y)=x\,\frac{\cos(\theta)\,2\pi}{\tau}+\,y\,\frac{\sin( \theta)\,2\pi}{\tau}. \tag{6}\] ### Proposed network architecture The DeepOrientation network architecture schematically presented in Fig. 2 was inspired by the work [72] and already successfully adaptation to somewhat similarly challenging task of local fringe density map estimation [84]. DeepOrientation data input is a grayscale image, in other words one-channel 2D matrix. The network architecture is built by convolutional layers and residual blocks. It is divided into different paths where the input image dimensionality is changed by the maxpooling layers. By the end of each path the results are upsampled to match the input image height and width and then results from all paths are concatenated to define the input for final convolutional layer. The last convolutional layer defines the DeepOrientation data output to have two-channels with height and width matched to the input image. During further analysis two parameters will be adjusted to optimize the network architecture and adapt it to the specific task of FO map estimation: number of paths and number of filters in convolutional layers (including those building the residual blocks). Increasement of those two parameters makes the network architecture more complex. Because in our approach the training dataset is simple and was used for grasping the general relationship between the fringe pattern and underlying orientation map it was crucial to prevent the network from overfitting to the trained data. In order to do that the residual blocks with skip connections were chosen. Training process was performed on a training dataset containing 2400 512x512 px images. During the training, the mini batch size was equal to 1 and initial learning rate was \(10^{-4}\). Learning rate was updated each 5 epochs and reduced by the factor of 5 to help the loss function get out of local minima. The ADAM optimizer was used as a solver for training network and the mean-squared-error function was used as the loss function. Learning process lasted for 30 epochs, which was enough for the networks to train since no significant further decrease of loss function was observed afterwards. Networks were trained on a computer with AMD Ryzen 9 5900X 12-Core 3.70 GHz processor and NVIDIA GeForce RTX 3080 graphics card with 12 GB of memory, that allowed to train a single network in the time between 200 and 2000 minutes, depending on the architecture complexity. It is worth to highlight that this time-consuming training process needs to be performed only once for a given architecture. After the training, networks can reconstruct the orientation of a 512x512 px fringe pattern image in less Figure 1: Training and working principle of DeepOrientation convolutional neural network. than a second. Considering available memory on our GPU, networks with bigger number of filters and paths could only be trained with a mini batch size equal to 1. 
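A training pair in the spirit of Eqs. (4)-(6) can be emulated with a short script like the one below; the kernel amplitudes and widths, the carrier period range and all names are illustrative assumptions, not the exact generator used for the 2400-image dataset.

```python
import numpy as np

def training_pair(n=512, max_kernels=50, t_range=(8.0, 40.0), seed=None):
    """One DeepOrientation-style training example: a noise-free fringe pattern
    (Eqs. (5)-(6)) and its sin(2*FO), cos(2*FO) targets (Eq. (4))."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:n, 0:n].astype(float)

    # Object phase: random sum of 2D Gaussian kernels (random location, width, sign).
    phi = np.zeros((n, n))
    for _ in range(rng.integers(1, max_kernels + 1)):
        x0, y0 = rng.uniform(0, n, size=2)
        s = rng.uniform(10.0, 80.0)            # kernel width range is an illustrative choice
        phi += rng.uniform(-6.0, 6.0) * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * s ** 2))

    # Carrier with random orientation theta and period T, Eq. (6).
    theta, T = rng.uniform(0.0, np.pi), rng.uniform(*t_range)
    phi += (x * np.cos(theta) + y * np.sin(theta)) * 2.0 * np.pi / T

    fringes = np.cos(phi)                      # network input, Eq. (5) without noise/background

    # Ground-truth targets from the analytic phase gradients (Eqs. (3)-(4)).
    gy, gx = np.gradient(phi)
    two_fo = 2.0 * np.arctan2(gx, gy)          # doubled orientation angle
    return fringes, np.sin(two_fo), np.cos(two_fo)
```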
To keep the learning process consistent among all networks we used the same mini batch size for all trainings. ### Influence of the neural network architecture complexity on the learning accuracy In a pursuit to find the optimal neural network architecture for DeepOrientation two parameters were considered - number of paths with different downsampling and number of filters in convolutional layers. Increase of each of those parameters caused the increase of the neural network architecture complexity. In total 24 different configurations were tested with the number of paths varying from 2 to 5 and the number of filters (per path) varying from 30 to 130 with the step of 20, which as can be seen in Fig. 3. Our study allowed to understand general relationships between the network complexity, accuracy and calculation time. The performance of developed neural networks was tested with the use of two datasets with different definition of data instances. The dataset called validation set (600 512x512 px images) was used to test the performance of neural networks during training and is of the same origin as training dataset. Second dataset called test set is also based on simulations (Eq. 5), but the object phase functions included there were simulated in a completely different manner in order to validate the generalization ability of proposed DeepOrientation network. Test set consisted of 5 different \(\varphi_{obj}(x,y)\) functions: (1) a 2D function with 3 maxima and 2 minima (simulated using MATLAB 'peaks' function obtained by translating and scaling Gaussian distributions), (2) a group of 5 HeLa cells with shapes that were close to spherical, (3) a group of 2 HeLa cells with oblong shapes, (4) a blurred binary mask of human hand and (5) a group of 23 grains of rice. For each of those functions, there were generated a 140 fringe patterns with different carrier fringes period and orientation, and with different fringes curvature (introduced by changing the dynamic range of the \(\varphi_{obj}(x,y)\) function). Exemplary test set image may be seen in Fig. 4(a). Choosing the optimal neural network architecture for the specific task of local fringe pattern orientation map estimation is a complex issue, which needs to be carefully analyzed. The training strategy picked for DeepOrientation was based on the assumption of the simple simulated training dataset (without noise, background and amplitude modulation). Subsequently trained network is supposed to work for a wide range of fringe pattern characteristics, where phase function may not necessarily be describable the same way as phase functions included in training dataset. For that reason, we need to be especially careful to not introduce overfitting in wider sense that during the standard neural network training. Even if the neural network is not overfitted in the sense of being able to successfully analyze the data, which was introduced during the training, it can still 'overfit' assuming that all data outside the Figure 2: Scheme of the developed DeepOrientation convolutional neural network architecture. training dataset is of the same characteristics and origin (shape of fringes, optical measurement method used and studied object type). In other words, we want to find the solution leading to the estimation of the FO map from the cosine pattern, but without the strong restriction that the phase function needs to be describable the way proposed in training dataset simulation. In Fig. 
3 the results of the performance analysis for different levels of neural network architecture complexity are presented. Looking at the curves in Fig. 3(a) estimated with the use of a validation dataset one can notice that with the increase of filters number adding the extra paths does not influence the results accuracy. For the filters number greater than 90 all neural networks achieved similar accuracy regardless the number of paths. Nevertheless, it needs to be highlighted that with the increase of the architecture complexity the neural network ability to fit to the training dataset increases. As it was just discussed with the chosen training strategy, we do not want to fit perfectly only to the training dataset. Observing the Fig. 3(a) curves estimated for the test dataset the first aspect one can notice is the increase of the RMSE value, which is perfectly understandable since the origin of the test data is different than training dataset (as it would be in different experimental realities - setups, objects) and some of the data included in the test dataset featured higher phase gradients than the validation dataset. It can be clearly seen in error maps presented in Fig. 4, in which the highest errors are visible around the edges of HeLa cells where phase gradients are the highest. Nevertheless, the error values are still on the reasonable level especially considering the main planned application of DeepOrientation network, which is to support HST-based phase estimation. Despite the obvious change in the error values the test curves shape also changed in comparison with the validation curves. The minimum RMSE was achieved for the neural network with two paths and 110 filters, therefore this configuration was chosen for the final DeepOrientation architecture. Two paths architecture limits the complexity of possible neural network input-output relationship preventing too strong fitting to the training dataset structure, while 110 filters grant that the network architecture is complex enough to capture the general relationship (since for that number of filters there was no noticeable error difference obtained on validation dataset for different number of paths). The detailed error analysis of the neural networks' outputs generated by exemplary fringe pattern from test dataset is presented in Fig. 4. One can notice that in general with the increase of neural network complexity, either implemented by increasing the number of filters or paths, the presented error maps become darker, which indicates that mean error value is decreasing. On the other hand, error map estimated for DeepOrientation architecture (i.e., 110 filters and 2 paths) has lower errors in the regions of high phase gradient (see circular cell fragment visible at the bottom). Presented error maps are estimated as absolute value of difference between the sine of known, ground truth doubled FO map and sine output of neural networks. We demonstrate the results connected only with sine output, because maps estimated for cosine output are complementary and do not contribute new information to the discussion. Figure 3: The performance of neural network architectures with different level of complexity trained to estimate fringe pattern orientation maps: (a) the mean RMSE values calculated on validation and test datasets and (b) calculation time of single data instance. Additional factor, which was considered while choosing the DeepOrientation network architecture was calculation time. 
From the algorithm's user perspective, one of the most important information is to know how long it would take to process their data. For that reason, in Fig. 3(b) the time needed for the calculations of the single data instance was presented. Reported calculation times were estimated with the use of typical computing unit represented by personal laptop (Intel Core i7-7700HQ 2.80 GHz processor and NVIDIA GeForce GTX 1060 graphics card). Obtained values confirm that unnecessary augmentation of neural network architecture complexity is undesirable. ## 3 Numerical evaluation of DeepOrientation The analysis comparing our proposed DeepOrientation approach with classical CPFG method [35] using simulated data is presented in Fig. 5 and using experimental data in Figs. 6 and 7. Since the local orientation maps consist of the angle information, in order to preserve its periodic nature, we introduced the orientation error (OE) as: \[OE=\sqrt{\frac{1}{N_{x}N_{y^{-1}}}\sum_{x=1}^{N_{x}}\sum_{y=1}^{N_{y}}\left[\sin \left(FO(x,y)-FO_{ref}(x,y)\right)-\mu\right]^{2}}, \tag{7}\] where \(N_{x}\) and \(N_{y}\) are image size, \(FO_{ref}(x,y)\) is a reference local fringe orientation map and \(\mu\) is mean of \(\sin\left(FO(x,y)-FO_{ref}(x,y)\right)\). In other words orientation error may be considered as modified RMSE, where the straightforward difference between retrieved map and its ground truth was replaced by the sine of that difference. The orientation error converges to 0 if the Figure 4: Error analysis of developed neural networks. (a) Analyzed fringe pattern from test dataset; (b) underlying phase function; ground truth outputs of DeepOrientation neural network: (c) sine and (d) cosine of 2FO; (e) ground truth FO map and (f) its unwrapped version: local fringe direction map; (g) error maps of sin(2FO) output for all analyzed neural network architectures. \(FO(x,y)-FO_{ref}(x,y)\) is equal to an integer multiple of \(\pi\), which is a desirable feature since orientation map is in the form of modulo \(\pi\). ### Comparison of DeepOrientation with classical approach on simulated data The fringe pattern series used for analysis in Fig. 5 were simulated according to the (Eq. 5) and (Eq. 6), where T=14, \(\theta\) = 0 and \(\varphi_{obj}(x,y)\) is described by Matlab peaks function with dynamic range controlled by multiplication by \(a\) coefficient varying from 0 to 10. In the case of CPFG method the parameter, which needs to be set is the size of the window in which the orientation angle will be estimated. The smaller the window size, the greater the accuracy of local orientation estimation. Nevertheless, small window size is not immune to the noise presence and for that reason in many cases it is recommended to set the bigger window sizes. Since the DeepOrientation works on prefiltered data in order to provide a fair comparison between two algorithms throughout the paper we are going to use the prefiltered data also for the classical approach. This can be considered as novel modification of CPFG aimed at its automation (no need for tailoring the window size) and increasement of robustness via unsupervised variational image decomposition (uVID) fringe prefiltering [87] and HST-based fringes normalization [23]. For that reason the window size can be chosen arbitrarily small so the value 2 was used in all presented cases. We have tested the CPFG accuracy using different window sizes and in majority of cases (if the denoising was correctly performed) the window Fig. 
5: Comparison of the performance of DeepOrientation approach and classical one (CPFG [35]) using simulated fringe patterns. (a) The orientation errors of both methods calculated for different levels of phase modulation, (b), (c), (d) exemplary fringe patterns with high (a=10), medium (a=5) and low (a=0) phase modulation, respectively, (e), (f), (g) noisy versions of fringe patterns from (b), (c), (d), respectively, (h), (i), (j) the ground truth FO maps for (b), (c), (d), respectively, orientation error maps estimated by (k), (l), (m) DeepOrientation and (n), (o), (p) CPFG method for (b), (c), (d), respectively and orientation error maps estimated by (q), (r), (s) DeepOrientation and (t), (u), (w) CPFG method for (e), (f), (g), respectively. size equal to 2 provided the best results. It can be seen that for low level of phase modulation (\(a<1\)) CPFG method provides higher accuracy of the retrieved local orientation maps. As it is shown in Figs. 5(d), 5(j), 5(m) and 5(p) DeepOrientation-based results have a small fringe-like error, while for such simple cases and perfectly fitted window size classical CPFG approach provides error-free result. Nevertheless, with the increase of phase modulation level (and therefore complication of the fringe pattern shape itself) the predominance of DeepOrientation approach is clearly visible. It is also worth to mention that the orientation errors values presented in Fig. 5(a) were calculated after neglecting the border effects, which are obvious in the case of CPFG method even in the case of small window size. Additionally, DeepOrientation is more resistant to noise errors than CPFG method, which can be clearly see in Fig. 5(a). If there is noise present as in the case of Figs. 5(e)-5(g), where the Gaussian noise of std=0.1 was added to the data from Figs. 5(a)-5(d), DeepOrientation provides smoother orientation maps than CPFG method with smallest window size. The CPFG method error could be minimized by adjusting the window size and match the DeepOrientation accuracy, which shows how troublesome and crucial parameter's adjusting could be for a classical method. Experimental verification of the accuracy of DeepOrientation-based local fringe orientation map estimation The performance of proposed DeepOrientation solution was also tested using the experimentally recorded fringe patterns and compared with classical, well-developed solution represented by CPFG method [35]. All analyzed experimentally recorded data was prefiltered with the use of uVID [87] (where the noise part of the decomposition is estimated with the use of BM3D) and normalized in 0-1 range with the use of HST approach [23] before calculating the orientation map either with the use of the DeepOrientation or the CPFG. The first real-life example we have chosen contains complicated, low frequency fringe patterns recorded during the temporal phase shifting (TPS) study of glass plate in Twyman-Green interferometer; fringe patterns are presented in Figs. 6(a)-6(e). Having the complete TPS series we were able to precisely calculate the reference phase map since the TPS algorithm (as the multi-frame fringe pattern analysis algorithm) is the most accurate phase demodulation method, especially in the case of sparse closed fringes. Using this reference phase map and the definition of the FO map (Eq. 3) the reference FO map was calculated and can be seen in Fig. 6(p). One can notice that presented FO map is very noisy. 
It is due to the fact that 5-frames TPS algorithm is not fully resistant to the presence of noise and unfiltered intensity noise is transferred to the retrieved phase map. The noise effect is further amplified in the case of FO map estimation because of the needed numerical gradients calculation. For that reason, the denoised (using block-matching 3D denoising (BM3D) algorithm [86] on every analyzed intensity frame) version of estimated FO map is presented in Fig. 6(r) and that map will be further deployed as the reference for estimating the orientation error values. As it can be clearly seen analyzing the orientation error values shown in Table 1 in all cases (for all single-shot fringe pattern frames) the DeepOrientation provided better results than the CPFG method. Additionally, comparing the DeepOrientation results (Fig. 6(f)-6(j)) and the classical approach results (Fig. 6(k)-6(o)) the first ones have better preserved edges (on the modulo \(\pi\) steps), which is especially important as one of the planned use of DeepOrientation is a support for single-fringe-pattern HST-base phase estimation. The reason is that FO map unwrapping procedure [61] needs a clear, well-preserved steps values to provide a correct unwrapping. To evaluate DeepOrientation on the biological data, Fig. 7, we collected 10, phase-shifted interferograms of a group of HeLa cells on a Linnik interferometer [90]. Similarly as above, we used the TPS method aided with BM3D denoising [86] to reconstruct cells phase, which was then used to obtain reference FO map, Fig. 7(b). Next, we prefiltered one of the collected interferograms with the uVID algorithm and obtained orientation maps with DeepOrientation, Fig. 7(c), and CPFG, Fig. 7(d), algorithms. Both methods returned results that were close to the reference map with orientation error equal to 0.1843 for Fig. 7(c), 0.1925 for Fig. 7(d), 0.1191 for Fig. 7(g), 0.1579 for Fig. 7(h), 0.1672 for Fig. 7(k) and 0.1916 for Fig. 7(l). However, as can be observed on a zoomed parts of the reconstructed maps (Figs. 7(f)-7(h) and 7(j)-(l)), the CPFG reconstruction has some unexpected orientation jumps along the fringe profile, whereas DeepOrientation reconstruction is much smoother. This indicates that DeepOrientation is more robust to fringe patterns being transferred to the orientation map than the CPFG method. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \begin{tabular}{c} \begin{tabular}{c} \begin{tabular} \end{tabular} \\ \end{tabular} & 1 & 2 & 3 & 4 & 5 \\ \hline DeepOrientation & 0.1627 & 0.1562 & 0.1722 & 0.1802 & 0.1764 \\ \hline CPFG & 0.1684 & 0.1628 & 0.1764 & 0.1893 & 0.1806 \\ \hline \end{tabular} \end{table} Table 1: **Numerical analysis of the accuracy of estimated results from Fig. 6.** Figure 6: Experimentally recorded TPS series of interferograms with phase shift equal to \(\pi\)/2: (a-e) subsequent interferograms, (f-j) FO maps calculated by DeepOrientation, (k-o) FO maps calculated by CPFG method [35], (p) FO map calculated from TPS estimated phase function, (r) FO map calculated from TPS estimated phase function with BM3D denoising. Figure 7: One of the recorded fringe pattern images of the HeLa cells (a), reference local orientation map obtained from the TPS retrieved phase (b), reconstructed local orientation maps from the single prefiltered fringe pattern image with the use of DeepOrientation (c) and CPFG (d) methods. Zoomed parts of the (a)-(d) images inside red (e)-(h) and green (i)-(l) boxes. 
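For completeness, the orientation error of Eq. (7) used throughout this section reduces to a few lines of numpy (array names are illustrative):

```python
import numpy as np

def orientation_error(fo, fo_ref):
    """Orientation error of Eq. (7): sample standard deviation of sin(FO - FO_ref),
    so that differences equal to integer multiples of pi do not contribute."""
    return np.std(np.sin(fo - fo_ref), ddof=1)
```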
The influence of DeepOrientation onto the accuracy of the HST-based single-shot fringe-pattern phase estimation The one of possible applications of DeepOrientation is guiding the phase demodulation process for Hilbert spiral transform [23]. As a result of HST the quadrature fringe function is obtained with phase shift equal to 0.5\(\pi\) introduced between input \(s(x,y)\) and output \(s_{H}(x,y)\). The important thing worth to emphasize is that HST needs a zero mean value signal as an input, therefore successful fringe pattern background removal is of the essence. Additionally, it is recommended to minimize the intensity noise for the retrieved phase map quality improvement. Therefore, the HST input signal can be described as: \[s(x,y)=b(x,y)cos\big{(}\varphi(x,y)\big{)}, \tag{8}\] and then output signal follows as: \[s_{H}(x,y)=-b(x,y)sin\big{(}\varphi(x,y)\big{)}. \tag{9}\] Finally, the phase function can be calculated as: \[\varphi(x,y)=\tan^{-1}\left(\frac{s_{H}(x,y)}{s(x,y)}\right). \tag{10}\] Using the HST nomenclature [23] the quadrature function can be described as: \[s_{H}(x,y)=-iexp[-\beta(x,y)]F^{-1}\big{\{}S(u,v)F\big{\{}S(x,y)\big{\}}\big{\}}, \tag{11}\] where \(F\) denotes Fourier transform, \(F^{-1}\) denotes inverse Fourier transform, \(S(u,v)\) is spiral phase function defined in spatial frequencies \((u,v)\) domain and \(\beta(x,y)\) is LFD map. The LFD map is instrumental as it guides the phase demodulation process. It is especially important in the case of very complicated, overlapping fringe pattern spectrum. Correct LFD map helps to avoid sign ambiguity errors in closed (concentric) fringe pattern phase demodulation. We would like to highlight that the DeepOrientation is not employed here to directly determine the phase function, the outcome of the optical measurement. The use of neural network to replace the mathematically rigorous phase estimation algorithmic derivation may raise legitimate metrological concerns. For that reason, in our work the HST phase calculations are only supported by DeepOrientation neural network, which constitutes our novel approach. DeepOrientation allows the estimation of the FO map, which afterwards is unwrapped [61] to local fringe direction map and used to guide the HST-driven phase estimation process. To prove that DeepOrientation is a valuable tool in terms of aiding HST algorithm with phase retrieval, Fig. 8, we collected a 3 data series consisting of 5 phase-shifted interferograms of HeLa cells, exemplifying one shown in Fig. 8(a), LSEC cells, exemplifying one presented in Fig. 8(e), and phase test target, exemplifying one depicted in Fig. 8(i). Figure 8: One of the recorded interferograms of HeLa cells (a), LSEC cells (e) and phase test target (i). Reconstructed reference phase maps from TPS data (b),(f),(j), reconstructed phase maps by HST (c),(g),(k) and reconstructed local fringe direction maps with the use of DeepOrientation algorithm (d),(h),(l). Phase maps are given in range 0-18 (b),(c), 0-6 (f),(g) and 0-8 (j),(k) rad. Next, from those interferograms we retrieved the reference phase maps with the use of TPS algorithm aided with BM3D method, Figs. 8(b), 8(f) and 8(j), respectively. After that, from each data series, we filtered a single interferogram with the uVID algorithm [87], which was then provided to DeepOrientation to reconstruct local fringe pattern orientation map. Those maps were then unwrapped with the use of phase unwrapping algorithm presented in [61] to obtain local fringe direction maps, Figs. 
One can notice that the HST-based results estimated with the use of the single-frame approach compare favorably with the highly accurate multi-frame approach. To be exact, the RMSE for the HST-based results is equal to 0.0132 rad for Fig. 8(c), 0.0132 rad for Fig. 8(g) and 0.0521 rad for Fig. 8(k). This corroborates DeepOrientation-guided HST for quantitative phase imaging of living biosamples and challenging technical objects.

## 5 Conclusions

In this paper, we have proposed an accurate, robust, and fast numerical solution for local fringe orientation map estimation, called DeepOrientation, based on neural networks and deep learning. Fringe patterns themselves are an example of ideal data for the neural network training process. Even if the underlying phase function varies drastically between different measurements, fringe patterns generally have a similar structure, as most of them can be described by a spatially self-similar cosine function. That makes the learning process easier, and we have shown that reliable network parameters can be learned from a relatively small training dataset that is not highly diverse in terms of phase function characteristics. DeepOrientation works well even for data where the underlying phase function significantly differs from the ones included in the training dataset, due to the general self-similarity of all fringe patterns. The validity and effectiveness of DeepOrientation were corroborated both on simulated and experimental data and compared favorably with the classical approach. It should be noted that once the DeepOrientation training is finished, the parameters do not need to be further adjusted, as the trained network generalizes sufficiently. We have provided a solution which was tested on a wide range of fringe patterns and can be used on new fringe data instances without additional adjustment or retraining. Additionally, DeepOrientation fills the gap in the search for increasingly accurate fringe pattern analysis tools. As shown, it can be successfully employed to guide the single-shot phase demodulation process in the Hilbert spiral transform, and there are plenty of other possible applications for it [26-59].

### Funding

This work has been partially funded by the National Science Center Poland (OPUS 2020/37/B/ST7/03629 and PRELUDIUM 2021/41/N/ST7/04057). Studies were funded by the FOTECH-1 project granted by Warsaw University of Technology under the program Excellence Initiative: Research University (ID-UB). MC work was supported by the Foundation for Polish Science (FNP) and by the Polish National Agency for Academic Exchange under the Iwanowska programme.

### Disclosures

The authors declare no conflicts of interest.

### Data Availability

Data may be obtained from the authors upon reasonable request. Trained DeepOrientation model is made freely available in Ref. [91].
2306.10497
Explicit formulas for matrices associated to ladder, circular ladder, and Mobius ladder graphs
We give explicit formulas for resistance distance matrices and Moore-Penrose inverses of incidence and Laplacian matrices of ladder, circular ladder, and M\"{o}bius ladder graphs. As a result, we compute the Kirchhoff index of these graphs and give new combinatorial formulas for the number of their spanning trees.
Ali Azimi, Mohammad Farrokhi Derakhshandeh Ghouchan
2023-06-18T08:20:43Z
http://arxiv.org/abs/2306.10497v1
# Explicit formulas for matrices associated to ladder, circular ladder, and Mobius ladder graphs

###### Abstract.

We give explicit formulas for resistance distance matrices and Moore-Penrose inverses of incidence and Laplacian matrices of ladder, circular ladder, and Mobius ladder graphs. As a result, we compute the Kirchhoff index of these graphs and give new combinatorial formulas for the number of their spanning trees.

Key words and phrases: Moore-Penrose inverse, resistance distance, Kirchhoff index, ladder graph, circular ladder graph, Mobius ladder graph.

2000 Mathematics Subject Classification: Primary 05C20, 05C50, Secondary 15A09.

## 1. Introduction

The incidence matrix \(Q(\Gamma)\) of an oriented graph \(\Gamma\) with vertex-set \(V(\Gamma)=\{v_{1},\ldots,v_{n}\}\) and edge-set \(E(\Gamma)=\{e_{1},\ldots,e_{m}\}\) is an \(n\times m\) matrix defined as follows: the rows and columns of \(Q\) are indexed by vertices and edges of \(\Gamma\), respectively. The \((i,j)\) entry of \(Q\) is \(0\) if the vertex \(v_{i}\) and the edge \(e_{j}\) are not incident, otherwise it is \(1\) or \(-1\) depending on whether \(e_{j}\) starts from or ends at \(v_{i}\). The Laplacian matrix \(L(\Gamma)\) of \(\Gamma\), which is actually equal to \(Q(\Gamma)Q^{\prime}(\Gamma)\), is an \(n\times n\) matrix whose rows and columns are indexed by vertices of \(\Gamma\). The \((i,j)\) entry of \(L(\Gamma)\) is equal to the degree of the vertex \(v_{i}\) if \(i=j\), and it is \(-1\) or \(0\) if the vertices \(v_{i}\) and \(v_{j}\) are adjacent or non-adjacent, respectively. The Moore-Penrose inverse of an \(m\times n\) real matrix \(A\), denoted by \(A^{+}\), is the (unique) \(n\times m\) real matrix that satisfies the following equations

\[AA^{+}A=A,\quad A^{+}AA^{+}=A^{+},\quad(AA^{+})^{\prime}=AA^{+},\quad(A^{+}A)^{\prime}=A^{+}A\]

(see [9, 10]). The Moore-Penrose inverse of the incidence matrix of a graph was first studied by Ijiri in [17]. We refer the interested reader to [1, 2, 3] for previous studies on the Moore-Penrose inverses of matrices associated to various classes of graphs. Incidence matrices and their inverses, such as the Moore-Penrose inverses, are important tools used in the analysis of graphs. Let \(\Gamma\) be a simple connected graph. The resistance distance between two vertices \(v_{i}\) and \(v_{j}\) can be computed via the formula

\[r(v_{i},v_{j})=L^{+}_{v_{i},v_{i}}+L^{+}_{v_{j},v_{j}}-L^{+}_{v_{i},v_{j}}-L^{+}_{v_{j},v_{i}},\]

where \(L^{+}_{v_{i},v_{j}}\) denotes the \((i,j)\)-entry of the Moore-Penrose inverse \(L^{+}\) of \(L=L(\Gamma)\). The notion of resistance distance between vertices of a graph was introduced by Klein and Randic in 1993 [19] as the effective resistance between the given vertices (computed by Ohm's law) when a battery is attached between them, assuming that all edges are unit resistors, and it has many applications in physics and chemistry.

The equations (4) and (5) can be obtained simply from Binet's formulas. Now, we prove equation (6). Let \(b_{n}:=\sum_{i=1}^{n}a_{i}a_{n+1-i}\) for all \(n\geq 1\).
First observe that \[b_{n}=\sum_{i=1}^{n}a_{i}a_{n+1-i}=\sum_{i=1}^{n}(4a_{i}a_{n-i}-a_{i}a_{n-1-i}),\] which implies that \[b_{n}-4b_{n-1}+b_{n-2}=a_{n}-a_{n-1} \tag{7}\] On the other hand, by equation (3), \[b_{n}-b_{n-2} =a_{1}a_{n}+((a_{2}a_{n-1}-a_{1}a_{n-2})+\cdots+(a_{n-1}a_{2}-a_{n -2}a_{1}))+a_{n}a_{1}\] \[=2a_{n}+(n-2)(a_{n}-a_{n-1})\] so that \[b_{n}-b_{n-2}=na_{n}-(n-2)a_{n-1}. \tag{8}\] Summing up the equations (8) when \(n\) is replaced by \(3,\ldots,n\), it follows that \[b_{n}+b_{n-1}=na_{n}+s_{n-1}. \tag{9}\] Now, by solving the system of linear equations (7), (8), and (9) for \(b_{n}\), \(b_{n-1}\), and \(b_{n-2}\), the result follows. ## 3. ladder graph Let \(L_{n}\) denote the ladder graph of order \(2n\). Suppose vertices along the top and bottom of \(L_{n}\), as depicted in Fig. 1, are labeled \(u_{1}^{+},\ldots,u_{n}^{+}\) and \(u_{1}^{-},\ldots,u_{n}^{-}\), respectively. Also, suppose the edges of \(L_{n}\) are oriented as \(e_{i}^{\varepsilon}=(u_{i}^{\varepsilon},u_{i+1}^{\varepsilon})\) for \(i=1,\ldots,n-1\) with \(\varepsilon=\pm 1\), and \(f_{i}=(u_{i}^{+},u_{i}^{-})\) for \(i=1,\ldots,n\), where \(f_{i}\) are the spokes. Let \(S(\Gamma)\) denote the set of all spanning trees of the underlying graph of \(\Gamma\) and \(s(\Gamma):=\#S(\Gamma)\). The graph obtained from \(\Gamma\) by contracting edge \(e\) is denoted by \(\Gamma/e\). It is known from [20, p.35] that \(s(L_{n})=4s(L_{n-1})-s(L_{n-2})\) for all \(n\geq 2\). Since, \(s(L_{0})=0\) and \(s(L_{1})=1\), we observe that \(s(L_{n})=s_{n}\) for all \(n\geq 0\). Let \(Q=[q_{u,e}]\) be the incidence matrix of \(L_{n}\) with the orientation given in Fig. 1 whose rows and columns are indexed by ordered sets \[V(L_{n})=\{u_{1}^{+},\ldots,u_{n}^{+},u_{1}^{-},\ldots,u_{n}^{-}\}\] and \[E(L_{n})=\{f_{1},\ldots,f_{n},e_{1}^{+}\ldots,e_{n-1}^{+},e_{1}^{-}\ldots,e_{n -1}^{-}\},\] Figure 1. Ladder graph \(L_{n}\) respectively. Also, let \[H=\begin{pmatrix}B&-B\\ C&D\\ D&C\end{pmatrix}, \tag{10}\] be the \((3n-2)\times 2n\) matrix defined as follows: The rows and columns of \(H\) are indexed by ordered sets \(E(L_{n})\) and \(V(L_{n})\), respectively. Let \(B=[b_{i,j}]\) be the \(n\times n\) symmetric matrix given by \(b_{i,j}=a_{i}a_{n-j+1}/2s_{n}\), and \(C=[c_{i,j}]\) and \(D=[d_{i,j}]\) be the \((n-1)\times n\) matrices given by \[c_{i,j}=\begin{cases}\delta_{i,j}-\frac{i}{2n}-\frac{a_{n-j+1}s_{i}}{2s_{n}},& i\leq j,\\ \frac{1}{2}-\frac{i}{2n}+\frac{a_{j}s_{n-i}}{2s_{n}},&i>j,\end{cases}\] and \[d_{i,j}=\begin{cases}-\frac{i}{2n}+\frac{a_{n-j+1}s_{i}}{2s_{n}},&i\leq j,\\ \frac{1}{2}-\frac{i}{2n}-\frac{a_{j}s_{n-i}}{2s_{n}},&i>j,\end{cases}\] respectively. In the following series of lemmas, we provide the machinery to prove that \(H\) is the Moore-Penrose inverse of \(Q\). **Lemma 3.1**.: \(H\mathbf{1}=0\)_._ Proof.: It is obvious from the definition that \((C+D)\mathbf{1}=0\). Thus \[H\mathbf{1}=\begin{bmatrix}(B-B)\mathbf{1}\\ (C+D)\mathbf{1}\\ (D+C)\mathbf{1}\end{bmatrix}=0,\] as required **Lemma 3.2**.: \(QH=I-\frac{1}{2n}J\)_._ Proof.: Let \(\delta^{\prime}_{i,j}:=1-\delta_{i,j}\). For \(1\leq i\leq j\leq n\), we have \[(QH)_{u_{i}^{+},u_{j}^{+}} =b_{i,j}-\delta^{\prime}_{i,1}c_{i-1,j}+\delta^{\prime}_{i,n}c_{ i,j}\] \[=\frac{a_{i}a_{n-j+1}}{2s_{n}}-\delta^{\prime}_{i,1}\left(-\frac{ i-1}{2n}-\frac{a_{n-j+1}s_{i-1}}{2s_{n}}\right)+\delta^{\prime}_{i,n}\left( \delta_{i,j}-\frac{i}{2n}-\frac{a_{n-j+1}s_{k}}{2s_{n}}\right)\] \[=\delta_{i,j}-\frac{1}{2n},\] by using equation (1) when \(i=n\). 
Also, an analogous argument yields \[(QH)_{u_{j}^{+},u_{i}^{+}}=(QH)_{u_{i}^{-},u_{j}^{-}}=(QH)_{u_{j}^{-},u_{i}^{- }}=\delta_{i,j}-\frac{1}{2n}.\] Similarly, by discussing the cases where \(i=1\), \(i=n\), and \(i\neq 1,n\) and either \(i\leq j\), \(i=j+1\), and \(i>j+1\), we obtain \[(QH)_{u_{i}^{+},u_{j}^{-}}=(QH)_{u_{i}^{-},u_{j}^{+}}=-b_{i,j}-\delta^{\prime} _{1,i}d_{i-1,j}+\delta^{\prime}_{i,n}d_{i,j}=-\frac{1}{2n}\] for all \(1\leq i,j\leq n\). The proof is complete. **Lemma 3.3**.: \(HQ\) _is a symmetric matrix._ Proof.: Let \(1\leq i,j\leq n\). Since \(B\) is a symmetric matrix, \[(HQ)_{f_{i},f_{j}}=2b_{i,j}=2b_{j,i}=(HQ)_{f_{j},f_{i}}.\] From the definition of \(H\), we get \[(HQ)_{e_{i}^{+},f_{j}}=-(HQ)_{e_{i}^{-},f_{j}}=c_{i,j}-d_{i,j}=\begin{cases} \delta_{i,j}-\frac{a_{n-j+1}s_{i}}{s_{n}},&i\leq j,\\ \frac{a_{j}s_{n-i}}{s_{n}},&i>j,\end{cases}\] and \[(HQ)_{f_{j},e_{i}^{+}}=-(HQ)_{f_{j},e_{i}^{-}}=b_{j,i}-b_{j,i+1}=\begin{cases} \frac{a_{i}a_{n-j+1}-a_{i+1}a_{n-j+1}}{2s_{n}},&i<j,\\ \frac{a_{j}a_{n-i+1}-a_{j}a_{n-i}}{2s_{n}},&i\geq j.\end{cases}\] Utilizing equations (1), (2), and (4) and a simple case-by-case analysis, it yields \((HQ)_{e_{i}^{+},f_{j}}=(HQ)_{f_{j},e_{i}}\) and \((HQ)_{e_{i}^{-},f_{j}}=(HQ)_{f_{j},e_{i}^{-}}\). On the other hand, \[(HQ)_{e_{i}^{+},e_{j}^{+}}=c_{i,j}-c_{i,j+1}=\begin{cases}\delta_{i,j}+\frac{( a_{n-j}-a_{n-j+1})s_{i}}{2s_{n}},&i\leq j,\\ \frac{(a_{j}-a_{j+1})s_{n-i}}{2s_{n}},&i>j+1,\\ -\frac{1}{2}+\frac{a_{j}s_{n-i}+a_{n-i}s_{i}}{2s_{n}},&i=j+1,\end{cases}\] \[(HQ)_{e_{i}^{+},e_{j}^{-}}=d_{i,j}-d_{i,j+1}=\begin{cases}\frac{(a_{n-j+1}-a_{ n-j})s_{i}}{2s_{n}},&i\leq j,\\ \frac{1}{2}-\frac{a_{j}s_{n-i}+a_{n-j}s_{i}}{2s_{n}},&i=j+1,\\ \frac{(a_{j+1}-a_{j})s_{n-i}}{2s_{n}},&i>j+1,\end{cases}\] \[(HQ)_{e_{j}^{-},e_{i}^{+}}=c_{j,i}-c_{j,i+1}=\begin{cases}\frac{(a_{i}-a_{i+1 })s_{n-j}}{2s_{n}},&j>i+1,\\ -\frac{1}{2}+\frac{a_{i}s_{n-j}+a_{n-i}s_{i}}{2s_{n}},&j=i+1,\\ \frac{(a_{n-i}-a_{n-i+1})s_{j}}{2s_{n}},&i\geq j\end{cases}\] and \[(HQ)_{e_{i}^{-},e_{j}^{-}}=d_{i,j}-d_{i,j+1}=\begin{cases}\frac{(a_{n-j+1}-a_ {n-j})s_{i}}{2s_{n}},&i\leq j,\\ \frac{1}{2}-\frac{a_{j}s_{n-i}+a_{n-j}s_{i}}{2s_{n}},&i=j+1,\\ \frac{(a_{j+1}-a_{j})s_{n-i}}{2s_{n}},&i>j+1,\end{cases}\] from which the result follows. **Theorem 3.4**.: _Let \(L_{n}\) be a ladder graph with incident matrix \(Q\). Then \(Q^{+}=H\)._ Proof.: By Lemmas 3.2 and 3.3, \(QH\) and \(HQ\) are symmetric. On the other hand, \(QHQ=Q\) for \(\mathbf{1}^{\prime}Q=0\). Since \(HQH=H\) by Lemmas 3.1 and 3.2, it follows that \(H=Q^{+}\). The following lemma will be used frequently in the reminder of this section. **Lemma 3.5** ([4, Corollary 3.2]).: _Let \(\Gamma\) be a connected graph and \(P\) be a path of length \(d\) between vertices \(u,v\in V(\Gamma)\). If all edges in \(P\) have the same direction from \(u\) to \(v\) then,_ \[r(u,v)=\sum_{e\in E(P)}(q^{+}_{e,u}-q^{+}_{e,v}),\] _where \(r(u,v)\) is the resistance distance between vertices \(u\) and \(v\)._ **Corollary 3.6**.: _If \(\mathcal{B}=2s(L_{n})B\). Then \(\mathcal{B}\) has eigenvalue \(s(L_{n})\) with associated eigenvector \(\mathbf{1}\). 
Also, the diagonal entries of \(\mathcal{B}\) satisfy the equations_

\[\mathcal{B}_{i,i}=s(L_{n}/f_{i})\]

_for \(i=1,\ldots,n\)._

Proof.: By [3, Lemma 2.1],

\[\sum_{i=1}^{n}q^{+}_{f_{i},u^{+}_{j}}=1-\frac{n}{2n}=\frac{1}{2}.\]

On the other hand, by Theorem 3.4,

\[\sum_{i=1}^{n}q^{+}_{f_{i},u^{+}_{j}}=\frac{a_{i}\sum_{i\leq j}a_{n-j+1}+a_{n-i+1}\sum_{i>j}a_{j}}{2s(L_{n})}\]

so that

\[s(L_{n})=a_{i}\sum_{i\leq j}a_{n-j+1}+a_{n-i+1}\sum_{i>j}a_{j}=\sum_{j=1}^{n}b_{i,j},\]

that is, \(\mathcal{B}\mathbf{1}=s(L_{n})\mathbf{1}\). Now, from Lemma 3.5 and Theorem 3.4, and the definition of the resistance distance between two vertices, we get

\[\frac{s(L_{n}/f_{i})}{s(L_{n})}=r(u^{+}_{i},u^{-}_{i})=2q^{+}_{f_{i},u^{+}_{i}}=\frac{a_{i}a_{n-i+1}}{s(L_{n})}\]

(see [6, p. 133]). Therefore

\[s(L_{n}/f_{i})=a_{i}a_{n-i+1}=\mathcal{B}_{i,i}, \tag{11}\]

as required.

In [11] the author uses circuit reduction to obtain the resistance distance and Kirchhoff index of ladder graphs. Here we use the Moore-Penrose inverses of Laplacian matrices of ladder graphs to derive new formulas for the resistance distance and Kirchhoff index of ladder graphs.

**Theorem 3.7**.: _Let \(R=[r(u,v)]\) be the resistance distance matrix of the ladder graph \(L_{n}\) of order \(2n\). Then_

\[r(u^{\varepsilon}_{i},u^{\varepsilon^{\prime}}_{j})=\frac{d}{2}-\varepsilon\varepsilon^{\prime}\frac{a_{i}a_{n-j+1}}{2s_{n}}+\alpha(i,j),\]

_where \(d=d(u^{+}_{i},u^{+}_{j})\), \(\varepsilon,\varepsilon^{\prime}=\pm 1\), and \(\alpha(i,j)=(s(L_{n}/f_{i})+s(L_{n}/f_{j}))/4s_{n}\)._

Proof.: Let \(d=j-i=d(u^{+}_{i},u^{+}_{j})\) and \(P:e^{+}_{i},\ldots,e^{+}_{j-1}\) be the shortest path between \(u^{+}_{i}\) and \(u^{+}_{j}\), where \(1\leq i<j\leq n\). Then

\[\sum_{k=i}^{j-1}q^{+}_{e^{+}_{k},u^{+}_{i}}=\sum_{k=i}^{j-1}c_{k,i}=c_{i,i}+\sum_{k=i+1}^{j-1}c_{k,i}.\]

Thus

\[\sum_{k=i}^{j-1}q^{+}_{e^{+}_{k},u^{+}_{i}}=\left(1-\frac{i}{2n}-\frac{a_{n-i+1}s_{i}}{2s_{n}}\right)+\sum_{k=i+1}^{j-1}\left(\frac{1}{2}-\frac{k}{2n}+\frac{a_{i}s_{n-k}}{2s_{n}}\right).\]

By equations (2) and (3),

\[\sum_{k=i}^{j-1}q_{e_{k}^{+},u_{i}^{+}}^{+} =\frac{d+1}{2}-\frac{id}{2n}-\frac{d(d-1)}{4n}+\frac{a_{i}(a_{n-i}-a_{n-j+1})-a_{n-i+1}(a_{i+1}-a_{i})}{4s_{n}}\]
\[=\frac{d+1}{2}-\frac{id}{2n}-\frac{d(d-1)}{4n}+\frac{a_{n}-a_{n+1}+a_{i}a_{n-i+1}-a_{i}a_{n-j+1}}{4s_{n}}\]
\[=\frac{d+1}{2}-\frac{id}{2n}-\frac{d(d-1)}{4n}+\frac{-2s_{n}+s(L_{n}/f_{i})-a_{i}a_{n-j+1}}{4s_{n}}\]
\[=\frac{d}{2}-\frac{id}{2n}-\frac{d(d-1)}{4n}-\frac{a_{i}a_{n-j+1}}{4s_{n}}+\frac{s(L_{n}/f_{i})}{4s_{n}}.\]

The same argument for the vertex \(u_{j}^{+}\) yields

\[\sum_{k=i}^{j-1}q_{e_{k}^{+},u_{j}^{+}}^{+}=\sum_{k=i}^{j-1}c_{k,j} =-\frac{id}{2n}-\frac{d(d-1)}{4n}+\frac{a_{n-j+1}(a_{i}-a_{d+i})}{4s_{n}}\]
\[=-\frac{id}{2n}-\frac{d(d-1)}{4n}+\frac{a_{i}a_{n-j+1}}{4s_{n}}-\frac{s(L_{n}/f_{j})}{4s_{n}}.\]

Therefore,

\[r(u_{i}^{+},u_{j}^{+})=\frac{d}{2}-\frac{a_{i}a_{n-j+1}}{2s_{n}}+\frac{s(L_{n}/f_{i})+s(L_{n}/f_{j})}{4s_{n}}.\]

From the block structure of \(H\) it follows that \(r(u_{i}^{-},u_{j}^{-})=r(u_{i}^{+},u_{j}^{+})\).
Finally, from the equality \[\sum_{k=i}^{j-1}q_{e_{k}^{+},u_{j}^{-}}^{+}=\sum_{k=i}^{j-1}d_{k,j} =\sum_{k=i}^{j-1}\left(-\frac{k}{2n}+\frac{a_{n-j+1}s_{k}}{2s_{n}}\right)\] \[=-\frac{id}{2n}-\frac{d(d-1)}{4n}-\frac{a_{i}a_{n-j+1}}{4s_{n}}+ \frac{s(L_{n}/f_{j})}{4s_{n}}\] we obtain \[r(u_{j}^{-},u_{i}^{+})=r(u_{i}^{+},u_{j}^{-}) =\sum_{k=i}^{j-1}q_{e_{k}^{+},u_{i}^{+}}^{+}+b_{j,i}-\sum_{k=i}^{j -1}q_{e_{k}^{+},u_{j}^{-}}^{+}+b_{j,j}\] \[=\frac{d}{2}+\frac{a_{i}a_{n-j+1}}{2s_{n}}+\frac{s(L_{n}/f_{i})+s (L_{n}/f_{j})}{4s_{n}},\] as required. Utilizing Theorem 3.7, we can compute the Kirchhoff index of ladder graphs. **Corollary 3.8**.: _For any \(n\geq 1\),_ \[Kf(L_{n})=\frac{n^{2}}{3}\left(n+1+\frac{a_{n}}{s_{n}}\right).\] Proof.: We have \[Kf(L_{n}) =\sum_{\{u,v\}\subseteq V(L_{n})}r(u,v)\] \[=\frac{4n\sum_{i=1}^{n}s(L_{n}/f_{i})}{4s_{n}}+2Kf(P_{n})\] \[=\frac{n\sum_{i=1}^{n}a_{i}a_{n+1-i}}{s_{n}}+\frac{n(n^{2}-1)}{3}\] from which the result follows by equation (6). ## 4. Circular ladder graph In this section, we compute the Moore-Penrose inverse of incidence matrices of circular ladder graphs. Accordingly, we obtain the resistance distance matrix and Kirchhoff index of circular ladders. In what follows, \(CL_{n}\) denotes the circular ladder graph of order \(2n\) and \(u_{1}^{+},\ldots,u_{n}^{+}\) and \(u_{1}^{-},\ldots,u_{n}^{-}\) stand for the vertices of the internal and external cycles of \(CL_{n}\), respectively, as in Fig. 2. Also, \(u_{i+kn}^{\varepsilon}\) denotes the vertex \(u_{i}^{\varepsilon}\) for all \(1\leq i\leq n\), integer \(k\), and \(\varepsilon=\pm 1\). Let \(\Gamma\) be a planar graph embedded on a shaded cylinder as in Fig. 3. A Jordan path is a smooth simple path starting from outer border and ending at the inner border such that it meets every face and edge at most once and never paths through vertices (see [20]). Removing edges crossed by a Jordan path \(\mathcal{J}\) results in a graph whose spanning trees depend only on \(\mathcal{J}\). On the other hand, every spanning tree of \(\Gamma\) determines a Jordan path. Therefore, the set of all spanning trees of \(\Gamma\) can be partitioned into sets \(S_{\mathcal{J}}(\Gamma)\) where each set \(S_{\mathcal{J}}(\Gamma)\) corresponds to spanning trees associated to a Jordan path \(\mathcal{J}\). Let \(s_{\mathcal{J}}(\Gamma):=\#S_{\mathcal{J}}(\Gamma)\). **Theorem 4.1**.: _Let \(CL_{n}\) be the circular ladder graph of order \(2n\). If \(f\) is a spoke, then_ \[s(CL_{n}/f)=ns(L_{n}).\] Proof.: Consider a natural embedding of circular ladder \(CL_{n}\) on a cylinder as in Fig. 3 and assume without loss of generality that \(f=u_{1}^{+}u_{1}^{-}\). We have two (equivalent up to reflection) classes of Jordan paths: Figure 2. Circular ladder \(CL_{n}\) 1. Jordan paths \(\mathcal{J}^{1}_{i,j}\) through edges \(u^{+}_{i}u^{+}_{i+1},u^{+}_{i+1}u^{-}_{i+1},\ldots,u^{+}_{j-1}u^{-}_{j-1},u^{-}_{ j-1}u^{-}_{j}\) for \(1\leq i<j\leq n+1\); 2. Jordan paths \(\mathcal{J}^{2}_{i,j}\) through edges \(u^{+}_{j}u^{+}_{j-1},u^{+}_{j-1}u^{-}_{j-1},\ldots,u^{+}_{i+1}u^{-}_{i+1},u^{-} _{i}u^{-}_{i+1}\) for \(1\leq i<j\leq n+1\). Note that the Jordan paths in parts (1) and (2) coincide when \(j=i+1\). Also \(s_{\mathcal{J}^{1}_{i,j}}(CL_{n}/f)=s_{\mathcal{J}^{2}_{i,j}}(CL_{n}/f)\). 
Utilizing the above Jordan paths in conjunction with equation (11), yields \[s(CL_{n}/f)= 2\sum_{1\leq i<j\leq n+1}s_{\mathcal{J}^{1}_{i,j}}(CL_{n}/f)-\sum _{i=1}^{n}s_{\mathcal{J}^{1}_{i,i+1}}(CL_{n}/f)\] \[= 2\sum_{1\leq i<j\leq n+1}s(L_{n+1+i-j}/f_{i})-\sum_{i=1}^{n}s(L_ {n}/f_{i})\] \[= 2\sum_{1\leq i<j\leq n+1}a_{i}a_{n+2-j}-\sum_{i=1}^{n}a_{i}a_{n+ 1-i}\] \[= 2\sum_{2\leq j\leq n+1}s_{j-1}a_{n+2-j}-\sum_{i=1}^{n}a_{i}a_{n+ 1-i}\] \[= 2\sum_{1\leq i\leq n}a_{i}s_{n+1-i}-\sum_{i=1}^{n}a_{i}a_{n+1-i},\] where \(L_{s}/f_{t}\) denotes the contraction of the \(t\)'s spoke of \(L_{s}\) from right or left. Let \(x_{n}:=\sum_{i=1}^{n}a_{i}a_{n+1-i}\) and \(y_{n}:=\sum_{i=1}^{n}a_{i}s_{n+1-i}\) for all \(n\geq 1\). Clearly, \(y_{n}-y_{n-1}=x_{n}\). Also, from equation (4), it follows that \(y_{n}+y_{n-1}=ns_{n}\). Thus \(2y_{n}=ns_{n}+x_{n}\), which implies that \[s(CL_{n}/f)=2y_{n}-x_{n}=ns_{n},\] as required. **Theorem 4.2**.: _Let \(CL_{n}\) be the circular ladder graph of order \(2n\). Then_ \[s(CL_{n})=\frac{1}{2}n(a_{n+1}+a_{n}-2).\] Proof.: Analogous to the proof of Theorem 4.1, we consider two families of (equivalent up to reflection) Jordan paths given by \(\mathcal{J}^{1}_{i,j}\) and \(\mathcal{J}^{2}_{i,j}\) as in Theorem 4.1 where Figure 3. Jordan path \(i\) is behind \(j\) regarding clockwise rotations. From the rotational symmetry of \(CL_{n}\) and the equality of \(\mathcal{J}^{1}_{i,j}\) and \(\mathcal{J}^{2}_{i,j}\) for consecutive values of \(i\) and \(j\), it follows that \[s(CL_{n})=2n\sum_{i=1}^{n}s(L_{i})-ns(L_{n})=\frac{1}{2}n(a_{n+1}-a_{n}-2)\] by equation (2). In what follows, \(e_{i}^{+}\), \(e_{i}^{-}\), and \(f_{i}\) stand for directed edges \((u_{i}^{+},u_{i+1}^{+})\), \((u_{i}^{-},u_{i+1}^{-})\), and \((u_{i}^{+},u_{i}^{-})\) for \(i=1,\ldots,n\). Accordingly, we consider the orientation of \(CL_{n}\) whose directed edges are \(e_{i}^{+}\), \(e_{i}^{-}\), and \(f_{i}\) for \(i=1,\ldots,n\). **Theorem 4.3**.: _Let \(CL_{n}\) be the oriented circular ladder graph of order \(2n\) with incidence matrix \(Q\). The Moore-Penrose inverse \(Q^{+}=[q^{+}_{e,u}]\) of \(Q\) is given by_ \[q^{+}_{e_{i+}^{+},u_{i}^{+}} =-\varepsilon\varepsilon^{\prime}\frac{2a_{t+1}-a_{t}}{4}\cdot \frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2}+\varepsilon\varepsilon^{\prime}\frac{a_{ t+1}+\varepsilon\varepsilon^{\prime}1}{4}-\frac{2t+1}{4n},\] \[\varepsilon q^{+}_{f_{i+t},u_{i}^{+}} =\frac{a_{t+1}+a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}- 2}-\frac{a_{t+1}-a_{t}}{4}\] _for all \(1\leq i\leq n\), \(t=0,\ldots,n-1\), and \(\varepsilon,\varepsilon^{\prime}=\pm 1\)._ Proof.: Let \(d=d(f_{i},u_{j})\). From the symmetry of graph, it follows that \(q^{+}_{f_{i},u_{i}^{-}}=-q^{+}_{f_{i},u_{i}^{+}}\). Hence \[2q^{+}_{f_{i},u_{i}^{+}}=r(u_{i}^{+},u_{i}^{-})=\frac{s(CL_{n}/f_{i})}{s(CL_{n })}=\frac{s_{n}}{a_{n+1}+a_{n}-2}\] by Lemma 3.5 and Theorems 4.1 and 4.2. Consider the edge cut-set \(\{e_{j}^{+},f_{j+1},e_{j+1}^{+}\}\), where \(j=i+t\) with \(0\leq t<n\). From [3, Lemma 2.1], it follows that \[q^{+}_{e_{j+1}^{+},u_{i}^{+}}=\delta_{i+1,j}-\frac{1}{2n}-q^{+}_{f_{j+1},u_{i }^{+}}+q^{+}_{e_{j}^{+},u_{i}^{+}}. \tag{12}\] Analogously, by considering the edge cut-set \(\{e_{j}^{-},f_{j+1},e_{j+1}^{-}\}\), we obtain \[q^{+}_{e_{j+1}^{-},u_{i}^{+}}=-\frac{1}{2n}+q^{+}_{f_{j+1},u_{i}^{+}}+q^{+}_{e _{j}^{-},u_{i}^{+}}. \tag{13}\] for all \(j\geq i\). 
If we replace \(j\) by \(i-1\), and use the equalities \[q^{+}_{e_{i-1}^{+},u_{i}^{+}}=-q^{+}_{e_{i}^{+},u_{i}^{+}}\quad\text{and}\quad q ^{+}_{e_{i-1}^{-},u_{i}^{+}}=-q^{+}_{e_{i}^{-},u_{i}^{+}}\] obtained from the symmetry of the graph, then we get the initial values \[q^{+}_{e_{i}^{+},u_{i}^{+}}=\frac{1}{2}\left(1-\frac{1}{2n}-q^{+}_{f_{i},u_{i }^{+}}\right)\quad\text{and}\quad q^{+}_{e_{i}^{-},u_{i}^{+}}=\frac{1}{2}\left( \frac{-1}{2n}+q^{+}_{f_{i},u_{i}^{+}}\right).\] On the other hand, the incidence vector of the cycle induced by \(\{u_{j}^{+},u_{j+1}^{+},u_{j}^{-},u_{j+1}^{-}\}\) lies in the left null space of \(Q^{+}\), from which we obtain \[q^{+}_{f_{j+1},u_{i}^{+}}-q^{+}_{e_{j}^{-},u_{i}^{+}}-q^{+}_{f_{j},u_{i}^{+}}+ q^{+}_{e_{j}^{+},u_{i}^{+}}=0\] so that \[q^{+}_{f_{j+1},u_{i}^{+}}=q^{+}_{e_{j}^{-},u_{i}^{+}}+q^{+}_{f_{j},u_{i}^{+}}-q ^{+}_{e_{j}^{+},u_{i}^{+}} \tag{14}\] for all \(j\geq i\). Let \[C:=\begin{pmatrix}2&-1&-1\\ -1&2&1\\ -1&1&1\end{pmatrix}\quad\text{and}\quad D:=-\frac{1}{2n}\begin{pmatrix}1\\ 1\\ 0\end{pmatrix}.\] If \(X_{j}\) denotes the column matrix \((q^{+}_{e_{j}^{+},u_{i}^{+}}\ q^{+}_{e_{j}^{-},u_{i}^{+}}\ q^{+}_{f_{j},u_{i}^{ +}})^{T}\) for \(i\leq j<i+n\), then from the equalities (13) and (14), it follows that \[X_{j+1}=CX_{j}+D\] for all \(i\leq j<i+n\). Using induction on \(t\geq 0\), one can prove that \[C^{t}=\begin{pmatrix}\frac{a_{t+1}+1}{2}&-\frac{a_{t+1}-1}{2}&-s_{t}\\ -\frac{a_{t+1}-1}{2}&\frac{a_{t+1}+1}{2}&s_{t}\\ -s_{t}&s_{t}&a_{t}\end{pmatrix}\] and \[I+C+\cdots+C^{t-1}=\begin{pmatrix}\frac{s_{t}+t}{2}&-\frac{s_{t}-t}{2}&-\frac {a_{t}-1}{2}\\ -\frac{s_{t}-t}{2}&\frac{s_{t}+t}{2}&\frac{a_{t}-1}{2}\\ -\frac{a_{t}-1}{2}&\frac{a_{t}-1}{2}&s_{t-1}+1\end{pmatrix}.\] Now, from the equality \[X_{i+t}=C^{t}X_{i}+(I+C+\cdots+C^{t-1})D,\] it follows that \[q^{+}_{e_{i+t}^{+},u_{i}^{+}} =-\frac{2a_{t+1}-a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2 }+\frac{a_{t+1}+1}{4}-\frac{2t+1}{4n},\] \[q^{+}_{e_{i+t},u_{i}^{+}} =\frac{2a_{t+1}-a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2 }-\frac{a_{t+1}-1}{4}-\frac{2t+1}{4n},\] \[q^{+}_{f_{i+t},u_{i}^{+}} =\frac{a_{t+1}+a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2 }-\frac{a_{t+1}-a_{t}}{4}\] for all \(t=0,\ldots,n-1\). Finally, the result follows by applying the same argument or using the symmetry of graph for vertices \(u_{1}^{-},\ldots,u_{n}^{-}\). **Corollary 4.4**.: _If \(n=2k+1\), then_ \[s(CL_{n})=ns(L_{n})\left(\frac{s(L_{k+1})+s(L_{k})}{s(L_{k+1})-s(L_{k})}\right)\] _and if \(n=2k\), then_ \[s(CL_{n})=ns(L_{n})\left(\frac{s(L_{k+1})+2s(L_{k})+s(L_{k-1})}{s(L_{k+1})-s( L_{k-1})}\right).\] Proof.: We know that \(s(CL_{n}/f_{i})/s(CL_{n})=r(u_{i}^{+},u_{i}^{-})=2q^{+}_{f_{i},u_{i}^{+}}\). Thus \[s(CL_{n})=ns(L_{n})\cdot\frac{a_{n+1}+a_{n}-2}{a_{n+1}-a_{n}}.\] Using Binet's formulas, one can easily show that \[\frac{a_{n+1}+a_{n}-2}{a_{n+1}-a_{n}}=\frac{s_{k+1}+s_{k}}{s_{k+1}-s_{k}}\quad \text{or}\quad\frac{s_{k+1}+2s_{k}+s_{k-1}}{s_{k+1}-s_{k}}\] according to \(n=2k+1\) or \(n=2k\), respectively. The result follows. **Corollary 4.5**.: _Let \(1\leq i\leq n\), \(0\leq t<n\), and \(\varepsilon,\varepsilon^{\prime}=\pm 1\). 
Then_ \[r(u_{i}^{\varepsilon},u_{i+t}^{\varepsilon^{\prime}})=-\varepsilon\varepsilon^{ \prime}\frac{a_{t+1}+a_{t}-\varepsilon\varepsilon^{\prime}2}{4}\cdot\frac{a_{n +1}-a_{n}}{a_{n+1}+a_{n}-2}+\varepsilon\varepsilon^{\prime}\frac{a_{t+1}-a_{t }}{4}+\frac{t}{2}-\frac{t^{2}}{2n}.\] Proof.: First observe that \(r(u_{i}^{-},u_{i+t}^{-})=r(u_{i}^{+},u_{i+t}^{+})=r(u_{0}^{+},u_{t}^{+})\) by Theorem 4.3 and Lemma 3.5. We have \[\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{0}^{+}}^{+}=-\frac{a_{t+1}+a_{t}-2}{8}\cdot \frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2}+\frac{a_{t+1}-a_{t}}{8}+\frac{t}{4}- \frac{t^{2}}{4n}\] and \[\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{0}^{-}}^{+}=\frac{a_{t+1}+a_{t}-2}{8}\cdot \frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2}-\frac{a_{t+1}-a_{t}}{8}+\frac{t}{4}- \frac{t^{2}}{4n}.\] Also, \[\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{t}^{+}}^{+}=\sum_{s=0}^{t-1}q_{e_{n+s-t}^{+},u _{0}^{+}}^{+}=-\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{0}^{+}}^{+}\] and \[\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{t}^{-}}^{+}=\sum_{s=0}^{t-1}q_{e_{n+s-t}^{+},u _{0}^{-}}^{+}=-\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{0}^{-}}^{+}\] by symmetry of the graph. Let \(1\leq i\leq n\) and \(0\leq t<n\). Then, by Lemma 3.5, \[r(u_{i}^{+},u_{i+t}^{+}) =r(u_{0}^{+},u_{t}^{+})=2\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{0}^{+}}^{+}\] \[=-\frac{a_{t+1}+a_{t}-2}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n }-2}+\frac{a_{t+1}-a_{t}}{4}+\frac{t}{2}-\frac{t^{2}}{2n}\] and \[r(u_{i}^{+},u_{i+t}^{-}) =r(u_{0}^{+},u_{t}^{-})=\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{0}^{+}}^{ +}-\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{t}^{-}}^{+}+q^{+}(f_{t},u_{0}^{+})-q^{+}(f_ {t},u_{t}^{-})\] \[=\sum_{s=0}^{t-1}q_{e_{s}^{+},u_{0}^{+}}^{+}+\sum_{s=0}^{t-1}q_{e _{s}^{+},u_{0}^{-}}^{+}+q^{+}(f_{t},u_{0}^{+})+q^{+}(f_{0},u_{0}^{+})\] \[=\frac{a_{t+1}+a_{t}+2}{4}\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2}- \frac{a_{t+1}-a_{t}}{4}+\frac{t}{2}-\frac{t^{2}}{2n},\] as required. **Corollary 4.6**.: _For every \(n\geq 3\), we have_ \[Kf(CL_{n})=n^{2}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2}+\frac{1}{6}n(n^{2}-1)\] It is known from [19, Theorem F] that \(Kf(\Gamma)=n\mathrm{tr}(L^{+}(\Gamma))\) for every graph \(\Gamma\) of order \(n\), where \(L(\Gamma)\) denotes the Laplacian matrix of \(\Gamma\). Also, from the definition, we know that \[r(u,v)=L_{u,u}^{+}+L_{v,v}^{+}-L_{u,v}^{+}-L_{v,u}^{+}\] for all \(u,v\in V(\Gamma)\), where \(L^{+}\) denotes the Moore-Penrose inverse of \(L=L(\Gamma)\) and \(r(u,v)\) is the resistance distance between vertices \(u\) and \(v\) of \(\Gamma\). Utilizing the fact that circular ladders are vertex transitive graphs, we obtain the following result immediately. **Corollary 4.7**.: _Let \(L:=L(CL_{n})\) be the Laplacian matrix of \(CL_{n}\) and \(L^{+}=[l_{uv}^{+}]\) be its Moore-Penrose inverse. Then_ \[l_{u_{i}^{\varepsilon},u_{i+t}^{\varepsilon^{\prime}}}=\varepsilon\varepsilon^{ \prime}\frac{a_{t+1}+a_{t}}{8}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}-2}- \varepsilon\varepsilon^{\prime}\frac{a_{t+1}-a_{t}}{8}-\frac{t}{4}+\frac{t^{2 }}{4n}+\frac{1}{24}\left(n-\frac{1}{n}\right)\] _for all \(1\leq i\leq n\), \(0\leq t<n\), and \(\varepsilon,\varepsilon^{\prime}=\pm 1\)._ ## 5. Mobius ladder graph Following the same techniques used in the previous section, we shall compute the Moore-Penrose inverse of incidence matrices of Mobius ladder graphs. Likewise, we obtain the resistance distance matrix and Kirchhoff index of Mobius ladders. 
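Before carrying the same programme over to \(M_{n}\), the circular-ladder results above (Theorem 4.2 and Corollary 4.6) lend themselves to a quick numerical sanity check. The sketch below is not part of the paper; it assumes that the sequences \(a_{n}\) and \(s_{n}\) from the omitted preliminaries satisfy \(x_{n}=4x_{n-1}-x_{n-2}\), with \(s_{0}=0\), \(s_{1}=1\) (as stated for \(s(L_{n})\)) and with assumed initial values \(a_{0}=a_{1}=1\), chosen to be consistent with equation (11) and the spanning-tree counts used in the text.

```python
import networkx as nx
import numpy as np

def seq(last, x0, x1):
    """x_k = 4*x_{k-1} - x_{k-2}; returns the list x_0, ..., x_last."""
    xs = [x0, x1]
    for _ in range(last - 1):
        xs.append(4 * xs[-1] - xs[-2])
    return xs

n = 6
a = seq(n + 1, 1, 1)   # assumed initial values a_0 = a_1 = 1 (so a_2 = 3, a_3 = 11, ...)
s = seq(n + 1, 0, 1)   # s_0 = 0, s_1 = 1 as stated in the text

G = nx.circular_ladder_graph(n)                      # CL_n, order 2n
L = nx.laplacian_matrix(G).toarray().astype(float)
Lp = np.linalg.pinv(L)

# Kirchhoff index: Kf = (order) * trace(L^+), compared with Corollary 4.6
print(2 * n * np.trace(Lp),
      n**2 * (a[n + 1] - a[n]) / (a[n + 1] + a[n] - 2) + n * (n**2 - 1) / 6)

# Spanning trees via the matrix-tree theorem, compared with Theorem 4.2
print(round(np.linalg.det(np.delete(np.delete(L, 0, 0), 0, 1))),
      n * (a[n + 1] + a[n] - 2) // 2)
```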
In what follows, \(M_{n}\) denotes the Mobius ladder graph of order \(2n\) and \(u_{1},\ldots,u_{n}\) and \(v_{1},\ldots,v_{n}\) stand for the vertices of the external and internal cycles of \(M_{n}\), respectively, as in Fig. 4(a). Also, we may set \(u_{i+n}:=v_{i}\) for all \(i=1,\ldots,n\), and that \(u_{i+2nk}:=u_{i}\) for all \(1\leq i\leq 2n\) and integers \(k\). In what follows, \(f_{i}:=u_{i}v_{i}\) (\(1\leq i\leq n\)) is the \(i\)-th spoke and \(e_{i}=u_{i}u_{i+1}\) is the \(i\)-th edge of the \(2n\)-cycle \(u_{1},\ldots,u_{2n}\) (\(1\leq i\leq 2n\)), see the second drawing in Fig. 4(b). **Theorem 5.1**.: _Let \(M_{n}\) be the Mobius ladder graph of order \(2n\) and \(Q\) be its incidence matrix. Then the Moore-Penrose inverse of \(Q=[q_{e,v}^{+}]\) is given by_ \[q_{e_{i+t},u_{i}}^{+} =-\frac{2a_{t+1}-a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+ 2}+\frac{a_{t+1}+1}{4}-\frac{2t+1}{4n},\] \[q_{e_{n+i+t},u_{i}}^{+} =\frac{2a_{t+1}-a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+ 2}-\frac{a_{t+1}-1}{4}-\frac{2t+1}{4n},\] \[\varepsilon_{i+t}q_{f_{i+t},u_{i}}^{+} =\frac{a_{t+1}+a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+2} -\frac{a_{t+1}-a_{t}}{4}\] _for all \(t=0,\ldots,n-1\)._ Figure 4. Möbius ladder graph \(M_{n}\) Proof.: First observe that \(s(M_{n}/f_{i})=s(CL_{n}/f_{i})\) and that by [20, p. 41] \[s(M_{n})=s(CL_{n})+2n=\frac{n}{2}(a_{n+1}+a_{n}+2).\] Therefore, by Lemma 3.5 and Theorem 4.1, \[q^{+}_{f_{i},u_{i}}=-q^{+}_{f_{i},v_{i}}=\frac{r(u_{i},v_{i})}{2}=\frac{s(M_{n} /f_{i})}{2s(M_{n})}=\frac{s_{n}}{a_{n+1}+a_{n}+2}.\] Consider the edge cut-set \(\{e_{j},f_{j+1},e_{j+1}\}\). From [3, Lemma 2.1], it follows that \[q^{+}_{e_{j+1},u_{i}}=\delta_{i,j+1}-\frac{1}{2n}+q^{+}_{e_{j},u_{i}}-\varepsilon _{j+1}q^{+}_{f_{j+1},u_{i}}, \tag{15}\] where \(\varepsilon_{j}=(-1)^{[(j-1)/n]}\). Since \(q^{+}_{e_{i},u_{i}}=-q^{+}_{e_{i-1},u_{i}}\) by the symmetry of the graph, equation (15) yields the initial values \[q^{+}_{e_{i},u_{i}}=\frac{1}{2}\left(1-\frac{1}{2n}-\varepsilon_{i}q^{+}_{f_{ i},u_{i}}\right)\] for all \(i=1,\ldots,2n\). Likewise, from \(f_{n+i}=f_{i}\) and \(q^{+}_{e_{n+i},u_{i}}=-q^{+}_{e_{n+i-1},u_{i}}\) it follows that \[q^{+}_{e_{n+i},u_{i}}=\frac{1}{2}\left(-\frac{1}{2n}+\varepsilon_{i}q^{+}_{f_{ i},u_{i}}\right)\] for all \(i=1,\ldots,2n\). Notice that the values of \(q^{+}_{e_{i+i},u_{i}}\) are independent of \(i\) by the symmetry of the graph, so we may restrict ourselves to \(i\) with \(1\leq i\leq n\) and assume \(\varepsilon_{i}=1\). On the other hand, the incidence vector of the cycle \(u_{j},u_{j+1},u_{n+j},u_{n+j+1}\) lies in the left null space of \(Q^{+}\), from which it follows that \[q^{+}_{e_{j},u_{i}}+\varepsilon_{j+1}q^{+}_{f_{j+1},u_{i}}-q^{+}_{e_{n+j},u_{ i}}-\varepsilon_{j}q^{+}_{f_{j},u_{i}}=0\] or \[\varepsilon_{j+1}q^{+}_{f_{j+1},u_{i}}=q^{+}_{e_{n+j},u_{i}}-q^{+}_{e_{j},u_{ i}}+\varepsilon_{j}q^{+}_{f_{j},u_{i}}. \tag{16}\] Let \[C:=\begin{pmatrix}2&-1&-1&0\\ -1&2&0&-1\\ -1&1&1&0\\ 1&-1&0&1\end{pmatrix}\quad\text{and}\quad D:=-\frac{1}{2n}\begin{pmatrix}1\\ 1\\ 0\\ 0\end{pmatrix}.\] If \(X_{j}\) denotes the column matrix \((q^{+}_{e_{j},u_{i}}\ q^{+}_{e_{n+j},u_{i}}\ \varepsilon_{j}q^{+}_{f_{j},u_{i}}\ \varepsilon_{n+j}q^{+}_{f_{n+j},u_{i}})^{T}\) for \(i\leq j<i+n\), then from the equalities (15) and (16), it follows that \[X_{j+1}=CX_{j}+D\] for all \(i\leq j<n+i-1\). 
Using induction on \(t\geq 0\), one can prove that

\[C^{t}=\begin{pmatrix}\frac{a_{t+1}+1}{2}&-\frac{a_{t+1}-1}{2}&-\frac{s_{t}+t}{2}&\frac{s_{t}-t}{2}\\ -\frac{a_{t+1}-1}{2}&\frac{a_{t+1}+1}{2}&\frac{s_{t}-t}{2}&-\frac{s_{t}+t}{2}\\ -s_{t}&s_{t}&\frac{a_{t}+1}{2}&-\frac{a_{t}-1}{2}\\ s_{t}&-s_{t}&-\frac{a_{t}-1}{2}&\frac{a_{t}+1}{2}\end{pmatrix}\]

and

\[I+C+\cdots+C^{t-1}=\begin{pmatrix}\frac{s_{t}+t}{2}&-\frac{s_{t}-t}{2}&-\frac{a_{t}+t^{2}-t-1}{4}&\frac{a_{t}-t^{2}+t-1}{4}\\ -\frac{s_{t}-t}{2}&\frac{s_{t}+t}{2}&\frac{a_{t}-t^{2}+t-1}{4}&-\frac{a_{t}+t^{2}-t-1}{4}\\ -\frac{a_{t}-1}{2}&\frac{a_{t}-1}{2}&\frac{s_{t-1}+t+1}{2}&-\frac{s_{t-1}-(t-1)}{2}\\ \frac{a_{t}-1}{2}&-\frac{a_{t}-1}{2}&-\frac{s_{t-1}-(t-1)}{2}&\frac{s_{t-1}+t+1}{2}\end{pmatrix}.\]

Now, from the equality

\[X_{i+t}=C^{t}X_{i}+(I+C+\cdots+C^{t-1})D,\]

it follows that

\[q^{+}_{e_{i+t},u_{i}} =-\frac{2a_{t+1}-a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+2}+\frac{a_{t+1}+1}{4}-\frac{2t+1}{4n},\]
\[q^{+}_{e_{n+i+t},u_{i}} =\frac{2a_{t+1}-a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+2}-\frac{a_{t+1}-1}{4}-\frac{2t+1}{4n},\]
\[\varepsilon_{i+t}q^{+}_{f_{i+t},u_{i}} =\frac{a_{t+1}+a_{t}}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+2}-\frac{a_{t+1}-a_{t}}{4}\]

for all \(t=0,\ldots,n-1\), as required.

**Corollary 5.2**.: _Let \(1\leq i\leq 2n\) and \(0\leq t\leq n\). Then_

\[r(u_{i},u_{i+t})=-\frac{a_{t+1}+a_{t}-2}{4}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+2}+\frac{a_{t+1}-a_{t}}{4}+\frac{t}{2}-\frac{t^{2}}{2n}.\]

Proof.: The result follows from Lemma 3.5 and Theorem 5.1 in conjunction with the fact that

\[\sum_{j=0}^{t-1}q^{+}_{e_{i+j},u_{i+t}}=-\sum_{j=0}^{t-1}q^{+}_{e_{i+j},u_{i}}\]

by the symmetry of the graph.

Corollary 5.2 in conjunction with equation (5) yields

**Corollary 5.3**.: _For every \(n\geq 3\), we have_

\[Kf(M_{n})=n^{2}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+2}+\frac{1}{6}n(n^{2}-1).\]

Finally, the Moore-Penrose inverse of the Laplacian matrix of \(M_{n}\) can be obtained analogously to Corollary 4.7.

**Corollary 5.4**.: _Let \(L:=L(M_{n})\) be the Laplacian matrix of \(M_{n}\) and \(L^{+}=[l^{+}_{uv}]\) be its Moore-Penrose inverse. Then_

\[l^{+}_{u_{i},u_{i+t}}=\frac{a_{t+1}+a_{t}}{8}\cdot\frac{a_{n+1}-a_{n}}{a_{n+1}+a_{n}+2}-\frac{a_{t+1}-a_{t}}{8}-\frac{t}{4}+\frac{t^{2}}{4n}+\frac{1}{24}\left(n-\frac{1}{n}\right)\]

_for all \(1\leq i\leq 2n\) and \(0\leq t<n\)._
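A similar numerical check can be run for the Möbius ladder: per the labelling above, \(M_{n}\) is the \(2n\)-cycle \(u_{1},\ldots,u_{2n}\) together with the \(n\) spokes \(u_{i}u_{i+n}\), i.e. a circulant graph. The sketch below (again not part of the paper, and using the same assumed initial values \(a_{0}=a_{1}=1\)) compares Corollaries 5.2 and 5.3 with values computed from the Moore-Penrose inverse of the Laplacian.

```python
import networkx as nx
import numpy as np

def seq(last, x0, x1):
    """x_k = 4*x_{k-1} - x_{k-2}; returns the list x_0, ..., x_last."""
    xs = [x0, x1]
    for _ in range(last - 1):
        xs.append(4 * xs[-1] - xs[-2])
    return xs

n = 5
a = seq(n + 1, 1, 1)                      # assumed a_0 = a_1 = 1
M = nx.circulant_graph(2 * n, [1, n])     # Mobius ladder M_n
L = nx.laplacian_matrix(M).toarray().astype(float)
Lp = np.linalg.pinv(L)

def r_numeric(t):
    """Resistance distance r(u_1, u_{1+t}) from L^+."""
    return Lp[0, 0] + Lp[t, t] - 2 * Lp[0, t]

def r_formula(t):
    """Closed formula of Corollary 5.2."""
    ratio = (a[n + 1] - a[n]) / (a[n + 1] + a[n] + 2)
    return (-(a[t + 1] + a[t] - 2) / 4 * ratio
            + (a[t + 1] - a[t]) / 4 + t / 2 - t ** 2 / (2 * n))

print([round(r_numeric(t), 6) for t in range(1, n + 1)])
print([round(r_formula(t), 6) for t in range(1, n + 1)])

# Kirchhoff index Kf(M_n) = 2n * trace(L^+), compared with Corollary 5.3
print(2 * n * np.trace(Lp),
      n**2 * (a[n + 1] - a[n]) / (a[n + 1] + a[n] + 2) + n * (n**2 - 1) / 6)
```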
2310.16700
Streamlining Knowledge Graph Construction with a façade: The SPARQL Anything project
What should a data integration framework for knowledge engineers look like? Recent research on Knowledge Graph construction proposes the design of a fa\c{c}ade, a notion borrowed from object-oriented software engineering. This idea is applied to SPARQL Anything, a system that allows querying heterogeneous resources as-if they were in RDF, in plain SPARQL 1.1, by overloading the SERVICE clause. SPARQL Anything supports a wide variety of file formats, from popular ones (CSV, JSON, XML, Spreadsheets) to others that are not supported by alternative solutions (Markdown, YAML, DOCx, Bibtex). Features include querying Web APIs with high flexibility, parametrised queries, and chaining multiple transformations into complex pipelines. In this paper, we describe the design rationale and software architecture of the SPARQL Anything system. We provide references to an extensive set of reusable, real-world scenarios from various application domains. We report on the value-to-users of the founding assumptions of its design, compared to alternative solutions through a community survey and a field report from the industry.
Luigi Asprino, Enrico Daga, Justin Dowdy, Paul Mulholland, Aldo Gangemi, Marco Ratta
2023-10-25T15:18:17Z
http://arxiv.org/abs/2310.16700v1
# Streamlining Knowledge Graph Construction with a facade: The SPARQL Anything project

###### Abstract

What should a data integration framework for knowledge engineers look like? Recent research on Knowledge Graph construction proposes the design of a _facade_, a notion borrowed from object-oriented software engineering. This idea is applied to SPARQL Anything, a system that allows querying heterogeneous resources _as-if_ they were in RDF, in plain SPARQL 1.1, by overloading the SERVICE clause. SPARQL Anything supports a wide variety of file formats, from popular ones (CSV, JSON, XML, Spreadsheets) to others that are not supported by alternative solutions (Markdown, YAML, DOCx, Bibtex). Features include querying Web APIs with high flexibility, parametrised queries, and chaining multiple transformations into complex pipelines. In this paper, we describe the design rationale and software architecture of the SPARQL Anything system. We provide references to an extensive set of reusable, real-world scenarios from various application domains. We report on the value-to-users of the founding assumptions of its design, compared to alternative solutions through a community survey and a field report from the industry.

## 1 Introduction

What should a data integration framework for knowledge engineers look like? Approaches can transform the non-RDF data sources on the basis of specific ontologies, designed to represent content from popular formats (e.g. Any23). Alternatively, solutions could offer a mapping language (e.g. RML) which reuses components of format-specific query languages (e.g. JSONPath). In addition, systems could extend SPARQL, and incorporate features of pre-existing query languages for each one of the original formats (e.g. XPath for XML), allowing users to perform mappings within SPARQL queries (SPARQL Generate). In this resource paper, we present a system whose design principle comes from the notion of facade, borrowed from object-oriented software engineering. A facade is a generic interface that aims at hiding the internal complexity of a class, exposing behaviours that better fit the task at hand. This idea, originally introduced by (Daga et al., 2021), is applied to SPARQL Anything, a system that allows querying heterogeneous resources as-if they were in RDF. SPARQL Anything does not change the SPARQL 1.1 specification but injects new behaviour by overloading the SERVICE operator with a custom IRI-schema. In (Daga et al., 2021), the authors introduced the idea of applying facades for semantic lifting, presented a formalisation of the approach in predicate logic, and showed how it is applicable to a variety of formats, while bringing important benefits in terms of usability and extendibility. In (Asprino et al., 2023), it is demonstrated how Facade-X components are enough to be applicable to whatever is generated by a formal grammar (which is the most common tool for describing data formats), and it can in theory be applied to relational databases as well. In this resource paper, we describe the current version of SPARQL Anything and how it was applied to real-world knowledge graph construction pipelines in the context of various projects, including two EU-funded H2020 projects - SPICE34 (Daga et al., 2022) and Polifonia35 - and several projects of a US-based IT company active in the healthcare sector. We discuss the value-to-users of our proposition in two ways.
First, we report on a survey which evaluates the benefits and opportunities of this approach, compared to alternative solutions, from the user perspective. Second, we provide a field report, presenting first-hand feedback from the industry sector by one of the authors.

Footnote 34: Social Cohesion, Participation, and Inclusion for Cultural Engagement [http://spice-h2020.eu](http://spice-h2020.eu), SPICE is an EU-funded project whose aim is applying a knowledge graph perspective to data exchange and reuse in cultural heritage

Footnote 35: Polifonia - A digital harmoniser for Musical Cultural Heritage: [https://polifonia-project.eu](https://polifonia-project.eu)

The following section illustrates the design principles of facade-based data access (Daga et al., 2021), describing Facade-X, the generic meta-model implemented by our system (Section 2). Section 3 describes the main features of the SPARQL Anything system, the currently supported formats, and additional features that allow the development of RDF construction pipelines from heterogeneous data files. An extensive set of case studies are reported in Section 4. Section 5 reports on feedback from our community of users. Related work is considered in Section 8. Finally, we conclude the paper in Section 9.

## 2 Facade-based data access

In this Section, we illustrate the idea behind facade-based data access. We rely on the notion of facade as "an object that serves as a front-facing interface masking more complex underlying or structural code"36. Applied to our problem, a facade acts as a generic meta-model allowing (a) to inform the development of transformers from an open-ended set of formats, and (b) to generate RDF content in a consistent and predictable way.

Footnote 36: [https://en.wikipedia.org/wiki/Facade_pattern](https://en.wikipedia.org/wiki/Facade_pattern) (accessed, 19/04/2021)

The Facade-X meta-model introduced in (Daga et al., 2021) was designed by selecting a small set of primitive data structures: typing, key-value maps, and sequences. Facade-X defines two types of objects: containers and values. Containers can be typed, and one container in the dataset is always of type root (the only primitive specified by Facade-X). Values can have any datatype. Containers include a set of unique slots, either labelled as strings or as integer numbers. A slot is filled by another container or by a value. An RDF specification of Facade-X uses the following namespaces and associated preferred prefixes:

```
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix fx:   <http://sparql.xyz/facade-x/ns/> .    # for fx:root
@prefix xyz:  <http://sparql.xyz/facade-x/data/> .  # for the properties
```

The first is used to express the primitive fx:root. String slots are RDF properties generated with the xyz: namespace, where the local name is supposed to be the string labelling the slot in the data source (for example, a JSON property)37. Instead, integer slots (sequences) are represented with instances of rdf:ContainerMembershipProperty: rdf:_1, rdf:_2, ... rdf:_n. Facade-X uses containers rather than rdf:List since we know that this representation is more efficient to deal with in SPARQL (Daga, Merono-Penuela, and Motta, 2021).
Finally, values are rdf:Literal, while containers can be either IRIs or blank nodes (the specification does not enforce the use of either, leaving both options to the Facade-X engineer, including the possibility of switching between the two with a tool option). With this set of components, a Facade-X _software_ engineer is supposed to design connectors to an open-ended set of resource types, leaving to the _knowledge_ engineer (Newell et al. 1982) the freedom of accessing those data sources as-if they were RDF. In what follows, we show how to apply a single RDF abstraction to an extensive set of file formats (including some that are served to the SPARQL users for the very first time).

**CSV**. A CSV file is a resource, identifiable by a URI, which contains text organised as an ordered sequence of rows (newline separated), which in turn contains an ordered sequence of data fields (separated by a delimiter). Rows are ordered; therefore, this case of containment can be represented as an ordered sequence (with container membership properties). What about data values within a row? We observe how CSV data may have an optional "header", where the first line is the list of field names. When this happens, we can use the property component and generate an RDF property reusing the field name, minting an IRI with the xyz: namespace. Otherwise, we can consider the values on each row as another sequence, and fall back to the ordered sequence component. This is an example from the Tate Gallery open data38:

Footnote 38: [https://github.com/tategallery/collection](https://github.com/tategallery/collection)

```
id,accessionnumber,title,...
1035,A00001,"A Figure Bowing",...
```

```
[ a fx:root ;
  rdf:_1 [ xyz:id "1035" ;
           xyz:accessionnumber "A00001" ;
           xyz:title "A Figure Bowing" ;
           ... ] ;
  ...
] .
```

**JSON**. The JavaScript Object Notation is specified by ECMA39. The syntax defines three types of elements: _objects_, a set of key-value pairs, where keys are supposed to be unique; values, which are either strings, numbers, booleans, or the primitive 'null'; and arrays, which specify sequences (containing other arrays, objects, or values). We interpret objects and arrays as containers. We can reuse rdf:Property to link objects to values. Arrays can be represented by the ordered sequence component. Values can be expressed as rdf:Literal, selecting relevant XSD datatypes from the RDFS specification: xsd:string, xsd:boolean, xsd:int, xsd:float40. The following example shows a JSON document with metadata of an artist in the Tate Gallery Collection. The JSON file will be represented as follows in RDF (in Turtle syntax):

Footnote 39: [https://www.ecma-international.org/publications-and-standards/standards/ecma-404/](https://www.ecma-international.org/publications-and-standards/standards/ecma-404/)

Footnote 40: Currently, we chose to ignore fields with the 'null' value. However, we may decide to represent it as a blank node or to create a primitive entity to express it, for example, similar to rdf:nil.

**HTML** and **XML**. Both formats can be captured by the Document Object Model (DOM) specification, which we will refer to in the following description41. HTML/XML elements (also known as tags) can definitely be considered containers, so we can reuse both the rdf:Property Facade-X component for specifying tag attributes, and container membership properties for specifying relations to child elements in the DOM tree. These may include text, which can be expressed as RDF literals of type xsd:string. What about element types (tag names)?
Facade-X does already provide a solution for _unary_ attributes: rdf:type. Finally, we can use namespaces declared within the original document to name properties and types, if available, instead of the default xyz:. Examples with HTML content will be referred to later in Section 4.

Footnote 41: However, it needs to be clarified that our methodology focuses on the elements of the data structure and does not aim at reproducing the DOM API in RDF.

**Textual data** is an interesting case where we can use containment to refer to different elements of the text. The whole content can be included in one single literal, as follows:

```
[] a fx:root ; rdf:_1 "lorem ipsum..."^^xsd:string .
```

Alternatively, the text can be tokenized (with a user-defined delimiter) and the resulting sequence represented with our facade.

**Binary** content such as images can also be supported, by embedding the content in a single literal of datatype xsd:base64Binary. This solution does not allow querying the actual content, clearly, but still allows to bring in the content and serve it, for example, as linked data. However, binary files such as JPEG images often include annotations, for example, using the common EXIF metadata schema. These annotations can be considered an additional data source, and represented in a separate _metadata_ graph with Facade-X.

**YAML** is a lightweight, human-readable data-serialization language. YAML is a "superset" of JSON (any JSON file can be specified in YAML) and, similarly to JSON, data can be organised in lists or associative arrays42.

Footnote 42: However, differently from JSON, comments and custom data types are allowed.

Therefore, in addition to the basic data structures required for capturing JSON files, _rdf:type_ is needed for representing custom data types, when available.

**BibTeX** is a text format for computational bibliographies typically used together with the LaTeX system. Each entry consists of the type (e.g. article, inproceedings etc.), a citation key, and key-value pairs for the other characteristics of an entry. Each BibTeX entry can be represented as a typed container that holds a set of key-value pairs.

A word processing document is any text-based document compiled using word processor software. **Markdown** is a lightweight markup language for writing formatted documents inspired by conventions of web posting. We can interpret a document (compiled with a Word processor or specified in Markdown syntax) as a sequence of blocks (e.g. paragraphs, lists, headings, code blocks). Some blocks (e.g. list items) contain other blocks, whereas others contain inline contents (e.g. links, images etc.). A document can be represented as a list of typed containers, where the type denotes the kind of block (e.g. heading, paragraph, emphasised text, link, image etc.); _lists_ are needed for specifying the sequence of the blocks. Additional attributes such as the depth of the header or the type of list (bullets, numbers, etc.) can also be supported, relying on the key-value structure. The following shows an example of Markdown:

```
# SPARQL Anything
SPARQL Anything is a system for Semantic Web re-engineering that allows
users to ... query anything with SPARQL.
```

```
[] a fx:root ; a xyz:Document ;
   rdf:_1 [ a xyz:Heading ;
            rdf:_1 "SPARQL Anything"^^xsd:string ;
            xyz:level "1"^^xsd:int ] ;
   rdf:_2 [ a xyz:Paragraph ;
            rdf:_1 "SPARQL Anything is ..."^^xsd:string ] .
```

Directory structures can be interpreted as collections of folders (containers) or file names (values).
This allows developing facade-based data access to explore the content of a local directory. The same concept applies to archives (e.g. zip files). As discussed, Facade-X is the same for all the surveyed formats. Facade-based data access acts as a virtual endpoint that can be queried exactly as a remote SPARQL endpoint, through a SERVICE call to the special IRI-schema x-sparql-anything:. The related URI-schema supports an open-ended set of parameters specified by the facade implementations available. A minimal example only includes the resource locator, and guesses the data source type from the file extension. Options are embedded as key-value pairs, separated by commas. These can incorporate a set of parameters, to allow the user to configure the system (for example, to indicate that the system should consider the first line of a CSV as headers):

```
x-sparql-anything:media-type=application/json;charset=UTF-8,location=...
```

Figure 1 shows an example query. The query (taken from the Tate Gallery Showcase [Daga, 2022]) selects artwork metadata from a CSV file and collects additional data from a related JSON file from the local directory. First, it iterates over a CSV with artworks' metadata and, for each one of them, constructs the path to the local JSON file. Then, the JSON file is queried for artwork subjects. The query solution is finally projected into a CONSTRUCT clause43.

Footnote 43: This example is reproducible, along with other example queries on the same data source, at [https://github.com/sparql-anything/showcase-tate](https://github.com/sparql-anything/showcase-tate)

Figure 1: Example of SPARQL Anything query from the Tate Gallery Collection showcase (Daga 2022). The query selects artworks' metadata from a CSV, builds the path to a related JSON file containing additional annotations (subjects). These JSON files are queried in another facade-based data access operation, where subjects are collected. The variables are projected into a Knowledge Graph design in the CONSTRUCT clause.

## 3 SPARQL Anything: system overview

In what follows, we describe the process of executing facade-based data access with SPARQL Anything. After that, we summarise the main features of the system. SPARQL Anything extends the Apache Jena framework44 with a special query executor capable of handling facade-based data access. The system behaves essentially as a standard SPARQL 1.1 query engine, receiving as input a query and returning either a SPARQL Result Set (for SELECT/ASK queries) or an RDF stream (for CONSTRUCT/DESCRIBE types of queries). Figure 2 describes the general architecture of the system.

Footnote 44: [https://jena.apache.org/index.html](https://jena.apache.org/index.html)

Figure 2: SPARQL Anything: system architecture.

The process starts with an input query, which is handled by the ARQ engine of Apache Jena. The query is parsed into an abstract algebra, and operations are executed according to their internal dependencies, where the output of one operation is served as input to the dependent one. Our system intercepts attempts to execute any SERVICE pointing to an x-sparql-anything: IRI. However, configuration can be expressed to SPARQL Anything either via the IRI schema (as in the previous example query) or by using triples having as subject the special entity fx:properties, as in the following examples:

```
fx:properties fx:location "./my-file.csv" .

# or
fx:properties fx:location "http://my-web-api" ;
              fx:media-type "application/json" ;
              fx:http.query.param.api-key "my-api-key" .

# or
fx:properties fx:command "echo first,second,third" ;
              fx:media-type "text/csv" .

# or
fx:properties fx:content "first,second,third" ;
              fx:media-type "text/csv" .

# or
fx:properties fx:content ?data ;
              fx:media-type "text/csv" .
```

In the last example, the input data is supposed to come from a previous operation. Therefore, if there are configuration variables that have not been evaluated yet, the execution is postponed.
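To make the mechanics concrete, the following sketch shows the shape of a complete facade-based data access query combining the SERVICE clause with fx:properties configuration. It is illustrative only: the file name, the selected column and the use of the csv.headers option are placeholders chosen for the example, not an excerpt from Figure 1.

```
PREFIX xyz: <http://sparql.xyz/facade-x/data/>
PREFIX fx:  <http://sparql.xyz/facade-x/ns/>

SELECT ?title
WHERE {
  # The special SERVICE IRI activates SPARQL Anything; options can be
  # embedded in the IRI or, as here, given as fx:properties triples.
  SERVICE <x-sparql-anything:> {
    fx:properties fx:location    "./artwork_data.csv" ;
                  fx:csv.headers true .
    # With headers enabled, each row container exposes xyz: properties
    # named after the CSV header fields.
    ?row xyz:title ?title .
  }
}
```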
When all the input parameters are ready, our process starts. The system first gathers all configuration options. We refer the reader to the official documentation on the web for a complete list of configuration options, including format-specific ones ("SPARQL Anything software documentation" 2022). Next, the system identifies the input source. Currently, SPARQL Anything supports input from within the SPARQL query - option _content_ -, or by defining a command to be executed on the host machine (the process STDOUT is streamed to the triplifier) - option _command_ -, or by specifying a resource URL - option _location_. In case the location is an HTTP URL, the resource is resolved via a full-fledged HTTP client. HTTP client options include setting the HTTP method (GET, POST, PUT, ...), passing authentication options and query parameters, and setting HTTP request headers such as Accept: application/json; charset=utf-8. Through this component, SPARQL Anything is capable of supporting querying complex Web APIs. The output of the request can be interpreted as any of the content types supported. Finally, any other URLs are resolved via the underlying Java IO URL connection library. Independently from the method used to obtain a resource, the input is passed to a triplifier. In this phase, the system uses the option _media-type_ or tries to guess the format from the file extension, if available. The re-engineering of the input is performed by the triplifier according to the available configuration options. By default, the process triplifies all the content, materialising a view _in-memory_, then executes the query (the inner part of the SERVICE clause) and returns a _query solution_, like any other SPARQL operation. When the input is too large to be loaded all in-memory, the user has two possibilities. The _on-disk_ option instructs the system to save the facade-based RDF into a temporary local triple store database (Jena TDB2), and then execute the query on the database. Alternatively, the _slice_ option can be invoked, and the content is partitioned and the triplification and query execution performed on each one of the parts separately (currently supported only for CSV, JSON, and XML). The output is streamed together so that external operations can continue in the same way as with an execution on a complete view. In all cases, the triplification process only preserves the part of the Facade-X RDF view that is mentioned in the input query - _triple-filtering_. Specifically, if no quad pattern in the SERVICE clause matches a given generated RDF statement, this is not included in the materialised view (this feature, enabled by default, can be disabled via the option _strategy_; see the documentation ("SPARQL Anything software documentation" 2022)). We illustrate the main features of SPARQL Anything, referring to the official documentation for further details ("SPARQL Anything software documentation" 2022). Users can customise the behaviour of facade-based data access according to a set of options.
In addition to the ones already mentioned, options include trimming string values (trim-strings), generating IRIs instead of BNodes (blank-nodes=false), specifying the input charset, or a custom namespace instead of the default xyz:. With the option null-string, users can request to ignore triples having as value the given string (for example, an empty string or the string "N/A"). Additional metadata can be extracted (e.g. from EXIF annotations in image files) via the metadata=true option. The system allows querying the following file formats in plain SPARQL 1.1: XML, JSON, CSV, HTML, Excel, Text, Binary, EXIF, File System, Zip/Tar, Markdown, YAML, Bibtex, DOCx. Users can customise the behaviour of the triplifiers with format-specific options. For example, JSON can be filtered by passing a JsonPath expression (json.path), XML with an XPath expression (xml.path), HTML content with a CSS selector (html.selector), and plain text via a regular expression (txt.regex) or a string delimiter (txt.split). The CSV triplifier can be applied to any char-separated format (e.g. TSV) with the option csv.delimiter. HTML pages can be loaded with a virtual browser before querying, making it possible to parse content produced via JavaScript (html.browser). SPARQL Anything includes a full-fledged HTTP client to query Web APIs (options include http.header.*, http.auth.*, http.query.*, and others). SPARQL Anything exposes an extensive set of functions in addition to the ones already provided by Apache Jena, for example, the magic property fx:anyslot to match the value of any container membership property. These include shorthand functions for building RDF nodes - fx:entity, fx:literal - and for querying and manipulating container membership properties (fx:before, fx:after, fx:prev, fx:next), string manipulations and hashes, and others (Wood et al., 1998). SPARQL Anything can be used as a Command Line Interface (CLI) or as a SPARQL Endpoint, featuring Apache Jena Fuseki. The CLI supports additional features that allow the output of a SPARQL Anything query to be used as the input of another one, designing rich data flows. Output formats (-f) can be JSON, XML, CSV, TEXT, TTL, NT, NQ, and the result can be saved to a file (-o). For example, the query in Figure 1 can be executed as follows:

```
fx -q queries/arts-and-subjects.sparql -f TTL -o arts-and-subjects.ttl
```

The system supports parametrized queries using the BASIL syntax convention (Daga, Panziera, and Pedrinaci, 2015). Users can specify a SPARQL Result Set file to provide variable parameter values (option -i) or specify values inline (-v). When present, the query is pre-processed by replacing parameters with values from the input file (or values), and repeated for each of the provided bindings. Parameter values can be used in the output file name (-p). In addition, it is possible to reuse content from a previously performed transformation and execute the query against an existing (set of) RDF files (option -l or --load). The option requires the path to one RDF file or a folder including a set of files to be loaded. When present, the data is loaded in memory and the query executed against it.

## 4 Showcase

In this Section we provide references to a set of real-world use cases implemented with SPARQL Anything.

### The Tate Gallery Collection open data

This showcase (Daga, 2022) provides examples of using SPARQL Anything to query open data from the Tate Gallery collection.
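A much-simplified sketch of the kind of query used in this showcase, in the spirit of the Figure 1 query, is the following; the file names, column keys (xyz:id, xyz:title, xyz:subject), and target vocabulary are illustrative assumptions, not the actual showcase code.

```
PREFIX fx:     <http://sparql.xyz/facade-x/ns/>
PREFIX xyz:    <http://sparql.xyz/facade-x/data/>
PREFIX schema: <http://schema.org/>

CONSTRUCT {
  ?artwork a schema:VisualArtwork ;
           schema:name  ?title ;
           schema:about ?subject .
}
WHERE {
  # 1) iterate over the artwork metadata CSV, treating the first line as headers
  SERVICE <x-sparql-anything:location=./artwork_data.csv,csv.headers=true> {
    ?row xyz:id ?id ; xyz:title ?title .
  }
  BIND (IRI(CONCAT("http://example.org/artwork/", STR(?id))) AS ?artwork)
  # 2) build the path of the per-artwork JSON file and query it in a
  #    second facade-based data access operation
  BIND (CONCAT("./artworks/", STR(?id), ".json") AS ?jsonFile)
  SERVICE <x-sparql-anything:> {
    fx:properties fx:location ?jsonFile ; fx:media-type "application/json" .
    ?annotation xyz:subject ?subject .
  }
}
```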
The repository shows basic features such as the use of the option csv.headers=true. In addition, it showcases more advanced features demonstrating how to (a) query local CSV and JSON files together to build a knowledge graph of artists' and artworks' metadata using Schema.org and (b) build a SKOS taxonomy from artworks' topics distributed in thousands of JSON files. Showcased features include incorporating binary data in the RDF graph and dynamically generating the _location_ option from previous facade-based data access operations, among others.

### The Irish Museum of Modern Art Website

Data can be anywhere! In this showcase, implemented during the SPICE project (Daga et al., 2022), we developed a knowledge graph of artists and artworks by scraping the website at [http://imma.ie](http://imma.ie). The code shows how to query an HTML web page using a custom CSS selector (option html.selector), and the use of BASIL (Daga, Panziera, and Pedrinaci, 2015) variables in parametrised queries. In addition, the use case demonstrates how CLI commands can be used in sequence to build complex knowledge graph construction pipelines. Through these features, we extract artists' names and pages, from which we scrape additional metadata, including the list of artworks' pages. These are visited next to complete the KG with artworks' information.

#### Generating a Knowledge Graph from The Proposition Bank (PropBank)

This showcase features the popular PropBank corpus of _linguistic frames_45. The input for the process is a release of the PropBank dataset, typically distributed as a single zip file containing a folder which stores all the XML files, one for each _frame_. The query shows how to chain multiple facade-based data access operations, exploiting key features of SPARQL Anything such as archive and directory querying (to get all the XML files from the archive), iterating on the solution of one operation to feed it into another (querying each one of the XML files), and projecting all transformations into a CONSTRUCT graph template. Footnote 45: [https://propbank.github.io](https://propbank.github.io)

#### Scraping Webpages with SPARQL

This online guide46 explores advanced usage of the HTML triplifier, showing features such as the headless browser. In addition, it demonstrates the use of list functions such as fx:before, fx:next, and others. Footnote 46: [https://github.com/justin2004/weblog/tree/master/scraping_with_sparql](https://github.com/justin2004/weblog/tree/master/scraping_with_sparql)

#### Querying YAML metadata embedded in GitHub Markdown files

Projects on GitHub typically include a README.md file. The Polifonia project publishes an _ecosystem_ of tools and data for the computational treatment of musical cultural heritage. This collection of material is spread over a number of GitHub repositories exposing a collection of _components_, each one described in a documentation Markdown file, annotated according to an _annotation schema_. This showcase demonstrates how to query Markdown files and the YAML annotations embedded within them by chaining multiple facade-based data access operations in a single SPARQL Anything query, exploiting the _content_ option. The query47 traverses a local file system (where all the relevant repositories are included) in search of .md files, extracts the YAML front matter, and transforms the annotations according to Facade-X. Relevant RDF is projected into a KG of component types.
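Schematically, the chaining at the heart of this showcase relies on the _content_ option taking a variable bound by an earlier operation, as in the last configuration example of Section 3. The following is a minimal sketch of that pattern only; the file name, the way the front matter is isolated, the YAML media type, and the xyz:component-type key are all assumptions, not the actual showcase query.

```
PREFIX fx:  <http://sparql.xyz/facade-x/ns/>
PREFIX xyz: <http://sparql.xyz/facade-x/data/>

SELECT ?componentType
WHERE {
  # 1) a first facade-based operation reads one documentation file and
  #    binds ?frontMatter to its YAML block (the filtering that isolates
  #    the front matter is elided here)
  SERVICE <x-sparql-anything:location=./repo/README.md> {
    ?doc ?slot ?frontMatter .
  }
  # 2) the bound string is re-parsed as YAML by a second operation via
  #    fx:content; its execution is postponed until ?frontMatter is evaluated
  SERVICE <x-sparql-anything:> {
    fx:properties fx:content ?frontMatter ;
                  fx:media-type "application/yaml" .
    ?annotation xyz:component-type ?componentType .
  }
}
```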
Footnote 47: [https://github.com/SPARQL-Anything/showcase-polifonia-ecosystem/blob/master/queries/components-to-rdf.sparql](https://github.com/SPARQL-Anything/showcase-polifonia-ecosystem/blob/master/queries/components-to-rdf.sparql) #### Musical scores feature extraction (MusicXML) Musical scores are an excellent example of a complex data object. In (Ratta and Daga, 2022), the authors explore the application of SPARQL Anything to extrapolate musical features such as (a) extracting melodic information, (b) extracting N-grams of musical information, (c) supporting the analysis of those N-grams and (d) populate a musical note ontology (code available on GitHub48). Footnote 48: [https://github.com/SPARQL-Anything/showcase-musicxml](https://github.com/SPARQL-Anything/showcase-musicxml) ## 5 Engagement with target users A survey was conducted in order to engage with Semantic Web practitioners and SPARQL developers and users and also gain some initial insights into their data transformation requirements and perception and use of existing tools. There were 27 completed responses to the survey. A fuller account of the survey can be found in (Daga et al. 2021). Participants covered a diverse range of expertise. 37% rated their expertise in transforming data to RDF as high or very high, and 36% as low or none. 51.8% were frequent or very frequent users of SPARQL 1.1. 33.3% rated their expertise in SPARQL 1.1. as high or very high, and 37% as low. The software most commonly used for data transformation was custom code written for the specific task, followed by RML and SPARQL Generate. Participants transformed datasets of different sizes, from less than 10MB (18.5% of participants) through to over 1GB (also 18.5% of participants). 33% transformed the data into fewer than 1 million triples. 22.2% generated over 100 million triples. A set of questions in the survey asked participants to rate the importance of various usability characteristics of systems for transforming data into RDF. 51.8% considered it very important or essential that the system should minimise the languages and syntaxes needed. 70.3% considered it very important or essential that the system be easy to learn. 55.5% considered it essential or very important that the system can support new types of data sources without changes to the mapping language. A final set of questions asked participants to rate the usability of three notations for transforming non-RDF data into RDF: RML, SPARQL Generate and SPARQL Anything. 29.6% rated data transformations specified in SPARQL Anything code as very easy to understand, 63% as easy to understand. 4.8% rated the SPARQL Generate code as very easy, 40.7% as easy. 7.4% rated the RML code as very easy, 22.2% as easy. In summary, this small survey of a sample of target users covering a range of expertise and experience suggests a potentially receptive audience for an easy-to-use method for transforming a range of data sources without the need to learn additional languages and syntaxes. ## 6 The Open-Source project SPARQL Anything is produced by a community of contributors as open source software distributed under the commercial-friendly Apache Licence 2.0. The project is managed on GitHub at this address: [https://github.com/SPARQL-Anything](https://github.com/SPARQL-Anything), and can be cited via its related entry in Zenodo ("SPARQL Anything software" 2022), following good practices of Open Science and FAIR data management policies. The official documentation is published via Readthedocs.io49. 
The development activity started in November 2020 and has continued since then with a steady increment of contributions from people external to the original team, including practitioners from industry. The GitHub project currently has 12 watchers, 5 forks, and 132 stars. In one month (March 2023, time of this submission), the project had 72 unique visitors and was cloned by 25 unique users. Footnote 49: [https://sparql-anything.readthedocs.io/en/latest/](https://sparql-anything.readthedocs.io/en/latest/) Footnote 50: [https://github.com/oeg-upm/mapeathor](https://github.com/oeg-upm/mapeathor)

## 7 The industry perspective: a field report

In this Section we report on the experience of a software engineer who recently joined the SPARQL Anything open source project, working within the context of a US-based IT company active in the healthcare sector. The company team included, apart from software engineers, two ontologists and a data scientist, all collaborating on constructing RDF KGs for about a year and a half. It took the team several months to find a method for constructing RDF that worked well for them. Initial experiments involved R2RML/RML based tools, including Mapeathor50 (Iglesias-Molina et al., 2020), RML Mapper51, and Ontop52 (Calvanese et al., 2017). Footnote 51: [https://github.com/RMLio/rmlmapper-java](https://github.com/RMLio/rmlmapper-java) Footnote 52: [https://github.com/ontop/ontop](https://github.com/ontop/ontop)

At first, the ontologists did not implement their own mappings. Instead, they would annotate sample data with the triples they would like to produce, and then one of the software engineers would use one of the RML based tools to implement the mappings. However, the ontologists were unable to revise the mappings, and any subsequent evolution required intervention by the software engineers. Indeed, SPARQL and Bash were the only common languages everyone on the team used. The team was changing tools on each project, in search of a better solution. The discovery of SPARQL Anything had a huge impact on the workflow, allowing the team to transform more sources and to do it more quickly. Now the ontologists and data scientist implement their own mappings and everyone participates in the maintenance of the mappings. One episode illustrates well the impact that the tool had on the team. A few months ago, the company needed a graph of data from a paying service to which the company had access. Unfortunately, the data was only available via a web browser as HTML. This brought the opportunity to contribute to the SPARQL Anything open-source project by expanding the capabilities of the HTML triplifier with a headless browser option, illustrated in a blog post (see Section 4) written with the ontologists in mind, demonstrating how to scrape a webpage (with content produced by JavaScript) using SPARQL Anything53. As a result, the ontologists read the blog post and created a SPARQL CONSTRUCT query, without any assistance from the software engineers, to construct a KG with the content of the webpage of interest. Footnote 53: [https://github.com/justin2004/weblog/tree/master/scraping_with_sparql](https://github.com/justin2004/weblog/tree/master/scraping_with_sparql)

Recently the team were involved in a short workshop where one of the goals was to produce a graph of a product catalog54. Because the team was producing triples so quickly, the workshop was mainly spent working with a subject matter expert on the data in order to _carve nature at its joints_.
The team was happy to adapt their workflow so that ontologists could use SPARQL Anything in complex data integration pipelines, sometimes involving millions of data points. In one case, healthcare data is integrated from a relational database that gets exported (as CSV) weekly. The ontologists wrote one or more CONSTRUCT queries for each table. To overcome the in-memory limits of the tool, a script split each CSV file into files of a few thousand rows. SPARQL Anything was run over each file (with a configurable number of parallel processes) to produce quads. As an example, with 5 parallel processes one table with about 10 million rows took 4-5 hours to complete and produced 82 million quads. The team used this technique before SPARQL Anything had the _slice_ option. As that option gets more optimised, the team looks forward to having SPARQL Anything do the slicing rather than doing it as a pre-processing stage. Footnote 54: Technical aspects were reported in [https://github.com/justin2004/weblog/tree/master/SPARQL_value_functions](https://github.com/justin2004/weblog/tree/master/SPARQL_value_functions)

## 8 Related work

Motivation for our work resides in research on end-user development and human interaction with data. End-user development is defined by (Lieberman et al., 2006) as "_methods, techniques, and tools that allow users of software systems, who are acting as non-professional software developers, at some point to create, modify or extend a software artefact_". Many end-user development tasks are concerned with the use of software to manipulate data. Recent works encompass sending, receiving and manipulating data from web APIs, IoT devices and robots (Paterno and Santoro, 2019). Unlike professional software development, end-user development involves the construction of software for personal rather than public use (Ko et al., 2011) in order to carry out professional activities. Many SPARQL users fall into the category of end-user developer. In a survey of SPARQL users, (Warren and Mulholland, 2018) found that although 58% came from the computer science and IT domain, other SPARQL users came from non-IT areas, including social sciences and the humanities, business and economics, and biomedical, engineering or physical sciences. In addition, findings in this area (Panko and Aurigemma, 2010) suggest that the data with which users work is more often primarily list-based and/or hierarchical rather than tabular. For example, (Chang and Myers, 2016) proposes an extension to spreadsheets to explicitly support hierarchical data and [11] proposes an alternative formulation of spreadsheets in which data is represented as _list-of-lists_, rather than tables. Our proposal goes in this direction and accounts for recent findings in end-user development research.

We now survey approaches to extending SPARQL. A standard method for extending SPARQL is by providing custom functions to be used in FILTER or BIND operators55. Query processing engines can extend SPARQL by using so-called magic properties. This approach defines custom predicates to be used for instructing specific behaviour at query execution56. SPARQL Generate [13] introduces a novel approach for performing data transformation from heterogeneous sources into RDF by extending the SPARQL syntax with a new GENERATE operator [13]. The method introduces two more operators, SOURCE and ITERATOR. Custom functions perform ad-hoc operations on the supported formats, for example, relying on XPath or JSONPath.
However, there are also approaches to extend SPARQL without changes to the standard syntax. For example, BASIL [14] allows defining parametric queries by enforcing a convention in SPARQL variable names. SPARQL Anything reuses BASIL variables to support parametric queries and file names. SPARQL Micro-service [12] provides a framework that, on the basis of an API mapping specification, wraps web APIs in SPARQL endpoints and uses a JSON-LD profile to translate the JSON responses of the API into RDF. In this paper, we follow a similar, minimalist approach and extend SPARQL by overriding the behaviour of the SERVICE operator. Footnote 55: ARQ provides a library of custom functions for supporting aggregates such as computing a standard deviation of a collection of values. ARQ functions: [https://jena.apache.org/documentation/query/extension.html](https://jena.apache.org/documentation/query/extension.html) (accessed 15/12/2020). Footnote 56: For example, this allows the specification of complex fulltext searches over literal values. Query processors can delegate execution to a fulltext engine (e.g. Lucene) and return a collection of query solutions as triple patterns.

Several tools are available for automatically transforming data sources of several formats into RDF ([15], JSON2RDF58, CSV2RDF59, to name a few). While these tools have a similar goal (i.e. enabling the user to access the content of a data source as if it was in RDF), the (meta)model used for generating the RDF data highly depends on the input format, thus limiting the homogeneity of data generated from heterogeneous data formats. However, none of these approaches are based on a common abstraction over heterogeneous formats. Footnote 57: [http://any23.apache.org/](http://any23.apache.org/) Footnote 58: [https://github.com/AtomGraph/JSON2RDF](https://github.com/AtomGraph/JSON2RDF) Footnote 59: [http://clarkparsia.github.io/csv2rdf/](http://clarkparsia.github.io/csv2rdf/)

Mapping languages for transforming heterogeneous files into RDF are represented by RML [12], also specialised to support data cleaning operations [13] and specific forms of data, for example, relational [14] or geospatial data [15]. This family of solutions is based on a set of declarative rules that Semantic Web practitioners are expected to develop by analysing the input data sources. The language incorporates format-specific query languages (e.g. XPath) and requires the practitioner to have deep knowledge not only of the input data model but also of standard methods used for its processing. A recent approach [16] applies OBDA so that any type of Web resource can be queried with SPARQL, with a sophisticated set of mappings supporting an intermediate query service. Authors of a recent alternative, based on ShEx [13], stress the importance of making mappings usable by end users. Indeed, recent work acknowledges how these languages are built with machine-processability in mind [14] and how defining or even understanding the rules is not trivial for users. SPARQL Anything goes beyond current approaches and aims at equipping SPARQL users with the simplest possible assumption on how to deal with heterogeneous resources. We have already discussed how the SPARQL Anything approach fares better than alternative solutions from the SPARQL user's standpoint. However, this comes at the cost of leaving it to the internal machinery to deal with the difficulty of developing strategies for executing SPARQL queries on different formats.
For example, our survey highlighted the importance of being able to cope with very large data sources, partly addressed by the on-disk and slice options. Future work includes research on query-rewriting approaches to stream the data, similarly to the internal machinery of SPARQL Generate. One way of doing this would be to rewrite queries from the Facade-X structure to SPARQL Generate expressions and run them in a back-end component. Currently, our system supports files, Web APIs, and content streamed from a separate process. Future work includes supporting the connection to relational databases, for example, relying on engines implementing the W3C Direct Mapping recommendation (Prud'hommeaux et al. 2012) for relational databases. To do that, we aim at reusing recent research in OBDA (e.g. [Sequeda and Miranker 2017]) to develop optimised query-rewriting strategies from Facade-X to the underlying relational model, on demand, without asking users to engage with the mappings. We plan to implement different methods, including experimenting with alternative back-end engines.

## 9 Conclusions

We introduced SPARQL Anything, reusable research software supporting facade-based data access over heterogeneous data sources, for the benefit of knowledge engineers. The Facade-X approach, introduced in (Daga et al. 2021), has received the attention of the community, confirmed by the user survey (Section 5), and it is unique in the landscape of solutions for RDF re-engineering. SPARQL Anything has a low complexity compared to existing solutions: the framework allows others to save significant coding effort. In addition, the design methodology allows it to be extended to support an open-ended set of formats. The performance of the first version of the system was evaluated and discussed in (Daga et al. 2021). Future work is going to focus on optimisation strategies and improving the internal machinery. However, the current version of the system is ready to tackle real-world problems, such as the ones encountered in our field report. In addition, engaging with Semantic Web practitioners highlighted the _value-to-users_ of essential and unique aspects of our tool. With SPARQL Anything, Semantic Web practitioners are relieved of the problem of content re-engineering and can finally focus on generating high-quality semantic data in plain SPARQL.

## 10 Acknowledgements

This work was partially supported by the EU's Horizon Europe research and innovation programme within the SPICE project (Grant Agreement N. 870811) and the Polifonia project (Grant Agreement N. 101004746).
2303.16638
Derived equivalence of elliptic K3 surfaces and Jacobians
We present a detailed study of elliptic fibrations on Fourier-Mukai partners of K3 surfaces, which we call derived elliptic structures. We fully classify derived elliptic structures in terms of Hodge-theoretic data, similar to the Derived Torelli Theorem that describes Fourier-Mukai partners. In Picard rank two, derived elliptic structures are fully determined by the Lagrangian subgroups of the discriminant group. As a consequence, we prove that for a large class of Picard rank 2 elliptic K3 surfaces all Fourier-Mukai partners are Jacobians, and we partially extend this result to non-closed fields. We also show that there exist elliptic K3 surfaces with Fourier-Mukai partners which are not Jacobians of the original K3 surface. This gives a negative answer to a question raised by Hassett and Tschinkel.
Reinder Meinsma, Evgeny Shinder
2023-03-29T12:53:58Z
http://arxiv.org/abs/2303.16638v3
# Derived equivalence for elliptic K3 surfaces and Jacobians ###### Abstract. We present a detailed study of elliptic fibrations on Fourier-Mukai partners of K3 surfaces, which we call derived elliptic structures. We fully classify derived elliptic structures in terms of Hodge-theoretic data, similar to the Derived Torelli Theorem that describes Fourier-Mukai partners. In Picard rank two, derived elliptic structures are fully determined by the Lagrangian subgroups of the discriminant group. As a consequence, we prove that for a large class of Picard rank 2 elliptic K3 surfaces all Fourier-Mukai partners are Jacobians, and we partially extend this result to non-closed fields. We also show that there exist elliptic K3 surfaces with Fourier-Mukai partners which are not Jacobians of the original K3 surface. This gives a negative answer to a question raised by Hassett and Tschinkel. E.S. was partially supported by the EPSRC grant EP/T019379/1 "Derived categories and algebraic K-theory of singularities", and by the ERC Synergy grant "Modern Aspects of Geometry: Categories, Cycles and Cohomology of Hyperkahler Varieties". index of \(f\), then \(\mathrm{J}^{k}(X)\) is derived equivalent to \(X\) and we refer to \(J^{k}(X)\) as a _coprime Jacobian_ of \(X\). In 2014 Hassett and Tschinkel have asked if the converse is also true [17, Question 20]: **Question 1.1**.: _Is every Fourier-Mukai partner of an elliptic K3 surface \(X\) a coprime Jacobian of \(X\)?_ In fact, since elliptic K3 surfaces can have several non-isomorphic elliptic fibrations, one can interpret this question differently depending on whether we fix a fibration on \(X\) in advance or not. For elliptic surfaces of non-zero Kodaira dimension, as well as for bielliptic and Enriques surfaces, [1, 1], Question 1.1 has an affirmative answer. We do not know the answer in the abelian case. One of our main results is the following answer to Question 1.1 for K3 surfaces: **Theorem 1.2** (See Corollaries 5.13 and 5.14).: _Let \(X\) be an elliptic K3 surface of Picard rank 2. Let \(t\) be the multisection index of \(X\) and let \(2d\) be the degree of a polarisation on \(X\). Denote \(m=\gcd(d,t)\)._ 1. _If_ \(m=1\)_, then every Fourier-Mukai partner of_ \(X\) _is isomorphic to a coprime Jacobian of a fixed elliptic fibration on_ \(X\)_;_ 2. _If_ \(m=p^{k}\)_, for a prime_ \(p\)_, then every Fourier-Mukai partner of_ \(X\) _is isomorphic to a coprime Jacobian of one of the two elliptic fibrations on_ \(X\)_;_ 3. _If_ \(m\) _is not a power of a prime, and_ \(X\) _is very general with these properties, then_ \(X\) _admits Fourier-Mukai partners which are not isomorphic to any Jacobian of any elliptic fibration on_ \(X\)_._ Our method of proof of Theorem 1.2 relies on the Ogg-Shafarevich theory for elliptic surfaces, the Derived Torelli Theorem and lattice theory. In addition we introduce a new ingredient: a _derived elliptic structure_. The notion of the derived elliptic structure goes into the direction of describing an elliptic structure on \(X\) and its Fourier-Mukai partner in terms of the derived category \(\mathcal{D}^{b}(X)\). We define a derived elliptic structure on a K3 surface \(X\) as a choice of an elliptic fibration on a Fourier-Mukai partner of \(X\). Using this language, Question 1.1 translates to the question whether every derived elliptic structure on \(X\) is isomorphic to a coprime Jacobian of an actual elliptic structure on \(X\). 
We proceed to completely classify derived elliptic structures, for an elliptic K3 surface \(X\) of Picard rank two, in terms of certain _Lagrangian subgroups_ of the discriminant lattice \(A_{\mathrm{NS}(X)}\) of the Neron-Severi lattice of \(X\). The final answer, at least when \(X\) is very general, is that the number of derived elliptic structures on \(X\), up to coprime Jacobians, equals \(2^{\omega(m)}\) where \(m\) is as in Theorem 1.2 and \(\omega(m)\) is the number of distinct prime factors of \(m\), that is \(\omega(1)=0\), \(\omega(p^{k})=1\) and \(\omega(m)>1\) otherwise. This explains the condition on \(m\) appearing in Theorem 1.2. Let us explain some difficulties that we encounter along the way. First of all, elliptic K3 surfaces of Picard rank two can have one or two elliptic fibrations, and in the latter case these elliptic fibrations are sometimes isomorphic. Thus, a direct comparison between the number of coprime Jacobians and Fourier-Mukai partners is complicated. Secondly, many results that we state for arbitrary elliptic K3 surfaces \(X\) of Picard rank two simplify considerably when \(X\) is very general. Indeed in this case, the group \(G_{X}\) of Hodge isometries of the transcendental lattice \(T(X)\) is trivial, that is \(G_{X}=\{\pm 1\}\). In general this is a finite cyclic group of even order \(|G_{X}|\leq 66\). This group appears in various bijections we produce similarly to how it appears in the counting formula of Fourier-Mukai partners [10]. The set of isomorphism classes of derived elliptic structures on \(X\) is in natural bijection with the set \[\widetilde{\mathrm{L}}(A_{T(X)})/G_{X},\] see Theorem 5.10. Here \(A_{T(X)}\) is the discriminant lattice of the transcendental lattice \(T(X)\), and \(\widetilde{\mathrm{L}}(A_{T(X)})\) denotes the set of Lagrangian elements (Definition 3.5). Taking a coprime Jacobian \(\mathrm{J}^{k}\) of an elliptic structure translates into multiplying the corresponding Lagrangian vector by \(k\) and changing elliptic fibrations on a given surface corresponds to an involution which can be described intrinsically in terms of \(A_{T(X)}\). For very general \(X\), \(G_{X}=\{\pm 1\}\), and this group acts by multiplying Lagrangian elements by \(-1\). On the other hand, special \(X\) will have fewer Fourier-Mukai partners and fewer coprime Jacobians, however they will still match perfectly in cases (i) and (ii) of Theorem 1.2. See Example 3.13 for the most special (in terms of the size of \(G_{X}\) and \(\mathrm{Aut}(X)\)) elliptic K3 surface. Similarly, when considering very general elliptic K3 surfaces, every isomorphism preserving the fibre class is necessarily an isomorphism over the base. This is false in general, and this is important, because the Ogg-Shafarevich theory works with elliptic surfaces over the base, whereas the natural equivalence relation is that of preserving the elliptic pencil. We provide a careful analysis of the difference between isomorphism over \(\mathbb{P}^{1}\) and isomorphism as elliptic surfaces, which can be of independent interest. In particular, we are able to state which of the coprime Jacobians \(\mathrm{J}^{k}(X)\) of an elliptic K3 surfaces \(X\) are isomorphic as elliptic surfaces (resp. over \(\mathbb{P}^{1}\)). Indeed, very general elliptic K3 surfaces with multisection index \(t\) have at most \(\frac{\phi(t)}{2}\) coprime Jacobians, and the explicit number can be computed in all cases as follows: **Proposition 1.3**.: _(see Proposition 4.15) Let \(X\) be a complex elliptic K3 surface. 
There exist explicitly defined cyclic subgroups \(B_{X}\subset\widetilde{B}_{X}\) of \((\mathbb{Z}/t\mathbb{Z})^{*}\), of even order, such that the number of isomorphism classes of coprime Jacobians \(\mathrm{J}^{k}(X)\) considered up to isomorphism over the base (resp. preserving the elliptic pencil) equals \(\phi(t)/|B_{X}|\) (resp. \(\phi(t)/|\widetilde{B}_{X}|\))._ The group \(B_{X}\) can only be non-trivial if \(X\) is isotrivial with \(j\)-invariant \(0\) or \(1728\). We give examples when \(B_{X}\) and \(\widetilde{B}_{X}\) are non-trivial, and when they are different. ### Applications We deduce from Theorem 1.2 that zeroth Jacobians of derived equivalent elliptic K3 surfaces are non-isomorphic in general (Corollary 5.16), that is passing to the Jacobian can not be defined solely in terms of the derived category (Remark 5.17). Furthermore, Theorem 1.2 is relevant every time potential consequences of derived equivalence between K3 surfaces are considered. Let us explain two non-trivial situations when the explicit or geometric form of derived equivalence is desirable. The first is rational points over non-closed fields and the second is L-equivalence. The motivation of Hassett-Tschinkel [17] was the question of existence of rational points on derived equivalent elliptic K3 surfaces over non-closed fields. Namely, since \(X\) and any of its coprime Jacobians \(\mathrm{J}^{k}(X)\) are isogenous, it follows that \(X\) has a rational point if and only if \(\mathrm{J}^{k}(X)\) has a rational point by the Lang-Nishimura theorem. Using Galois descent, as we know automorphism groups of elliptic K3 surfaces quite explicitly, we can partially extend Theorem 1.2 to subfields \(k\subset\mathbb{C}\), and deduce the implication about rational points of Fourier-Mukai partners (see Corollary 5.21). We note that the question about the simultaneous existence of rational points on derived equivalent K3 surfaces has in general a negative solution [1] but for elliptic K3 surfaces the question seems to be still open. Another application for Theorem 1.2 is to the question of L-equivalence of derived equivalent K3 surfaces \(X\), \(Y\)[18]. For elliptic K3 surfaces the natural strategy is to prove L-equivalence for the generic fibres, which are genus one curves over the function field of the base, and then spread-out the L-equivalence over the total space. This strategy has been realized in [19] for elliptic K3 surfaces of multisection index five. It follows from Theorem 1.2 that the same approach can work when the mutlisection index \(t\) is a power of a prime (and \(d\) is arbitrary). ### Structure of the paper In Section 2, we recall basic classical results about lattices and complex K3 surfaces, and moduli spaces of sheaves on K3 surfaces. In Section 3, we describe in detail the elliptic K3 surfaces of rank two, including their Neron-Severi lattices, Lagrangian elements in their discriminant lattices, Hodge isometries of the transcendental lattices and the group of automorphisms. Most results in this section are standard except the focus on the Lagrangian elements. In Section 4, we recall the Ogg-Shafarevich theory and explain in detail when different Jacobians of a given elliptic fibration are isomorphic. In Section 5, we introduce derived elliptic structures and Hodge elliptic structures on a K3 surface and fully classify them in terms of Lagrangian elements in the case of Picard rank two. 
### Acknowledgements We thank Arend Bayer, Tom Bridgeland, Daniel Huybrechts, Alexander Kuznetsov, Gebhard Martin, Giacomo Mezzedimi, Sofia Tirabassi and Mauro Varesco for discussions and interest in our work. ## 2. Preliminary results ### Lattices Our main reference for lattice theory is [14]. A lattice is a free Abelian group \(L\) together with a symmetric non-degenerate bilinear form \(b:L\times L\to\mathbb{Z}\). We consider the quadratic form \(q(x)=b(x,x)\) and sometimes we write \(x\cdot y\) for \(b(x,y)\) and \(x^{2}\) for \(q(x)\). A morphism of lattices between \((L,b)\) and \((L^{\prime},b^{\prime})\) is a group homomorphism \(\sigma:L\to L^{\prime}\) which respects the bilinear forms, meaning \(b(x,y)=b^{\prime}(\sigma(x),\sigma(y))\) for all \(x,y\in L\). An isomorphism of lattices is called an isometry. We write \(O(L)\) for the group of isometries of \(L\). The lattice \(L\) is called even if \(x^{2}\) is even for all \(x\in L\). All the lattices we consider will be assumed to be even. The dual of a lattice \(L\) is defined as \(L^{*}\coloneqq\operatorname{Hom}(L,\mathbb{Z})\). It comes equipped with a natural bilinear form taking values in \(\mathbb{Q}\). The bilinear form gives rise to a natural map \(L\to L^{*}\) which is injective because we assume \(b\) to be non-degenerate; furthermore we have a canonical isomorphism \[L^{*}\simeq\{x\in L\otimes\mathbb{Q}\mid\forall y\in L:x\cdot y\in\mathbb{Z} \}\subseteq L\otimes\mathbb{Q}.\] The quotient \(L^{*}/L=A_{L}\) is called the discriminant group of \(L\). If the discriminant group is trivial, we call \(L\) unimodular. The discriminant group comes equipped with a quadratic form \(\overline{q}:A_{L}\to\mathbb{Q}/2\mathbb{Z}\). The discriminant group admits an orthogonal direct sum decomposition \[A_{L}=\bigoplus_{p}A_{L}^{(p)} \tag{2.1}\] where \(A_{L}^{(p)}\) consists of elements annihilated by a power of a prime \(p\). The group \(A_{L}^{(p)}\) coincides with the discriminant group of the \(p\)-adic lattice \(L\otimes\mathbb{Z}_{p}\). Two lattices \(L,L^{\prime}\) are said to be in the same genus if they have the same signature and have isometric discriminant groups. An _overlattice_ of a lattice \(T\) is a lattice \(L\) together with an embedding of lattices \(T\hookrightarrow L\) of finite index. We say that two overlattices \(T\hookrightarrow L\) and \(T^{\prime}\hookrightarrow L^{\prime}\) are isomorphic if there exists a commutative diagram where \(\sigma\) and \(\tau\) are isometries. For any overlattice \(T\hookrightarrow L\), there is a natural embedding of the cokernel \(H_{L}\coloneqq L/T\) in the discriminant group of \(T\) via the chain of embeddings \[T\hookrightarrow L\hookrightarrow L^{*}\hookrightarrow T^{*}.\] The subgroup \(H_{L}\) is isotropic with respect to the quadratic form on \(A_{T}\), and conversely any isotropic subgroup of \(A_{T}\) gives rise to an overlattice of \(T\). The following result gives a complete classification of all overlattices of a given lattice \(T\), up to isomorphism. **Lemma 2.1** ([14, Proposition 1.4.2]).: _Let \(T\) be a lattice, and let \(T\hookrightarrow L\) and \(T\hookrightarrow M\) be two overlattices of \(T\). An isometry \(\sigma\in O(T)\) fits into a commutative diagram of the form_ (2.2) _if and only if the induced isometry \(\overline{\sigma}\in O(A_{T})\) satisfies \(\overline{\sigma}(H_{L})=H_{M}\). 
Moreover, the assignment \((T\hookrightarrow L)\mapsto H_{L}\) is a bijection between the set of isomorphism classes of overlattices of \(T\) and the set of \(O(T)\)-orbits of isotropic subgroups of \(A_{T}\)._ Note that (2.2) can be completed as follows: (2.3) ### K3 surfaces Our basic reference for K3 surfaces is [10]. If \(X\) is a complex projective K3 surface, \(H^{2}(X,\mathbb{Z})\) is a free abelian group of rank \(22\). Moreover, the cup product is a symmetric bilinear form on \(H^{2}(X,\mathbb{Z})\), turning \(H^{2}(X,\mathbb{Z})\) into an even, unimodular lattice isometric to \(\Lambda_{\text{K3}}=U^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}\). Here \(U\) is the hyperbolic lattice given by the symmetric bilinear form \[\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\] and \(E_{8}\) is the unique even, unimodular, positive-definite lattice of rank \(8\) (see [1, SSVIII.1] for details). The Neron-Severi lattice \(\text{NS}(X)\) is a sublattice of \(H^{2}(X,\mathbb{Z})\), defined as the image of the first Chern class \(c_{1}:\operatorname{Pic}(X)\hookrightarrow H^{2}(X,\mathbb{Z})\). We have \(\operatorname{Pic}(X)\simeq\operatorname{NS}(X)\); it is a free abelian group of rank \(\rho\) which is called the Picard number of \(X\). The orthogonal complement \(T(X)=\operatorname{NS}(X)^{\perp}\subseteq H^{2}(X,\mathbb{Z})\) is called the transcendental lattice of \(X\). The image of the line \(H^{2,0}(X)=\mathbb{C}\sigma\subset H^{2}(X,\mathbb{C})\) under any isometry \(H^{2}(X,\mathbb{Z})\to\Lambda_{K3}\) is called a _period_ of \(X\). Since \(\sigma^{2}=0\) and \(\sigma\cdot\overline{\sigma}>0\), any period of \(X\) lies in the open subset \[D\coloneqq\left\{\ell\in\mathbb{P}(\Lambda_{K3}\otimes\mathbb{C})\mid\ell^{2} =0\text{ and }\ell\cdot\overline{\ell}>0\right\},\] called the period domain. The following two results are among the most fundamental results about K3 surfaces. **Theorem 2.2** (Surjectivity of the Period Map).: _[_7_]_ _Any point in the period domain is a period of a K3 surface, i.e. for any \(\ell\in D\), there is a K3 surface \(X\) with an isometry \(H^{2}(X,\mathbb{Z})\to\Lambda_{K3}\) such that \(H^{2}(X,\mathbb{C})\to\Lambda_{K3}\otimes\mathbb{C}\) maps \(H^{2,0}(X)\) to \(\ell\)._ **Theorem 2.3** (Torelli Theorem for K3 Surfaces).: _[_10_]_ _(see [11, Theorem 5.5.3]) Let \(X\) and \(Y\) be K3 surfaces. Then \(X\) and \(Y\) are isomorphic if and only if there exists a Hodge isometry \(H^{2}(X,\mathbb{Z})\simeq H^{2}(Y,\mathbb{Z})\). Moreover, for any Hodge isometry \(\psi:H^{2}(X,\mathbb{Z})\to H^{2}(Y,\mathbb{Z})\) which preserves the ample cone, there is a unique isomorphism \(f:X\to Y\) such that \(\psi=f_{*}\)._ The Hodge structure on the transcendental lattice determines \(X\) up to derived equivalence due to what is known as the Derived Torelli Theorem. **Theorem 2.4** (Derived Torelli Theorem).: _[_11_]__, [_12_]_ _Let \(X\) and \(Y\) be two K3 surfaces. Then there exists an equivalence \(\mathcal{D}^{b}(X)\simeq\mathcal{D}^{b}(Y)\) if and only if there exists a Hodge isometry \(T(X)\simeq T(Y)\)._ If \(\mathcal{D}^{b}(X)\simeq\mathcal{D}^{b}(Y)\), we say that \(X\) and \(Y\) are derived equivalent and that \(Y\) is a Fourier-Mukai partner of \(X\). Theorem 2.4 implies that two derived equivalent K3 surfaces must have equal Picard numbers. If we denote \(\Lambda=\operatorname{NS}(X)\), there is an isometry [14, Corollary 1.6.2] \[(A_{\Lambda},\overline{q_{\Lambda}})\simeq(A_{T(X)},-\overline{q_{T(X)}}). 
\tag{2.4}\] Thus derived equivalent K3 surfaces have isomorphic discriminant lattices, and it follows easily that their Neron-Severi lattices must be in the same genus. Instead of \((A_{T(X)},-\overline{q_{T(X)}})\), we usually write \(A_{T(X)}(-1)\). For a K3 surface \(X\) we write \(G_{X}\) for the Hodge isometry group of \(T(X)\). Then \(G_{X}\simeq\mathbb{Z}/2g\mathbb{Z}\) for some \(g\geq 1\), and we have \(\phi(2g)\mid\operatorname{rk}T(X)\)[15, Appendix B]. From the Derived Torelli Theorem one can deduce: **Theorem 2.5** (Counting Formula).: _[_15_]_ _Let \(X\) be a K3 surface, and write \(\operatorname{FM}(X)\) for the set of isomorphism classes of Fourier-Mukai partners of \(X\). Then_ \[\left|\operatorname{FM}(X)\right|=\sum_{\Lambda}\left|O(\Lambda)\setminus O(A _{\Lambda})/G_{X}\right|\] _where the sum runs over isomorphism classes of lattices \(\Lambda\) which are in the same genus as the Neron-Severi lattice \(\operatorname{NS}(X)\). Furthermore, each summand computes the number of isomorphism classes of Fourier-Mukai partners \(Y\) of \(X\) with \(\operatorname{NS}(Y)\simeq\Lambda\)._ It follows from the Counting Formula that an elliptic K3 surface \(S\to\mathbb{P}^{1}\) which admits a section has no non-trivial Fourier-Mukai partners [15, Proposition 2.7(3)]. **Definition 2.6**.: _We say that a K3 surface \(X\) is \(T\)-general if \(G_{X}=\left\{\pm\operatorname{id}\right\}.\) A K3 surface that is not \(T\)-general is called \(T\)-special._ When \(X\) is \(T\)-general, the Counting Formula shows that the number of Fourier-Mukai partners is maximal (for a fixed \(\operatorname{NS}(X)\)) and only depends on \(\operatorname{NS}(X)\). A similar effect holds for the invariants we study, see Theorem 5.10. Thus it is important to have explicit criteria for \(T\)-generality. If the Picard number \(\rho\) of \(X\) is odd, then \(\phi(2g)\) must be odd, so \(|G_{X}|=2\) and \(X\) is \(T\)-general. Furthermore, we have the following result going back to Oguiso [16]: **Lemma 2.7** ([16, Lemma 3.9]).: _If \(X\) is a very general K3 surface in any lattice polarized moduli space of K3 surfaces, with Picard number \(\rho<20\), then \(X\) is T-general._ See Example 3.13 for an explicit \(T\)-special K3 surface. ### Caldararu class for a non-fine moduli space The Brauer group of an elliptic K3 surface with a section is one of the main technical tools used in this paper. We follow the discussions in [10] and [11]. For every complex K3 surface we have a canonical isomorphism \[\operatorname{Br}(X)\simeq\operatorname{Hom}(T(X),\mathbb{Q}/\mathbb{Z}). \tag{2.5}\] In particular, \(\operatorname{Br}(X)\) is an infinite torsion group and for all integers \(t\geq 1\) we have \[\operatorname{Br}(X)_{t-tors}\simeq\operatorname{Hom}(T(X),\mathbb{Z}/t \mathbb{Z})\simeq(\mathbb{Z}/t\mathbb{Z})^{22-\rho}, \tag{2.6}\] where \(\rho\) is the Picard number of \(X\). We explain the explicit description of the Brauer class associated to a moduli space of sheaves on a K3 surface [13], [11]. Let \(X\) be a complex K3 surface, and consider a Mukai vector \[v=(r,D,s)\in N(X):=\mathbb{Z}\oplus\operatorname{NS}(X)\oplus\mathbb{Z}.\] We assume that \(v\) is a primitive vector such that \(v^{2}=0\), i.e. \(D^{2}=2rs\). Let \(M\) be the moduli space of stable sheaves on \(X\) of class \(v\). By Mukai's results, if \(M\) is nonempty, then it is again a K3 surface, see e.g. [14, Corollary 3.5] (we assume \(v\) is primitive, so stability coincides with semistability for a generic choice of a polarization). 
Let \(t\) be the divisibility of \(v\) that is \[t=\operatorname*{gcd}_{u\in N(X)}u\cdot v=\gcd\left(r,s,\operatorname*{gcd} _{E\in NS(X)}E\cdot D\right).\] We consider the obstruction \(\alpha_{X}\in\operatorname{Br}(M)\) for the existence of a universal sheaf on \(X\times M\); under the isomorphism (2.5), we will equivalently consider \(\alpha_{X}\) as a homomorphism \(T(M)\to\mathbb{Q}/\mathbb{Z}\). If the divisibility of \(v\) equals \(t\), then \(\alpha_{X}\) has order \(t\) and we have \[0\to T(X)\to T(M)\overset{\alpha_{X}}{\to}\mathbb{Z}/t\mathbb{Z}\to 0.\] Here \(\mathbb{Z}/t\mathbb{Z}\) is the subgroup of \(\mathbb{Q}/\mathbb{Z}\) generated by \(1/t\). Note that the \(t=1\) case corresponds to fine moduli spaces, in which case \(T(X)\simeq T(M)\). In general we have \[\mathbb{Z}/t\mathbb{Z}=T(M)/T(X)\subset T(X)^{*}/T(X)=A_{T(X)}. \tag{2.7}\] We call the image \(w\) of \(\overline{1}\) under (2.7) the _Caldararu class_ of \(M\) (or of \(v\)). By construction, the Caldararu class \(w\) generates the isotropic subgroup of \(A_{T(X)}\) given by Lemma 2.1 corresponding to the overlattice \(T(X)\subset T(M)\). **Lemma 2.8** ([11]).: _Under the isomorphism (2.4), the Caldararu class \(w\) of the Mukai vector \((r,D,s)\) of divisibility \(t\) corresponds to \(-\frac{1}{t}D\)._ Proof.: By [13, Proposition 6.4(3)], the cokernel of \(i:T(X)\hookrightarrow T(M)\) is generated by \(\frac{1}{t}\lambda\), where \(\lambda\in T(X)\) is chosen such that \(D+\lambda=ta\) for some \(a\in H^{2}(X,\mathbb{Z})\). Here \(D\) and \(\lambda\) correspond to each other under the natural isomorphism (2.4): \[\begin{array}{rllll}A_{T(X)}(-1)&\to H^{2}(X,\mathbb{Z})/(T(X)\oplus \operatorname{NS}(X))&\to A_{\operatorname{NS}(X)}\\ \frac{1}{t}\lambda&\mapsto&\frac{1}{t}(D+\lambda)=a&\mapsto\frac{1}{t}D.\end{array}\] Furthermore the defining equation for \(\lambda\) can be equivalently written in the full integral cohomology of \(X\) as \[v+\lambda=t\widetilde{a}\] where \(\widetilde{a}=(r/t,a,s/t)\) (this vector is integral). We claim that \(-\frac{1}{t}\lambda\) is the Caldararu class of \((r,D,s)\). To show this, we compute the value of the Brauer class \(\alpha_{X}\), considered as a map \(T(M)\to\mathbb{Q}/\mathbb{Z}\) (with image \((\frac{1}{t}\mathbb{Z})/\mathbb{Z}\simeq\mathbb{Z}/t\mathbb{Z}\)), on the element \(\frac{1}{t}\lambda\in T(M)\). Set \(u\in H^{*}(S,\mathbb{Z})\) such that \(u\cdot v=1\) (this vector exists by unimodularity). Then we have \[\alpha_{X}(\lambda/t)=u\cdot\lambda/t=u\cdot(\widetilde{a}-v/t)=u\cdot \widetilde{a}-u\cdot v/t\equiv-1/t\pmod{\mathbb{Z}}.\] Here we used [11, Theorem 5.3.1] in the first equality and the definition of \(\widetilde{a}\) in the second one. Thus we have \(w=-\frac{1}{t}\lambda\) by definition of the Caldararu class and the corresponding element in \(A_{\operatorname{NS}(X)}\) is \(-\frac{1}{t}D\) ### Elliptic K3 surfaces Recall that an elliptic surface is a surface \(X\) which admits a surjective morphism \(f:X\to C\) where \(C\) is a smooth curve, such that the fibres of \(f\) are connected and the genus of the generic fibre is \(1\)[11, SS10]. Our elliptic surfaces will be assumed to be relatively minimal, i.e. contain no \((-1)\)-curves in the fibres of \(f\); this is automatic for K3 surfaces. We say that an elliptic surface is isotrivial if all smooth fibres are isomorphic. For an elliptic K3 surface we have the base \(C\simeq\mathbb{P}^{1}\). 
There are two natural concepts of an isomorphism between elliptic K3 surfaces \(f:X\to\mathbb{P}^{1}\) and \(\phi:Y\to\mathbb{P}^{1}\). **Definition 2.9**.: _(1) We say that \(X\), \(Y\) are isomorphic as elliptic surfaces if there exists an isomorphism \(X\simeq Y\) preserving the fibre classes, or equivalently there is a commutative diagram_ (2.8) _We say that the isomorphism \(X\simeq Y\) twists the base by \(\overline{\beta}\). (2) We say that \(X\) and \(Y\) are isomorphic over \(\mathbb{P}^{1}\) if there is an isomorphism \(X\simeq Y\) twisting the base by the identity, or equivalently if there exists a commutative diagram_ Being isomorphic over \(\mathbb{P}^{1}\) is more restrictive than being isomorphic as elliptic surfaces. For example, for every \(\overline{\beta}\in\operatorname{Aut}(\mathbb{P}^{1})\), \(f:X\to\mathbb{P}^{1}\) and \(\overline{\beta}f:X\to\mathbb{P}^{1}\) are isomorphic as elliptic surfaces, but usually not over \(\mathbb{P}^{1}\). Let \(S\to\mathbb{P}^{1}\) be an elliptic K3 surface with a fixed section. We denote by \(\operatorname{Aut}_{\mathbb{P}^{1}}(S)\) (resp. \(\operatorname{Aut}(S,F)\)) the group of automorphisms of \(S\) over \(\mathbb{P}^{1}\) (resp. automorphisms of \(S\) preserving the fibre class). We have \(\operatorname{Aut}_{\mathbb{P}^{1}}(S)\subset\operatorname{Aut}(S,F)\). We denote by \(A_{\mathbb{P}^{1}}(S)\) (resp. \(A(S,F)\)) the group of _group_ automorphisms of \(S\) over \(\mathbb{P}^{1}\) (resp. preserving the fibre class), where an automorphism of \(S\) is called a group automorphism if it preserves the zero-section (see e.g. [10]). **Remark 2.10**.: _The category of relatively minimal elliptic surfaces and their isomorphisms over \(\mathbb{P}^{1}\) is equivalent to the category of genus one curves over \(\mathbb{C}(t)\) and their isomorphisms. The functor is given by taking the generic fibre. This functor is an equivalence e.g. by [11, Theorem 7.3.3] or [10, Theorem 3.3]._ ## 3. Elliptic K3 surfaces of Picard rank 2 ### Neron-Severi lattices We recall some basic facts about elliptic K3 surfaces of Picard rank 2 following [10, 12]. Let \(f:X\to\mathbb{P}^{1}\) be a complex projective elliptic K3 surface. Let \(F\in\operatorname{NS}(X)\) be the class of a fibre. Recall that the multisection index \(t\) of \(f\) is the minimal positive \(t>0\) such that there exists a divisor \(D\in\operatorname{NS}(S)\) with \(D\cdot F=t\). **Proposition 3.1**.: _[_12_, Remark 4.2]__, [13, Lemma 3.3]_ _Let \(X\) be an elliptic K3 surface of Picard rank 2. Then there exists a polarisation \(H\) on \(X\) such that \(H,F\) form a basis of \(\operatorname{NS}(X)\) and \(H\cdot F=t\). In particular, the Neron-Severi lattice of \(X\) is given by a matrix of the form_ \[\begin{pmatrix}2d&t\\ t&0\end{pmatrix}. \tag{3.1}\] We write \(\Lambda_{d,t}\) for the lattice of rank 2 with matrix (3.1) with respect to some basis \(H,F\). It is easy to see that the lattice \(\Lambda_{d,t}\) has exactly two isotropic primitive vectors up to sign: one is \(F\), and the other is \[F^{\prime}=\frac{1}{\gcd(d,t)}(tH-dF). \tag{3.2}\] The following lemma describes when the class \(F^{\prime}\) gives rise to another elliptic fibration on \(X\). **Lemma 3.2**.: _[_12_, SS4.7]_ _A K3 surface \(X\) with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\) has two elliptic fibrations if and only if \(d\not\equiv-1\pmod{t}\). If \(d\equiv-1\pmod{t}\), \(X\) admits one elliptic fibration. 
If \(X\) is \(T\)-general, \(t>2\) and \(d\not\equiv-1\pmod{t}\), then the two fibrations are isomorphic (as elliptic surfaces) if and only if \(d\equiv 1\pmod{t}\)._ We denote by \(A_{d,t}\) the discriminant lattice of \(\Lambda_{d,t}\) and we have \[|A_{d,t}|=t^{2}. \tag{3.3}\] It is easy to compute (see e.g. [10, Proof of Lemma 3.2]) that the dual lattice \(\Lambda_{d,t}^{*}\) is generated by \[F^{*}=\frac{-2d}{t^{2}}F+\frac{1}{t}H,\quad H^{*}=\frac{1}{t}F \tag{3.4}\] so that the images of (3.4) generate \(A_{d,t}\). Furthermore for \(a,b\in\mathbb{Z}\) we have \[q(aF^{*}+bH^{*})=\frac{2a(bt-ad)}{t^{2}}. \tag{3.5}\] **Lemma 3.3**.: _The discriminant group \(A_{d,t}\) is isomorphic to \(\mathbb{Z}/a\mathbb{Z}\oplus\mathbb{Z}/b\mathbb{Z}\) with \(a=\gcd(2d,t)\) and \(b=t^{2}/a\). In particular, \(A_{d,t}\) is cyclic if and only if \(\gcd(2d,t)=1\)._ _Furthermore, if \(\Lambda\) is a lattice in the same genus as \(\Lambda_{d,t}\) then \(\Lambda\simeq\Lambda_{e,t}\) with \(\gcd(2e,t)=\gcd(2d,t)\)._ Proof.: The first claim follows by putting \(\Lambda_{d,t}\) into Smith normal form. Let \(\Lambda\) be a lattice in the same genus as \(\Lambda_{d,t}\). Following the proof of [11, Proposition 16], \(\Lambda\) contains a primitive isotropic vector \(v\). Hence, \(\Lambda\simeq\Lambda_{e,s}\) for some \(e,s\in\mathbb{Z}\), \(s>0\). Comparing discriminant groups of \(\Lambda_{d,t}\) and \(\Lambda_{e,s}\) we obtain \(t=s\) and \(\gcd(2d,t)=\gcd(2e,s)\). **Example 3.4**.: _Let \(d=0\), then by Lemma 3.3, \(A_{0,t}\simeq\mathbb{Z}/t\mathbb{Z}\oplus\mathbb{Z}/t\mathbb{Z}\). Explicitly, generators (3.4) of the dual lattice \(\Lambda_{0,t}^{*}\) are \(F^{*}=\frac{1}{t}H\) and \(H^{*}=\frac{1}{t}F\) and their images in \(A_{0,t}\) are the two order \(t\) generators which are isotropic elements in \(A_{0,t}\)._ We introduce some properties of the discriminant groups which we will need to count Fourier-Mukai partners. **Definition 3.5**.: _We call an isotropic element of order \(t\) in \(A_{d,t}\) a Lagrangian element. We call a cyclic isotropic subgroup \(H\subseteq A_{d,t}\) of order \(t\) a Lagrangian subgroup._ We denote by \(\widetilde{\operatorname{L}}(A_{d,t})\) (resp. \(\operatorname{L}(A_{d,t})\)) the set of Lagrangian elements (resp. Lagrangian subgroups) of \(A_{d,t}\). The main reason we are interested in studying Lagrangians of \(A_{d,t}\) is their correspondence with Fourier-Mukai partners which we establish in Section 5. **Proposition 3.6**.: _Let \(d,t\) be integers and let \(m=\gcd(d,t)\). Then we have_ \[|\,\widetilde{\operatorname{L}}(A_{d,t})|=\phi(t)\cdot 2^{\omega(m)},\quad| \operatorname{L}(A_{d,t})|=2^{\omega(m)}. \tag{3.6}\] Even though \(\gcd(2d,t)\) is responsible for the structure of \(A_{d,t}\), it is \(\gcd(d,t)\) that appears in Proposition 3.6. For instance, if \(d\) and \(t\) are coprime and \(t\) is even, the discriminant group \(A_{d,t}\) is not cyclic, but \(|\operatorname{L}(A_{d,t})|=1\). Proof.: Any cyclic subgroup \(H\subset A_{d,t}\) of order \(t\) has \(\phi(t)\) generators. \(H\) is a Lagrangian subgroup if and only if its generator is a Lagrangian element. Thus the two formulas in (3.6) are equivalent, and it suffices to prove the second one. Let \(t=\prod_{p}p^{k_{p}}\) be the prime factorisation of \(t\). 
For any prime \(p\), we have an isomorphism of \(p\)-adic lattices \(\Lambda_{d,t}\otimes\mathbb{Z}_{p}\simeq\Lambda_{d,p^{k_{p}}}\otimes\mathbb{Z} _{p}\) (the isometry is given by \(H\mapsto H\) and \(F\mapsto\alpha F\), where \(\alpha\) is the unit in \(\mathbb{Z}_{p}\) given by \(\alpha p^{k_{p}}=t\)). By [16, Proposition 1.7.1], \(A_{d,t}\) is isometric to the orthogonal direct sum of \(A_{d,p^{k_{p}}}\) over all primes \(p\). Therefore we have \[|\operatorname{L}(A_{d,t})|=\prod_{p}|\operatorname{L}(A_{d,p^{k_{p}}})|.\] Therefore we need to prove that \(|\operatorname{L}(A_{d,p^{k}})|=1\) if \(d\) is coprime to \(p\) and \(|\operatorname{L}(A_{d,p^{k}})|=2\) otherwise. The result follows from Lemma 3.7 and Lemma 3.8 below. **Lemma 3.7**.: _The elements_ \[v=\frac{1}{t}F,\quad v^{\prime}=\frac{1}{t}F^{\prime} \tag{3.7}\] _are primitive isotropic vectors in \(\Lambda_{d,t}^{*}\) and their images \(\overline{v}\) and \(\overline{v^{\prime}}\) in \(A_{d,t}\) generate Lagrangian subgroups in \(A_{d,t}\). We have \(\langle\overline{v}\rangle=\langle\overline{v^{\prime}}\rangle\) if and only if \(m\coloneqq\gcd(d,t)=1\), in which case_ \[\overline{v^{\prime}}=-d\cdot\overline{v} \tag{3.8}\] Proof.: The first part is a simple computation. The corresponding Lagrangian subgroups are equal if and only if \(v^{\prime}=\frac{1}{tm}(tH-dF)=\frac{1}{m}H-\frac{d}{tm}F\) is a multiple of \(v=\frac{1}{t}F\) modulo \(\Lambda_{d,t}\). This is only the case when \(m=1\). **Lemma 3.8**.: _Let \(t=p^{k}\) with \(p\) a prime number and \(k\geq 1\). Then the subgroups \(\langle\overline{v}\rangle,\langle\overline{v^{\prime}}\rangle\) are the only Lagrangian subgroups of \(A_{d,t}\)._ Proof.: Write \(d=\ell\cdot p^{n}\) for some \(\ell\in\mathbb{Z}\) coprime to \(p\) and some \(n\geq 0\). Note that whenever \(n\geq k\), we have \(d\equiv 0\pmod{p^{k}}\), so that \(\Lambda_{d,p^{k}}\simeq\Lambda_{0,p^{k}}\) and we can assume that \(d=0\). In this case we have \(\overline{v^{\prime}}=\overline{F^{*}}\) and it is easy to see that \(\langle\overline{H^{*}}\rangle\) and \(\langle\overline{F^{*}}\rangle\) are the only Lagrangian subgroups of \(A_{0,p^{k}}\) (see Example 3.4). Therefore we may assume \(0\leq n<k\). In terms of generators (3.4) the quadratic form is given by \[q(aF^{*}+bH^{*})=\frac{2a}{p^{2k-n}}\left(bp^{k-n}-a\ell\right). \tag{3.9}\] To find all Lagrangian subgroups, we start by describing the subgroup of elements in \(A_{d,t}\) having order dividing \(t=p^{k}\). We consider the vectors (3.7) which in our case are given by \[\overline{v}=\frac{F}{p^{k}},\quad\overline{v^{\prime}}=\frac{H}{p^{n}}-\frac {\ell F}{p^{k}}.\] Furthermore, orders of \(\overline{v}\) and \(\overline{v^{\prime}}\) are equal to \(p^{k}\), and these elements satisfy a relation \[p^{n}(\ell\overline{v}+\overline{v^{\prime}})=0. \tag{3.10}\] There are two cases to consider now. If \(p>2\), then \[(A_{d,t})_{p^{k}-tors}=\left\langle\frac{F}{p^{k}},\frac{H}{p^{n}}\right\rangle =\langle\overline{v},\overline{v^{\prime}}\rangle.\] The vectors \(\overline{v}\) and \(\overline{v^{\prime}}\) are isotropic and the discriminant form in terms of these elements equals \[\overline{q}(a\overline{v}+b\overline{v^{\prime}})=\frac{2ab}{p^{n}}.\] Hence an element \(a\overline{v}+b\overline{v^{\prime}}\) is isotropic if and only \(p^{n}\) divides \(ab\). On the other hand, if \(a\overline{v}+b\overline{v^{\prime}}\) has order precisely \(p^{k}\), then at least one of \(a\) or \(b\) is coprime to \(p\). 
Hence isotropic elements of \(A_{d,t}\) of order \(p^{k}\) are given by \[a\overline{v}+bp^{n+j}\overline{v^{\prime}},\quad ap^{n+j}\overline{v}+b\overline{v^{\prime}},\] with both \(a\) and \(b\) coprime to \(p\) and \(j\geq 0\). Using (3.10) we can rewrite these types of elements as \[a^{\prime}\overline{v},\quad b^{\prime}\overline{v^{\prime}},\] with \(a^{\prime}\) and \(b^{\prime}\) coprime to \(p\). This finishes the proof in the \(p>2\) case. If \(p=2\), then we have \[(A_{d,t})_{2^{k}-tors}=\left\langle\frac{F}{2^{k}},\frac{H}{2^{n+1}}\right\rangle\supsetneq\langle\overline{v},\overline{v^{\prime}}\rangle=\left\langle\frac{F}{2^{k}},\frac{H}{2^{n}}\right\rangle.\] However, a simple computation shows that all isotropic vectors are actually contained in \(\langle\overline{v},\overline{v^{\prime}}\rangle\) and the proof works in the same way as in the \(p>2\) case. Lemma 3.8 allows us to define a canonical involution on the set of Lagrangian subgroups of \(A_{d,t}\) as follows. For \(H\subset A_{d,t}\) a Lagrangian, we take its primary decomposition with respect to (2.1) \[H=\bigoplus_{p}H_{p},\quad H_{p}\subset A_{d,t}^{(p)}\] with each \(H_{p}\) a Lagrangian in \(A_{d,t}^{(p)}\). We set \(\iota_{p}(H_{p})\) to denote the other Lagrangian subgroup as determined by Lemma 3.8; in the case \(p\) does not divide \(d\), \(\iota_{p}(H_{p})=H_{p}\). We set \[\iota(H):=\bigoplus_{p}\iota_{p}(H_{p})\subset A_{d,t}. \tag{3.11}\] The geometric significance of this involution is explained in Theorem 5.10. For now we note that \[\iota(\langle\overline{v}\rangle)=\langle\overline{v^{\prime}}\rangle \tag{3.12}\] for \(\overline{v}\), \(\overline{v^{\prime}}\) defined in Lemma 3.7. ### Automorphisms and Hodge isometries Recall the Hodge isometry group \(G_{X}\) defined in Section 2.2. **Lemma 3.9**.: _If \(X\) is a K3 surface of Picard rank 2, then \(G_{X}\) is a cyclic group of one of the following orders:_ \[2,4,6,8,10,12,22,44,50,66.\] Proof.: The fact that \(G_{X}\) is a finite cyclic group of even order \(2g\) such that \(\phi(2g)\mid\operatorname{rk}T(X)\) is [10, Appendix B]. We solve the divisibility condition \(\phi(2g)\mid 20\). Possible primes that can appear in the prime factorization of \(2g\) are \(2\), \(3\), \(5\), \(11\). Maximal powers of these primes such that \(\phi(p^{k})\mid 20\) are \(2^{3}\), \(3\), \(5^{2}\), \(11\) and the result follows by combining these or smaller prime powers. **Proposition 3.10**.: _Let \(X\) be an elliptic K3 surface of Picard rank 2. Then we have a canonical isomorphism_ \[\operatorname{Aut}(X)\simeq\operatorname{Ker}\left(G_{X}\to O(A_{T(X)})/O^{+}(\operatorname{NS}(X))\right), \tag{3.13}\] _where \(O^{+}(\operatorname{NS}(X))\) is the group of isometries of \(\operatorname{NS}(X)\) that preserve the ample cone. In particular, \(\operatorname{Aut}(X)\) is a finite cyclic group and \(|\operatorname{Aut}(X)|\leq 66\). Moreover, for any elliptic fibration \(X\to\mathbb{P}^{1}\), the isomorphism above induces an isomorphism_ \[\operatorname{Aut}(X,F)\simeq\operatorname{Ker}\left(G_{X}\to O(A_{T(X)})\right), \tag{3.14}\] _where \(\operatorname{Aut}(X,F)\) is the group of automorphisms which fix the fibre class \(F\) of the elliptic fibration._ Proof.: By the Torelli Theorem 2.3, there is a bijection between automorphisms of \(X\) and Hodge isometries of \(H^{2}(X,\mathbb{Z})\) which preserve the ample cone.
Using [11, Corollary 1.5.2], we can write \[\operatorname{Aut}(X)\simeq\left\{(\sigma,\tau)\in G_{X}\times O^{+}(\operatorname{NS}(X))\;\mid\;\overline{\sigma}=\overline{\tau}\in O(A_{T(X)})\right\}. \tag{3.15}\] This isomorphism induces a surjective map \((\sigma,\tau)\mapsto\sigma\) \[\operatorname{Aut}(X)\to\operatorname{Ker}\left(G_{X}\to O(A_{T(X)})/O^{+}(\operatorname{NS}(X))\right). \tag{3.16}\] The kernel of this map consists of the pairs \((\operatorname{id}_{T(X)},\tau)\in G_{X}\times O^{+}(\operatorname{NS}(X))\) such that \(\overline{\tau}=\operatorname{id}_{A_{T(X)}}.\) We claim that the homomorphism \(O^{+}(\operatorname{NS}(X))\to O(A_{T(X)})\) is injective. Since \(\operatorname{NS}(X)\) contains exactly four primitive isotropic vectors \(\pm F,\pm F^{\prime}\), and \(-1\in O(\operatorname{NS}(X))\) never preserves the ample cone, we note that \(O^{+}(\operatorname{NS}(X))\) must be either trivial, or isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) with non-trivial element swapping \(F\) with \(F^{\prime}\). The latter case is only possible when \(F^{\prime}\) represents a class of an elliptic fibration on \(X\), which by Lemma 3.2 corresponds to the case \(d\not\equiv-1\pmod{t}\). Then \(\frac{1}{t}F\) and \(\frac{1}{t}F^{\prime}\) represent distinct classes in \(A_{T(X)}\) (see (3.8)) and the element of \(O^{+}(\operatorname{NS}(X))\) swapping \(F\) and \(F^{\prime}\) has a non-trivial image in \(O(A_{T(X)})\). Thus, since \(O^{+}(\operatorname{NS}(X))\to O(A_{T(X)})\) is injective, the map (3.16) is a bijection. The claim about the isomorphism type of \(\operatorname{Aut}(X)\) and the bound \(|\operatorname{Aut}(X)|\leq 66\) follows from Lemma 3.9. For the last statement, note that the only element of \(O^{+}(\operatorname{NS}(X))\) which fixes \(F\) is the identity. Therefore, (3.14) also follows from (3.15). **Example 3.11**.: _Let \(X\) be an elliptic K3 surface with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\) and assume that \(\gcd(2d,t)=1\). In this case \(A_{d,t}\) is cyclic of order \(t^{2}\) by Lemma 3.3. An isometry \(\sigma\in O(A_{d,t})\) is given by multiplication by a unit \(\alpha\in\mathbb{Z}/t^{2}\mathbb{Z}\) with \(\alpha^{2}\equiv 1\pmod{t^{2}}\), so that the group \(O(A_{d,t})\) is \(2\)-torsion. Thus by Proposition 3.10, \(\operatorname{Aut}(X)\subset G_{X}\) is a cyclic subgroup of index one or two._ **Lemma 3.12**.: _Let \(S\) and \(S^{\prime}\) be K3 surfaces of Picard rank 2 which admit elliptic fibrations with a section. Then every Hodge isometry between \(T(S)\) and \(T(S^{\prime})\) lifts to a unique isomorphism between \(S\) and \(S^{\prime}\). In particular, we have \(\operatorname{Aut}(S)\simeq G_{S}\). Finally, \(S\) admits a unique elliptic fibration with a unique section, hence every automorphism of \(S\) is a group automorphism._ Proof.: By Proposition 3.1 we have \(\operatorname{NS}(S)\simeq\Lambda_{d,1}\), which is isomorphic to the hyperbolic lattice \(U\), in particular \(\operatorname{NS}(S)\) is unimodular and \(A_{\operatorname{NS}(S)}=0\). If there is a Hodge isometry between \(T(S)\) and \(T(S^{\prime})\), then it extends to a Hodge isometry between \(H^{2}(S,\mathbb{Z})\) and \(H^{2}(S^{\prime},\mathbb{Z})\) preserving the ample cones, and we obtain \(S\simeq S^{\prime}\) by the Torelli Theorem, as in the proof of Proposition 3.10. Thus we may assume that \(S=S^{\prime}\) in which case the result follows from Proposition 3.10. By Lemma 3.2, \(S\) admits a unique elliptic fibration.
Since \(\operatorname{NS}(S)=U\), there is a unique \((-2)\)-curve which intersects the fibres of the elliptic fibration with multiplicity \(1\), i.e. a unique section. **Example 3.13**.: _Let \(S\to\mathbb{P}^{1}\) be the elliptic K3 surface with a section given by the Weierstrass equation \(y^{2}=x^{3}+t^{12}-t\). This surface is isotrivial with \(j\)-invariant \(0\). It was studied in [10] and [11]. We have \(\operatorname{rk}\operatorname{NS}(S)=2\), and \(S\) is \(T\)-special. In fact, the group \(G_{S}\) is cyclic of order \(66\), and \(S\) is unique with this property. Furthermore, \(\operatorname{Aut}(S)\simeq\mathbb{Z}/66\mathbb{Z}\) by Lemma 3.12. The action of the subgroup \(\mathbb{Z}/6\mathbb{Z}\subset\operatorname{Aut}(S)\) commutes with projection to \(\mathbb{P}^{1}\) and rescales \(x\) and \(y\) coordinates, and the subgroup \(\mathbb{Z}/11\mathbb{Z}\subset\operatorname{Aut}(S)\) preserves the fibre class \(F\in\operatorname{NS}(S)\) and induces an order \(11\) automorphism \(t\mapsto\zeta_{11}t\) on \(\mathbb{P}^{1}\)._ **Corollary 3.14**.: _Let \(X\) be a T-general elliptic K3 surface of Picard rank 2 and multisection index \(t>2\), then \(\operatorname{Aut}(X,F)=\left\{\operatorname{id}\right\}.\)_ Proof.: By Proposition 3.10, there is an isomorphism \(\operatorname{Aut}(X,F)\simeq\operatorname{Ker}(G_{X}\to O(A_{T(X)})).\) We have \(G_{X}=\left\{\pm 1\right\}\) by assumption. Since \(t>2\), and \(A_{T(X)}\) has order \(t^{2}\), we see that \(-1\) acts non-trivially on \(A_{T(X)}\). Thus \(\operatorname{Ker}(G_{X}\to O(A_{T(X)}))\) is trivial. ## 4. Jacobians ### Ogg-Shafarevich Theory Given an elliptic K3 surface \(f:X\to\mathbb{P}^{1}\) and \(k\in\mathbb{Z}\) we can define an elliptic K3 surface \(\operatorname{J}^{k}(f):\operatorname{J}^{k}(X)\to\mathbb{P}^{1}\), called the \(k\)-th Jacobian of \(X\), as the moduli space of sheaves supported at the fibres of \(f\) and having degree \(k\)[10, Chapter 11]. In particular, we have \(S:=\operatorname{J}^{0}(X)\) which is an elliptic K3 surface with a distinguished section. In what follows, we sometimes write \(C\), \(C^{\prime}\) for bases of elliptic fibrations when they are not canonically isomorphic. **Lemma 4.1**.: _Let \(X\to C\) and \(X^{\prime}\to C^{\prime}\) be elliptic K3 surfaces with zeroth Jacobians \(S\to C\) and \(S^{\prime}\to C^{\prime}\), respectively. Then an isomorphism of elliptic surfaces \(\gamma:X\simeq X^{\prime}\) which twists the base by \(\overline{\beta}:C\to C^{\prime}\) (see Definition 2.9), induces a group isomorphism \(\operatorname{J}^{0}(\gamma):S\simeq S^{\prime}\) twisting the base by \(\overline{\beta}\)._ Proof.: When \(\overline{\beta}\) is the identity, this is a standard result which follows immediately from Remark 2.10. For the general case, see [1, SS3, (3.3)]. The Ogg-Shafarevich theory relates elements in the Brauer group \(\operatorname{Br}(S)\) of an elliptic K3 surface \(S\) with a section, to \(S\)-torsors. For our purposes, the following definition of a torsor is convenient. See [11], [10, Proposition 5.6] for the equivalence with the standard definition. **Definition 4.2**.: _Let \(f:S\to\mathbb{P}^{1}\) be an elliptic K3 surface with a section. An \(f\)-torsor is a pair \((g:X\to\mathbb{P}^{1},\theta)\) where \(g:X\to\mathbb{P}^{1}\) is an elliptic K3 surface and \(\theta:\operatorname{J}^{0}(X)\to S\) is an isomorphism over \(\mathbb{P}^{1}\) preserving the zero-sections, i.e. 
a group isomorphism over \(\mathbb{P}^{1}\)._ An isomorphism of \(f\)-torsors \((g:X\to\mathbb{P}^{1},\theta)\) and \((h:Y\to\mathbb{P}^{1},\eta)\) is an isomorphism \(\gamma:X\to Y\) over \(\mathbb{P}^{1}\) such that the induced square of zeroth Jacobians commutes, i.e. \(\eta\circ\operatorname{J}^{0}(\gamma)=\theta\). **Example 4.3**.: _If \(X\) is an elliptic K3 surface, then \(X\) has a natural structure \((X,\operatorname{id}_{\operatorname{J}^{0}(X)})\) of a torsor over \(\operatorname{J}^{0}(X)\). Since \(\operatorname{J}^{0}(\operatorname{J}^{k}(X))=\operatorname{J}^{0}(X)\) (this can be checked e.g. using Remark 2.10), all Jacobians \(\operatorname{J}^{k}(X)\) also have a natural \(\operatorname{J}^{0}(X)\)-torsor structure._ The set of isomorphism classes of \(f\)-torsors is in bijection with the Tate-Shafarevich group of \(f:S\to\mathbb{P}^{1}\) [10, 11.5.5(ii)], and we denote it \(\Sha(f:S\to\mathbb{P}^{1})\) or just \(\Sha(S)\) if no confusion can arise. If \(S\to\mathbb{P}^{1}\) is an elliptic K3 surface with a section, then there is an isomorphism \[\operatorname{Br}(S)\simeq\Sha(S), \tag{4.1}\] see [14, 10, Corollary 11.5.5]. We recall the construction of the Tate-Shafarevich group and of (4.1) in the proof of Lemma 4.5. For an \(S\)-torsor \((X,\theta)\) we write \(\alpha_{X}\in\operatorname{Br}(S)\) for the class corresponding to \([X]\in\Sha(S)\) under (4.1). It would be more precise to include \(\theta\) in the notation, but we do not do that, assuming that the torsor structure on \(X\) is fixed. We also write \(\alpha_{X}:T(S)\to\mathbb{Q}/\mathbb{Z}\) for the corresponding homomorphism with respect to (2.5). **Lemma 4.4**.: _Let \((X,\theta)\) be an \(S\)-torsor. Let \(t\) be the order of \(\alpha_{X}\in\operatorname{Br}(S)\)._ _(i) \(X\) has a section if and only if \(\alpha_{X}=0\), in which case \(X\) is isomorphic to \(S\) as an \(S\)-torsor._ _(ii) For all \(k\in\mathbb{Z}\) we have \(\alpha_{\mathrm{J}^{k}(X)}=k\cdot\alpha_{X}\)._ _(iii) The multisection index of \(X\) equals \(t\)._ _(iv) We have a Hodge isometry \(T(X)\simeq\operatorname{Ker}(\alpha_{X}:T(S)\to\mathbb{Z}/t\mathbb{Z})\)._ Proof.: (i) It follows by construction that all \(S\)-torsor structures on \(S\) are isomorphic, and correspond to \(0\in\operatorname{Br}(S)\) under (4.1). Thus, if \(\alpha_{X}=0\), then \(X\) is isomorphic as an \(S\)-torsor to \(S\), in particular \(X\) and \(S\) are isomorphic as elliptic surfaces, hence \(X\) has a section. Conversely, if \(X\) has a section, then we have \(S\simeq\mathrm{J}^{0}(X)\simeq X\) hence \(X\) is isomorphic as a torsor to some torsor structure on \(S\), so that \(\alpha_{X}=0\) by the argument above. Part (ii) is [12, Theorem 4.5.2] and part (iv) is [12, Theorem 5.4.3]. (iii) For a K3 surface \(X\) with a chosen elliptic fibration let us write \(\operatorname{ind}(X)\) for the multisection index of the fibration. Since \(\operatorname{J}^{\operatorname{ind}(X)}(X)\) admits a section, we have \(\operatorname{J}^{\operatorname{ind}(X)}(X)\simeq S\) as torsors by (i). It follows using (ii) that \(0=\alpha_{\mathrm{J}^{\operatorname{ind}(X)}(X)}=\operatorname{ind}(X)\alpha_{X}\) hence \(\operatorname{ord}(\alpha_{X})\) divides \(\operatorname{ind}(X)\). To prove their equality we use [13, Ch.
4, (4.5), (4.6)] to deduce that for all \(k\in\mathbb{Z}\) \[\operatorname{ind}(\mathrm{J}^{k}(X))=\frac{\operatorname{ind}(X)}{\gcd(\operatorname{ind}(X),k)}.\] In particular, \[1=\operatorname{ind}(\mathrm{J}^{\operatorname{ord}(\alpha_{X})}(X))=\frac{\operatorname{ind}(X)}{\gcd(\operatorname{ind}(X),\operatorname{ord}(\alpha_{X}))}=\frac{\operatorname{ind}(X)}{\operatorname{ord}(\alpha_{X})}\] so that \(\operatorname{ind}(X)=\operatorname{ord}(\alpha_{X})\), which proves part (iii). Let \(S\to\mathbb{P}^{1}\) be an elliptic K3 surface with a section. Recall that we denote by \(A_{\mathbb{P}^{1}}(S)\) (resp. \(A(S,F)\)) the group of group automorphisms of \(S\) over \(\mathbb{P}^{1}\) (resp. group automorphisms of \(S\) preserving the fibre class \(F\in\operatorname{NS}(S)\)). We have \(A_{\mathbb{P}^{1}}(S)\subset A(S,F)\), and we are interested in the orbits of these two groups acting on the Brauer group \(\operatorname{Br}(S)\). We do this more generally, by explaining functoriality of \(\Sha(S)\) and \(\operatorname{Br}(S)\) with respect to \(S\). Let \(f:S\to C\) and \(f^{\prime}:S^{\prime}\to C^{\prime}\) be elliptic K3 surfaces with fixed sections. Assume that there exists a group isomorphism \(\beta:S\simeq S^{\prime}\) twisting the base by \(\overline{\beta}:C\simeq C^{\prime}\). We define a map \(\beta_{*}:\Sha(f:S\to C)\to\Sha(f^{\prime}:S^{\prime}\to C^{\prime})\) as follows: \[\beta_{*}(g:X\to C,\theta)=(\overline{\beta}\circ g:X\to C^{\prime},\beta\circ\theta).\] Note that the element on the right-hand side belongs to \(\Sha(f^{\prime})\) by Lemma 4.1. Furthermore, in the same setting, we define \(\beta_{*}:\operatorname{Hom}(T(S),\mathbb{Q}/\mathbb{Z})\to\operatorname{Hom}(T(S^{\prime}),\mathbb{Q}/\mathbb{Z})\) by \(\beta_{*}(\alpha)=\alpha\circ\beta^{*}\), where \(\beta^{*}:T(S^{\prime})\to T(S)\) is the Hodge isometry induced by \(\beta\). It is important for applications that these two pushforwards are compatible with (4.1): **Lemma 4.5**.: _Let \(f:S\to C\) and \(f^{\prime}:S^{\prime}\to C^{\prime}\) be elliptic K3 surfaces with fixed sections, and let \(\beta:S\simeq S^{\prime}\) be a group isomorphism twisting the base by \(\overline{\beta}\). Then there is a commutative square of isomorphisms_ \[\begin{array}{ccc}\Sha(f:S\to C)&\xrightarrow{\ \beta_{*}\ }&\Sha(f^{\prime}:S^{\prime}\to C^{\prime})\\ \big\downarrow&&\big\downarrow\\ \operatorname{Hom}(T(S),\mathbb{Q}/\mathbb{Z})&\xrightarrow{\ \beta_{*}\ }&\operatorname{Hom}(T(S^{\prime}),\mathbb{Q}/\mathbb{Z})\end{array} \tag{4.2}\] Proof.: The vertical arrows in (4.2) are the compositions of the vertical maps in the following diagram, with cohomology groups in etale and analytic topology respectively: (4.3) cf. [10, Corollary 11.5.6]. Here \(\mathcal{X}_{0}\) and \(\mathcal{X}^{\prime}_{0}\) are the sheaves of etale local sections of \(f\) and \(f^{\prime}\), respectively. The horizontal arrows (1), (2), (3) are induced by \(\overline{\beta}_{*}\mathcal{X}_{0}\simeq\mathcal{X}^{\prime}_{0}\) and \(\beta_{*}\mathbb{G}_{m}\simeq\mathbb{G}_{m}.\) Arrows (4) are induced by the exponential sequence. One can check commutativity for each square in (4.3), and this gives the desired result. **Proposition 4.6**.: _Let \(f:S\to C\), \(f^{\prime}:S^{\prime}\to C^{\prime}\) be elliptic K3 surfaces with sections. Let \((g:X\to C,\theta)\), \((g^{\prime}:X^{\prime}\to C^{\prime},\theta^{\prime})\) be torsors for \(f\) and \(f^{\prime}\) respectively.
Then there is a group isomorphism \(\beta:S\simeq S^{\prime}\), twisting the base by \(\overline{\beta}:C\simeq C^{\prime}\) and such that \(\beta_{*}(g,\theta)\simeq(g^{\prime},\theta^{\prime})\) if and only if there is an elliptic surface isomorphism \(X\simeq X^{\prime}\) twisting the base by \(\overline{\beta}.\)_ Proof.: Suppose there is a group isomorphism \(\beta:S\simeq S^{\prime}\) twisting the base by \(\overline{\beta}\) and such that \(\beta_{*}(g,\theta)=(g^{\prime},\theta^{\prime})\). Then it follows from the definition of \(\beta_{*}\) that there is an elliptic surface isomorphism \(X\simeq X^{\prime}\) twisting the base by \(\overline{\beta}.\) Conversely, suppose there is an elliptic surface isomorphism \(\gamma:X\simeq X^{\prime}\) twisting the base by \(\overline{\beta}\). Consider the isomorphism \(\beta\coloneqq\theta^{\prime}\circ\mathrm{J}^{0}(\gamma)\circ\theta^{-1}:S \to S^{\prime}\). We can compute \(\beta_{*}(g,\theta)\) via the composition of isomorphisms \[\Sha(S)\stackrel{{\theta_{*}^{-1}}}{{\to}}\Sha(\mathrm{J}^{0}(X)) \xrightarrow{\mathrm{J}^{0}(\gamma)_{*}}\Sha(\mathrm{J}^{0}(X^{\prime})) \stackrel{{\theta^{\prime}_{*}}}{{\to}}\Sha(S^{\prime})\] to see that \(\beta_{*}(g,\theta)=(g^{\prime},\theta^{\prime})\). **Remark 4.7**.: _The proof of Proposition 4.6 in fact shows that given \((g,\theta)\), \((g^{\prime},\theta^{\prime})\) as in the statement, the set of isomorphisms between elliptic fibrations \(g\) and \(g^{\prime}\) twisting the base by \(\overline{\beta}\) (and ignoring the choice of \(\theta\), \(\theta^{\prime}\)) is in natural bijection with the set of group isomorphisms \(\beta\) between \(S\) and \(S^{\prime}\) twisting the base by \(\overline{\beta}\) together with a chosen isomorphism \(\gamma\) between \(\beta_{*}(g,\theta)\) and \((g^{\prime},\theta^{\prime})\)._ It will be more convenient for us to work with the Brauer group instead of the Tate-Shafarevich group: **Proposition 4.8**.: _Using the same notation as in Proposition 4.6, there is a group isomorphism \(\beta:S\simeq S^{\prime}\), twisting the base by \(\overline{\beta}:C\simeq C^{\prime}\) and such that \(\beta_{*}\alpha_{X}=\alpha_{X^{\prime}}\) if and only if there is an elliptic surface isomorphism \(X\simeq X^{\prime}\) twisting the base by \(\overline{\beta}.\)_ Proof.: This follows immediately from Proposition 4.6 and Lemma 4.5. **Corollary 4.9**.: _Let \(g:X\to C\), \(g^{\prime}:X^{\prime}\to C^{\prime}\) be elliptic K3 surfaces which are isomorphic via an isomorphism which twists the base by \(\overline{\beta}:C\to C^{\prime}\). Then for all \(k\in\mathbb{Z}\), there exists an elliptic surface isomorphism \(\mathrm{J}^{k}(X)\simeq\mathrm{J}^{k}(X^{\prime})\) twisting the base by \(\overline{\beta}.\)_ Proof.: Let \(S\to C\) and \(S^{\prime}\to C^{\prime}\) be the zeroth Jacobians of \(X\to C\) and \(X^{\prime}\to C^{\prime}\), respectively. By Proposition 4.8, there is a group isomorphism \(\beta:S\to S^{\prime}\) such that \(\beta_{*}\alpha_{X}=\alpha_{X^{\prime}}\). This means that \(\beta_{*}(k\cdot\alpha_{X})=k\cdot\beta_{*}\alpha_{X}=k\cdot\alpha_{X^{\prime}}\) for all \(k\in\mathbb{Z}\). Since the Brauer classes of \(\mathrm{J}^{k}(X)\to C\) and \(\mathrm{J}^{k}(X^{\prime})\to C^{\prime}\) are \(k\cdot\alpha_{X}\) and \(k\cdot\alpha_{X^{\prime}}\), the result follows from Proposition 4.8. **Corollary 4.10**.: _Let \(S\to C\) be an elliptic K3 surface with a section. The set of \(A(S,F)\)-orbits (resp. 
\(A_{C}(S)\)-orbits) of \(\mathrm{Br}(S)\) parametrizes \(S\)-torsors up to isomorphism as elliptic surfaces (resp. up to isomorphism over \(C\))._ Proof.: We put \(S=S^{\prime}\) in Proposition 4.8, consider \(S\)-torsors \((X,\theta)\) and \((X^{\prime},\theta^{\prime})\) and write \(\alpha_{X},\alpha_{X^{\prime}}\in\mathrm{Br}(S)\) for the corresponding Brauer classes. By Proposition 4.8 there is an isomorphism between elliptic surfaces \(X\), \(X^{\prime}\) twisting the base (resp. over the base) if and only if there exists \(\beta\in A(S,F)\) (resp. \(\beta\in A_{C}(S)\)) such that \(\beta_{*}(\alpha_{X})=\alpha_{X^{\prime}}\). Thus the resulting sets of orbits are as stated in the Corollary. **Example 4.11**.: _The automorphism \(\beta=-1\in A_{C}(S)\) acts on \(\mathrm{Br}(S)\) as multiplication by \(-1\). This way we always have (at least) two torsor structures on every elliptic K3 surface \(X\). If \(X\) has no sections, these two torsor structures are isomorphic if and only if \(\alpha_{X}\in\mathrm{Br}(\mathrm{J}^{0}(X))\) has order two, which by Lemma 4.4 is equivalent to \(X\) having multisection index two._ We write \(\mathrm{EllK3}\) for the set of isomorphism classes of elliptic K3 surfaces (isomorphisms are allowed to twist the base). We can express Ogg-Shafarevich theory as a natural bijection between \(\mathrm{EllK3}\) and the set of isomorphism classes of twisted Jacobian K3 surfaces. **Definition 4.12**.: _A twisted Jacobian K3 surface is a triple \((S,f,\alpha)\) where \(S\) is a K3 surface with elliptic fibration \(f\) together with a fixed section, and \(\alpha\) is a Brauer class on \(S\)._ An isomorphism of two twisted Jacobian K3 surfaces \((S,f:S\to C,\alpha)\) and \((S^{\prime},f^{\prime}:S^{\prime}\to C^{\prime},\alpha^{\prime})\) is a group isomorphism \(\beta:S\simeq S^{\prime}\) such that \(\beta_{*}\alpha=\alpha^{\prime}.\) We write \(\mathrm{BrK3}\) for the set of isomorphism classes of twisted Jacobian K3 surfaces. The above results show the following. **Theorem 4.13**.: _The map \(\mathrm{EllK3}\to\mathrm{BrK3}\) given by \((X,g)\mapsto(\mathrm{J}^{0}(X),\mathrm{J}^{0}(g),\alpha_{X})\) is a bijection._ Proof.: From Proposition 4.8, it follows that the map \(\mathrm{EllK3}\to\mathrm{BrK3}\) is well-defined and injective. For surjectivity, let \((S,f,\alpha)\in\mathrm{BrK3}\). Using the isomorphism (4.1), we obtain an \(S\)-torsor \((g:X\to\mathbb{P}^{1},\theta:\mathrm{J}^{0}(X)\simeq S)\in\Sha(f)\) corresponding to \(\alpha\). In particular, the map \(\mathrm{EllK3}\to\mathrm{BrK3}\) assigns \((X,g)\mapsto(\mathrm{J}^{0}(X),\mathrm{J}^{0}(g),\theta_{*}^{-1}\alpha)\simeq(S,f,\alpha)\). ### Isomorphisms of Jacobians We work with an elliptic K3 surface \(X\); recall from Example 4.3 that \(X\) and all its Jacobians \(\mathrm{J}^{k}(X)\) have a natural structure of a torsor over \(\mathrm{J}^{0}(X)\). **Lemma 4.14**.: _([6, Theorem 4.5.2]) Let \(X\) be an elliptic K3 surface, and let \(k,\ell\in\mathbb{Z}\). Then we have \(\mathrm{J}^{k}(\mathrm{J}^{\ell}(X))\simeq\mathrm{J}^{k\ell}(X)\) as torsors over \(\mathrm{J}^{0}(X)\)._ Proof.: By Lemma 4.4, in the Tate-Shafarevich group of \(\mathrm{J}^{0}(X)\), we have \([\mathrm{J}^{k}(\mathrm{J}^{\ell}(X))]=k\cdot[\mathrm{J}^{\ell}(X)]=k\ell\cdot[X]=[\mathrm{J}^{k\ell}(X)].\) In particular, we have \(\mathrm{J}^{k}(\mathrm{J}^{\ell}(X))\simeq\mathrm{J}^{k\ell}(X)\) as torsors over \(\mathrm{J}^{0}(X)\). Let \(t\) be the multisection index of \(X\).
We are especially interested in those Jacobians for which \(\gcd(k,t)=1\). We call these _coprime Jacobians_ of \(X\). By Theorem 5.1 below, every coprime Jacobian is a Fourier-Mukai partner of \(X\). For all \(k\in\mathbb{Z}\), we have well-known isomorphisms over \(\mathbb{P}^{1}\): \[\mathrm{J}^{k+t}(X)\simeq\mathrm{J}^{k}(X),\quad\mathrm{J}^{-k}(X)\simeq \mathrm{J}^{k}(X).\] Here the first isomorphism follows by adding the multisection on the generic fibre, and then spreading out as in Remark 2.10, and the second isomorphism can be obtained, by the same token, from the dualization of line bundles, or alternatively deduced from Proposition 4.8 with \(\beta\) acting by \(-1\) on the fibres (see Example 4.11). We see that there are at most \(\phi(t)/2\) isomorphism classes of coprime Jacobians of \(X\). The goal of the next result is to be able to compute this number precisely, see (4.6) for what this count will look like. **Proposition 4.15**.: _Let \(X\to\mathbb{P}^{1}\) be an elliptic K3 surface of multisection index \(t>2\). Then \(\mathrm{J}^{k}(X)\simeq\mathrm{J}^{\ell}(X)\) as \(\mathrm{J}^{0}(X)\)-torsors if and only if \(k\equiv\ell\pmod{t}\). Furthermore there exist subgroups \(B_{X}\subset\widetilde{B}_{X}\subset(\mathbb{Z}/t\mathbb{Z})^{*}\), such that for \(k,\ell\in(\mathbb{Z}/t\mathbb{Z})^{*}\) we have_ \[\mathrm{J}^{k}(X)\simeq\mathrm{J}^{\ell}(X)\text{ over }\mathbb{P}^{1}\iff k \ell^{-1}\in B_{X},\] _and_ \[\mathrm{J}^{k}(X)\simeq\mathrm{J}^{\ell}(X)\text{ as elliptic surfaces }\iff k\ell^{-1}\in\widetilde{B}_{X}.\] _Furthermore, \(B_{X}\) is a cyclic group of order \(2\), \(4\) or \(6\), containing \(\{\pm 1\}\) and the case \(B_{X}\simeq\mathbb{Z}/4\mathbb{Z}\) (resp. the case \(B_{X}\simeq\mathbb{Z}/6\mathbb{Z}\)) can occur only if \(X\) is an isotrivial elliptic fibration with \(j\)-invariant \(j=1728\) (resp. \(j=0\))._ _Finally, if \(X\) is \(T\)-general, then \(B_{X}=\widetilde{B}_{X}=\{\pm 1\}\), that is in this case \(\mathrm{J}^{k}(X)\) and \(\mathrm{J}^{\ell}(X)\) are isomorphic over \(\mathbb{P}^{1}\) if and only if they are isomorphic as elliptic surfaces if and only if \(k\equiv\pm\ell\pmod{t}\)._ In the statement we excluded the trivial cases \(t=1,2\) because such elliptic K3 surfaces do not admit non-trivial coprime Jacobians or Fourier-Mukai partners. Before we give the proof of the proposition, we need to set up some notation. Let \(S\) be an elliptic K3 with a section. For any subgroup \(H\subset A(S,F)\) and any class \(\alpha\in\operatorname{Br}(S)\) let \(H^{\alpha}\) be the subgroup of \(H\) consisting of elements \(\beta\in H\) with the property \(\beta_{*}(\langle\alpha\rangle)\subset\langle\alpha\rangle\). Considering the action of \(H^{\alpha}\) on \(\langle\alpha\rangle=\mathbb{Z}/t\mathbb{Z}\) we get a natural homomorphism \(H^{\alpha}\to(\mathbb{Z}/t\mathbb{Z})^{*}\) and we define \[\overline{H}^{\alpha}\coloneqq\operatorname{Im}(H^{\alpha}\to(\mathbb{Z}/t \mathbb{Z})^{*}).\] Proof of Proposition 4.15.: Write \(S=\operatorname{J}^{0}(X)\). We consider the following subgroups of \((\mathbb{Z}/t\mathbb{Z})^{*}\): \[B_{X}\coloneqq\overline{A_{\mathbb{P}^{1}}(S)}^{\alpha_{X}} \tag{4.4}\] \[\widetilde{B}_{X}\coloneqq\overline{A(S,F)}^{\alpha_{X}}. \tag{4.5}\] We have \(B_{X}\subset\widetilde{B}_{X}\), and \(-1\in A_{\mathbb{P}^{1}}(S)\) induces \(-1\in(\mathbb{Z}/t\mathbb{Z})^{*}\), in particular \(\{\pm 1\}\subset B_{X}\). Note that we are assuming \(t>2\), hence \(-1\not\equiv 1\pmod{t}\). 
By Corollary 4.9 and Lemma 4.14, \(\operatorname{J}^{k}(X)\) and \(\operatorname{J}^{\ell}(X)\) are isomorphic over \(\mathbb{P}^{1}\) if and only if \(\operatorname{J}^{k\ell^{-1}}(X)\) and \(X\) are isomorphic over \(\mathbb{P}^{1}\). By Corollary 4.10, this occurs if and only if \(k\ell^{-1}\in B_{X}\). By the same argument, \(\operatorname{J}^{k}(X)\) and \(\operatorname{J}^{\ell}(X)\) are isomorphic as elliptic surfaces if and only if \(\operatorname{J}^{k\ell^{-1}}(X)\) and \(X\) are isomorphic as elliptic surfaces if and only if \(k\ell^{-1}\in\widetilde{B}_{X}\). The group \(B_{X}\) is a quotient of a subgroup of \(A(S,F)\). The latter group, by Remark 2.10, is isomorphic to the group of group automorphisms of the generic fibre of \(S\). Thus, \(A(S,F)\) (and hence \(B_{X}\)) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\), unless the \(j\)-invariant equals \(1728\) or \(0\) in which case \(A(S,F)\) (and hence \(B_{X}\)) can be \(\mathbb{Z}/4\mathbb{Z}\) or \(\mathbb{Z}/6\mathbb{Z}\) respectively. It remains to prove that \(B_{X}=\widetilde{B}_{X}=\{\pm 1\}\) if \(X\) is \(T\)-general. By Proposition 4.8, an isomorphism \(X\simeq\operatorname{J}^{k}(X)\) as elliptic surfaces would induce a group automorphism \(\beta\) of \(S=\operatorname{J}^{0}(X)\) satisfying \(\beta_{*}\alpha_{X}=k\cdot\alpha_{X}.\) This means that \(T(S)\) admits a Hodge isometry \(\sigma\), which maps \(T(X)=\operatorname{Ker}(\alpha_{X})\) to itself. By \(T\)-generality, we get \(\sigma=\pm\operatorname{id}\) so that \(\beta_{*}=\pm 1\) and hence \(k\equiv\pm 1\pmod{t}\). **Corollary 4.16**.: _If \(A(\operatorname{J}^{0}(X),F)=A_{\mathbb{P}^{1}}(\operatorname{J}^{0}(X))\) then isomorphism classes of coprime Jacobians over \(\mathbb{P}^{1}\) are the same as isomorphism classes of coprime Jacobians as elliptic surfaces._ Proof.: This follows from Proposition 4.15 as in this case \(B_{X}=\widetilde{B}_{X}\) by construction. Corollary 4.16 applies when singular fibres of \(X\to\mathbb{P}^{1}\) lie over a non-symmetric set of points \(Z\subset\mathbb{P}^{1}\), that is when \(\overline{\beta}\in\operatorname{Aut}(\mathbb{P}^{1})\) satisfies \(\overline{\beta}(Z)=Z\) only for \(\overline{\beta}=\operatorname{id}\). On the other hand, if \(Z\) is symmetric, and this symmetry can be lifted to an automorphism of \(\operatorname{J}^{0}(X)\), we typically have \(B_{X}\subsetneq\widetilde{B}_{X}\). For an explicit such surface, see Example 4.19. ### \(j\)-special isotrivial elliptic K3 surfaces By a \(j\)-special isotrivial elliptic K3 surface we mean an elliptic K3 surface with smooth fibres all having \(j\)-invariant \(0\) or \(1728\). **Remark 4.17**.: _There exist Picard rank 2 isotrivial K3 surfaces with \(j=0\) (see Example 3.13), however for \(j=1728\) the minimal rank is \(10\) for the following reason. Let \(X\) be an isotrivial elliptic K3 surface with \(j=1728\). The zeroth Jacobian \(S\) of \(X\) will have a Weierstrass equation \(y^{2}=x^{3}+F(t)x\) with \(F(t)\) a degree 8 polynomial in \(t\). We have \(\rho(S)=\rho(X)\). By semicontinuity of the Picard rank we may assume that \(F(t)\) has distinct roots. In this case \(S\) has eight singular fibres, and the Weierstrass equation has ordinary double points at the singularities of the fibre, so \(S\) is the result of blowing up the Weierstrass model at these \(8\) points. Thus, in addition to the fibre class and the section class, \(S\) has \(8\) reducible fibres, so \(\rho(S)\geq 10\).
Isotrivial K3 surfaces with \(j\neq 0,1728\) are all Kummer and hence have Picard rank at least \(17\) [Saw14, Corollary 2]._ We do not claim a direct relationship between \(j\)-special and \(T\)-special, however both of these concepts require extra automorphisms. Let \(X\to\mathbb{P}^{1}\) be an elliptic K3 surface of multisection index \(t>2\), and let \(S\to\mathbb{P}^{1}\) be its zeroth Jacobian. Let \(H=A_{\mathbb{P}^{1}}(S)\); this group is \(\mathbb{Z}/2\mathbb{Z}\) unless \(S\) is \(j\)-special, in which case it can be equal to \(\mathbb{Z}/4\mathbb{Z}\) (resp. \(\mathbb{Z}/6\mathbb{Z}\)) when \(j=1728\) (resp. \(j=0\)). By Proposition 4.15 the number of coprime Jacobians of \(X\) up to isomorphism over \(\mathbb{P}^{1}\) equals \(\phi(t)/|B_{X}|\), which is \[\begin{cases}\phi(t)/2&\text{if $X\to\mathbb{P}^{1}$ is not isotrivial with $j=0$ or $j=1728$;}\\ \phi(t)/4&\text{for some isotrivial $X$ with $j=1728$, and $H=\mathbb{Z}/4\mathbb{Z}$;}\\ \phi(t)/6&\text{for some isotrivial $X$ with $j=0$ and $H=\mathbb{Z}/6\mathbb{Z}$.}\end{cases} \tag{4.6}\] We now show that the last two cases are indeed possible. For simplicity we assume that \(t=p\), an odd prime. **Proposition 4.18**.: _Let \(S\to\mathbb{P}^{1}\) be an elliptic K3 surface with a section. Assume \(S\) is isotrivial with \(j=1728\) (resp. \(j=0\)) and \(H=\mathbb{Z}/4\mathbb{Z}\) (resp. \(H=\mathbb{Z}/6\mathbb{Z}\)). Let \(p>2\) be a prime. Then \(S\) admits a torsor \(X\to\mathbb{P}^{1}\) of multisection index \(p\) with exactly \(\frac{\phi(p)}{4}\) (resp. \(\frac{\phi(p)}{6}\)) coprime Jacobians up to isomorphism over \(\mathbb{P}^{1}\) if and only if \(p\equiv 1\pmod{4}\) (resp. \(p\equiv 1\pmod{3}\))._ Proof.: Existence of such a torsor \(X\) implies the required numerical condition on \(p\) since \(4\) (resp. \(6\)) divides \(\phi(p)=p-1\). Conversely, assume that \(p\) satisfies the numerical condition. For every non-trivial element \(\beta\in H\), the fixed subspace \((T(S)\otimes\mathbb{C})^{\langle\beta\rangle}\) is zero; this is because \(S/\langle\beta\rangle\) admits a birational \(\mathbb{P}^{1}\)-fibration over \(\mathbb{P}^{1}\), hence must be a rational surface. Thus \(T(S)\otimes\mathbb{C}\), considered as a representation of the cyclic group \(H\), is a direct sum of one-dimensional representations corresponding to primitive roots of unity of order \(|H|\). This allows us to describe \(T(S)\otimes\mathbb{Q}\) as an \(H\)-representation, because irreducible \(\mathbb{Q}\)-representations of \(H\) are direct sums of Galois conjugate one-dimensional representations. Thus in both cases \(T(S)\otimes\mathbb{Q}\simeq V^{\oplus\left(\frac{22-\rho}{2}\right)}\), where \(\rho=\rho(S)\) and \(V\) is the \(2\)-dimensional representation \(\mathbb{Q}[i]=\mathbb{Q}[x]/(x^{2}+1)\) and \(\mathbb{Q}[\omega]=\mathbb{Q}[x]/(x^{2}+x+1)\) respectively. At this point it follows that under our assumptions the Picard number \(\rho=\rho(X)\) is even. On the other hand, decomposition of the \(H\)-representation \(T(S)\otimes\mathbb{Q}\) is induced from decomposition of \(T(S)\otimes\mathbb{Z}[1/|H|]\), hence since \(|H|\) is coprime to \(p\), it induces a decomposition \(T(S)\otimes\mathbb{F}_{p}\simeq V_{p}^{\oplus\left(\frac{22-\rho}{2}\right)}\) with \(V_{p}\) defined by \(\mathbb{F}_{p}[x]/(x^{2}+1)\) and \(\mathbb{F}_{p}[x]/(x^{2}+x+1)\) respectively. Under the numerical condition on \(p\), the corresponding polynomial has roots and the representation \(V_{p}\) is a direct sum of two one-dimensional representations \(V_{p}=\chi\oplus\chi^{\prime}\).
It follows that the dual representation \(\operatorname{Br}(S)_{p-tors}\) (2.6) splits into \(1\)-dimensional representations \(\chi\), \(\chi^{\prime}\) as well. Take a generator \(\alpha\in\operatorname{Br}(S)_{p-tors}\) for one of these representations, and let \(X\) be the corresponding torsor. The explicit description (4.4) shows that \(B_{X}=H\). For explicit examples of surfaces satisfying conditions of Proposition 4.18, see Example 3.13 and Remark 4.17. Finally we illustrate the difference between isomorphism over \(\mathbb{P}^{1}\) and isomorphism as elliptic surfaces. **Example 4.19**.: _Consider the \(j=0\) isotrivial elliptic K3 surface \(S\to\mathbb{P}^{1}\) of Example 3.13, and let \(\beta\in A(S,F)\) be an automorphism of order \(11\). Note that \(\beta\not\in A_{\mathbb{P}^{1}}(S)\) so we may have \(B_{X}\subsetneq\widetilde{B}_{X}\) in Proposition 4.15. By Lemma 3.12, \(\beta\) acts nontrivially on \(T(S)\). As in the proof of Proposition 4.18, we deduce that for every prime \(p\equiv 1\pmod{11}\), the number of coprime Jacobians up to isomorphism as elliptic surfaces for an eigenvector torsor is smaller by a factor of \(11\) than when they are considered up to isomorphism over \(\mathbb{P}^{1}\)._ ## 5. Derived equivalent K3s and Jacobians The following well-known result goes back to Mukai, see also [10, Remark 5.4.6]. We provide the proof for completeness as it follows easily from what we have explained so far. **Theorem 5.1**.: _Let \(S\to\mathbb{P}^{1}\) be an elliptic K3 surface with a section, and let \(X\to\mathbb{P}^{1}\) be a torsor over \(S\to\mathbb{P}^{1}\). Let \(t\in\mathbb{Z}\) be the multisection index of \(X\to\mathbb{P}^{1}\). Then \(\operatorname{J}^{k}(X)\) is a Fourier-Mukai partner of \(X\) if and only if \(\gcd(k,t)=1\)._ Proof.: Let \(\alpha_{X}\in\operatorname{Br}(S)\) be the Brauer class of \(X\to\mathbb{P}^{1}\). From Lemma 4.4 it is easy to deduce that \[\det(T(X))=t^{2}\cdot\det(T(S)) \tag{5.1}\] (cf. [14, Remark 3.1]). Recall that \(t=\operatorname{ord}(\alpha_{X})\) by Lemma 4.4. We know \(T(\operatorname{J}^{k}(X))\) is Hodge isometric to the kernel of \(k\cdot\alpha_{X}:T(S)\to\mathbb{Z}/t\mathbb{Z}\), again by Lemma 4.4. If \(\gcd(k,t)=1\), then \(\alpha_{X}\) and \(k\alpha_{X}\) have the same kernel so that \[T(\operatorname{J}^{k}(X))\simeq\ker(k\cdot\alpha_{X})=\ker(\alpha_{X})\simeq T(X),\] so \(\operatorname{J}^{k}(X)\) is a Fourier-Mukai partner of \(X\) by the Derived Torelli Theorem. Let us prove the converse implication. From (5.1), we get that for any \(k\in\mathbb{Z}\), we have \[\frac{\det(T(X))}{\det(T(\mathrm{J}^{k}(X)))}=\left(\frac{\operatorname{ord}(\alpha_{X})}{\operatorname{ord}(k\alpha_{X})}\right)^{2}=\gcd(k,\operatorname{ord}(\alpha_{X}))^{2}.\] Thus if \(X\) and \(\mathrm{J}^{k}(X)\) are derived equivalent, then the left-hand side equals one by the Derived Torelli Theorem, hence \(k\) is coprime to \(t=\operatorname{ord}(\alpha_{X})\). In this section we will address Question 1.1, i.e. whether all Fourier-Mukai partners are of the form described in Theorem 5.1. ### Derived elliptic structures In this section, we set up the theory of derived elliptic structures and Hodge elliptic structures. **Definition 5.2**.: _Let \(X\) be a K3 surface. A derived elliptic structure on \(X\) is a pair \((Y,\phi)\), where \(Y\) is a K3 surface derived equivalent to \(X\) and \(\phi:Y\to\mathbb{P}^{1}\) is an elliptic fibration._ We say that two derived elliptic structures are isomorphic if they are isomorphic as elliptic surfaces.
We denote by \(\mathrm{DE}(X)\) (resp. \(\mathrm{DE}_{t}(X)\)) the set of isomorphism classes of derived elliptic structures on \(X\) (resp. derived elliptic structures on \(X\) of multisection index \(t\)). **Lemma 5.3**.: _Let \(X\) be a K3 surface. Then we have:_ _(i)_ \(\mathrm{DE}(X)\) _is a finite set;_ _(ii)_ \(\mathrm{DE}(X)\) _is nonempty if and only if_ \(X\) _is elliptic;_ _(iii)_ \(\mathrm{DE}_{t}(X)\) _can be nonempty only for_ \(t\) _such that_ \(t^{2}\) _divides the order of the discriminant group_ \(A_{T(X)}\)_;_ _(iv) If_ \(X\) _is elliptic with_ \(\rho(X)=2\) _and multisection index_ \(t\)_, then every elliptic structure on every Fourier-Mukai partner of_ \(X\) _also has multisection index_ \(t\)_, that is_ \(\mathrm{DE}(X)=\mathrm{DE}_{t}(X)\)_._ Proof.: (i) The set of isomorphism classes of Fourier-Mukai partners of \(X\) is finite [1, Proposition 5.3], [10], and each of them has only finitely many elliptic structures up to isomorphism [10]. It follows that \(\mathrm{DE}(X)\) is a finite set. (ii) If \(X\) is elliptic, then \(X\) with its elliptic structure is an element of \(\mathrm{DE}(X)\), hence it is nonempty. Conversely, if \(\mathrm{DE}(X)\) is nonempty, then \(X\) admits a Fourier-Mukai partner \(Y\) which is an elliptic K3 surface. Then by the Derived Torelli Theorem \(\mathrm{NS}(X)\) and \(\mathrm{NS}(Y)\) are in the same genus, and since \(Y\) is elliptic, the intersection form on \(\mathrm{NS}(Y)\) represents zero, hence a standard lattice theoretic argument shows that \(\mathrm{NS}(X)\) also represents zero, and \(X\) is elliptic. (iii) If \((Y,\phi)\) is a derived elliptic structure on \(X\) of multisection index \(t\), then we have \[|A_{T(X)}|=|A_{T(Y)}|=t^{2}\cdot|A_{T(\mathrm{J}^{0}(Y))}|\] where the first equality follows from the Derived Torelli Theorem and the second one can be deduced from (5.1) (cf. [1, Remark 3.1]). In particular, \(\mathrm{DE}_{t}(X)\) is empty whenever \(t^{2}\) does not divide the order of \(A_{T(X)}\). (iv) Every Fourier-Mukai partner \(Y\) of \(X\) also has Picard number \(\rho(Y)=2\). By Proposition 3.1, the multisection index of every elliptic fibration on \(Y\) equals the square root of \(|A_{T(Y)}|=|A_{T(X)}|=t^{2}\). We can take coprime Jacobians of a derived elliptic structure \((Y,\phi)\), which we denote by \(\mathrm{J}^{k}(Y,\phi)\). By Lemma 4.14 and Theorem 5.1 this defines a group action of \((\mathbb{Z}/t\mathbb{Z})^{*}\) on \(\mathrm{DE}_{t}(X)\). The set of \((\mathbb{Z}/t\mathbb{Z})^{*}\)-orbits on \(\mathrm{DE}_{t}(X)\) parametrizes derived elliptic structures up to taking coprime Jacobians, and it is sometimes a more natural set to work with. We now explain Hodge-theoretic analogues of derived elliptic structures. The following definition is motivated by the Derived Torelli Theorem. **Definition 5.4**.: _Let \(X\) be a K3 surface. A Hodge elliptic structure on \(X\) is a twisted Jacobian K3 surface \((S,f,\alpha)\) (see Definition 4.12) such that \(\mathrm{Ker}(\alpha)\simeq T(X)\)._ The index of a Hodge elliptic structure is defined to be the order of its Brauer class \(\alpha\). An isomorphism of Hodge elliptic structures \((S,f,\alpha)\), \((S^{\prime},f^{\prime},\alpha^{\prime})\) is an isomorphism \(\gamma:S\to S^{\prime}\) of elliptic surfaces such that \(\gamma_{*}(\alpha)=\alpha^{\prime}\). We denote by \(\mathrm{HE}(X)\) the set of isomorphism classes of Hodge elliptic structures on \(X\). We write \(\mathrm{HE}_{t}(X)\) for the set of isomorphism classes of Hodge elliptic structures of index \(t\).
The operation \(k*(S,f,\alpha)=(S,f,k\alpha)\) defines a group action of \((\mathbb{Z}/t\mathbb{Z})^{*}\) on \(\mathrm{HE}_{t}(X)\). **Example 5.5**.: _Let \(X\) be an elliptic K3 surface of Picard rank \(2\) and multisection index \(t\). Let \((S,f,\alpha)\) be a Hodge elliptic structure on \(X\). Since the discriminant of \(X\) equals \(t^{2}\), from the sequence_ \[0\to T(X)\to T(S)\to\mathbb{Z}/t\mathbb{Z}\to 0,\] _we deduce that \(T(S)\), and hence \(\operatorname{NS}(S)\), is unimodular. Thus \(S\) is an elliptic K3 surface of Picard rank two, and it has a unique elliptic fibration, which has a unique section (see Lemma 3.2). We see that in the Picard rank two case \(f\) can be excluded from the data of a Hodge elliptic structure and we have a bijection_ \[\operatorname{HE}_{t}(X)=\{(S,\alpha)\}/\simeq, \tag{5.2}\] _with isomorphisms understood as isomorphisms between K3 surfaces respecting the Brauer classes._ **Proposition 5.6**.: _Let \(X\) be a K3 surface and let \(t\) be a positive integer. Then the bijection \(\operatorname{EllK3}\simeq\operatorname{BrK3}\) of Theorem 4.13 induces a \((\mathbb{Z}/t\mathbb{Z})^{*}\)-equivariant bijection \(\operatorname{DE}_{t}(X)\simeq\operatorname{HE}_{t}(X)\)._ Proof.: First of all note that by definition \(\operatorname{DE}_{t}(X)\) is a subset of \(\operatorname{EllK3}\) consisting of isomorphism classes \((Y,\phi)\) with \(Y\) derived equivalent to \(X\) and \(\phi\) having multisection index \(t\). Similarly, \(\operatorname{HE}_{t}(X)\) is a subset of \(\operatorname{BrK3}\) consisting of \((S,f,\alpha)\) such that \(\operatorname{ord}(\alpha)=t\) and \(\operatorname{Ker}(\alpha)\simeq T(X)\). If \((Y,\phi)\in\operatorname{EllK3}\), then by Lemma 4.4, \((Y,\phi)\) belongs to \(\operatorname{DE}_{t}(X)\) if and only if the corresponding triple \((\operatorname{J}^{0}(Y),\operatorname{J}^{0}(\phi),\alpha_{Y})\in\operatorname{BrK3}\) belongs to \(\operatorname{HE}_{t}(X)\). The \((\mathbb{Z}/t\mathbb{Z})^{*}\)-equivariance of the map is a direct consequence of the fact that \(k\alpha_{Y}=\alpha_{\operatorname{J}^{k}(Y)}\), which holds again by Lemma 4.4. **Definition 5.7**.: _Let \(T\) be a lattice. For \(t\in\mathbb{Z}\), we write \(\operatorname{I}_{t}(A_{T})\) for the set of cyclic, isotropic subgroups of order \(t\) in \(A_{T},\) and we write \(\widetilde{\operatorname{I}}_{t}(A_{T})\) for the set of isotropic vectors of order \(t\) in \(A_{T}\)._ For a K3 surface \(X\), there is a natural action of \(G_{X}\) on \(\operatorname{I}_{t}(A_{T(X)})\) and \(\widetilde{\operatorname{I}}_{t}(A_{T(X)})\). Let \((S,f,\alpha)\) be a Hodge elliptic structure on \(X\) of index \(t\). There is a unique isomorphism \(r_{\alpha}:\mathbb{Z}/t\mathbb{Z}\simeq T(S)/\operatorname{Ker}(\alpha)\) such that \[r_{\alpha}\circ\alpha=\operatorname{pr}\colon T(S)\longrightarrow T(S)/\operatorname{Ker}(\alpha), \tag{5.3}\] where \(\operatorname{pr}\) is the natural projection and we identify the image of \(\alpha\) in \(\mathbb{Q}/\mathbb{Z}\) with \(\mathbb{Z}/t\mathbb{Z}\) via \(\tfrac{1}{t}\mapsto\overline{1}\). In particular, the Brauer class \(\alpha\) singles out a generator \(r_{\alpha}(\overline{1})\) of \(T(S)/\operatorname{Ker}(\alpha)\). Fix any Hodge isometry \(T(X)\simeq\operatorname{Ker}(\alpha)\). The natural inclusion \(T(S)/T(X)\subset A_{T(X)}\) allows us to view \(r_{\alpha}(\overline{1})\) as an element of \(A_{T(X)}\), which we denote by \(w_{\alpha}\). We denote the subgroup of \(A_{T(X)}\) generated by \(w_{\alpha}\) by \(H_{\alpha}\). Note that \(w_{\alpha}\), and hence \(H_{\alpha}\), is only well-defined up to the \(G_{X}\) action on \(A_{T(X)}\), since its construction depends on the original choice of Hodge isometry \(T(X)\simeq\operatorname{Ker}(\alpha)\).
On the other hand, isomorphic Hodge elliptic structures on \(X\) give rise to isotropic vectors in the same \(G_{X}\)-orbit by Lemma 2.1. We define the map \[w:\operatorname{HE}_{t}(X)\to\widetilde{\operatorname{I}}_{t}(A_{T(X)})/G_{X},\quad w(S,f,\alpha)=w_{\alpha}. \tag{5.4}\] The operation \(k*w=k^{-1}w\), where \(k^{-1}\) is an inverse to \(k\) modulo \(t\), defines a group action of \((\mathbb{Z}/t\mathbb{Z})^{*}\) on \(\widetilde{\operatorname{I}}_{t}(A_{T})/G_{T}\). **Lemma 5.8**.: _The map (5.4) is \((\mathbb{Z}/t\mathbb{Z})^{*}\)-equivariant._ Proof.: Recall from Lemma 4.4(ii) that \(\alpha_{\operatorname{J}^{k}(Y)}=k\cdot\alpha_{Y}\) in \(\operatorname{Br}(\operatorname{J}^{0}(Y))\) for all \(k\in\mathbb{Z}\). It follows from (5.3) that we have \(r_{k\alpha}=k^{-1}r_{\alpha}\). Thus from the definitions we get \[w_{k\alpha}=r_{k\alpha}(\overline{1})=k^{-1}r_{\alpha}(\overline{1})=k^{-1}w_{\alpha}=k*w_{\alpha},\] which means that the map \(w\) is equivariant. Proposition 5.6 and Lemma 5.8 give rise to the following commutative diagram with the vertical arrows being quotients by the corresponding \((\mathbb{Z}/t\mathbb{Z})^{*}\)-actions: \[\begin{array}{ccccc}\operatorname{DE}_{t}(X)&\simeq&\operatorname{HE}_{t}(X)&\xrightarrow{\ w\ }&\widetilde{\operatorname{I}}_{t}(A_{T(X)})/G_{X}\\ \big\downarrow&&\big\downarrow&&\big\downarrow\\ \operatorname{DE}_{t}(X)/(\mathbb{Z}/t\mathbb{Z})^{*}&\simeq&\operatorname{HE}_{t}(X)/(\mathbb{Z}/t\mathbb{Z})^{*}&\longrightarrow&\operatorname{I}_{t}(A_{T(X)})/G_{X}\end{array} \tag{5.5}\] For \((Y,\phi)\) a derived elliptic structure of \(X\), we consider \(w_{\phi}\coloneqq w_{\alpha_{Y}}\), the image of \((Y,\phi)\) under the composition of maps in the top row of (5.5). In particular, if \(f:X\to\mathbb{P}^{1}\) is an elliptic fibration with fibre class \(F\in\operatorname{NS}(X)\), then by construction, \(w_{f}\) is the Caldararu class of the moduli space \(\operatorname{J}^{0}(X)\) of sheaves with Mukai vector \((0,F,0)\) on \(X\), thus by Lemma 2.8, \(w_{f}\) corresponds to \[\frac{1}{t}F\in\operatorname{I}_{t}(A_{\operatorname{NS}(X)})/G_{X} \tag{5.6}\] (we can get rid of the minus sign in the formula at this point, as \(-1\in G_{X}\)). ### Fourier-Mukai partners in rank 2 In this section, we work with the elliptic Picard rank 2 case, so that by Proposition 3.1 we have \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\) given by (3.1). The following result is one of the reasons why it is natural to concentrate on Picard rank two elliptic surfaces. **Lemma 5.9**.: _For an elliptic K3 surface \(X\) with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\), all derived elliptic structures and all Hodge elliptic structures on \(X\) have the same index \(t\)._ Proof.: This follows from Lemma 5.3 and Proposition 5.6. For \(X\) as in Lemma 5.9, we have \(\operatorname{DE}(X)=\operatorname{DE}_{t}(X)\). In particular, there is an action of \((\mathbb{Z}/t\mathbb{Z})^{*}\) on \(\operatorname{DE}(X)\) by taking coprime Jacobians. Recall that for a K3 surface \(X\) with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\), we have \(A_{T(X)}\simeq A_{\operatorname{NS}(X)}(-1)\simeq A_{d,t}(-1)\), and it has order \(t^{2}\), see Lemma 3.3. Thus isotropic elements (resp. cyclic isotropic subgroups) of order \(t\) are precisely Lagrangian elements (resp. Lagrangian subgroups), see Definition 3.5: \[\operatorname{I}_{t}(A_{T(X)})=\operatorname{L}(A_{T(X)}),\quad\widetilde{\operatorname{I}}_{t}(A_{T(X)})=\widetilde{\operatorname{L}}(A_{T(X)}).\] The following result is related to [10, Proposition 3.3]. **Theorem 5.10**.: _Let \(X\) be an elliptic K3 surface of Picard rank 2 and multisection index \(t\). Then the map \(w\) (5.4) is a bijection. Furthermore, we have a bijection_ \[\operatorname{DE}(X)/(\mathbb{Z}/t\mathbb{Z})^{*}\simeq\operatorname{L}(A_{T(X)})/G_{X}.
\tag{5.7}\] _The action (3.11) induces a \(\mathbb{Z}/2\mathbb{Z}\)-action on \(\operatorname{L}(A_{T(X)})/G_{X}\) which under the bijection (5.7) corresponds to the action on \(\operatorname{DE}(X)\) swapping the two elliptic fibrations on Fourier-Mukai partners of \(X\)._ Proof.: We first show that \(w\) is bijective. We start with the bijection (5.2). For injectivity of \(w\), take \((S,\alpha)\) and \((S^{\prime},\alpha^{\prime})\) with \(T(X)\simeq\operatorname{Ker}(\alpha)\simeq\operatorname{Ker}(\alpha^{\prime})\). Assume that there exists a Hodge isometry \(\sigma\in G_{X}\) with the property \(\overline{\sigma}(w_{\alpha})=w_{\alpha^{\prime}}\). Then Lemma 2.1 implies that \(\sigma\) can be extended to a Hodge isometry \(T(S)\to T(S^{\prime})\). Since \(S\) and \(S^{\prime}\) have Picard rank 2, Lemma 3.12 implies that this Hodge isometry is induced by a group isomorphism \(\beta:S\simeq S^{\prime}.\) From \(\overline{\sigma}(w_{\alpha})=w_{\alpha^{\prime}}\), it follows that \((S,\alpha)\) and \((S^{\prime},\alpha^{\prime})\) are isomorphic. For surjectivity of \(w\), let \(u\in A_{T(X)}\) be an isotropic vector of order \(t\) and \(H=\langle u\rangle\). Via Lemma 2.1, \(H\) corresponds to an overlattice \(i:T(X)\hookrightarrow T\) which inherits a Hodge structure from \(T(X)\), i.e. \(i:T(X)\hookrightarrow T\) is a Hodge overlattice. Note that \(T\) is unimodular, since the index of \(T(X)\subset T\) is \(t\) and \(A_{T(X)}\) has order \(t^{2}\). Hence \(T\oplus U\) is an even, unimodular lattice of rank 22 and signature \((3,19)\). This means that it is isomorphic to the K3-lattice \(\Lambda_{\operatorname{K3}}\). By the surjectivity of the period map (Theorem 2.2), we obtain a K3 surface \(S\) with \(T(S)\simeq T\) and \(\operatorname{NS}(S)\simeq U\). Therefore the overlattice \(i:T(X)\hookrightarrow T(S)\) is a Hodge overlattice with \(T(S)/T(X)=H\). We define the Brauer class \(\alpha:T(S)\to H\simeq\mathbb{Z}/t\mathbb{Z}\) as the composition of the projection \(T(S)\to T(S)/T(X)=H\) with the isomorphism \(H\simeq\mathbb{Z}/t\mathbb{Z}\) given by \(u\mapsto\overline{1}\). Thus we have constructed a pair \((S,\alpha)\) with Caldararu class \(u\) and \(\operatorname{Ker}(\alpha)\simeq T(X)\). Since \(w\) is bijective, the diagram (5.5) immediately implies (5.7). The action (3.11) induces the action on \(\operatorname{L}(A_{T(X)})/G_{X}\) because \(\iota\) commutes with \(G_{X}\). Indeed, this can be checked on each primary part (2.1), where there are at most two Lagrangian subgroups (see Lemma 3.8), hence the action of \(G_{X}\) factors through the action generated by \(\iota_{p}\). To show that \(\iota\) corresponds to swapping the elliptic fibrations on Fourier-Mukai partners \(Y\), we can use the identification \(\operatorname{L}(A_{T(X)})/G_{X}=\operatorname{L}(A_{T(Y)})/G_{Y}\), and assume \(Y=X\). The result follows from (3.12) because Lagrangian subgroups generated by \(\overline{v}\) and \(\overline{v^{\prime}}\) correspond to the two elliptic fibrations on \(X\) via (5.7) by (5.6). Recall from Lemma 3.2 that a K3 surface \(X\) with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\) admits two elliptic fibrations, except when \(d\equiv-1\pmod{t}\), in which case \(X\) admits only one elliptic fibration. Using Theorem 5.10 we can easily compare the coprime Jacobians of these two fibrations. **Example 5.11**.: _Let \(X\) be an elliptic K3 surface of Picard rank two with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\) such that \(\gcd(d,t)=1\) and \(d\not\equiv-1\pmod{t}\).
Let \((X,f)\) and \((X,g)\) be two elliptic fibrations on \(X\) (see Lemma 3.2), and let \(w_{f}\) and \(w_{g}\) be their Caldararu classes, which are Lagrangian elements in \(A_{d,t}\). By Lemma 3.8, \(A_{d,t}\) admits a unique Lagrangian subgroup, thus we have \(\langle w_{f}\rangle=\langle w_{g}\rangle\). By Theorem 5.10 this implies that \(f\) and \(g\) are coprime Jacobians of each other. We can make this more precise as follows. Recall that by (5.6), \(w_{f}\) and \(w_{g}\) correspond to classes \(\overline{v}\), \(\overline{v^{\prime}}\) (3.7) respectively. Using (3.8), we compute_ \[w_{g}=\overline{v^{\prime}}=-d\overline{v}=-dw_{f}=-d^{-1}*w_{f}.\] _Here \(d^{-1}\) is the inverse to \(d\) modulo \(t\). Thus we have an isomorphism of elliptic surfaces_ \[(X,g)\simeq\operatorname{J}^{-d^{-1}}(X,f)\simeq\operatorname{J}^{d^{-1}}(X,f)\] _and \((X,f)\simeq\operatorname{J}^{d}(X,g)\)._ **Corollary 5.12**.: _Let \(X\) be an elliptic K3 surface of Picard rank two. The set of Fourier-Mukai partners of \(X\) considered up to isomorphism as surfaces, and up to coprime Jacobians (on every derived elliptic structure of \(X\)) is in natural bijection with the double quotient_ \[\langle\iota\rangle\backslash\operatorname{L}(A_{T(X)})/G_{X}.\] Proof.: This is the consequence of the action of \(\iota\) on \(\operatorname{L}(A_{T(X)})/G_{X}\) by swapping the two elliptic fibrations as explained in Theorem 5.10. **Corollary 5.13**.: _Let \(X\) be an elliptic K3 surface of Picard rank 2. Let \(d,t\in\mathbb{Z}\) such that \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\), and write \(m=\gcd(d,t)\)._ 1. _If_ \(m=1\)_, then_ \(\operatorname{DE}(X)\) _is a single_ \((\mathbb{Z}/t\mathbb{Z})^{*}\)_-orbit. Explicitly, every Fourier-Mukai partner of_ \(X\) _will be found among the coprime Jacobians of a fixed elliptic fibration_ \((X,f)\)_._ 2. _If_ \(m=p^{k}\)_, for a prime_ \(p\) _and_ \(k\geq 1\)_, then_ \(\operatorname{DE}(X)\) _consists of at most two_ \((\mathbb{Z}/t\mathbb{Z})^{*}\)_-orbits, permuted by the involution_ \(\iota\)_. Explicitly every Fourier-Mukai partner of_ \(X\) _will be found among the coprime Jacobians of one of the two elliptic fibrations on_ \(X\)_._ 3. _If_ \(m\) _has at least_ \(7\) _distinct prime factors then_ \(\operatorname{DE}(X)\) _has at least three_ \((\mathbb{Z}/t\mathbb{Z})^{*}\)_-orbits. In particular, there exist Fourier-Mukai partners of_ \(X\) _which are not isomorphic, as surfaces, to any of the Jacobians of elliptic structures on_ \(X\)_._ Proof.: In each case we use Theorem 5.10 combined with the count of Lagrangians given in Proposition 3.6. (i) Fix an elliptic fibration \(f:X\to\mathbb{P}^{1}\) and let \(H_{f}\subseteq A_{T(X)}\) be the corresponding Lagrangian subgroup. Since \(m=1\), Proposition 3.6 implies that \(H_{f}\subseteq A_{T(X)}\) is the only Lagrangian subgroup. Therefore all derived elliptic structures are of the form \(\operatorname{J}^{k}(X)\to\mathbb{P}^{1}\) for \(k\in\mathbb{Z}\) coprime to \(t\) by Theorem 5.10. (ii) By Proposition 3.6, \(A_{T(X)}\) contains precisely two Lagrangian subgroups. The condition \(m=p^{k}\) implies in particular that \(d\not\equiv-1\pmod{t}\), hence the surface \(X\) admits two elliptic fibrations \(f:X\to\mathbb{P}^{1}\) and \(g:X\to\mathbb{P}^{1}\). By Lemma 3.7, arguing like in Example 5.11, we see that the subgroups of \(A_{T(X)}\) induced by the two elliptic fibrations are not equal. 
Hence \(H_{f}\) and \(H_{g}\) are the only two Lagrangians of \(A_{T(X)}\), so every derived elliptic structure on \(X\) is either a coprime Jacobian of \(f\) or of \(g\) by Theorem 5.10. (iii) Assume \(\omega(m)\geq 7\). Since \(-1\in G_{X}\) acts trivially on \(\operatorname{L}(A_{T(X)})\) and \(|G_{X}|\leq 66\), by Proposition 3.6, the set \(\operatorname{L}(A_{T(X)})/G_{X}\) has cardinality at least \(2^{\omega(m)}/33\geq 128/33\), that is there are at least three elements. The final statement follows from Corollary 5.12. **Corollary 5.14**.: _Assume that \(X\) is a \(T\)-general elliptic K3 surface with \(\operatorname{NS}(X)=\Lambda_{d,t}\) with \(t>2\), and let \(m=\gcd(d,t)\). Then_ \[|\operatorname{DE}(X)|=2^{\omega(m)-1}\cdot\phi(t),\quad|\operatorname{DE}(X)/(\mathbb{Z}/t\mathbb{Z})^{*}|=2^{\omega(m)}. \tag{5.8}\] _In particular, if \(m\) is not a power of a prime, then \(X\) has Fourier-Mukai partners not isomorphic, as surfaces, to any Jacobian of an elliptic structure on \(X\)._ Proof.: The second formula in (5.8) is an immediate consequence of Theorem 5.10, the fact that \(G_{X}=\{\pm 1\}\) acts trivially on \(\operatorname{L}(A_{T(X)})\) and the Lagrangian count (3.6). By Proposition 4.15, coprime Jacobians of a \(T\)-general elliptic K3 surface form \(\phi(t)/2\) isomorphism classes. In other words, the orbits of the \((\mathbb{Z}/t\mathbb{Z})^{*}\)-action on \(\operatorname{DE}(X)\) are all of size \(\phi(t)/2\) and the first formula in (5.8) follows from the second one. The final statement follows from Corollary 5.12 because if \(m\) is not a power of a prime, \(\operatorname{DE}(X)/(\mathbb{Z}/t\mathbb{Z})^{*}\) has at least four elements by (5.8) which thus cannot form a single \(\iota\)-orbit. ### The zeroth Jacobian In this section, we apply the results of Section 5.2 to investigate whether derived equivalent elliptic K3 surfaces have isomorphic zeroth Jacobians. A priori, this is a weaker question than Question 1.1. However, we now show that the two questions are equivalent in the very general case. In particular, the answer is negative. **Proposition 5.15**.: _Let \(f:X\to\mathbb{P}^{1}\) be an elliptic K3 surface of Picard rank 2, and write \(S\coloneqq\operatorname{J}^{0}(X)\). Assume that \(T(X)\) has no non-trivial rational Hodge isometries, that is_ \[O_{\text{Hodge}}(T(X)_{\mathbb{Q}})\simeq\mathbb{Z}/2\mathbb{Z}. \tag{5.9}\] _Let \((Y,\phi)\) be a derived elliptic structure on \(X\) such that \(S^{\prime}\coloneqq\operatorname{J}^{0}(Y)\simeq S\). Then \((Y,\phi)\) is isomorphic to a coprime Jacobian of \((X,f)\)._ Proof.: Fixing any Hodge isometry \(T(X)\simeq T(Y)\) we view \(T(X)\simeq T(Y)\hookrightarrow T(S^{\prime})\) as an overlattice of \(T(X)\). By assumption there exists a Hodge isometry \(\beta^{*}:T(S^{\prime})\simeq T(S)\) induced by an isomorphism \(\beta:S\simeq S^{\prime}\). Now \(\beta^{*}\) induces the rational Hodge isometry \[T(X)_{\mathbb{Q}}\simeq T(S^{\prime})_{\mathbb{Q}}\overset{\beta^{*}_{\mathbb{Q}}}{\simeq}T(S)_{\mathbb{Q}}\simeq T(X)_{\mathbb{Q}}\] which by assumption equals \(\pm\operatorname{id}\), hence \(\beta^{*}\) fixes \(T(X)\) as a sublattice of \(T(S)\) and \(T(S^{\prime})\). In particular, \(\beta_{*}\alpha_{X}=k\alpha_{Y}\) for some \(k\in\mathbb{Z}\), hence \(Y\) is a coprime Jacobian of \(X\). It is well-known that if \(X\) is a very general \(\Lambda_{d,t}\)-polarized elliptic K3 surface then (5.9) is satisfied, see e.g. the argument of [20, Lemma 3.9].
Thus, if \(X\) is a very general elliptic K3 surface of Picard rank two with two elliptic fibrations, Proposition 5.15 allows us to compare the corresponding zeroth Jacobians, which generalises [1, Proposition 4.8]. **Corollary 5.16**.: _Let \(X\) be an elliptic K3 surface of Picard rank two with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\) and suppose \(d\not\equiv\pm 1\mod t\), so that \(X\) admits two non-isomorphic elliptic fibrations by Lemma 3.2. Assume (5.9) holds for \(X\). Then the zeroth Jacobians of the two elliptic fibrations on \(X\) are isomorphic if and only if \(\gcd(d,t)=1\)._ Proof.: If \(\gcd(d,t)=1\) the two fibrations on \(X\) are coprime Jacobians of each other by Corollary 5.13, hence the zeroth Jacobians are isomorphic. If \(\gcd(d,t)\neq 1\), then by \(T\)-generality of \(X\), the Caldararu classes of the two fibrations on \(X\) are not proportional in \(A_{T(X)}\), hence the two fibrations are not coprime Jacobians of each other and the result follows from Proposition 5.15. **Remark 5.17**.: _In the setting of Corollary 5.16, if zeroth Jacobians are not isomorphic, then they are also not derived equivalent. Indeed, elliptic K3 surfaces with a section do not admit nontrivial Fourier-Mukai partners [11, Proposition 2.7(3)]._ ### Question by Hassett and Tschinkel over non-closed fields In this subsection we will use the theory of twisted forms to extend our results to a subfield \(k\subset\mathbb{C}\). Let \(f:X\to\mathbb{P}^{1}\) be a complex elliptic K3 surface with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\). Recall that we denote by \(\operatorname{Aut}(X,F)\) the group of automorphisms of \(X\) which fix the class of the fibre in \(\operatorname{NS}(X)\). By Corollary 3.14, the group \(\operatorname{Aut}(X,F)\) is trivial whenever \(t>2\) and \(X\) is \(T\)-general. Let \(k\subset L\) be a field extension. An \(L\)-twisted form of an elliptic K3 surface \((Y,\phi:Y\to C)\) over \(k\) is any elliptic K3 surface \((Y^{\prime},\phi^{\prime}:Y^{\prime}\to C^{\prime})\) over \(k\) such that \((Y_{L},\phi_{L})\) is isomorphic to \((Y^{\prime}_{L},\phi^{\prime}_{L})\) as elliptic surfaces. **Lemma 5.18**.: _Let \((Y,\phi)\) be an elliptic K3 surface over \(k\) such that \(\operatorname{Aut}(Y_{\mathbb{C}},F)=\left\{\operatorname{id}\right\}.\) Then every \(\mathbb{C}\)-twisted form of \((Y,\phi)\) is isomorphic to \(Y\) as a surface._ Proof.: Any \(\mathbb{C}\)-twisted form \((Y^{\prime},\phi^{\prime})\) of \((Y,\phi)\) is also a \(\overline{k}\)-twisted form of \((Y,\phi)\)[13, Lemma 16.27]. Thus it suffices to show that for any Galois extension \(L/k\) all \(L\)-twisted forms of \((Y,\phi)\) are isomorphic to \(Y\). Let \((Y^{\prime},\phi^{\prime})\) be an \(L\)-twisted form of \((Y,\phi)\), and let \(g:Y_{L}\simeq Y^{\prime}_{L}\) be an isomorphism of elliptic surfaces, possibly twisting the base by an automorphism. Then for any \(\sigma\in\operatorname{Gal}(L/k)\), the map \(h\coloneqq g\circ(\sigma g)^{-1}\) is an automorphism of \(Y_{L}\) as an elliptic surface. Using injectivity of the map \(\operatorname{Aut}(Y_{L})\to\operatorname{Aut}(Y_{\mathbb{C}})\), c.f. [Stacks, Lemma 02VX], and the assumption about automorphisms of \(Y_{\mathbb{C}}\), we see that \(h\) is the identity, that is \(g\) commutes with the Galois action. Therefore \(g\) descends to an isomorphism \(Y\simeq Y^{\prime}\)[13, Proposition 16.9]. 
**Lemma 5.19**.: _If \((X,f)\) is an elliptic K3 surface over \(k\) such that \(\rho(X_{\mathbb{C}})=2\), then all elliptic fibrations of \(X_{\mathbb{C}}\) are induced by elliptic fibrations of \(X\)._ Proof.: By Lemma 3.2\(X_{\mathbb{C}}\) has one or two elliptic fibrations. If there is only fibration, it must come from the given elliptic fibration \(f\). If there are two elliptic fibrations on \(X_{\mathbb{C}}\), they are defined over some Galois extension \(L/k\). Let \(F\) and \(F^{\prime}\) be the corresponding divisor classes on \(X_{L}\). These classes can not be permuted by the Galois group, because one of them corresponds to \(f\), hence is fixed by the Galois group. Thus the other class is also fixed by the Galois group and the corresponding morphism \(X\to C\) is defined over \(k\), see e.g. [14, Proposition 2.7, Theorem 3.4(2)]. **Proposition 5.20**.: _Let \(X\) be an elliptic K3 surface over \(k\) with \(\operatorname{NS}(X_{\mathbb{C}})\simeq\Lambda_{d,t}\). Assume \(\operatorname{Aut}(X_{\mathbb{C}},F)=\left\{\operatorname{id}\right\}.\) If \(d\) and \(t\) are coprime or have only one prime factor in common, then every Fourier-Mukai partner of \(X\) is isomorphic, as a surface, to a coprime Jacobian of one of the elliptic fibrations on \(X\)._ Proof.: Let \(Y\) be a Fourier-Mukai partner of \(X\), and let \(\phi:Y\to C\) be an elliptic fibration of \(Y\), which exists by [13, Proposition 16]. By Corollary 5.13(i, ii), \(\phi_{\mathbb{C}}:Y_{\mathbb{C}}\to C_{\mathbb{C}}\) is isomorphic to \(\operatorname{J}^{k}(X_{\mathbb{C}},f_{\mathbb{C}})\) as elliptic surfaces, for some elliptic fibration \(f_{\mathbb{C}}\) on \(X_{\mathbb{C}}\). By Lemma 5.19, \(f_{\mathbb{C}}\) comes from an elliptic fibration \(f\) on \(X\), hence \((Y,\phi)\) is a \(\mathbb{C}\)-twisted form of \(\operatorname{J}^{k}(X,f)\). From the description of the automorphism groups given in Proposition 3.10 we deduce that \[\operatorname{Aut}(\operatorname{J}^{k}(X_{\mathbb{C}}),F)\simeq\operatorname {Aut}(X_{\mathbb{C}},F)\] and by assumption this group is trivial. It follows from Lemma 5.18 that \(Y\) is isomorphic to \(\operatorname{J}^{k}(X)\) as a surface. Proposition 5.20 implies the following: **Corollary 5.21**.: _Let \(X\) be as in Proposition 5.20. Let \(Y\) be any Fourier-Mukai partner of \(X\). Then \(X\) has a \(k\)-rational point if and only if \(Y\) has a \(k\)-rational point._ Proof.: From Proposition 5.20, it follows that there is an elliptic fibration \(f:X\to C^{\prime}\) and an integer \(\ell\in\mathbb{Z}\) such that \(Y\simeq\operatorname{J}^{\ell}(X,f)\) as surfaces. There is a rational map \(X\dashrightarrow\operatorname{J}^{\ell}(X)\simeq Y\) given by \(P\mapsto\ell\cdot P\). By the Lang-Nishimura Theorem [13], [15], it follows that \(X(k)\neq\emptyset\) implies \(Y(k)\neq\emptyset\). Conversely, since \(X\) is also a coprime Jacobian of \(Y\), the same argument shows that \(Y(k)\neq\emptyset\) implies \(X(k)\neq\emptyset\).
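Returning to the counts in Corollary 5.14, the following small numerical sketch (ours, purely illustrative and not part of the argument above) evaluates \(|\operatorname{DE}(X)|=2^{\omega(m)-1}\cdot\phi(t)\) and the orbit count \(2^{\omega(m)}\) for a few parameter pairs \((d,t)\). It assumes a \(T\)-general elliptic K3 surface with \(\operatorname{NS}(X)\simeq\Lambda_{d,t}\) and \(t>2\), as in Corollary 5.14; the chosen values of \(d\) and \(t\) are arbitrary and only serve to exercise the arithmetic.

```python
from math import gcd

def omega(n: int) -> int:
    """Number of distinct prime factors of n (omega(1) = 0)."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def euler_phi(n: int) -> int:
    """Euler's totient function."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def de_counts(d: int, t: int):
    """(|DE(X)|, number of (Z/tZ)*-orbits) as in Corollary 5.14, with m = gcd(d, t)."""
    m = gcd(d, t)
    return (2 ** omega(m) * euler_phi(t)) // 2, 2 ** omega(m)

for d, t in [(7, 30), (6, 30), (30, 210)]:
    size, orbits = de_counts(d, t)
    print(f"d = {d}, t = {t}: |DE(X)| = {size}, orbits = {orbits}")
```

In the last example \(m=\gcd(30,210)=30\) has three distinct prime factors, so the sketch reports eight \((\mathbb{Z}/t\mathbb{Z})^{*}\)-orbits, in line with the final statement of Corollary 5.14.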
2301.00623
Triple Graph Grammars for Multi-version Models
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly allowing living with temporary inconsistencies. In the case of model-driven software engineering, employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations. In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Matthias Barkowsky, Holger Giese
2023-01-02T12:32:50Z
http://arxiv.org/abs/2301.00623v1
# Triple Graph Grammars for Multi-version Models ###### Abstract Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly allowing living with temporary inconsistencies. In the case of model-driven software engineering, employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations. In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance. ## 1 Introduction In model-driven software development, models are treated as primary development artifacts. Complex projects can involve multiple models, which describe the system under development at different levels of abstraction or with respect to different system aspects and can be edited independently by a team of developers. In this case, consistency of the holistic system description is ensured by model transformations that automatically derive concrete models from more abstract ones or propagate changes to a model describing one aspect of the system to other models concerned with different but overlapping aspects [20]. Similarly to program code in conventional software development, the evolution of models via changes made by different developers requires management of the resulting versions of the software description. In particular, version management has to support parallel development activities of multiple developers working on the same development artifact, where living with inconsistencies of a single artifact may temporarily be necessary to avoid loss of information [8]. In [2], we have introduced multi-version models as a means of managing multiple versions of the same model that also enables monitoring the consistency of the individual model versions and potential merge results of versions developed in parallel. However, with model transformations effectively linking multiple models via consistency relationships, considering only the evolution of a single model without its context is insufficient for larger model-driven software development projects. Thus, a mechanism for establishing consistency of different versions of such linked models that simultaneously allows parallel development of multiple versions is required. Such a mechanism would allow working with more compact representations that also enable further analysis operations as described in [2]. In addition, an integrated handling of multiple model versions may afford an improved execution time performance of the transformation. In this report, we propose a first step in the direction of model transformations working on multi-version models by adapting the well-known formalism of triple graph grammars, which enables the implementation of single-version model transformations, to the multi-version case. 
The remainder of the report is structured as follows: In Section 2, we briefly reiterate the basic concepts of graphs, graph transformations, triple graph grammars, and multi-version models, as used in this report. Subsequently, we present our approach for deriving transformation rules that work on multi-version models from single-version model transformation specifications in the form of triple graph grammars in Section 3. In Section 4, we describe how the derived rules can be used to realize the joint transformation of all individual model versions encoded in a multi-version model and prove the correctness of our technique with respect to the semantics of triple graph grammars. Section 5 reports on the results of an initial evaluation of the presented solution's performance regarding execution time, which is based on an application scenario in the software development domain. Related work is discussed in Section 6, before Section 7 concludes the report. ## 2 Preliminaries In this section, we give a brief overview of required preliminaries regarding graphs and graph transformations, triple graph grammars and multi-version models. ### Graphs and Graph Transformations We briefly reiterate the concepts of graphs, graph morphisms and graph transformations and their typed analogs as defined in [6] and required in the remainder of the report. A graph \(G=(V^{G},E^{G},s^{G},t^{G})\) consists of a set of nodes \(V^{G}\), a set of edges \(E^{G}\) and two functions \(s^{G}:E^{G}\to V^{G}\) and \(t^{G}:E^{G}\to V^{G}\) assigning each edge its source and target, respectively. A graph morphism \(m:G\to H\) is given by a pair of functions \(m^{V}:V^{G}\to V^{H}\) and \(m^{E}:E^{G}\to E^{H}\) that map elements from \(G\) to elements from \(H\) such that \(s^{H}\circ m^{E}=m^{V}\circ s^{G}\) and \(t^{H}\circ m^{E}=m^{V}\circ t^{G}\). We also call \(m^{V}\) the _vertex morphism_ and \(m^{E}\) the _edge morphism_. A graph \(G\) can be typed over a type graph \(TG\) via a typing morphism \(\textit{type}:G\to TG\), forming the typed graph \(G^{T}=(G,\textit{type}^{G})\). In this report, we consider a model to be a typed graph, with the type graph defining a modeling language by acting as a metamodel. A typed graph morphism between two typed graphs \(G^{T}=(G,\textit{type}^{G})\) and \(H^{T}=(H,\textit{type}^{H})\) with the same type graph then denotes a graph morphism \(m^{T}:G\to H\) such that \(\textit{type}^{G}=\textit{type}^{H}\circ m^{T}\). A (typed) graph morphism \(m\) is a monomorphism iff its functions \(m^{V}\) and \(m^{E}\) are injective. Figure 1 shows an example typed graph \(G\) from the software development domain along with the corresponding type graph \(TG\). The typing morphism is encoded by the node's labels. \(G\) represents an abstract syntax graph of a program written in an object-oriented programming language, where nodes may represent class declarations (\(ClassDecl\)), field declarations (\(FieldDecl\)) or type accesses (\(TypeAccess\)). Class declarations may contain field declarations via edges of type \(declaration\), whereas field declarations can reference a class declaration as the field type via a \(TypeAccess\) node and edges of type \(access\) and \(type\). The example graph contains two class declarations, one of which contains a field declaration, the field type of which is given by the other class declaration. 
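To make the preceding definitions concrete, here is a minimal Python sketch (our own ad hoc representation, not taken from the report or from any tool cited in it) of a graph with source and target functions, together with a check of the two commutation conditions \(s^{H}\circ m^{E}=m^{V}\circ s^{G}\) and \(t^{H}\circ m^{E}=m^{V}\circ t^{G}\) that characterise a graph morphism. The tiny example mirrors a fragment of the abstract syntax graph of Figure 1.

```python
from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: set                                 # node identifiers
    edges: set                                 # edge identifiers
    src: dict                                  # edge -> source node
    tgt: dict                                  # edge -> target node
    typ: dict = field(default_factory=dict)    # optional typing of elements

def is_morphism(mV: dict, mE: dict, G: Graph, H: Graph) -> bool:
    """Check that the pair (mV, mE) is a graph morphism from G to H."""
    if not all(mV.get(v) in H.nodes for v in G.nodes):
        return False
    if not all(mE.get(e) in H.edges for e in G.edges):
        return False
    # sources and targets must be preserved (the two commuting squares)
    return all(H.src[mE[e]] == mV[G.src[e]] and H.tgt[mE[e]] == mV[G.tgt[e]]
               for e in G.edges)

# A ClassDecl containing a FieldDecl, mapped into a two-node host graph.
G = Graph(nodes={"c1", "f1"}, edges={"d1"}, src={"d1": "c1"}, tgt={"d1": "f1"},
          typ={"c1": "ClassDecl", "f1": "FieldDecl", "d1": "declaration"})
H = Graph(nodes={"c", "f"}, edges={"d"}, src={"d": "c"}, tgt={"d": "f"})
print(is_morphism({"c1": "c", "f1": "f"}, {"d1": "d"}, G, H))  # True
```

A typed morphism would additionally require that the typing of elements agrees along the mapping, and a monomorphism that \(m^{V}\) and \(m^{E}\) are injective.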
A (typed) graph transformation rule \(r\) is characterized by a span of (typed) graph monomorphisms \(L\stackrel{{ l}}{{\leftarrow}}K\xrightarrow{r}R\) and can be applied to a graph \(G\) via a monomorphism \(m:L\to G\) called match that satisfies the so-called dangling condition [6]. The result graph \(H\) of the rule application is then formally defined by a double pushout over an intermediate graph [6]. Intuitively, the application of \(r\) deletes the elements in \(m(L)\) that do not have a corresponding element in \(R\) and creates new elements for elements in \(R\) that do not have a corresponding element in \(L\). The graph \(L\) is also called the rule's _left-hand side_, \(K\) is called the rule's _glueing graph_, and \(R\) is called the _right-hand side_. \(r\) is called a graph production if it does not delete any elements, that is, \(l\) Figure 1: example graph and type graph is surjective. In this case, since \(L\) and \(K\) are isomorphic with \(l\) an isomorphism and we only distinguish graphs up to isomorphism, we also use the simplified representation \(L\xrightarrow{r}R\). Figure 2 shows an example graph production in shorthand notation, where preserved elements are colored black, whereas created elements are colored green and marked by an additional "++" label. For two existing classes, the production creates a field declaration in one of them that references the other class declaration as the field type. We denote a sequence of applications of rules from a set of rules \(R\) to a graph \(G\) with resulting graph \(G^{\prime}\) by \(G\rightarrow^{R}G^{\prime}\). We say that such a rule application sequence is maximal if it cannot be extended by any application of a rule from \(R\). **Definition 1**.: _Maximal Rule Application Sequence_ A sequence of rule applications \(G\rightarrow^{R}G^{\prime}\) with a set of (multi-version or original) forward rules \(R\) is maximal if no rule from \(R\) is applicable to \(G^{\prime}\). ### Triple Graph Grammars Triple graph grammars were initially presented by Schuerr [19]. This report is based on the slightly adapted version introduced in [9]. In [9], a triple graph grammar (TGG) relates a source and a target modeling language via a correspondence modeling language and is characterized by a set of TGG rules. A TGG rule is defined by a graph production that simultaneously transforms connected graphs from the source, correspondence and target modeling language into a consistently modified graph triplet. The set of TGG rules has to include a dedicated axiom rule, which has a triplet of empty graphs as its left-hand side and practically defines a triplet of starting graphs via its right-hand side. The left-hand side of a TGG rule \(r=L\xrightarrow{r}R\) can be divided into the source, correspondence, and target domains \(L_{S}\), \(L_{C}\), and \(L_{T}\) respectively, with \(L_{S}\subseteq L\), \(L_{C}\subseteq L\), and \(L_{R}\subseteq L\) and \(L_{S}\uplus L_{C}\uplus L_{R}=L\). The right-hand side can similarly be divided into three domains \(R_{S}\), \(R_{C}\), and \(R_{T}\). The type graph for graph triplets and TGG rules is hence given by the union of the type graphs defining the source, correspondence, and target language along with additional edges connecting nodes in the correspondence language to nodes in the source and target language. 
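As an illustration of how a graph production is applied via a match, the following sketch (ours; it covers only the non-deleting case, represents matches by node mappings only, and ignores typing and the dangling condition) creates fresh copies of the right-hand-side elements that have no counterpart in the left-hand side and glues them to the matched elements, in the spirit of the rule from Figure 2.

```python
import itertools

# A graph is a dict {"nodes": set_of_ids, "edges": {edge_id: (source, target)}}.
_fresh = itertools.count()

def apply_production(L, R, G, match_nodes):
    """Apply a non-deleting rule L -> R (L a subgraph of R) to G via a node match.

    Returns the extended graph and the comatch mapping R-nodes to G-nodes."""
    H = {"nodes": set(G["nodes"]), "edges": dict(G["edges"])}
    comatch = dict(match_nodes)                   # preserved part keeps its image
    for n in R["nodes"] - L["nodes"]:             # create images of new nodes
        comatch[n] = f"n{next(_fresh)}"
        H["nodes"].add(comatch[n])
    for e, (s, t) in R["edges"].items():          # create images of new edges
        if e not in L["edges"]:
            H["edges"][f"e{next(_fresh)}"] = (comatch[s], comatch[t])
    return H, comatch

# Rule in the spirit of Figure 2: given two classes, add a field declaration
# referencing the second class via a type access.
L = {"nodes": {"c1", "c2"}, "edges": {}}
R = {"nodes": {"c1", "c2", "f1", "t1"},
     "edges": {"decl": ("c1", "f1"), "acc": ("f1", "t1"), "typ": ("t1", "c2")}}
G = {"nodes": {"A", "B"}, "edges": {}}
H, _ = apply_production(L, R, G, {"c1": "A", "c2": "B"})
print(sorted(H["nodes"]), H["edges"])
```

A full implementation would also map edges in the match, check the dangling condition for deleting rules, and respect the typing morphisms.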
Figure 2: example graph transformation rule in shorthand notation Figure 3 shows a TGG rule for linking the language for abstract syntax graphs given by the type graph in Figure 1 to a modeling language for class diagrams given by the type graphs \(TT\) in Figure 4, using the correspondence language \(TC\) from Figure 4. The rule simultaneously creates a \(FieldDecl\) and \(TypeAccess\) along with associated edges in the source domain (labeled S) and a corresponding \(Association\) with associated edges in the target domain (labeled T), which are linked via a newly created correspondence node of type \(CorrField\) in the correspondence domain (labeled C). TGGs can be employed to transform a model of the source language into a model of the target language. This requires the derivation of so-called forward rules from the set of TGG rules. A forward rule for a TGG rule \(r=L\xrightarrow{r}R\) can be constructed as \(r^{F}=L^{F}\xleftarrow{id}L^{F}\xrightarrow{r^{F}}R\), where \(L^{F}=L\cup(R_{S}\setminus r(L))\) and \(r^{F}=r\cup id\), with \(id\) the identity morphism. Intuitively, \(r^{F}\) already requires the existence of the elements in the source domain that would be created by an application of \(r\) and only creates elements in the correspondence and target domain. In the following, we also denote the subgraph of a forward rule that corresponds to the subgraph that is newly transformed by the rule by \(L^{T}=L^{F}\setminus L\). Additionally, the derivation of a forward rule requires a technical extension to avoid redundant translation of the same element. Therefore, a dedicated _bookkeeping node_, which is connected to every currently untranslated source element via a _bookkeeping edge_, is introduced. Then, a bookkeeping node and bookkeeping edges to all elements in \(L^{T}\) are added to the forward rule's left-hand side. The bookkeeping node is also added to the rule's glueing graph and right-hand side. Additionally, negative application conditions are added to \(L^{F}\) Figure 4: example type graphs for the TGG rule in Figure 3 Figure 3: example TGG rule in shorthand notation which ensure that for a match \(m\) from \(L^{F}\) into \(SCT\), \(\forall x\in L^{F}\setminus L^{T}:\nexists b\in B^{SCT}:t_{B}^{SCT}=m(x)\). The application of the forward rule via \(m\) thus requires that elements in \(m(L^{T})\) are untranslated, as indicated by the existence of bookkeeping edges, and marks these elements as translated by deleting the adjacent bookkeeping edges. Elements in \(m(L^{F}\setminus L^{T})\) are in contrast required to already be translated. Note that, in order to allow bookkeeping edges between the bookkeeping node and regular edges, a slightly extended graph model is used, which is detailed in [10]. Figure 5 shows the forward rule derived from the TGG rule in Figure 3. The elements \(f_{1}\) and \(t_{1}\) and adjacent edges are no longer created but preserved instead. Also, the rule requires bookkeeping edges to \(f_{1}\), \(t_{1}\), and adjacent edges, and contains NACs that forbid the existence of bookkeeping edges to \(c_{1}\) and \(c_{2}\). However, this bookkeeping mechanism is omitted in the figure for readability reasons. The rule's application then deletes the bookkeeping edges to \(f_{1}\), \(t_{1}\), and their adjacent edges, and creates the corresponding elements in the target domain along with the linking node \(cf_{1}\) in the correspondence domain. 
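The derivation of a forward rule can be sketched at the level of element sets as follows. This is our own simplification: elements are plain identifiers, edges are not distinguished from nodes, and the bookkeeping node and the negative application conditions are only reflected implicitly through the returned set \(L^{T}\) of elements that the forward rule marks as translated.

```python
def derive_forward_rule(L, R, in_source_domain):
    """Derive the forward rule of a TGG production L -> R (as element sets).

    in_source_domain : predicate deciding whether an element lies in the source domain.
    Returns (LF, RF, LT) with LF = L u (R_S \\ L), RF = R and LT = LF \\ L, i.e. the
    source elements whose bookkeeping edges an application of the forward rule deletes."""
    R_S = {x for x in R if in_source_domain(x)}
    LF = set(L) | (R_S - set(L))
    LT = LF - set(L)
    return LF, set(R), LT

# In the spirit of Figure 3: the TGG rule creates f1 and t1 in the source domain
# and a corresponding association in the target domain.
L = {"c1", "c2", "corr_c1", "corr_c2", "cls1", "cls2"}
R = L | {"f1", "t1", "corr_f1", "assoc1"}
LF, RF, LT = derive_forward_rule(L, R, in_source_domain=lambda x: x in {"c1", "c2", "f1", "t1"})
print(sorted(LT))   # ['f1', 't1'] -- already required by LF, no longer created
```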
TGGs can also be used to perform a transformation from the target to the source language by means of similarly derived backward rules. In the following, we will focus on the forward case. However, the backward case simply works analogously. A TGG without any critical pairs [6] among its rules is called _deterministic_[9]. A forward transformation with a deterministic TGG can be executed via an operation \(trans^{F}\), which simply applies the TGG's forward rules for as long as there is a match for any of them, with the order of rule applications not affecting the final result due to the absence of critical pairs. Specifically, for a deterministic TGG with a set of forward rules \(R\) and a starting model triple \(SCT\), any maximal rule transformation sequence \(SCT\rightarrow^{R}SCT^{\prime}\) constitutes a correct model transformation if it deletes all bookkeeping edges in \(SCT\). Note that, if \(SCT\rightarrow^{R}SCT^{\prime}\) satisfies this bookkeeping criterion, every other possible maximal rule transformation sequence for \(SCT\) and \(R\) also satisfies the bookkeeping criterion. In this report, we will focus on such deterministic TGGs, which allow for efficient practical implementations that avoid potentially expensive undoing of forward rule applications and backtracking [9].

Figure 5: example forward rule derived from the TGG rule in Figure 3, with the bookkeeping mechanism omitted for readability reasons

### Multi-version Models

In this report, we consider models in the form of typed graphs. A model modification can in this context be represented by a span of morphisms \(M\gets K\to M^{\prime}\), where \(M\) is the original model, which is modified into a changed model \(M^{\prime}\) via an intermediate model \(K\), similar to a graph transformation step [21]. A _version history_ of a model is then given by a set of model modifications \(\Delta^{M_{\{1,...,n\}}}\) between models \(M_{1},M_{2},...,M_{n}\) with type graph \(TM\). We call a version history with a unique initial version and acyclic model modification relationships between the individual versions a _correct_ version history. In [2], we have introduced _multi-version models_ as a means of encoding such a version history in a single consolidated graph. Therefore, an adapted version of \(TM\), \(TM_{mv}\), is created. To represent model structure, \(TM_{mv}\) contains a node for each node and each edge in \(TM\). Source and target relationships of edges in \(TM\) are represented by edges in \(TM_{mv}\). In addition, a \(version\) node with a reflexive \(suc\) edge is added to \(TM_{mv}\), which allows the materialization of the version history's version graph. The version graph and the model structure are linked via \(cv_{v}\) and \(dv_{v}\) edges from each node \(v\) in \(TM_{mv}\) to the \(version\) node. The result of the adaptation of the type graph from Figure 1 is displayed in Figure 6. Note that \(cv\) and \(dv\) edges are omitted for readability reasons. \(TM_{mv}\) allows the translation of \(\Delta^{M_{\{1,...,n\}}}\) into a single typed graph \(MVM\) conforming to \(TM_{mv}\), which is called a _multi-version model_, via a procedure \(comb\). This yields a bijective function \(origin:V^{MVM}\rightarrow\bigcup_{i\in\{1,2,...,n\}}V^{M_{i}}\cup E^{M_{i}}\) mapping the vertices in \(MVM\) to their respective original element. Individual model versions can be extracted from \(MVM\) via the projection operation \(proj(MVM,i)=M_{i}\).
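A minimal sketch of this encoding (ours; the version graph with its \(suc\) edges, the distinction between \(cv_{v}\) and \(dv_{v}\) edges, and the graph structure of the elements are all collapsed into a single presence set per element) illustrates how \(comb\) and \(proj\) fit together:

```python
def comb(versions):
    """Merge model versions into a multi-version model.

    versions : dict mapping a version id to the set of elements present in that version.
    Returns a dict mapping every element to its presence set p(element)."""
    mvm = {}
    for vid, elements in versions.items():
        for x in elements:
            mvm.setdefault(x, set()).add(vid)
    return mvm

def proj(mvm, vid):
    """Extract the model version vid from the multi-version model."""
    return {x for x, presence in mvm.items() if vid in presence}

versions = {1: {"c1", "c2"}, 2: {"c1", "c2", "f1"}, 3: {"c1", "f1"}}
mvm = comb(versions)
assert all(proj(mvm, v) == versions[v] for v in versions)
print(mvm)   # e.g. {'c1': {1, 2, 3}, 'c2': {1, 2}, 'f1': {2, 3}}
```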
Finally, for a vertex \(v_{mv}\in V^{MVM}\), the set of model versions that include the element \(origin(v_{mv})\) can be computed via the function \(p\), with \(p(v_{mv})=\{M_{i}\in\{M_{1},M_{2},...,M_{n}\}|origin(v_{mv})\in M_{i}\}\).

Figure 6: example adapted type graph derived from the type graph in Figure 1, with \(cv\) and \(dv\) edges omitted for readability reasons

## 3 Derivation of Multi-version Transformation Rules from Triple Graph Grammars

The transformation of the individual model versions encoded in a multi-version model with a triple graph grammar can trivially be realized via the projection operation \(proj\). However, the multi-version model may in practice afford a more compact representation compared to an explicit enumeration of all model versions, as derived via \(proj\). In such practical application scenarios, operations concerning all model versions that directly work on the multi-version model may therefore also perform better regarding execution time than the corresponding operations on individual model versions, as we have already demonstrated for the case of pattern matching for checking the well-formedness of all model versions in a version history [2]. Since pattern matching also constitutes an important task in model transformation via triple graph grammars, a direct, joint translation of all model versions based on the multi-version model representation seems desirable. Given a triple graph grammar \(TGG\), graph transformation rules for the joint translation of all source or target model versions encoded in a multi-version model can be derived from the regular translation rules in a straightforward manner. In the following, we will discuss the derivation for forward translation. Rules for the backward case can be derived analogously. First, the adapted multi-version type graph for the TGG's merged source, correspondence and target type graph is created via the translation procedure described in [2]. The resulting adapted type graph \(TG_{mv}\) for multi-version models is extended by two additional edges, \(ucv_{v}\) and \(udv_{v}\), for each node \(v\) in the source domain of the merged type graph. Source and target of these edges are given by \(s^{TG_{mv}}(ucv_{v})=s^{TG_{mv}}(udv_{v})=v\) and \(t^{TG_{mv}}(ucv_{v})=t^{TG_{mv}}(udv_{v})=version\), where \(version\) is the dedicated version node in the adapted type graph. Analogously to the bookkeeping edges in the original type graph, these edges will be used in the translation process to encode in which versions an element represented by a node \(v_{mv}\) with type \(v\) has not yet been translated. We therefore define the set of versions \(u(v_{mv})\) in which \(v_{mv}\) has not been translated yet analogously to the set of versions \(p(v_{mv})\) where \(v_{mv}\) is present, except that \(ucv_{v}\) and \(udv_{v}\) replace \(cv_{v}\) and \(dv_{v}\) in the definition. Then, for each forward rule \(r=L\stackrel{{ l}}{{\leftarrow}}K\stackrel{{ r}}{{\rightarrow}}R\) a corresponding multi-version forward rule is created via a procedure \(adapt\), with \(adapt(r)=trans^{\prime}(L)\stackrel{{ l_{mv}}}{{\leftarrow}}trans^{\prime}(K)\stackrel{{ r_{mv}}}{{\rightarrow}}trans^{\prime}(R)\). The vertex morphism of \(l_{mv}\) is given by \(l_{mv}^{V}=origin^{-1}\circ l\circ origin\) and the edge morphisms by \(l_{mv}^{E}=s\circ origin^{-1}\circ l^{E}\circ origin\circ s^{-1}\) and \(l_{mv}^{E}=t\circ origin^{-1}\circ l^{E}\circ origin\circ t^{-1}\) for all edges representing source and target relationships, respectively.
\(r_{mv}\) is constructed analogously. The \(trans^{\prime}\) procedure is a minor adaptation of the \(trans\) procedure in [2], which ignores the bookkeeping node, bookkeeping edges, and negative application conditions, but otherwise works analogously. The bookkeeping mechanism is instead translated into the additional constraint \(P\neq\emptyset\) over \(trans^{\prime}(L)\), where \(P=(\bigcap_{v_{mv}\in V^{trans^{\prime}(L)}}p(v_{mv})\cap\bigcap_{v_{mv}\in origin^{-1}(L^{T})}u(v_{mv}))\setminus\bigcup_{v_{mv}\in V^{trans^{\prime}(L)}\setminus origin^{-1}(L^{T})}u(v_{mv})\). The application of the adapted rule additionally creates outgoing \(cv\) and \(dv\) edges for all vertices \(v_{mv}^{C}\in V^{trans^{\prime}(R)}\setminus(origin^{-1}\circ r\circ origin)(trans^{\prime}(K))\) to realize the assignment \(p(v_{mv}^{C})\coloneqq P\). Furthermore, for \(v_{mv}\in origin^{-1}(r(l^{-1}(L^{T})))\), the application also adds and deletes outgoing \(ucv\) and \(udv\) edges to realize the modification \(u(v_{mv})\coloneqq u(v_{mv})\setminus P\). Note that, since the computation of the \(p\) and \(u\) sets requires considering paths of arbitrary length, these computations cannot technically be defined as part of the graph transformation but have to be realized externally. For the set of forward rules \(R\), the corresponding set of multi-version forward rules is then simply given by \(R_{mv}=\{adapt(r)|r\in R\}\).

## 4 Execution of Multi-version Transformations

The forward transformation of all model versions encoded in a multi-version model \(MVM\) according to a specified TGG can jointly be performed via the TGG's set of multi-version forward rules. In a first step, all \(ucv\) and \(udv\) edges present in \(MVM\) are removed. Then, for each edge \(e_{cv}\in E^{MVM}\) with \(type(e_{cv})=cv_{x}\), an edge \(e_{ucv}\) with \(type(e_{ucv})=ucv_{x}\) and \(s^{MVM}(e_{cv})=s^{MVM}(e_{ucv})\) and \(t^{MVM}(e_{cv})=t^{MVM}(e_{ucv})\) is created. For all \(dv\) edges, corresponding \(udv\) edges are created analogously. Thus, after the creation of the \(ucv\) and \(udv\) edges, it holds that \(\forall v_{mv}\in V^{MVM}:u(v_{mv})=p(v_{mv})\). Subsequently, the simultaneous transformation of all model versions encoded in \(MVM\) is performed similarly to the regular transformation of a single model version via the TGG. More specifically, the adapted forward rules of the TGG are simply applied to \(MVM\) until no such rule is applicable anymore. In the following, we will argue that this transformation approach is correct in the sense that it yields the same result as the transformation of an individual model version via the regular forward rules. Therefore, we extend the projection operation \(proj\) from [2] to a bookkeeping-sensitive variant.

**Definition 2**.: _(Bookkeeping-sensitive Projection) For a multi-version model \(MVM\) with version graph \(V\) and version \(t\in V^{V}\), the bookkeeping-sensitive projection operation works similarly to the regular projection operation \(proj\), except that it also adds a bookkeeping node and bookkeeping edges to an element \(origin(v)\) iff \(t\in u(v)\) for all \(v\in V^{MVM}\). We also denote the result of the bookkeeping-sensitive projection operation by \(MVM[t]=proj^{M}(MVM,t)\)._

We also define two sets that represent the bookkeeping during the transformation process.
**Definition 3**.: _(Bookkeeping Set) For a model \(M\), we denote the set of translated elements (vertices and edges) by \(B(M)=\{x\in M|\nexists b\in E^{\prime M}:t^{\prime M}=x\}\), with \(E^{\prime M}\) the set of bookkeeping edges in \(M\) and \(t^{\prime M}\) the target function for bookkeeping edges. We also call \(B(M)\) the bookkeeping set of \(M\)._ **Definition 4**.: _(Projection Bookkeeping Set)_ For a multi-version model \(MVM\) and version \(t\in V^{V}\), with \(V\) the version graph, we denote the set of already handled elements (vertices and edges) in \(MVM[t]\) by \(B_{mv}(MVM[t])=\{x\in MVM[t]|t\notin u(proj^{-1}(x))\}\). We also call \(B_{mv}(MVM[t])\) the _projection bookkeeping set_ of \(MVM[t]\). The following theorem states that, at the start of the transformation process via adapted forward rules, the prepared multi-version model via the bookkeeping-sensitive projection correctly encodes the starting situation for the translation of the individual model versions. **Theorem 1**.: Given a multi-version model \(MVM\) encoding a version history with model versions \(M_{1},M_{2},...,M_{n}\) such that \(\forall v_{vm}\in V^{MVM}:u(v_{vm})=p(v_{vm})\), it holds that \(\forall t\in\{1,2,...,n\}:MVM[t]=init_{F}(M_{t})\) up to isomorphism, where \(init_{F}(SCT_{t})\) denotes the graph with bookkeeping resulting from the preparation of \(M_{t}\) for the regular forward transformation process, that is, the graph \(M_{t}\) with an added bookkeeping node and bookkeeping edges to all elements in \(M_{t}\). Proof.: Follows directly from the fact that \(\forall t\in\{1,2,...,n\}:proj(MVM,t)=M_{t}\), which has been shown in [2], and the definition of the bookkeeping-sensitive projection operation. By Theorem 1, we also get the following corollary: **Corollary 1**.: Given a multi-version model \(MVM\) encoding a version history with model versions \(M_{1},M_{2},...,M_{n}\) such that \(\forall v_{vm}\in V^{MVM}:u(v_{vm})=p(v_{vm})\), it holds that \(\forall t\in\{1,2,...,n\}:B_{mv}(MVM[t])=B(init_{F}(M_{t}))\) up to isomorphism, where \(init_{F}(SCT_{t})\) denotes the graph with bookkeeping resulting from the preparation of \(M_{t}\) for the regular forward transformation process, that is, the graph \(M_{t}\) with an added bookkeeping node and bookkeeping edges to all elements in \(M_{t}\). Proof.: Follows directly from Theorem 1 and the definition of bookkeeping set and projection bookkeeping set. We now show that a multi-version rule is applicable to a multi-version model iff the corresponding regular rule is applicable to all individual model versions affected by the rule application. **Theorem 2**.: A multi-version forward rule \(r_{mv}=L_{mv}\gets K_{mv}\to R_{mv}\) is applicable to a multi-version model triple \(SCT_{mv}\) with bookkeeping via match \(m\), if and only if for all \(t\in P\), the associated original forward rule \(r=L\gets K\to R\) is applicable to \(SCT_{mv}[t]\) via match \(trans(m)\), with \(P=\bigcap_{v\in V^{L_{mv}}}p(m(v))\cap\bigcap_{v\in V^{L_{mv}^{T}}}u(m(v))\). Proof.: For a version \(t\), as we have already shown in [2], the match \(m:L_{mv}\to SCT_{mv}\) has a corresponding match \(trans(m):L\to SCT_{mv}[t]\) if and only if \(t\in\bigcap_{v\in V^{L_{mv}}}p(m(v))\). Furthermore, due to the definition of \(P\) and the construction of \(r_{mv}\), all elements in \(m(trans(m)(L^{T}))\) have an adjacent bookkeeping edge in \(SCT_{mv}[t]\) iff \(t\in\bigcap_{v\in V^{L_{mv}^{T}}}u(m(v))\). 
Similarly, all elements in \(m(trans(m)(L\setminus L^{T}))\) have no adjacent bookkeeping edge in \(SCT_{mv}[t]\) iff \(t\notin\bigcup_{v\in V^{L_{mv}\setminus L_{mv}^{T}}}u(m(v))\). Since \(r\) and \(r_{mv}\) delete no vertices, the dangling condition is trivially satisfied for \(r\) and the match \(trans(m)\). \(r_{mv}\) is hence applicable to \(SCT_{mv}\) via \(m\), with \(t\in P\), iff \(r\) is applicable to \(SCT_{mv}[t]\) via \(trans(m)\). We can now show the equivalence of a single multi-version rule application to a multi-version model to the application of the corresponding regular rule to all affected model versions. **Theorem 3**.: For an application \(SCT_{mv}\rightarrow_{m}^{r_{mv}}SCT_{mv}^{\prime}\) of a multi-version forward rule \(r_{mv}=L_{mv}\gets K_{mv}\to R_{mv}\) with original forward rule \(r=L\gets K\to R\) to a multi-version model triple \(SCT_{mv}\) with bookkeeping and version graph \(V\) via match \(m\), it holds that \(\forall t\in P:SCT_{mv}^{\prime}[t]=SCT^{\prime}\wedge B_{mv}(SCT_{mv}^{ \prime}[t])=B(SCT^{\prime})\) up to isomorphism, with the corresponding application \(SCT_{mv}[t]\rightarrow_{trans(m)}^{r}SCT^{\prime}\). Furthermore, \(\forall t\in V^{V}\setminus P:SCT_{mv}^{\prime}[t]=SCT_{mv}[t]\wedge B_{mv}( SCT_{mv}^{\prime}[t])=B(SCT_{mv}[t])\) up to isomorphism, where \(P=\bigcap_{v\in V^{L_{mv}}}p(m(v))\cap\bigcap_{v\in V^{L_{mv}^{T}}}u(m(v))\). Proof.: Disregarding bookkeeping edges, all forward rules and thus also the adapted forward rules are productions. Due to the construction of the adapted forward rules, all elements created by the rule's application are only mv-present in \(SCT_{mv}^{\prime}\) for the versions in \(P\). Therefore, for all remaining versions, \(SCT_{mv}[t]\) contains the same elements as \(SCT_{mv}^{\prime}[t]\). An isomorphism \(iso:SCT_{mv}[t]\rightarrow SCT_{mv}^{\prime}[t]\) is hence trivially given by the identity in this case. Since the application of \(r_{mv}\) only changes the projection bookkeeping sets for versions in \(P\), \(B_{mv}(SCT_{mv}^{\prime}[t])=B(SCT_{mv}[t])\) with isomorphism \(iso\). It thus holds up to isomorphism that \(\forall t\in V^{V}\setminus P:SCT_{mv}^{\prime}[t]=SCT_{mv}[t]\wedge B_{mv}( SCT_{mv}^{\prime}[t])=B(SCT_{mv}[t])\). The application of \(r_{mv}\) to \(SCT_{mv}\) yields a comatch \(n:R_{mv}\to SCT_{mv}^{\prime}\) and the associated application of \(r\) to \(SCT_{mv}[t]\) similarly yields a comatch \(n^{\prime}:R\to SCT^{\prime}\) for any \(t\in P\). An isomorphism \(iso:SCT_{mv}^{\prime}[t]\to SCT^{\prime}\) can then be constructed as follows: Since \(r_{mv}\) is a production, \(SCT_{mv}\) is a subgraph of \(SCT_{mv}^{\prime}\) and hence \(SCT_{mv}[t]\) is also a subgraph of \(SCT_{mv}^{\prime}[t]\). Since \(r\) is a production, \(SCT_{mv}[t]\) is also a subgraph of \(SCT^{\prime}\). Isomorphic mappings for \(SCT_{mv}[t]\) between \(SCT_{mv}^{\prime}[t]\) and \(SCT^{\prime}\) are thus simply given by the identity. This leaves only the elements in \(n(R_{mv}\setminus L_{mv})\) and the elements in \(n^{\prime}(R\setminus L)\) unmapped. Due to the construction of \(r_{mv}\) being unique up to isomorphism, \(n\) and \(n^{\prime}\) being monomorphisms, and \(trans\) and \(origin\) being bijections, the remaining isomorphic mappings are given by \(n^{\prime}\circ trans\circ n^{-1}\circ origin\). Note that for elements in \(n(L_{mv})\), the definition of \(iso\) via identity and \(n^{\prime}\circ trans\circ n^{-1}\circ origin\) is redundant but compatible. 
Due to the definition of bookkeeping-sensitive projection, bookkeeping set, and projection bookkeeping set, it holds that \(B(SCT_{mv}[t])=B_{mv}(SCT_{mv}[t])\) and thus \(B_{mv}(SCT_{mv}[t])=B(SCT_{mv}[t])\)). Compared to \(B_{mv}(SCT_{mv}[t])\), the application of \(r_{mv}\) only changes the projection bookkeeping set \(B_{mv}(SCT_{mv}[t])\) by adding the elements in \(trans(m(L^{T}_{mv}))\). The modification to \(B_{mv}(SCT^{\prime}_{mv}[t])\) hence corresponds to the modification of the bookkeeping set \(B(SCT^{\prime})\) by the application of \(r\) via \(trans(m)\) for the isomorphism \(iso\) due to the construction of \(r_{mv}\). It thus holds that \(\forall t\in P:SCT^{\prime}_{mv}[t]=SCT^{\prime}\wedge B_{mv}(SCT^{\prime}_{mv }[t])=B(SCT^{\prime})\). Based on Theorem 3 for individual rule applications, we get the following corollary for sequences of rule applications: **Corollary 2**.: For a TGG with associated set of forward rules \(R\) and multi-version forward rules \(R_{mv}\) and a multi-version model triple \(SCT_{mv}\) with bookkeeping and version graph \(V\), there is a sequence of rule applications \(SCT_{mv}\rightarrow^{R_{mv}}SCT^{\prime}_{mv}\) if and only if for all \(t\in V^{V}\), there is a sequence of rule applications \(SCT_{mv}[t]\rightarrow^{R}SCT^{\prime}\) with \(SCT^{\prime}_{mv}[t]=SCT^{\prime}\wedge iso(B_{mv}(SCT^{\prime}_{mv}[t]))=B( SCT^{\prime})\), where \(iso\) is an isomorphism from \(SCT^{\prime}_{mv}[t]\) into \(SCT^{\prime}\). Proof.: We prove the corollary by induction over the length of the multi-version rule application sequence. For the base case of application sequences of length \(0\), the identity morphism and empty application sequences trivially satisfy the corollary. If there is a sequence of rule applications \(SCT_{mv}\rightarrow^{R_{mv}}SCT^{\prime}_{mv}\) if and only if for all \(t\in V^{V}\), there is a sequence of rule applications \(SCT_{mv}[t]\rightarrow^{R}SCT^{\prime}\) with \(SCT^{\prime}_{mv}[t]=SCT^{\prime}\wedge iso(B_{mv}(SCT^{\prime}_{mv}[t]))=B( SCT^{\prime})\), by Theorem 3 we have an extended multi-version sequence \(SCT_{mv}\rightarrow^{R_{mv}}SCT^{\prime}_{mv}\rightarrow^{r_{mv}}SCT^{ \prime\prime}_{mv}\) and all \(t\in V^{V}\) if and only if for all \(t\in V^{V}\), there is a sequence of regular rule applications \(SCT_{mv}[t]\rightarrow^{R}SCT^{\prime\prime}\) with \(SCT^{\prime\prime}_{mv}[t]=SCT^{\prime\prime}\wedge iso(B_{mv}(SCT^{\prime \prime}_{mv}[t]))=B(SCT^{\prime\prime})\). For all \(t\in V^{V}\setminus P\), where \(P=\bigcap_{v\in V^{L_{mv}}}p(m(v))\cap\bigcap_{v\in V^{L^{T}_{mv}}}u(m(v))\), the corresponding regular rule application sequence \(SCT_{mv}[t]\rightarrow^{R}SCT^{\prime}\) and isomorphism \(iso:SCT^{\prime}_{mv}[t]\rightarrow SCT^{\prime}\) are also valid for \(SCT^{\prime\prime}_{mv}[t]\) and satisfy the condition on bookkeeping sets, since \(SCT^{\prime}=SCT^{\prime}_{mv}[t]=SCT^{\prime\prime}_{mv}[t]\) (up to isomorphism). In accordance with Theorem 3, there is an extended sequence \(SCT_{mv}\rightarrow^{R_{mv}}SCT^{\prime}_{mv}\rightarrow^{r_{mv}}SCT^{ \prime\prime}_{mv}\) if and only if for all \(t\in P\), the regular rule application sequence \(SCT_{mv}[t]\rightarrow^{R}SCT^{\prime}_{mv}[t]\) can be extended by a rule application \(SCT^{\prime}_{mv}[t]\rightarrow^{r}SCT^{\prime}_{mv}[t]\rightarrow^{r}SCT^{ \prime\prime}_{mv}[t]\) that satisfies the condition on bookkeeping sets. 
Thus, there is a sequence of rule applications \(SCT_{mv}\rightarrow^{R_{mv}}SCT^{\prime}_{mv}\rightarrow^{r_{mv}}SCT^{ \prime\prime}_{mv}\) if and only if for all \(t\in V^{V}\), there is a sequence of rule applications \(SCT_{mv}[t]\rightarrow^{R}SCT^{\prime\prime}\) with \(SCT^{\prime\prime}_{mv}[t]=SCT^{\prime\prime}\wedge iso(B_{mv}(SCT^{\prime \prime}_{mv}[t]))=B(SCT^{\prime\prime})\). With the proof for the base case and the induction step, we have proven the validity of the corollary. Intuitively, the multi-version forward rules perform an interleaved, parallel transformation of all model versions encoded in \(\mathit{SCT}_{mv}\). The application of a multi-version rule \(L_{mv}\gets K_{mv}\to R_{mv}\) corresponds to the application of the original rule to all model versions in \(P=\bigcap_{v\in V^{L_{mv}}}p(m(v))\cap\bigcap_{v\in V^{L_{mv}^{T}}}u(m(v))\) and leaves all other model versions unchanged. Thus, a multi-version rule application effectively extends the corresponding original rule application sequences for all versions in \(P\) by the associated original rule application, whereas it represents the "skipping" of a step in the sequences of all versions not in \(P\). For a deterministic TGG, a correct translation of source graph \(S\) is given by any maximal rule application sequence of forward rules that deletes all book-keeping edges in the source model. Note that because of the determinism criterion, either every maximal rule application sequences or none of them satisfies the bookkeeping criterion. Correctness of the joint translation of all individual versions via multi-version forward rules is hence given by the following corollary: **Corollary 3**.: For a TGG with associated set of forward rules \(R\) and multi-version forward rules \(R_{mv}\) and a multi-version model triple \(\mathit{SCT}_{mv}\) with bookkeeping and version graph \(V\), there is a maximal sequence of rule applications \(\mathit{SCT}_{mv}\rightarrow^{R_{mv}}\mathit{SCT}_{mv}^{\prime}\) if and only if for all \(t\in V^{V}\), there is a maximal sequence of regular rule applications \(\mathit{SCT}_{mv}[t]\rightarrow^{R}\mathit{SCT}^{\prime}\) such that \(\mathit{SCT}_{mv}^{\prime}[t]=\mathit{SCT}^{\prime}\wedge B_{mv}(\mathit{SCT}_ {mv}^{\prime}[t])=B(\mathit{SCT}^{\prime})\). Proof.: The existence of a sequence of original rule applications for a sequence of multi-version rule applications and all versions \(t\in V^{V}\) and vice-versa is given by Corollary 2. From Theorem 2, it follows directly that the multi-version sequence is maximal if and only if the regular sequences are maximal for all \(t\in V^{V}\). Thus, for a deterministic TGG and by corollaries 1 and 3, the result of repeated application of adapted transformation rules to a multi-version model prepared for multi-version translation until a fixpoint is reached is equivalent to the results of repeated application of the original rules to the individual model versions prepared for translation, that is, the results of transforming the individual model versions using the TGG. We thereby have the correctness of the forward transformation using multi-version forward rules \(trans_{mv}^{F}\), which applies multi-version forward rules to a multi-version model with bookkeeping until a fixpoint is reached. 
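The execution scheme whose correctness was just established can be summarised by the following sketch. It is our own abstraction: the structural part of rule application is omitted, matches are given as plain element sets, and only the version bookkeeping via \(p\), \(u\) and the application set \(P\) is made explicit.

```python
def applicability_set(p, u, matched, to_translate):
    """Versions P in which a multi-version forward rule fires for a given match:
    all matched elements are present, the elements to be translated are still
    untranslated, and the remaining context elements are already translated."""
    P = set.intersection(*(p[x] for x in matched))
    P &= set.intersection(*(u[x] for x in to_translate))
    for x in matched - to_translate:
        P -= u[x]
    return P

def apply_multi_version_rule(p, u, matched, to_translate, created):
    """One multi-version rule application; returns False if the rule is not applicable."""
    P = applicability_set(p, u, matched, to_translate)
    if not P:
        return False
    for x in created:          # newly created correspondence/target elements
        p[x] = set(P)          # p(x) := P
        u[x] = set()
    for x in to_translate:     # mark the source elements as translated in the versions P
        u[x] -= P              # u(x) := u(x) \ P
    return True

# Two versions; the field f1 only exists in version 2, the classes in both.
p = {"c1": {1, 2}, "c2": {1, 2}, "f1": {2}}
u = {x: set(vs) for x, vs in p.items()}       # initially u = p
u["c1"].clear(); u["c2"].clear()              # assume the classes were translated first
print(apply_multi_version_rule(p, u, {"c1", "c2", "f1"}, {"f1"}, {"assoc1"}))  # True
print(p["assoc1"], u["f1"])                   # {2} set()
```

Repeating such applications until no rule fires is exactly the fixpoint iteration of \(trans_{mv}^{F}\).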
**Theorem 4**.: For a correct version history \(\Delta^{M_{\{1,...,n\}}}\) and a triple graph grammar with set of forward rules \(R\), it holds up to isomorphism that \[\forall t\in\{1,...,n\}:trans_{mv}^{F}(init_{F}(comb(\Delta^{M_{\{1,...,n\}}})),adapt(R))[t]=trans^{F}(M_{t},R) \tag{1}\] Proof.: Follows directly from Theorem 1 and Corollary 3.

## 5 Evaluation

In order to evaluate our approach empirically with respect to execution time performance, we have realized the presented concepts in our MoTE2 tool [12] for TGG-based model transformation, which is implemented in the context of the Java-based Eclipse Modeling Framework [7] and has been shown to be efficient compared to other existing model transformation tools [12]. As an application scenario, we consider the transformation of Java abstract syntax trees to class diagrams. We have therefore modeled this transformation as a TGG with MoTE2 and use the original and our adapted implementation to automatically derive the required forward rules and multi-version forward rules, respectively. To obtain realistic source models, we have extracted the version history of one small personal Java project (_rete_, around 50 versions) and one larger open source Java project (_henshin_[1], around 2000 versions) from their respective GitHub repositories and have constructed the corresponding history of abstract syntax trees using the MoDisco tool [3]. As input for the solution presented in Sections 3 and 4, we have consolidated both version histories into multi-version models using a mapping based on hierarchy and naming. Our implementation and the employed datasets are available at [22]. Based on this, we run the following model transformations for both repositories and measure the overall execution time\({}^{1}\) for each of them:

* **SVM**: individual forward transformation of all model versions (abstract syntax trees) in the repository using the original MoTE2 implementation
* **MVM**: joint forward transformation of all model versions in the repository using a multi-version model encoding and our implementation of the technique presented in Sections 3 and 4

Footnote 1: All experiments were performed on a Linux SMP Debian 4.19.67-2 machine with Intel Xeon E5-2630 CPU (2.3 GHz clock rate) and 386 GB system memory running OpenJDK version 11.0.6. Reported execution time measurements correspond to the mean execution time of 10 runs of the respective experiment.

Note that the SVM strategy would require initial projection operations and a final combination of transformation results to work within the framework of multi-version models. However, for fairness of comparison of the transformation, we do not consider these additional operations in our evaluation. Figure 7 shows the execution times of the transformations using the two strategies. For both repositories, the transformation based on multi-version models requires substantially less time than the transformation of the individual model versions using the original MoTE2 tool, with a more pronounced improvement for the larger repository (factor 4 for the smaller and factor 74 for the larger repository). The improvement in efficiency and scalability can likely be explained by two factors: First, SVM has to perform a somewhat expensive initialization step for every individual model version that is to be transformed, whereas MVM only requires one such initialization. Second, many elements in the abstract syntax trees of the repositories are shared between many versions.
SVM has to perform a separate transformation, including separate pattern matching, for each model version. In contrast, MVM only performs a transformation including pattern matching over a single multi-version model, the size of which is much smaller than the combined sizes of the encoded model versions, along with efficient search operations over the version graph. Since pattern matching is efficient in this example, that is, pattern matching has a runtime complexity that is linear in the size of the model for the derived forward rules, this results in an improved overall efficiency. Threats to the internal validity of our experimental results include unexpected behavior of the Java virtual machine such as garbage collection. To address this threat, we have performed multiple runs of all experiments and report the mean execution time of these runs, with the standard deviation always below 5% of the execution time. To minimize the impact of the concrete implementation on our measurements, we have realized our solution in the framework of the transformation tool we use for comparison and thereby largely use the same execution mechanism. To mitigate threats to external validity, we use real-world models as the source models of the transformation. However, we remark that our results are not necessarily generalizable to different examples or application domains and make no quantitative claims regarding the performance of our approach. ## 6 Related Work The general problem of model versioning has already been studied extensively, both formally [5, 17] and in the form of concrete tool implementations [14, 13]. Several solutions employ a unified representation of a model's version history Figure 7: execution time measurements for the transformation of all model versions in two different software repositories (logarithmic axis) similar to multi-version models [17, 14]. However, due to the problem definition focusing on the management of different versions of a single model, realising model transformation based on a unified encoding is out of scope for these approaches. There is also a significant body of previous work on synchronization of concurrently modified pairs of models using triple graph grammars [24, 15]. The focus of these works is the derivation of compatible versions of source and target model that respect the modifications to either of. This report aims to make a step in an orthogonal direction, namely towards allowing living with inconsistencies by enabling developers to temporarily work with multiple modified, possibly conflicting versions of source and target model. In the context of software product lines, so-called 150% models are employed to encode different configurations of a software system [4, 16]. In this context, Greiner and Westfechtel present an approach for propagating so-called variability annotations along trace links created by model transformations [23], explicitly considering the case of transformations implemented via triple graph grammars. A similar approach could also be employed to propagate versioning information and would have the advantage of not requiring any adaptation of the employed rules, type graph, or transformation process. However, not integrating this propagation with the transformation process and only propagating versioning information after the transformation has been executed would mean that certain cases that are covered by our approach could not be handled. The occurence of such cases may hence prevent a possible correct transformation. 
For instance, under standard TGG semantics, such cases include a model element being translated differently in different model versions based on its context. In previous work in our group, the joint execution of queries over multiple versions of an evolving model has been considered for both the case with [2] and without [11, 18] parallel, branching development. This report builds on these results, but instead of focusing on pure queries without side-effects considers the case of writing operations in the form of model transformations. ## 7 Conclusion In this report, we have presented a first step in the direction of model transformation on multi-version models in the form of an adaptation of the well-known triple graph grammar formalism that enables the joint transformation of all versions encoded in a multi-version model. The presented approach is correct with respect to the translation semantics of deterministic triple graph grammars for individual model versions, that is, it produces equivalent results. Initial experiments for evaluating the efficiency of our approach demonstrate that our technique can improve performance compared to a naive realization, which simply translates all model versions individually according to a triple graph grammar specification, in a realistic application scenario. In future work, we plan to build on the presented approach to realize model synchronization for multi-version models, that is, incremental propagation of changes to one or more versions of a source model to the corresponding target model versions. Furthermore, we want to explore the possibility of improving the efficiency of multi-version model transformations via incremental pattern matching for multi-version models. Another interesting direction for future work is the integration of advanced application conditions for the specification of triple graph grammar rules such as nested graph conditions into our approach. Finally, a more extensive evaluation can be conducted to study the scalability of the presented technique in more detail. ### Acknowledgements This work was developed mainly in the course of the project modular and incremental Global Model Management (project number 336677879), which is funded by the Deutsche Forschungsgemeinschaft.
2301.10430
On extremal problems on multigraphs
An $(n,s,q)$-graph is an $n$-vertex multigraph in which every $s$-set of vertices spans at most $q$ edges. Erd\H{o}s initiated the study of maximum number of edges of $(n,s,q)$-graphs, and the extremal problem on multigraphs has been considered since the 1990s. The problem of determining the maximum product of the edge multiplicities in $(n,s,q)$-graphs was posed by Mubayi and Terry in 2019. Recently, Day, Falgas-Ravry and Treglown settled a conjecture of Mubayi and Terry on the case $(s,q)=(4, 6a + 3)$ of the problem (for $a \ge 2$), and they gave a general lower bound construction for the extremal problem for many pairs $(s, q)$, which they conjectured is asymptotically best possible. Their conjecture was confirmed exactly or asymptotically for some specific cases. In this paper, we consider the case that $(s,q)=(5,\binom{5}{2}a+4)$ and $d=2$ of their conjecture, partially solve an open problem raised by Day, Falgas-Ravry and Treglown. We also show that the conjecture fails for $n=6$, which indicates for the case that $(s,q)=(5,\binom{5}{2}a+4)$ and $d=2$, $n$ needs to be sufficiently large for the conjecture to hold.
Ran Gu, Shuaichao Wang
2023-01-25T06:53:47Z
http://arxiv.org/abs/2301.10430v1
# On extremal problems on multigraphs ###### Abstract An \((n,s,q)\)-graph is an \(n\)-vertex multigraph in which every \(s\)-set of vertices spans at most \(q\) edges. Erdos initiated the study of maximum number of edges of \((n,s,q)\)-graphs, and the extremal problem on multigraphs has been considered since the 1990s. The problem of determining the maximum product of the edge multiplicities in \((n,s,q)\)-graphs was posed by Mubayi and Terry in 2019. Recently, Day, Falgas-Ravry and Treglown settled a conjecture of Mubayi and Terry on the case \((s,q)=(4,6a+3)\) of the problem (for \(a\geq 2\)), and they gave a general lower bound construction for the extremal problem for many pairs \((s,q)\), which they conjectured is asymptotically best possible. Their conjecture was confirmed exactly or asymptotically for some specific cases. In this paper, we consider the case that \((s,q)=(5,{5\choose 2}a+4)\) and \(d=2\) of their conjecture, partially solve an open problem raised by Day, Falgas-Ravry and Treglown. We also show that the conjecture fails for \(n=6\), which indicates for the case that \((s,q)=(5,{5\choose 2}a+4)\) and \(d=2\), \(n\) needs to be sufficiently large for the conjecture to hold. **Keywords:** Multigraphs; Turan problems; Extremal graphs **AMS Subject Classification (2020):** 05C35, 05C22 Introduction In 1963, Erdos [3, 4] posed the extremal question on \(ex(n,s,q)\), the maximum number of edges in an \(n\)-vertex graph in which every \(s\)-set of vertices spans at most \(q\) edges, where \(q\) is an integer satisfying that \(0\leq q\leq{n\choose 2}\). In the 1990s, Bondy and Tuza [1] and Kuchenbrod [7] raised an analogous extremal problem on multigraphs. A multigraph is a pair \((V,w)\), where \(V\) is a vertex set and \(w\) is a function \(w:{V\choose 2}\rightarrow\mathbb{Z}_{\geq 0}\). **Definition 1**: _Given integers \(s\geq 2\) and \(q\geq 0\), we say a multigraph \(G=(V,w)\) is an \((s,q)\)-graph if every \(s\)-set of vertices in \(V\) spans at most \(q\) edges; i.e. \(\sum_{xy\in{X\choose 2}}w(xy)\leq q\) for every \(X\in{V\choose s}\). An \((n,s,q)\)-graph is an \(n\)-vertex \((s,q)\)-graph. We write \(\mathcal{F}(n,s,q)\) for the set of all \((n,s,q)\)-graphs with vertex set \([n]:=\{1,...,n\}\)._ Bondy and Tuza [1], Kuchenbrod [7] and Furedi and Kundgen [6] studied the problem of the maximum of the sum of the edge multiplicities in an \((n,s,q)\)-graph. Especially, Furedi and Kundgen [6] obtained an asymptotically tight upper bound \(m{n\choose 2}+O(n)\) for the maximum of edges in \((n,s,q)\)-graphs, where \(m=m(s,q)\) is an explicit constant. Recently, Mubayi and Terry [8, 9] introduced a version of multiplicity product of the problem as follows. **Definition 2**: _Given a multigraph \(G=(V,w)\), we define_ \[P(G):=\prod_{xy\in{V\choose 2}}w(xy),\] \[ex_{\Pi}(n,s,q):=\max\{P(G):G\in\mathcal{F}(n,s,q)\},\] \[ex_{\Pi}(s,q):=\lim_{n\rightarrow+\infty}(ex_{\Pi}(n,s,q))^{{n\choose 2}^{-1}}.\] Mubayi and Terry showed in [8, Theorem 2.2], that for \(q\geq{s\choose 2}\), \[\left|\mathcal{F}(n,s,q-{s\choose 2})\right|=ex_{\Pi}(s,q)^{{n\choose 2}+o(n^{2})}.\] As Mubayi and Terry pointed out estimating the size of the multigraph family \(\mathcal{F}\left(n,s,q-{s\choose 2}\right)\) is equivalent to the Turan-type extremal problem of determining \(ex_{\Pi}(n,s,q)\). Therefore, Mubayi and Terry raised the general problem of determining \(ex_{\Pi}(n,s,q)\) as below. 
**Problem 1**: _[_8, 9_]_ _Given positive integers \(s\geq 2\) and \(q\), determine \(ex_{\Pi}(n,s,q)\)._ Mubayi and Terry in [8] proved that \[ex_{\Pi}(n,4,15)=2^{\gamma n^{2}+O(n)}\] where \(\gamma\) is defined by \[\gamma:=\frac{\beta^{2}}{2}+\beta(1-\beta)\frac{\log 3}{\log 2}\ \ \mbox{where}\ \ \beta:=\frac{\log 3}{2\log 3-\log 2}.\] This is the first example of a 'fairly natural extremal graph problem' whose asymptotic answer is given by an explicitly defined transcendental number. In response to a question of Alon which asked whether this transcendental behaviour is an isolated case, Mubayi and Terry [8] made a conjecture on the value of \(ex_{\Pi}(n,4,{4\choose 2}a+3)\). In [2], Day, Falgas-Ravry and Treglown resolved their conjecture fully, consequently providing infinitely many examples on Alon's question. Mubayi and Terry [9] exactly or asymptotically determined \(ex_{\Pi}(n,s,q)\) for pairs \((s,q)\) where \(a{s\choose 2}-\frac{s}{2}\leq q\leq a{s\choose 2}+s-2\) for some \(a\in\mathbb{N}\). Day, Falgas-Ravry and Treglown [2] investigated \(ex_{\Pi}(n,s,q)\) for a further range of values of \((s,q)\). In particular, they gave a potential extremal structure related to \(ex_{\Pi}(n,s,q)\). Their constructions may be seen as a class of multigraphs analogues of the well known Turan graphs. **Construction 1**: _[_2_]_ _Let \(a,r\in\mathbb{N}\) and \(d\in[0,{a-1}]\). Given \(n\in\mathbb{N}\), let \(\mathcal{T}_{r,d}(a,n)\) denote the collection of multigraphs \(G\) on \([n]\) for which \(V(G)\) can be partitioned into \(r\) parts \(V_{0},...,V_{r-1}\) such that: (i) all edges in \(G[V_{0}]\) have multiplicity \(a-d\); (ii) for all \(i\in[r-1]\), all edges in \(G[V_{i}]\) have multiplicity \(a\); (iii) all other edges of \(G\) have multiplicity \(a+1\)._ _Given \(G\in\mathcal{T}_{r,d}(a,n)\), we refer to \(\cup_{i=0}^{r-1}V_{i}\) as the canonical partition of \(G\)._ Fig. 1 is an example of what these graphs look like when \(r=4\). We write \[\Sigma_{r,d}(a,n):=\max\left\{e(G):G\in\mathcal{T}_{r,d}(a,n)\right\},\] and \[\Pi_{r,d}(a,n):=\max\left\{P(G):G\in\mathcal{T}_{r,d}(a,n)\right\}.\] In [2], Day, Falgas-Ravry and Treglown raised a new conjecture as below. **Conjecture 1**: _For all integers \(a,r,s,d\) with \(a,r\geq 1\), \(d\in[0,a-1]\),\(s\geq(r-1)(d+1)+2\) and all \(n\) sufficiently large,_ \[ex_{\Pi}(n,s,\Sigma_{r,d}(a,s))=\Pi_{r,d}(a,n). \tag{1}\] As the definition of \(\Sigma_{r,d}(a,n)\), we know \(ex_{\Pi}(n,s,\Sigma_{r,d}(a,s))\geq\Pi_{r,d}(a,n)\). Conjecture 1 roughly states that for any \(d\in[0,a-1]\) and other conditions of \(n,s,a,r\), when we take \(q=\Sigma_{r,d}(a,s)\), it is a graph \(G\) from \(\mathcal{T}_{r,d}(a,n)\) maximises the edge-product \(P(G)\) amongst all\((n,s,q)\)-graphs. Mubayi and Terry [9] proved that for all \(r\) such that \(\frac{s}{2}\leq r\leq s-1\) and \(n\geq s\), \(ex_{\Pi}(n,s,\Sigma_{r,0}(a,n))=\Pi_{r,0}(a,n)\), which partly confirms Conjecture 1 for \(d=0\). In [2], Day, Falgas-Ravry and Treglown completed the proof of the \(d=0\) case of Conjecture 1. Combine the results in [5] and [2], Conjecture 1 asymptotically holds for \(d=1\) and sufficiently large \(a\). Note that the known results on Conjecture 1 are considering the cases \(d=0\) or \(d=1\). In this paper, we consider (1) for the case of \(d=2\) and the special pair \((s,q)=(5,{5\choose 2}a+4)\) which is mentioned as an open problem by Day, Falgas-Ravry and Treglown in [2]. We obtain the results as follows. 
**Theorem 1**: _For \((s,q)=(5,{5\choose 2}a+4)\), we have_ * _For_ \(n\geq 8\) _and_ \(a\to\infty\)_,_ \(ex_{\Pi}(n,5,{5\choose 2}a+4)\to\Pi_{2,2}(a,n)\)_._ Figure 1: An example of the structure of graphs in \(\mathcal{T}_{4,d}(a,n)\). _._ 2. _For_ \(n=7\) _and_ \(a\geq 3\)_, we have_ \[(a-2)(a+1)^{10}a^{10}\leq ex_{\Pi}(7,5,{5\choose 2}a+4)<(a-1)a^{11}(a+1)^{9}.\] _Particularly,_ \(ex_{\Pi}(7,5,{5\choose 2}a+4)\to\Pi_{2,2}(a,7)\) _as_ \(a\to\infty\)_._ 3. _For_ \(n=6\) _and_ \(a\geq 3\)_,_ \(ex_{\Pi}(6,5,{5\choose 2}a+4)=a^{9}(a+1)^{6}>\Pi_{2,2}(a,6).\) _And_ \[\Pi_{2,2}(a,6)=\begin{cases}(a+1)^{5}a^{10},a=3,4;\\ (a-2)(a+1)^{8}a^{6},a\geq 5.\end{cases}\] 4. _For_ \(n=5\) _and_ \(a\geq 3\)_,_ \(ex_{\Pi}(5,5,{5\choose 2}a+4)=\Pi_{2,2}(a,5)=(a+1)^{4}a^{6}\)_._ It is not difficult to verify that \(\Sigma_{2,2}(a,5)={5\choose 2}a+4\), therefore, our results above imply that (1) does not hold for any \(a\geq 3\) when \(n=6\). This shows that for \((s,q)=(5,{5\choose 2}a+4)\) and \(d=2\), \(n\) needs to be sufficiently large for (1) to hold. The rest of the paper is organized as follows. In Section 2, we introduce some more notation and preliminaries. We give the proof of Theorem 1 in Section 3. ## 2 Preliminaries The following integral version of the AM-GM inequality was presented in [2], we will use it frequently. **Lemma 1**: _[_2_]_ _Let \(a,n\in[0,n]\), and let \(\omega_{1},...,\omega_{n}\) be non-negative integers with \(\sum_{i=1}^{n}\omega_{i}=an+t\). Then the following hold: (i) \(\Pi_{i=1}^{n}\omega_{i}\leq a^{n-t}(a+1)^{t}\); (ii) if \(t\leq n-2\) and \(\omega_{1}=a-1\) then \(\Pi_{i=1}^{t}\omega_{i}\leq(a-1)a^{n-t-2}(a+1)^{t+1}\)._ Considering the maximum \(P(G)\) among graphs in \({\cal T}_{2,2}(a,n)\), we obtain the following lemma. **Lemma 2**: _Let \(a\in[3,+\infty)\), \(n\in[5,+\infty)\), for the graph \(G\in{\cal T}_{2,2}(a,n)\), let \(\cup_{i=0}^{1}V_{i}\) be the canonical partition of \(G\). Set \(|V_{0}|=x\), then \(P(G)\) is maximum among all the graphs in \({\cal T}_{2,2}(a,n)\) if and only if_ \[n\in\left((x-1)\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x,x\frac{\ln (1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x+1\right].\] _Moreover,_ \[\Pi_{2,2}(a,n)=(a-2){x\choose 2}a^{{n-x\choose 2}}(a+1)^{x(n-x)}\] _for \(n\in\Big{(}(x-1)\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x,x\frac{\ln (1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x+1\Big{]}\)._ _Proof_. Let \(G\) be an \(n\)-vertex graph from \(\mathcal{T}_{2,2}(a,n).\) Suppose that \(V_{0}\) of \(G\) has \(x\) vertices, and \(V_{1}\) has \(n-x\) vertices. By simple calculation, when \(V_{0}\) has \(x\) vertices, the product of the edge multiplicities is \((a-2)^{\binom{x}{2}}a^{\binom{n-x}{2}}(a+1)^{x(n-x)}\). Now we define the new graph \(G^{{}^{\prime}}\) to be a graph obtained from \(G\) by moving a vertex from \(V_{1}\) to \(V_{0}.\) Considering the changing of the product of the edge multiplicities by moving a vertex from \(V_{1}\) to \(V_{0},\) we have that \(\frac{P(G)}{P(G^{{}^{\prime}})}=\frac{a^{n-x-1}(a+1)^{x}}{(a-2)^{x}(a+1)^{n-x-1}},\) which means that the product of the edge multiplicities is increased when the quantity is less than one and otherwise decreased. 
If \(n\in\left((x-1)\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x,x\frac{\ln( 1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x+1\right],\) then we find that \(n\) satisfies the following inequalities: \[\begin{cases}\frac{a^{n-x}(a+1)^{x-1}}{(a-2)^{x-1}(a+1)^{n-x}}<1,\\ \frac{a^{n-x-1}(a+1)^{x}}{(a-2)^{x}(a+1)^{n-x-1}}\geq 1,\end{cases}\] which means that the product of the edge multiplicities in \(G\) is decreased whether moving a vertex from \(V_{1}\) to \(V_{0}\) or moving a vertex from \(V_{0}\) to \(V_{1}.\) As \((a+1)^{2}\geq a(a-2)\) holds for any \(a\geq 3,\) the inequality \(\frac{a^{n-k}(a+1)^{k-1}}{(a-2)^{k-1}(a+1)^{n-k}}\leq\frac{a^{n-k-1}(a+1)^{k}} {(a-2)^{k}(a+1)^{n-k-1}}\) established when \(k\) is a positive integer. Therefore, \(n\) satisfies: \[\begin{cases}\frac{a^{n-2}(a+1)}{(a-2)(a+1)^{n-2}}<1,\\ \frac{a^{n-3}(a+1)^{2}}{(a-2)^{2}(a+1)^{n-3}}<1,\\...\\ \frac{a^{n-x}(a+1)^{x-1}}{(a-2)^{x-1}(a+1)^{n-x}}<1,\\ \frac{a^{n-x-1}(a+1)^{x}}{(a-2)^{x}(a+1)^{n-x-1}}\geq 1,\\...\\ \frac{a^{1}(a+1)^{n-2}}{(a-2)^{n-2}(a+1)^{1}}\geq 1,\\ \frac{a^{0}(a+1)^{n-1}}{(a-2)^{n-1}(a+1)^{0}}\geq 1.\end{cases} \tag{2}\] This system of inequalities means that the quantity of the product of the edge multiplicities is maximum when \(|V_{0}|=x.\) Thus we have proved the sufficiency of Lemma 2. Now we prove the necessity of Lemma 2. For the necessity of Lemma 2, if \(P(G)\) is maximum when \(|V_{0}|=x,\) then both adding a vertex to \(V_{0}\) and taking out a vertex from \(V_{0}\) will decrease the quantity of the product of the edge multiplicities, which means that: \[\begin{cases}\frac{a^{n-x}(a+1)^{x-1}}{(a-2)^{x-1}(a+1)^{n-x}}<1,\\ \frac{a^{n-x-1}(a+1)^{x}}{(a-2)^{x}(a+1)^{n-x-1}}\geq 1,\end{cases}\] Solving the above set of inequalities, we have that: \[n\in\left((x-1)\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x,x\frac{\ln(1- \frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x+1\right].\] As a result, when \(n\in\left((x-1)\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x,x\frac{\ln(1 -\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x+1\right]\), we have \(\Pi_{2,2}(a,n)=(a-2)^{\binom{x}{2}}a^{\binom{n-x}{2}}(a+1)^{x(n-x)}\). \(\Box\) **Lemma 3**: _Let \(G\) be an \(n\)-vertex graph from \(\mathcal{T}_{2,2}(a,n)\), and let \(\cup_{i=0}^{1}V_{i}\) be the canonical partition of \(G\). Then among all the graphs in \(\mathcal{T}_{2,2}(a,n)\), the product of the edge multiplicities will be maximum when \(|V_{0}|\)=2 if and only if_ \[n\in\left(\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+2,2\frac{\ln(1- \frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+3\right].\] _Proof._ By substituting \(x=2\) into the system of the inequalities (2), we get \[n\in\left(\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+2,2\frac{\ln(1- \frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+3\right],\] which proves the lemma. \(\Box\) If we let \(F(a)=\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})},a\geq 3\), we find the function is monotone decreasing for \(a\). Therefore, the maximum of \(F(a)\) is \(F(3)=4.82\). Also, it is not difficult to obtain that \(\lim_{a\rightarrow+\infty}F(a)=3\). ## 3 Proof of Theorem 1 ### The case when \(n\geq 7\) For \(n\geq 7\), let \(G\) be a graph from \(\mathcal{T}_{2,2}(a,n)\) and \(n\) be a positive integer. Let \(\cup_{i=0}^{1}V_{i}\) be the canonical partition of \(G\). Suppose \(P(G)\) is maximum when \(|V_{0}|=x\), among all the graphs in \(\mathcal{T}_{2,2}(a,n)\). 
By Lemma 2, we know: \[n\in\left((x-1)\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x,x\frac{\ln( 1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+x+1\right]. \tag{3}\] Moreover, \(\Pi_{2,2}(a,n)=(a-2)^{\binom{x}{2}}a^{\binom{n-x}{2}}(a+1)^{x(n-x)}\). Let \(G\) be a graph from \(\mathcal{F}(n,5,\binom{5}{2}a+4)\). By averaging over all \(5\)-sets, we obtain that the number of edges of \(G\) satisfying that \[e(G)\leq\left\lfloor\frac{\binom{n}{5}}{\binom{n-2}{3}}\left(\binom{5}{2}a+4 \right)\right\rfloor=\frac{n(n-1)}{2}a+\left\lfloor\frac{n(n-1)}{5}\right\rfloor.\] By Lemma 1 (i), we have \[P(G)\leq a^{\frac{n(n-1)}{2}-\lfloor\frac{n(n-1)}{5}\rfloor}(a+1)^{\lfloor \frac{n(n-1)}{5}\rfloor}.\] As a result, we have \[\Pi_{2,2}(a,n) =(a-2)^{\binom{x}{2}}a^{\binom{n-x}{2}}(a+1)^{x(n-x)}\] \[\leq ex_{\Pi}(n,5,\binom{5}{2}a+4)\] \[\leq a^{\frac{n(n-1)}{2}-\lfloor\frac{n(n-1)}{5}\rfloor}(a+1)^{ \lfloor\frac{n(n-1)}{5}\rfloor}.\] We will prove \[\lim_{a\rightarrow+\infty}\frac{a^{\frac{n(n-1)}{2}-\lfloor\frac{n(n-1)}{5} \rfloor}(a+1)^{\lfloor\frac{n(n-1)}{5}\rfloor}}{(a-2)^{\binom{x}{2}}a^{ \binom{n-x}{2}}(a+1)^{x(n-x)}}=1. \tag{4}\] Note that the case (i) in Theorem 1 follows if the equality (4) holds. Let \[\begin{cases}h_{1}(a)=a^{\frac{n(n-1)}{2}-\lfloor\frac{n(n-1)}{5}\rfloor}(a+1 )^{\lfloor\frac{n(n-1)}{5}\rfloor},\\ h_{2}(a)=(a-2)^{\binom{x}{2}}a^{\binom{n-x}{2}}(a+1)^{x(n-x)}.\end{cases}\] Expanding \(h_{1}(a)\) and \(h_{2}(a)\), we have \[h_{1}(a)=a^{\frac{n(n-1)}{2}}+...+a^{\frac{n(n-1)}{2}-\lfloor\frac{n(n-1)}{5} \rfloor},\] which is a polynomial on variable \(a\) with the highest degree \(\frac{n(n-1)}{2}\), and the coefficient of \(a^{\frac{n(n-1)}{2}}\) is \(1\). Also, \[h_{2}(a)=a^{\frac{n(n-1)}{2}}+...+(-2)^{\frac{x(x-1)}{2}}a^{\frac{(n-x)(n-x-1 )}{2}},\] which is a polynomial on variable \(a\) with the highest degree \(\frac{n(n-1)}{2}\), and the coefficient of \(a^{\frac{n(n-1)}{2}}\) is \(1\). It follows that \(\lim_{a\rightarrow+\infty}\frac{h_{1}(a)}{h_{2}(a)}=1\). ### The case when \(n=7\) For \(n=7\), note that \[7\in\left(\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+2,2\frac{\ln(1- \frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+3\right].\] Applying Lemma 3 for graph \(H\in{\cal T}_{2,2}(a,7)\), if \(H\) maximizes the product of the edge multiplicities, then \(V_{0}\) of \(H\) consists of two vertices. Therefore, \(\Pi_{2,2}(a,7)=(a-2)a^{10}(a+1)^{10}\). And \[ex_{\Pi}(7,5,\binom{5}{2}a+4)\geq\Pi_{2,2}(a,7)=(a-2)a^{10}(a+1)^{10}.\] Let \(G\) be a product-extremal graph in \({\cal F}(7,5,\binom{5}{2}a+4)\). By averaging over all 5-sets, we see that \[e(G)\leq\left\lfloor\frac{\binom{7}{5}}{\binom{5}{3}}\left(\binom{5}{2}a+4 \right)\right\rfloor=21a+\left\lfloor\frac{42}{5}\right\rfloor=21a+8.\] If \(e(G)\leq 21a+7\), then by Lemma 1 (i), \[P(G)\leq a^{14}(a+1)^{7}.\] Suppose now that \[e(G)=21a+8. \tag{5}\] If \(G\) contains at least one edge of multiplicity at most \(a-1\), then by Lemma 1 (ii), \(P(G)\leq(a-1)a^{11}(a+1)^{9}\). The equation holds if and only if there are one edge with multiplicity \(a-1\) and \(11\) edges with multiplicity \(a\) and \(9\) edges with multiplicity \(a+1\). On the other hand, suppose all the edges of \(G\) have multiplicity at least \(a\). Since \(G\in{\cal F}(7,5,\binom{5}{2}a+4)\), the edges in \(G\) have multiplicity between \(a\) and \(a+4\). We need to consider the various values of the edge multiplicity of \(G\), beginning with the easiest cases. Case I: Every edge of \(G\) has multiplicity either \(a\) or \(a+1\). 
From (5), we obtain that \(G\) has exactly 8 edges of multiplicity \(a+1\). We denote by \(G^{a+1}\) the graph spanned by edges of multiplicity \(a+1\) of \(G\). Then we claim that there is a cycle on 4 vertices in \(G^{a+1}\). **Claim 1**: _There must have a \(C_{4}\) in \(G^{a+1}\)._ _Proof._ Above all, there must be a path of length 3 in \(G^{a+1}\). Indeed, since \(G^{a+1}\) has 7 vertices and 8 edges, it must have a path of length 2. If there is not a path of length 3, then \(G^{a+1}\) must have as many as possible paths of length 2 to cover the 8 edges in \(G^{a+1}\). As there are 7 vertices, there must be a vertex adjacent to the midpoint of only one path of length 2. Then there are at most 5 edges in \(G^{a+1}\), a contradiction. So \(G^{a+1}\) must have a path of length 3. Suppose there is not a \(C_{4}\) in \(G^{a+1}\), we want to get a contradiction. We denote by \(P\) a path of length \(3\) in \(G^{a+1}\), and its vertices set is \(\{1,2,3,4\}\), such that \(i\) and \(i+1\) are adjacent in \(P\) for \(1\leq i\leq 3\). Then the endpoints of \(P\) are \(1\) and \(4\) and the remained vertices are \(5,6,7\). Since \(G^{a+1}\) does not have a \(C_{4}\), the vertices \(1\) and \(4\) are not adjacent in \(G^{a+1}\). If \(\{1,3\}\) and \(\{2,4\}\) are edges in \(G^{a+1}\), we find that the \(5\)-vertex set \(\{1,2,3,4,5\}\) spans edges with multiplicities at least \(10a+5\), a contradiction with \(G\in\mathcal{F}(7,5,\binom{2}{2}a+4)\). If one of \(\{1,3\}\) and \(\{2,4\}\) is an edge in \(G^{a+1}\), then there exists an edge between two sets \(\{1,2,3,4\}\) as \(\{5,6,7\}\) can span at most \(3\) edges in \(G^{a+1}\). Hence we can obtain a \(5\)-set containing vertices \(1,2,3,4\), which spans edges with multiplicities summation at least \(10a+5\). The statement above means that the vertices of \(P\) spanned no edges in \(G^{a+1}\) except the edges of the path. And we claim that for any vertex \(u\in\{5,6,7\}\), it can send at most one edge into \(P\) in \(G^{a+1}\). Otherwise \(\{u,1,2,3,4\}\) is a \(5\)-vertex set which spans edges with multiplicities summation at least \(10a+5\), a contradiction with \(G\in\mathcal{F}(7,5,\binom{5}{2}a+4)\). Note that there are \(8\) edges in \(G^{a+1}\) and any vertex in \(\{5,6,7\}\) can send at most one edge into \(P\) in \(G^{a+1}\) and \(\{5,6,7\}\) can span at most \(3\) edges in \(G^{a+1}\), then we find that either \(\{5,6,7\}\) span \(3\) edges or \(\{5,6,7\}\) span \(2\) edges. Case a: If \(\{5,6,7\}\) span \(2\) edges, then all vertices in \(\{5,6,7\}\) must send one edge into \(P\) in \(G^{a+1}\). Consequently, there must either exist a \(C_{4}\) or a \(5\)-set which spans edges with multiplicities summation at least \(10a+5\), a contradiction. Case b: If \(\{5,6,7\}\) span \(3\) edges, without loss of generality we suppose that the vertex \(5\) sends no edges into \(P\). There are two cases need to be consider. If there are two vertices in \(\{5,6,7\}\) are adjacent to the same vertex (without loss of generality we let it be \(1\)) in \(\{1,2,3,4\}\), then we find the two vertices with \(\{1,2,3\}\) form a \(5\)-set which spans edges with multiplicities summation at least \(10a+5\). Otherwise, without loss of generality we suppose the vertex \(6\) is adjacent to \(1\) and the vertex \(7\) is adjacent to \(4\) in \(G^{a+1}\). Now we find a \(5\)-set \(\{1,4,5,6,7\}\) which spans edges with multiplicities summation at least \(10a+5\). Both case make a contradiction with \(G\in\mathcal{F}(7,5,\binom{2}{2}a+4)\). 
Thus, there is a \(C_{4}\) in \(G^{a+1}\). Let \(C\) be a cycle of length \(4\) in \(G^{a+1}\), and its vertices set is \(\{1,2,3,4\}\), such that \(i\) and \(i+1\) are adjacent in \(C\) for \(1\leq i\leq 3\) and \(1\) is adjacent to \(4\). As \(G\in\mathcal{F}(7,5,\binom{5}{2}a+4)\), the spanning graph of \(\{1,2,3,4\}\) in \(G^{a+1}\) is only the \(C\) and \(\{5,6,7\}\) send no edges into \(C\) in \(G^{a+1}\). However, \(\{5,6,7\}\) can span at most \(3\) edges with multiplicity \(a+1\), which means there are at most \(7\) edges with multiplicity \(a+1\) a contradiction. As a result, edges in \(G\) can not only have multiplicity either \(a\) or \(a+1\). Case II \(:G\) contains an edge \(e_{0}\) with multiplicity \(a+2\). Since any 5 vertices can span edges with multiplicities at most \(10a+4\), then the multiplicity of other edges are one of \(a,a+1,a+2\). We consider the following three subcases. Subcase 1: Suppose no edge with multiplicity \(a+1\) or \(a+2\) is incident with \(e_{0}\). Then \[P(G) \leq\omega(e_{0})a^{10}ex_{\Pi}(5,5,{5\choose 2}a+4)\] \[=(a+2)a^{10}(a+1)^{4}a^{6}\] \[=a^{16}(a+1)^{4}(a+2).\] Subcase 2: Suppose there is some vertex \(v\) sending an edge of multiplicity \(a+2\) to one of endpoints of \(e_{0}\). Then the vertex is unique, it sends exactly one such edge into \(e_{0}\), and every other edges must have multiplicity \(a\) as any 5 points can span edges with multiplicities at most \(10a+4\). Thus \(P(G)\leq(a+2)^{2}a^{19}\). Subcase 3: Suppose there is some vertex \(v\) sending an edge \(e_{1}\) of multiplicity \(a+1\) to one of the endpoints of \(e_{0}\). Let the endpoints of \(e_{0}\) be \(1,2\) and \(v\) be 3. Without losing generality we suppose 2 is adjacent to 3. Denote by \(P\) the path with vertices 1, 2, 3. If 1 and 3 are adjacent by an edge \(e_{2}\) with multiplicity \(a+1\), then all other edges must have multiplicity \(a\). Otherwise we suppose the vertex 4 sends an edge with multiplicity more than \(a\), we find \(\{1,2,3,4\}\) with any vertex in \(\{5,6,7\}\) form a 5-vertex set which spans edges with multiplicities summation at least \(10a+5\). A contradiction with \(G\in\mathcal{F}(7,5,{5\choose 2}a+4)\). Then \(P(G)\leq(a+2)(a+1)^{2}a^{18}\). If 1 and 3 are adjacent by an edge \(e_{3}\) with multiplicity \(a\). Then there is at most one vertex in \(\{4,5,6,7\}\) which can send an edge to \(P\) with multiplicity at most \(a+1\). * If there is a vertex 4 sending an edge \(e_{4}\) with multiplicity \(a+1\) to \(P\), then the rest edges except \(\{e_{0},e_{1},e_{3},e_{4}\}\) in \(G[\{1,2,3,4\}]\) must have multiplicity \(a\) and the rest vertices \(5,6,7\) only can send edge with multiplicity \(a\) into \(G[\{1,2,3,4\}]\). Now we consider the edges in \(G[\{5,6,7\}]\). If there is one edge \(e_{5}\), without losing generality we assume that the endpoints of \(e_{5}\) are \(6,7\), and \(e_{5}\) has multiplicity \(a+2\). Then \(\{1,2,3,6,7\}\) is a 5-vertex set which spans edges with multiplicities summation exceed \(10a+4\), a contradiction. If there is no edge having multiplicity \(a+2\), then \(\{5,6,7\}\) can span at most 2 edges with multiplicity \(a+1\). So \(P(G)\leq a^{16}(a+1)^{4}(a+2)\). * If there is no vertex in \(\{4,5,6,7\}\) sends an edge with multiplicity \(a+1\) to \(P\). We claim that there is no edge with multiplicity \(a+2\) in \(G[\{5,6,7\}]\). 
Otherwise we suppose that \(G[\{5,6,7\}]\) has an edge \(e^{{}^{\prime}}\) incident with \(5,6\) with multiplicity \(a+2\), then \(\{1,2,3,5,6\}\) is a 5-vertex set which spans edges with multiplicities summation at least \(10a+5\), a contradiction. Meanwhile we find there are at most 3 edges with multiplicity \(a+1\), so \(P(G)\leq a^{17}(a+1)^{3}(a+2)\). In conclusion, if \(G\) contains a edge \(e_{0}\) with multiplicity \(a+2\), then we have the following inequalities \[\begin{cases}P(G)\leq a^{16}(a+1)^{4}(a+2),\\ P(G)\leq a^{19}(a+2)^{2},\\ P(G)\leq a^{18}(a+1)^{2}(a+2),\\ P(G)\leq a^{16}(a+1)^{4}(a+2),\\ P(G)\leq a^{17}(a+1)^{3}(a+2).\end{cases}\] Through comparing the right-hand side of these inequalities, we find \(a^{16}(a+1)^{4}(a+2)\) is the maximum value. So if \(G\) contains an edge \(e_{0}\) with multiplicity \(a+2\), then \(P(G)\leq a^{16}(a+1)^{4}(a+2)\). Case III : \(G\) contains an edge \(e_{0}\) of multiplicity \(a+3\). Since any 5 vertices can span edges with multiplicities at most \(10a+4\), every edge except for \(e_{0}\) has multiplicity either \(a\) or \(a+1\). Suppose there is some vertex \(v\) sending an edge of multiplicity \(a+1\) to one of the endpoints of \(e_{0}\). As any 5 vertices can span edges with multiplicities at most \(10a+4\), every other edge has multiplicity exactly \(a\). Then \[P(G)\leq P(G[e_{0}\cup v])a^{19}=a^{19}(a+1)(a+3).\] On the other hand suppose there is no edge with multiplicity \(a+1\) connect to \(e_{0}\). We claim that there is no path of length 2 in \(G^{a+1}\), otherwise there will be a 5-set which span edges with multiplicities at most \(10a+4\). Hence there is at most two edge of multiplicity \(a+1\). Then \[P(G)\leq a^{18}(a+1)^{2}(a+3).\] As \[\frac{a^{18}(a+1)^{2}(a+3)}{a^{19}(a+1)(a+3)}>1\ \ \text{when}\ \ a\geq 3,\] then if \(G\) contains an edge \(e_{0}\) of multiplicity \(a+3\), we have \[P(G)\leq a^{18}(a+1)^{2}(a+3).\] Case IV \(:G\) contains an edge of multiplicity \(a+4\). Since any \(5\) vertices can span edges with multiplicities at most \(10a+4\), every other edge has multiplicity exactly \(a\), and \(P(G)=(a+4)a^{20}\). Consider all the upper bounds for \(P(G)\) we obtained above, we have \[\begin{cases}P(G)\leq a^{14}(a+1)^{7},\\ P(G)\leq(a-1)a^{11}(a+1)^{9},\\ P(G)\leq a^{16}(a+1)^{4}(a+2),\\ P(G)\leq a^{18}(a+1)^{2}(a+3),\\ P(G)\leq a^{20}(a+4).\end{cases}\] Through comparison we finally get that \((a-1)a^{11}(a+1)^{9}\) is the maximum upper bound for \(P(G)\). However, this bound is not achievable. In fact, we obtained this bound \((a-1)a^{11}(a+1)^{9}\) when \(G\) contains at least one edge of multiplicity at most \(a-1\). The equality holds if and only if there is one edge, denoted by \(e\) with multiplicity \(a-1\) and \(11\) edges with multiplicity \(a\) and \(9\) edges with multiplicity \(a+1\). By claim \(1\), we know that \(G^{a+1}\) must have a \(C_{4}\). Let the vertex set of \(C_{4}\) be \(\{1,2,3,4\}\). Suppose \(\{1,2,3,4\}\) contains both of the endpoints of \(e\), let the endpoints of \(e\) be \(1,3\). As \(\{1,2,3,4\}\) can span at most \(5\) edges with multiplicity \(a+1\) and \(\{5,6,7\}\) can span at most \(3\) edges with multiplicity \(a+1\), there is at least one vertex \(u\in\{5,6,7\}\) sending edge with multiplicity \(a+1\) into \(\{1,2,3,4\}\). Actually, \(\{1,2,3,4,u\}\) is a \(5\)-vertex set which spans edges with multiplicities summation at least \(10a+5\), a contradiction. Suppose \(\{1,2,3,4\}\) contains one of the endpoints of \(e\), then let the endpoints of \(e\) be \(1,5\). 
As \(\{1,2,3,4\}\) can span only \(4\) edges with multiplicity \(a+1\) and \(\{5,6,7\}\) can span at most \(3\) edges with multiplicity \(a+1\). So there is at least one vertex in \(\{5,6,7\}\) sending edge with multiplicity \(a+1\) into \(\{1,2,3,4\}\). If \(6\) or \(7\) sends edges with multiplicity \(a+1\) into \(\{1,2,3,4\}\), then \(\{1,2,3,4\}\) with \(6\) or \(7\) form a \(5\)-vertex set which spans edges with multiplicities summation at least \(10a+5\), a contradiction. Otherwise \(5\) must send exactly \(2\) edges with multiplicity \(a+1\) into \(\{1,2,3,4\}\). Then \(\{5,6,7\}\) with the endpoints of the two edges form a \(5\)-vertex set which spans edges with multiplicities summation at least \(10a+5\), a contradiction. Suppose \(\{1,2,3,4\}\) contains no endpoints of \(e\), let the endpoints of \(e_{0}\) be \(5,6\). As \(\{1,2,3,4\}\) can span only \(4\) edges with multiplicity \(a+1\) and \(\{5,6,7\}\) can span at most \(2\) edges with multiplicity \(a+1\). So there is at least one vertex \(u\in\{5,6,7\}\) sending edge with multiplicity \(a+1\) into \(\{1,2,3,4\}\). Actually, \(\{1,2,3,4,u\}\) is a \(5\)-vertex set which spans edges with multiplicities summation at least \(10a+5\), a contradiction. Therefore, we have that \(P(G)\) is strictly less than \((a-1)a^{11}(a+1)^{9}\). As \(\Pi_{2,2}(a,7)\in\mathcal{F}(7,5,\binom{5}{2}a+4)\), we have \[\Pi_{2,2}(a,7) =(a-2)(a+1)^{10}a^{10}\] \[\leq ex_{\Pi}(7,5,\binom{5}{2}a+4)\] \[<(a-1)a^{11}(a+1)^{9},\] and \[\lim_{a\rightarrow+\infty}\frac{(a-1)a^{11}(a+1)^{9}}{(a-2)(a+1)^{10}a^{10}}=1\] which proves the case (ii) in Theorem 1. ### The case when \(n=6\) For \(n=6\), let \(H\) be a product extremal graph in \(\mathcal{F}(6,5,\binom{5}{2}a+4)\). By averaging over all \(5\)-sets, we see that \[e(H)\leq\frac{\binom{6}{5}}{\binom{4}{3}}\left(\binom{5}{2}a+4\right)=15a+6.\] By Lemma 1 (i), \(P(H)\leq a^{9}(a+1)^{6}.\) Therefore, \(ex_{\Pi}(6,5,\binom{5}{2}a+4)\leq a^{9}(a+1)^{6}.\) On the other hand, consider the \(6\)-vertex multigraph \(H^{{}^{\prime}}\) whose edges of multiplicity \(a+1\) form a \(6\)-cycle \(C_{6}\), and all other edges are of multiplicity \(a\). We have \(P(H^{{}^{\prime}})=a^{9}(a+1)^{6}\) and \(H\in\mathcal{F}(6,5,\binom{5}{2}a+4)\). Hence, \(ex_{\Pi}(6,5,\binom{5}{2}a+4)\geq a^{9}(a+1)^{6}.\) In conclusion, we have \(ex_{\Pi}(6,5,\binom{5}{2}a+4)=a^{9}(a+1)^{6}\). Let \(G\in\mathcal{T}_{2,2}(a,6)\). When \(a=3,4\), we have \[6\in\left(1,\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+2\right],\] which means the quantity \(P(G)\) is maximised when \(V_{0}=\{1\}\) and \(V_{1}=\{2,3,4,5,6\}\) by lemma 2. Then \(\mathcal{T}_{2,2}(a,6)=(a+1)^{5}a^{10}\) when \(a=3,4\). When \(a\geq 5\), we have \[6\in\left(\frac{\ln(1-\frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+2,2\frac{\ln(1- \frac{3}{a+1})}{\ln(1-\frac{1}{a+1})}+3\right],\] which means the quantity \(P(G)\) is maximised when \(V_{0}=\{1,2\}\) and \(V_{1}=\{3,4,5,6\}\) by lemma 2. Then \(\mathcal{T}_{2,2}(a,6)=(a-2)(a+1)^{8}a^{6}\) when \(a\geq 5\). So we have that \[\Pi_{2,2}(a,6)=\begin{cases}(a+1)^{5}a^{10},a=3,4;\\ (a-2)(a+1)^{8}a^{6},a\geq 5.\end{cases}\] Comparing \(\Pi_{2,2}(a,6)\) and \(a^{9}(a+1)^{6}\), we find that \((a+1)^{5}a^{10}\) is strictly less than \(a^{9}(a+1)^{6}\) when \(a=3,4\), and \((a-2)(a+1)^{8}a^{6}\) is strictly less than \(a^{9}(a+1)^{6}\) when \(a\geq 5\). 
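These comparisons are easy to confirm numerically; the following quick check (plain Python, purely illustrative and not part of the original argument) evaluates both sides for small values of \(a\):

```python
# Sanity check of the comparison between Pi_{2,2}(a,6) and a^9 (a+1)^6.
for a in range(3, 21):
    rhs = a ** 9 * (a + 1) ** 6
    lhs = (a + 1) ** 5 * a ** 10 if a in (3, 4) else (a - 2) * (a + 1) ** 8 * a ** 6
    assert lhs < rhs, (a, lhs, rhs)
```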
Hence \[ex_{\Pi}(6,5,\binom{5}{2}a+4)=a^{9}(a+1)^{6}>\Pi_{2,2}(a,6),\ \ \text{when}\ \ a\geq 3.\] ### The case when \(n=5\) For \(n=5\), let \(G\) be a product-extremal graph from \(\mathcal{F}(5,5,\binom{5}{2}a+4)\). Then \(e(G)\leq\binom{5}{2}a+4=10a+4\). By Lemma 1 (i), we have \(P(G)\leq a^{(10-4)}(a+1)^{4}=a^{6}(a+1)^{4}\). Hence, \(ex_{\Pi}(5,5,\binom{5}{2}a+4)\leq a^{6}(a+1)^{4}\). On the other hand, partitioning [5] into \(V_{0}=\{1\}\) and \(V_{1}=\{2,3,4,5\}\), we have that \[(a+1)^{4}a^{6}\leq\Pi_{2,2}(a,5)\leq ex_{\Pi}(5,5,\binom{5}{2}a+4)\leq a^{6}(a +1)^{4}.\] Therefore, we have proved the case (iv) in Theorem 1. **Acknowledgments.** Ran Gu was partially supported by National Natural Science Foundation of China (No. 11701143).
2303.11157
Differentially Private Games via Payoff Perturbation
In this paper, we study network games where players are involved in information aggregation processes subject to a differential privacy requirement on players' payoff functions. We propose a Laplace linear-quadratic functional perturbation (LLQFP) mechanism, which perturbs players' payoff functions with linear-quadratic functions whose coefficients are drawn from truncated Laplace distributions. For monotone games, we show that the LLQFP mechanism maintains the concavity property of the perturbed payoff functions and produces a perturbed Nash equilibrium (NE) whose distance from the original NE is bounded and adjustable by Laplace parameter tuning. We then focus on linear-quadratic games, a fundamental class of network games in which players' payoffs are linear-quadratic functions, and derive explicit conditions under which the LLQFP mechanism ensures differential privacy with a given privacy budget. Lastly, numerical examples are provided to verify the advantages of the LLQFP mechanism.
Yijun Chen, Guodong Shi
2023-03-14T01:31:12Z
http://arxiv.org/abs/2303.11157v1
# Differentially Private Games via Payoff Perturbation ###### Abstract In this paper, we study network games where players are involved in information aggregation processes subject to the differential privacy requirement for players' payoff functions. We propose a Laplace linear-quadratic functional perturbation (LLQFP) mechanism, which perturbs players' payoff functions with linear-quadratic functions whose coefficients are produced from truncated Laplace distributions. For monotone games, we show that the LLQFP mechanism maintains the concavity property of the perturbed payoff functions, and produces a perturbed NE whose distance from the original NE is bounded and adjustable by Laplace parameter tuning. We focus on linear-quadratic games, which is a fundamental type of network games with players' payoffs being linear-quadratic functions, and derive explicit conditions on how the LLQFP mechanism ensures differential privacy with a given privacy budget. Lastly, numerical examples are provided for the verification of the advantages of the LLQFP mechanism. ## I Introduction Games on networks has gained increased traction in recent years. It has been applied in a variety of fields such as online E-commerce in social networks [1], route planning in transportation networks [2], and resource allocations in wireless communication networks [3]. There are typically three information aggregation processes in games for players to achieve network-level goals: the distributed Nash equilibrium (NE) seeking [4, 5, 6], best-response dynamics [7, 8, 9], and no-regret learning [10, 11, 12, 13]. What these frameworks have in common is that players need to share information with others in a dynamic process, such as their actions, payoff gradients, or payoffs, and then choose their actions for the next stage based on the information received and their own payoff functions. Clearly, players' payoff functions are encoded in the shared information. However, players' payoff functions are often sensitive and private [14]. As a result, players' payoff functions are at risk of privacy leakage. Owing to differential privacy [15, 16], it is possible for players to share information and decide their actions over time to achieve the desired outcome while keeping their payoff functions from being compromised. Differentially private systems have been well studied in the sense that lots of privacy algorithms are designed for various tasks such as average consensus [17, 18], estimation and filtering [19], and convex optimization [20, 21, 22]. As for differentially private games, the works of [23, 24] have focused on privacy-preserving distributed Nash seeking strategy design for aggregated games. Problem of InterestIn this paper, we consider a network game where players are interconnected through an interaction/communication network. Players are involved in information aggregation processes that requires them to share information to accomplish certain collective goal. The shared information that encodes the sensitive information of payoff functions is monitored by adversaries. As a result, we aim to protect the differential privacy of players' payoff functions. We are inspired by [22, 25]. We propose a Laplace linear-quadratic functional perturbation (LLQFP) mechanism, which perturbs players' original payoff functions with linear-quadratic functional perturbation. The coefficients of those perturbation are generated by truncated Laplace distributions. 
The idea is to let players participate in certain information aggregation process using the perturbed payoff functions. If the LLQFP mechanism preserves differential privacy, then it also enforces differential privacy of information aggregation processes by the resilience to post-processing of differential privacy [16]. In the literature of differentially private information aggregation processes, a common approach is to add noises to players' shared information [20, 21, 23, 24, 26]. For this approach, perturbation has to be designed in accordance with a diverse set of objectives during information aggregation processes. Moreover, perturbation has to be added at all time steps, and therefore the longer the operating time of information aggregation processes is, the more amount of perturbation is required to add. Functional perturbation is easier to implement since its design does not depend on specific tasks. In addition, functional perturbation only adds perturbation once to produce the perturbed payoff functions, regardless of the number of steps players participate in the following information aggregation processes. Functional perturbation was also proposed by [22, 25]. They studied the distributed optimization problem subject to the requirement of differential privacy. Their work decomposed the objective functions into an infinite sequence of coefficients corresponding to the elements of a orthogonal basis in a separable Hilbert space, and added noises to the infinite coefficient sequence. Unfortunately, truncation is inevitable in practical implementations. Our work focuses on generalizing functional perturbation to the differentially private game setting. Instead of considering infinite expansion, we propose a mechanism that does not involve the decomposition of function space, but directly apply linear-quadratic functions as perturbation avoiding the truncation problem. ContributionsIn this paper, we study network games under the differential privacy requirement for players' payoff functions. We make the following contributions: * We extend the notion of differential privacy to the network game setting, and propose a Laplace linear-quadratic functional perturbation (LLQFP) algorithm, which perturbs players' original payoff functions with linear-quadratic functional perturbation whose coefficients are generated according to truncated Laplace distributions. * For monotone games, we show that the LLQFP algorithm maintains the concavity property of the perturbed payoff functions and yields a \(\gamma\)-accurate perturbed NE whose distance from the original NE is upper bounded by any prescribed constant \(\gamma\). * We investigate LQ games that players' payoff functions are parameterized. It serves as a tutorial example showing how Laplace parameters are selected to ensure certain differential privacy requirement. * Experiments are conducted to verify the advantages of the LLQFP algorithm. OrganizationThe remainder of the paper is organized as follows. For privacy concerns about players' payoff functions in network games, we formalize our problem in Section II. In Section III, we propose the LLQFP algorithm. In Section IV, we consider monotone games, and show the advantages of the LLQFP algorithm. In Section V, we consider LQ games, and investigate Laplace parameter conditions that can guarantee certain different privacy requirement. Numerical examples are presented in Section VI. This paper ends with concluding remarks in Section VII. 
## II Problem Formulation ### _Network Games_ Consider a network game with \(n\) players. The players are interconnected through an interaction/communication network. The interaction/communication network is associated with a graph \(\mathrm{G}(\mathrm{V},\mathrm{E})\), where \(\mathrm{V}:=\{1,2,\dots,n\}\) represents the nodes (players), and \(\mathrm{E}\) defines the links (the interdependency among players). Each player \(i\in\mathrm{V}\) holds an action \(x_{i}\) from a compact convex action space \(\mathcal{A}_{i}\subseteq\mathbb{R}\). The aggregated action profile of all players and the action profile excluding player \(i\) are denoted by \(\mathbf{x}:=[x_{1},\dots,x_{n}]^{\top}\) and \(\mathbf{x}_{-i}=[x_{1},\dots,x_{i-1},x_{i+1},\dots,x_{n}]^{\top}\), respectively. Each player \(i\) then receives her payoffs determined by a payoff function, i.e., \(u_{i}=f_{i}(x_{i},\mathbf{x}_{-i}),\) where the payoff function \(f_{i}\in C^{2}(\mathcal{A})\) is twice continuously differentiable over \(\mathcal{A}:=\Pi_{i\in\mathrm{V}}\mathcal{A}_{i}\). A common solution concept in game theory is called Nash equilibrium. It depicts an action profile under which no player may gain by simply modifying her action while others maintain theirs unaltered. We denote the NE by \(\mathbf{x}^{*}:=[x_{1}^{*},\dots,x_{n}^{*}]^{\top}\). **Definition 1** (Nash equilibrium).: _An action profile \(\mathbf{x}^{*}\) is said to be a pure-strategy NE of a game if \(f_{i}(x_{i}^{*},\mathbf{x}_{-i}^{*})\geq f_{i}(x_{i},\mathbf{x}_{-i}^{*}), \forall x_{i}\in\mathcal{A}_{i},\forall i\in\mathrm{V}\)._ Information Aggregation ProcessesIn network games, there are many network-level information aggregation operations that require players to share dynamical states over a horizon \(t\in\{0,1,\dots,T\}\) to accomplish collective goals such as the distributed Nash seeking [4, 5, 6], best-response dynamics [7, 8, 9], and no-regret learning [10, 11, 12, 13]. **Example 1** (Distributed Nash seeking [4, 5, 6]) At time \(t\), each player \(i\) holds a dynamical state \(\mathbf{y}_{i}(t)\) that typically consists of her action and her estimate of other players' actions. Then, each player \(i\) shares \(\mathbf{y}_{i}(t)\) with other players via certain interaction/communication network. Next, each player \(i\) updates her dynamical state for \(\mathbf{y}_{i}(t+1)\) based on the received players' dynamical states and her own payoff function \(f_{i}\). The network-level objective is for (perhaps part of) the sequence \([\mathbf{y}_{1}(t);\dots;\mathbf{y}_{n}(t)],t=0,1,\dots\) to converge to a NE. **Example 2** (Best-response Dynamics [7, 8, 9]) At time \(t\), each player \(i\) holds a dynamical state \(\mathbf{y}_{i}(t)\) that represents her action. Then, each \(\mathbf{y}_{i}(t)\) is observed by or communicated with other players. Next, each player \(i\) updates her state \(\mathbf{y}_{i}(t+1)\) as the action that maximizes her payoff function given other players' current actions. Best-response dynamics is a behavioral model depicting how players strategically make decisions in a sequential manner. Sometimes, best-response dynamics converge to a NE. **Example 3** (No-regret Learning [10, 11, 12, 13]) At time \(t\), each player \(i\) holds a dynamical state \(\mathbf{y}_{i}(t)\) that typically consists of her action and her estimate of other players' payoff gradients. Then, each player \(i\) shares \(\mathbf{y}_{i}(t)\) with other players via certain interaction/communication network. 
Next, each player \(i\) updates her dynamical state upon the received players' dynamical states and her own payoff function \(f_{i}\). The network-level objective of no-regret learning is for the sequence \(\mathbf{y}_{i}(t),t=0,1,\dots\) to minimize the regret of player \(i\) as the cumulative loss compared with a plain/single action in hindsight. ### _Problem Definition_ **Differentially Private Information Aggregation Processes** From the above network-level information aggregation operations, it is clear that the \(\mathbf{y}_{i}(t),i=1,\dots,n,t=0,1,\dots,T\) encode the information of payoff functions. Those states \(\mathbf{y}_{i}(t)\) are shared by player \(i\) with other players. However, players' payoff functions are often private and contains sensitive information [14]. As a result, payoff functions face privacy risk in the information aggregation processes. Differential privacy has been a standard tool to protect an individual's data privacy in a system where aggregate information is publicly published, but individual information is privately withheld [16]. Specifically, to protect the differential privacy of players' payoff functions, the mapping from \(\mathbf{f}:=[f_{1};\dots;f_{n}]\) to \(\mathbf{Y}:=[\mathbf{y}_{1}(0);\dots;\mathbf{y}_{1}(T);\dots;\mathbf{y}_{n}(0 );\dots;\mathbf{y}_{n}(T)]\) should satisfy the following differential privacy condition. **Definition 2** (\(\mathcal{W}\)-adjacency [25]).: _Given any normed vector space \((\mathcal{W},||\cdot||_{\mathcal{W}})\), \(\mathbf{f}\) and \(\mathbf{f}^{\prime}\) are said be \(\mathcal{W}\)-adjacent if there exists \(i_{0}\in\mathrm{V}\) such that_ \[f_{i}=f_{i}^{{}^{\prime}}\,,\qquad i\neq i_{0}; \tag{1a}\] \[f_{i_{0}}-f_{i_{0}}^{{}^{\prime}}\in\mathcal{W}. \tag{1b}\] The normed vector space \(\mathcal{W}\) is a design choice that we specify later according to the class of payoff functions. **Definition 3** (\((\epsilon,\delta)\)-differential privacy).: _The mapping \(\mathcal{M}\) is said to preserve \((\epsilon,\delta)\)-differential privacy if for any subset \(\mathcal{M}\subseteq\mathrm{range}\big{(}\mathcal{M}\big{)}\),_ \[\mathbb{P}(\mathcal{M}(\mathbf{f})\in\mathcal{M})\leq e^{\epsilon}\mathbb{P} (\mathcal{M}(\mathbf{f}^{\prime})\in\mathcal{M})+\delta, \tag{2}\] _holds for any two \(\mathcal{W}\)-adjacent payoff functions \(\mathbf{f}\) and \(\mathbf{f}^{\prime}\)._ ### _Functional Perturbation_ We propose a functional perturbation mechanism from \(\mathbf{f}\) to \(\hat{\mathbf{f}}\) where certain perturbation is added to produce \(\hat{\mathbf{f}}\), and then players use \(\hat{\mathbf{f}}\) to participate in information aggregation processes. As a result, the privacy of players' payoff functions \(\mathbf{f}\) may be protected in the sense that Definition 3 may be satisfied. If the mapping from \(\mathbf{f}\) to \(\hat{\mathbf{f}}\) preserves differential privacy, then differential privacy of the mapping from \(\mathbf{f}\) to \(\mathbf{Y}\) is also enforced by the immune to post-processing [16]. There are a few practical challenges in designing such a mechanism: * Differential privacy of the functional perturbation mechanism from \(\mathbf{f}\) to \(\hat{\mathbf{f}}\) should be provable. * The basic regularity property of the game should be maintained. In particular, if \(\mathbf{f}\) are concave, \(\hat{\mathbf{f}}\) should be also concave. * The distance between the NE of the original game and the NE of the perturbed game should be upper bounded and adjustable by parameter tuning. 
In this paper, we aim to develop a distributed algorithm to realize this functional perturbation mechanism that can address the above challenges. ## III The Proposed Algorithm The truncated Laplace distribution truncated by \([-a,a]\) with mean zero and scale parameter \(\lambda\), denoted \(\mathscr{L}_{tr}(a,\lambda)\), has probability density function \[p(x;a,\lambda)=\left\{\begin{array}{ll}Be^{-|x|/\lambda},&\text{for }x\in[-a,a],\\ 0,&\text{otherwise,}\end{array}\right. \tag{3}\] where \(B=\frac{1}{2\lambda(1-e^{-a/\lambda})}\). Denote the neighbors of player \(i\) by the set \(\mathrm{N}_{i}\subset\mathrm{V}\). We sort the indices of player \(i\)'s neighbors in ascending order in the set \(\mathrm{O}_{i}:=\{i_{1},i_{2},\ldots,i_{|\mathrm{N}_{i}|}\}\). For example, if player \(j\) is player \(i\)'s \(k\)th neighbor, then \(i_{k}=j\). ``` 0: Laplace parameters \(a,\lambda\); payoff functions \(f_{1},\ldots,f_{n}\); 0: perturbed payoff functions \(\hat{f}_{1},\ldots,\hat{f}_{n}\) 1: Each player \(i\in\mathrm{V}\) independently generates a sequence random numbers \(\omega_{i,k}\), \(k=1,\ldots,|\mathrm{N}_{i}|+2\), according to \(\mathscr{L}_{tr}(a,\lambda)\) in (3). 2: Each player \(i\in\mathrm{V}\) computes \[q_{ij}=\left\{\begin{array}{ll}\frac{\omega_{i,(|\mathrm{N}_{i}|+1)}}{2}+ \frac{a(|\mathrm{N}_{i}|+1)}{2},&j=i,\\ \omega_{i,k},&j=i_{k},\\ 0,&\text{otherwise,}\end{array}\right.\] (4a) \[\beta_{i}=\omega_{i,(|\mathrm{N}_{i}|+2)}.\] (4b) 3: Each player \(i\in\mathrm{V}\) employs a perturbed payoff function based on \(\mathbf{q}_{i}:=[q_{i1},\ldots,q_{in}]^{\top}\) and \(\beta_{i}\): \[\hat{f}_{i}(x_{i},\mathbf{x}_{-i})=f_{i}(x_{i},\mathbf{x}_{-i})-x_{i}\mathbf{ q}_{i}^{\top}\mathbf{x}-\beta_{i}x_{i}.\] (5) 4:return\(\hat{f}_{1},\ldots,\hat{f}_{n}\) ``` **Algorithm 1** Laplace Linear-quadratic Functional Perturbation Algorithm ### _LLQFP Algorithm_ We next propose a Laplace linear-quadratic functional perturbation Algorithm in Algorithm 1. ``` 0: Let \(\mathbf{f}\) be a function the network structure of the game may be maintained. ### _Positivity Guarantee_ We now present a property of the coefficients generated according to \(\mathscr{L}_{tr}(a,\lambda)\), which is necessary for the Theorems later. Denote \(\mathbf{d}_{i}=[q_{i1},\ldots,q_{i(i-1)},2q_{ii},q_{i(i+1)},\ldots,q_{in}]^{\top}\) and \(\mathbf{D}=[\mathbf{d}_{1}\ \mathbf{d}_{2}\ \ldots\ \mathbf{d}_{n}]\). Also denote \(\boldsymbol{\beta}=[\beta_{1},\ldots,\beta_{n}]^{\top}\). **Lemma 1**.: \(\mathbf{D}^{\top}\) _is a positive semidefinite matrix._ Proof.: We focus on the magnitude of the diagonal element in each row, and the sum of the magnitudes of all non-diagonal elements in that row. According to (4a), we have \(|2q_{ii}|\in[a|\mathrm{N}_{i}],a(|\mathrm{N}_{i}|+2)]\) and \(\sum_{i\neq j}|q_{ij}|\in[0,a|\mathrm{N}_{i}|],\forall i\in\mathrm{V}\). Since \(|2q_{ii}|\geq\sum_{i\neq j}|q_{ij}|\), \(\mathbf{D}^{\top}\) is diagonally dominant. A symmetric diagonally dominant real matrix with nonnegative diagonal entries is positive semidefinite. Hence, \(\mathbf{D}^{\top}\) is a positive semidefinite matrix. ## IV Monotone Games In what follows, we look at a class of strongly monotone games, present its basic properties, and show the advantages of Algorithm 1. 
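Before analysing these properties, it may help to see Steps 1–3 of Algorithm 1 written out explicitly. The following is a minimal sketch (our own illustration, not the authors' implementation): the truncated Laplace draw in (3) is realised by rejection sampling from an ordinary Laplace distribution, each payoff is represented for simplicity as a function of the full action profile, and names such as `llqfp_coefficients` are introduced here for convenience.

```python
import numpy as np

def sample_truncated_laplace(a, lam, rng):
    """One draw from L_tr(a, lam) in (3): a Laplace(0, lam) sample
    conditioned on [-a, a], obtained here by rejection sampling."""
    while True:
        x = rng.laplace(loc=0.0, scale=lam)
        if abs(x) <= a:
            return x

def llqfp_coefficients(neighbors, a, lam, rng):
    """Steps 1-2 of Algorithm 1.  neighbors[i] lists N_i for player i.
    Returns (Q, beta) with Q[i, j] = q_ij and beta[i] = beta_i as in (4)."""
    n = len(neighbors)
    Q = np.zeros((n, n))
    beta = np.zeros(n)
    for i in range(n):
        Ni = sorted(neighbors[i])
        omega = [sample_truncated_laplace(a, lam, rng) for _ in range(len(Ni) + 2)]
        # (4a): q_ii = omega_{|N_i|+1}/2 + a(|N_i|+1)/2, so 2*q_ii dominates sum_j |q_ij|
        Q[i, i] = omega[len(Ni)] / 2.0 + a * (len(Ni) + 1) / 2.0
        for k, j in enumerate(Ni):      # q_{i, i_k} = omega_{i, k}
            Q[i, j] = omega[k]
        beta[i] = omega[len(Ni) + 1]    # (4b): beta_i = omega_{i, |N_i|+2}
    return Q, beta

def perturbed_payoff(f_i, i, q_i, beta_i):
    """Step 3, Eq. (5): hat f_i(x) = f_i(x) - x_i * q_i^T x - beta_i * x_i,
    where f_i takes the full action profile x as a NumPy vector."""
    return lambda x: f_i(x) - x[i] * float(q_i @ x) - beta_i * x[i]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neighbors = [{1}, {0, 2}, {1}]           # a 3-player path graph
    Q, beta = llqfp_coefficients(neighbors, a=1.0, lam=0.5, rng=rng)
    f0 = lambda x: -x[0] ** 2 + x[0] * x[1]  # an illustrative payoff for player 0
    f0_hat = perturbed_payoff(f0, 0, Q[0], beta[0])
    print(f0_hat(np.array([0.2, -0.1, 0.3])))
```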
For each \(i\in\mathrm{V}\), we denote the gradient of \(f_{i}\) with respect to \(x_{i}\) by \(\nabla_{x_{i}}f_{i}:=\frac{\partial f_{i}}{\partial x_{i}}\in\mathbb{R}\), and \(\boldsymbol{\phi}(\mathbf{x}):=[\nabla_{x_{1}}f_{1},\ldots,\nabla_{x_{n}}f_{n}] ^{\top}\in\mathbb{R}^{n}\). We impose the following assumption of a class of strongly monotone games. **Assumption 1** ([23]).: _For some \(l_{m}>0\) and for all \(\mathbf{x}^{\prime},\mathbf{x}\in\mathcal{A},\)_ \[\sum_{i\in\mathrm{V}}(\phi_{i}(\mathbf{x})-\phi_{i}(\mathbf{x}^{\prime}))(x_{ i}-x_{i}^{\prime})\leq-l_{m}\|\mathbf{x}-\mathbf{x}^{\prime}\|^{2}. \tag{6}\] Assumption 1 implies that each player's original payoff function \(\mathbf{f}\) is strictly concave in \(x_{i}\)[27]. We introduce the definition of concavity preservation. **Definition 4** (Concavity preservation [25]).: _Let Assumption 1 hold. Algorithm 1 is said to be concavity-preserving if each \(\hat{f}_{i}\) is strictly concave in \(x_{i}\) for all \(i\in\mathrm{V}.\)_ ### _Concavity Preservation_ The following result proves that under Assumption 1, Algorithm 1 is concavity-preserving. **Theorem 1**.: _Let Assumption 1 hold. Then, Algorithm 1 is concavity-preserving._ Proof.: Under Assumption 1, each player \(i\)'s original payoff function is strictly concave in \(x_{i}\). We consider \(\mathbf{x}=[x_{1},\ldots,x_{i},\ldots,x_{n}]^{\top}\) and \(\mathbf{x}^{\prime}=[x_{1},\ldots,x_{i}^{\prime},\ldots,x_{n}]^{\top}\) and obtain \((\phi_{i}(\mathbf{x})-\phi_{i}(\mathbf{x}^{\prime}))(x_{i}-x_{i}^{\prime})<0.\) We then check the sign of \((\phi_{i}(\mathbf{x})-\hat{\phi}_{i}(\mathbf{x}^{\prime}))(x_{i}-x_{i}^{ \prime})\) = \((\phi_{i}(\mathbf{x})-\phi_{i}(\mathbf{x}^{\prime}))(x_{i}-x_{i}^{\prime})-2q _{ii}(x_{i}-x_{i}^{\prime})^{2}<0\), which complies with Definition 4. The next result shows that under Assumption 1, both original game and perturbed game after Algorithm 1 admit a unique NE. **Theorem 2**.: _Let Assumption 1 hold. Then,_ _(i) the original game with the payoff functions \(\mathbf{f}\) admits a unique NE._ _(ii) after Algorithm 1, the perturbed game with the perturbed payoff functions \(\hat{\mathbf{f}}\) admits a unique NE._ Proof.: For (i), the class of strongly monotone games is a proper subclass of monotone games, first introduced in [27]. Instead of the stronger requirement in Assumption 1, the weaker assumption \(\sum_{i\in\mathrm{V}}c_{i}(\phi_{i}(\mathbf{x})-\phi_{i}(\mathbf{x}^{\prime}) )(x_{i}-x_{i}^{\prime})<0\) is imposed. Every monotone game admits a unique NE [27, Theorem 2]. Therefore, the original game under Assumption 1 also admits a unique NE. For (ii), we are going to check whether the perturbed game is a monotone game by investigating whether the sign of \(\sum_{i\in\mathrm{V}}(\hat{\phi}_{i}(\mathbf{x})-\hat{\phi}_{i}(\mathbf{x}^{ \prime}))(x_{i}-x_{i}^{\prime})<0,\forall\mathbf{x}^{\prime},\mathbf{x}\in \mathcal{A},\mathbf{x}^{\prime}\neq\mathbf{x}\). It is straightforward that \[\sum_{i\in\mathrm{V}}(\hat{\phi}_{i}(\mathbf{x})-\hat{\phi}_{i}( \mathbf{x}^{\prime}))(x_{i}-x_{i}^{\prime})\] \[= \sum_{i\in\mathrm{V}}(\phi_{i}(\mathbf{x})-\phi_{i}(\mathbf{x}^{ \prime}))(x_{i}-x_{i}^{\prime})-\sum_{i\in\mathrm{V}}2q_{ii}(x_{i}-x_{i}^{ \prime})^{2}<0.\] We draw the conclusion that the perturbed game is a monotone game, therefore admitting a unique NE. ### \(\gamma\)_-accurate Nash equilibrium_ According to Theorem 2, when Assumption 1 holds, both original game and perturbed game admit a unique NE. 
We denote the original NE of the original game by \(\mathbf{x}^{*}\) and the perturbed NE of the perturbed game by \(\hat{\mathbf{x}}^{*}\). We now introduce the definition of \(\gamma\)-accurate NE. **Definition 5** (\(\gamma\)-accurate NE).: _Let Assumption 1 hold. The perturbed NE \(\hat{\mathbf{x}}^{*}\) is said to be \(\gamma\)-accurate if \(\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\|\leq\gamma\)._ In the following result, we derive an upper bound for the distance between the original NE and the perturbed NE after Algorithm 1. **Theorem 3**.: _Let Assumption 1 hold. Further suppose that the original NE and perturbed NE are both interior points in the action space \(\mathcal{A}\). Then, the perturbed NE is \(\gamma\)-accurate with_ \[\gamma=\frac{\sqrt{n}a+\sqrt{\sum_{i\in\mathrm{V}}(4|\mathrm{N} _{i}^{2}+5|\mathrm{N}_{i}|+4)}a\|\mathbf{x}^{*}\|}{l_{m}} \tag{7}\] Proof.: By taking the derivative of (5) w.r.t. \(x_{i}\) and rewriting it in the vector form, we obtain \[\hat{\boldsymbol{\phi}}(\mathbf{x})=\boldsymbol{\phi}(\mathbf{x})-\mathbf{D}^{ \top}\mathbf{x}-\boldsymbol{\beta}. \tag{8}\] We now turn to the original NE and perturbed NE, whose existence and uniqueness are guaranteed by Theorem 2. Moreover, in Theorem 3, we further impose the assumption of interior NE. Substituting \(\hat{\mathbf{x}}^{*}\) into Eq. (8), we have \(\boldsymbol{\phi}(\hat{\mathbf{x}}^{*})=\boldsymbol{\phi}(\hat{\mathbf{x}}^{*})+ \mathbf{D}^{\top}\hat{\mathbf{x}}^{*}+\boldsymbol{\beta}.\) The first-order condition for the interior NE is that \(\boldsymbol{\phi}(\mathbf{x}^{*})=0\) and \(\hat{\boldsymbol{\phi}}(\hat{\mathbf{x}}^{*})=0\). Then, it yields \(\left[\boldsymbol{\phi}(\mathbf{x}^{*})-\boldsymbol{\phi}(\hat{\mathbf{x}}^{*}) \right]-\mathbf{D}^{\top}(\mathbf{x}^{*}-\hat{\mathbf{x}}^{*})=-\boldsymbol{ \beta}-\mathbf{D}^{\top}\mathbf{x}^{*}\). Multiplying by \((\mathbf{x}^{*}-\hat{\mathbf{x}}^{*})^{\top}\), we observe that \[-(\mathbf{x}^{*}-\hat{\mathbf{x}}^{*})^{\top}(\boldsymbol{\beta} +\mathbf{D}^{\top}\mathbf{x}^{*})\] \[= (\mathbf{x}^{*}-\hat{\mathbf{x}}^{*})^{\top}\big{[}\boldsymbol{ \phi}(\mathbf{x}^{*})-\boldsymbol{\phi}(\hat{\mathbf{x}}^{*})\big{]}-(\mathbf{ x}^{*}-\hat{\mathbf{x}}^{*})^{\top}\mathbf{D}^{\top}(\mathbf{x}^{*}-\hat{ \mathbf{x}}^{*})\] \[\stackrel{{(a)}}{{\leq}} (\mathbf{x}^{*}-\hat{\mathbf{x}}^{*})^{\top}\big{[}\boldsymbol{ \phi}(\mathbf{x}^{*})-\boldsymbol{\phi}(\hat{\mathbf{x}}^{*})\big{]}\] \[\stackrel{{(b)}}{{\leq}} -l_{m}\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\|^{2}.\] The inequality (a) holds because \(\mathbf{D}^{\top}\) is designed to be a positive semidefinite matrix (See Lemma 1). The inequality (b) is exactly from Eq. (6). Thus there holds \((\mathbf{x}^{*}-\hat{\mathbf{x}}^{*})^{\top}(\boldsymbol{\beta}+\mathbf{D}^{ \top}\mathbf{x}^{*})\geq l_{m}\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\|^{2}>0\). Further considering the Cauchy-Schwarz inequality, we finally obtain a bound on the distance \[\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\| \tag{9a}\] \[\leq \frac{\|\boldsymbol{\beta}\|+\|\mathbf{D}\|\|\mathbf{x}^{*}\|}{l_ {m}}\] (9b) \[\leq \frac{\sqrt{n}a+\sqrt{\sum_{i\in\mathrm{V}}(4|\mathrm{N}|_{i}^{2 }+5|\mathrm{N}_{i}|+4)}a\|\mathbf{x}^{*}\|}{l_{m}}. \tag{9c}\] The proof is now completed. **Remark 1**.: _The upper bound (9c) is very conservative because it considers the worst-case scenario in which all \(q_{ij},\beta_{i},\forall i,j\in\mathrm{V}\) take their maximum values. 
In comparison, the upper bound (9b) is a relatively small bound, and is the one more likely to be observed in realizations._ ## V Linear-quadratic Games A practical challenge of Algorithm 1 is how to select the Laplace parameters \(a\) and \(\lambda\) so as to ensure a given differential privacy requirement. The selection depends on the class of payoff functions and the design choice of \(\mathcal{W}\)-adjacency. In this section, we analyze a benchmark game whose payoff functions are in the linear-quadratic form. Denote the adjacency matrix of the interaction/communication network by \(\mathbf{G}\in\mathbb{R}^{n\times n}\), with each entry \(g_{ij}\in\mathbb{R}\) denoting whether player \(j\in\mathrm{V}\) is linked to player \(i\in\mathrm{V}\) and also indicating the linkage intensity. We now impose the assumption of linear-quadratic games. **Assumption 2**.: _The payoff functions of a linear-quadratic game are set as_ \[f_{i}(x_{i},\mathbf{x}_{-i})=-\frac{1}{2}x_{i}^{2}+b_{i}x_{i}+\sum_{j\in \mathrm{V}}g_{ij}x_{i}x_{j},\quad\forall i\in\mathrm{V}, \tag{10}\] _where \(b_{i}\in\mathbb{R}^{\geq 0}\) represents the marginal benefit of player \(i\)._ Each player \(i\)'s payoff function is now parameterized by the parameters \(g_{ij}\) and \(b_{i}\). Hence, it is reasonable to specify \(\mathcal{W}\)-adjacency on these parameters. For example, we specify a definition of \(\mu\)-adjacency for LQ games. **Definition 6** (\(\mu\)-adjacency).: _Consider linear-quadratic payoff functions \(\mathbf{f}\) and \(\mathbf{f}^{\prime}\). They are said to be \(\mu\)-adjacent if there exists \(i_{0}\in\mathrm{V}\) such that_ \[g_{i1}=g_{i1}^{\prime},\ldots,g_{in}=g_{in}^{\prime},b_{i}=b_{i}^{\prime},\quad i\neq i_{0}; \tag{11a}\] \[\max\{g_{i_{0}1}-g_{i_{0}1}^{\prime},\ldots,g_{i_{0}n}-g_{i_{0}n}^{\prime},b_{i_{0}}-b_{i_{0}}^{\prime}\}\leq\mu. \tag{11b}\] In what follows, we first present how the Laplace parameters \(a\) and \(\lambda\) are selected to guarantee \((\epsilon,\delta)\)-differential privacy for the one-dimensional truncated Laplace mechanism, and then generalize this result to differentially private LQ games. ### _One-dimensional Truncated Laplace Mechanism_ The work of [28, 29, 30] investigated how the Laplace parameters are chosen to meet the differential privacy criterion in the one-dimensional case. As presented in (9c), Algorithm 1 leads to a biased perturbed NE. As a result, a careful analysis is required to determine the lower bounds of the Laplace parameters that can produce a less biased perturbed NE. Compared with [28], the following result relaxes the requirement on the Laplace parameters to guarantee differential privacy, and gives tight lower bounds for \(a\) and \(\lambda\) in the one-dimensional case. We consider a one-dimensional truncated Laplace perturbation mechanism. Let \(D\) be the space of datasets of interest. Suppose there is a query given as a function \(y:D\rightarrow\mathbb{R}\). Given \(D\), a randomized mapping \(\mathcal{K}\) releases the one-dimensional response \(\mathcal{K}(D)\) that is the summation of the true query answer \(y\in\mathbb{R}\) and a random noise \(\eta\in[-a,a]\) following \(\mathscr{L}_{tr}(a,\lambda)\), i.e., \(\mathcal{K}(D)=y(D)+\eta\). The sensitivity of the one-dimensional query is then given by \(\Delta y=\max_{D_{1},D_{2}\in D}|y(D_{1})-y(D_{2})|\).
The randomized mapping \(\mathcal{K}\) gives \((\epsilon,\delta)\)-differential privacy if for any two datasets \(D_{1},D_{2}\in D\) differing in at most one element, and all \(\mathscr{K}\subseteq\text{range}(\mathcal{K})\), there holds

\[\mathbb{P}(\mathcal{K}(D_{1})\in\mathscr{K})\leq e^{\epsilon}\mathbb{P}(\mathcal{K}(D_{2})\in\mathscr{K})+\delta. \tag{12}\]

**Lemma 2**.: _Given the privacy parameters \(0<\delta<\frac{1}{2},\epsilon>0\), the randomized mapping \(\mathcal{K}\) preserves \((\epsilon,\delta)\)-differential privacy if_

\[\lambda \geq\frac{\Delta y}{\epsilon-\ln(1-\delta)}, \tag{13a}\]
\[a \geq\max\Big{\{}\Delta y,\lambda\ln\left(\frac{e^{\frac{\Delta y}{\lambda}}-1}{2\delta}+1\right)\Big{\}}. \tag{13b}\]

Proof.: We seek to show that for any \(D_{1},D_{2}\in D\) differing in at most one element and for any subset \(\mathscr{K}\subseteq\text{range}(\mathcal{K})\), Eq. (12) is satisfied. Without loss of generality, we let \(y(D_{1})\leq y(D_{2})\). Given \(\mathscr{K}\subseteq\text{range}(\mathcal{K})\), there are \(5\) cases to consider, each of which should render Eq. (12) satisfied.

1. \(\mathscr{K}\subseteq(-\infty,-a+y(D_{1})]\): It is true that \(0\leq e^{\epsilon}\cdot 0+\delta\).
2. \(\mathscr{K}\subseteq[-a+y(D_{1}),-a+y(D_{2})]\): First, since \(\delta<\frac{1}{2}\), it is impossible to find a configuration of \(a\) and \(\lambda\) that makes Eq. (12) valid when \(a\leq\Delta y\). Second, we now consider \(y(D_{2})-y(D_{1})\leq\Delta y<a\). For any \(D_{1}\) and \(D_{2}\) differing in at most one element, to satisfy Eq. (12), we show that the probability mass in the interval \([-a+y(D_{1}),-a+y(D_{2})]\) does not exceed \(\delta\): \[\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{1})|}{\lambda}}dy=B\lambda(e^{\frac{-a+y(D_{2})-y(D_{1})}{\lambda}}-e^{\frac{-a}{\lambda}})\leq B\lambda(e^{\frac{-a+\Delta y}{\lambda}}-e^{\frac{-a}{\lambda}})\leq\delta.\] The first inequality holds because \(e^{\frac{-a+y(D_{2})-y(D_{1})}{\lambda}}\) increases when \(y(D_{2})-y(D_{1})\) increases, while the second inequality comes from the condition (13b).
3. \(\mathscr{K}\subseteq[-a+y(D_{2}),a+y(D_{1})]\): Equation (12) can be written as \(\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{1})|}{\lambda}}dy\leq e^{\epsilon}\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{2})|}{\lambda}}dy+\delta\). Using the triangle inequality \(|y-y(D_{2})|\leq|y-y(D_{1})|+|y(D_{1})-y(D_{2})|\) and combining the condition (13a), it is sufficient to show that \(\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{1})|}{\lambda}}dy\leq e^{\epsilon-\frac{\Delta y}{\lambda}}\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{1})|}{\lambda}}dy+\delta\), or further, \(1\leq e^{\epsilon-\frac{\Delta y}{\lambda}}+\frac{\delta}{\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{1})|}{\lambda}}dy}\). This holds because condition (13a) gives \(e^{\epsilon-\frac{\Delta y}{\lambda}}\geq 1-\delta\), while \(\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{1})|}{\lambda}}dy\leq 1\).
4. \(\mathscr{K}\subseteq[a+y(D_{1}),a+y(D_{2})]\): It is true that \(0\leq e^{\epsilon}\int_{\mathscr{K}}Be^{\frac{-|y-y(D_{2})|}{\lambda}}dy+\delta\).
5. \(\mathscr{K}\subseteq[a+y(D_{2}),\infty)\): It is valid that \(0\leq e^{\epsilon}\cdot 0+\delta\).

In all, under conditions (13a) and (13b), the randomized mapping \(\mathcal{K}\) preserves \((\epsilon,\delta)\)-differential privacy.

### _Differentially Private LQ Games_

In what follows, Laplace parameter conditions are given to ensure a given differential privacy requirement for LQ games. Stack the non-zero elements \(q_{ij}\) into \(\mathbf{q}\in\mathbb{R}^{m}\) with \(m=n+\sum_{i\in\mathrm{V}}|\mathrm{N}_{i}|\).
Also stack \(g_{ij}\) into \(\mathbf{g}\in\mathbb{R}^{m}\) such that each element in \(\mathbf{g}\) is matched with the corresponding element in \(\mathbf{q}\). In particular, if the \(k\)th element of \(\mathbf{q}\) is \(q_{ij}\), then the \(k\)th element of \(\mathbf{g}\) is \(g_{ij}\). The dimension of \(\begin{bmatrix}\mathbf{g}\\ \mathbf{b}\end{bmatrix}\) is \(l=2n+\sum_{i\in V}|N_{i}|\). Further define \(p=1+\max_{i\in V}|N_{i}|\). **Theorem 4**.: _Consider a LQ game. Then given any \(\epsilon,\delta,\mu>0\), the mapping \(\widehat{\mathcal{M}}(\mathbf{g},\mathbf{b})=\begin{bmatrix}\mathbf{g}- \mathbf{q}\\ \mathbf{b}-\beta\end{bmatrix}\) achieves \((p\epsilon,p\delta)\)-differential privacy under \(\mu\)-adjacency if_ \[\lambda \geq\frac{\mu}{\epsilon-\ln(1-\delta)}, \tag{14a}\] \[a \geq\max\Big{\{}\mu,\lambda\ln\left(\frac{e^{\frac{\mu}{\lambda} }-1}{2\delta}+1\right)\Big{\}}. \tag{14b}\] Proof.: Consider two \(\mu\)-adjacent linear-quadratic payoff functions \(\mathbf{f}\) and \(\mathbf{f}^{{}^{\prime}}\) that are uniquely determined by the pairs \((\mathbf{g},\mathbf{b})\) and \((\mathbf{g}^{\prime},\mathbf{b}^{\prime})\), respectively. Denote \(\mathbf{v}=[\mathbf{g}^{\top}\ \mathbf{b}^{\top}]^{\top}\in\mathbb{R}^{l}\) and \(\mathbf{v}^{\prime}=[\mathbf{g}^{\prime\top}\ \mathbf{b}^{\prime\top}]^{\top}\in\mathbb{R}^{l}\). Also define \(\mathrm{V}_{perturbed}=\{1,2,\ldots,l\}\). Due to \(\mu\)-adjacency, there exists \(i_{0}\in\mathrm{V}\) such that the conditions (11a) and (11b) hold. We denote by \(\mathrm{V}_{diff}\) the indices of \(b_{i_{0}}\) and \(g_{i_{0}j},j\in\mathrm{N}_{i_{0}}\cup\{i_{0}\}\), in \(\mathbf{v}\). The conditions (11a) and (11b) indicate that 1) \(\mathbf{v}\) and \(\mathbf{v}^{\prime}\) differ in at most \(|\mathrm{N}_{i_{0}}|+1\) elements; 2) and for any \(i\in\mathrm{V}_{diff}\), we have \(|v_{i}-v_{i}^{\prime}|\leq\mu\). Note that each \(v_{i},i\in\mathrm{V}_{diff}\), is independent of any other \(v_{j},j\in\mathrm{V}_{perturbed}\neq i\). We decompose \(\widehat{\mathcal{M}}\) and further notice that each component \(\widehat{\mathcal{M}}_{i}(v_{i}),i\in\mathrm{V}_{diff}\), can be viewed as a randomization of \(v_{i}\). We then apply Lemma 2 with \(\Delta y=\mu\). It is therefore straightforward that when the conditions (14a) and (14b) are satisfied, each component \(\widehat{\mathcal{M}}_{i}(v_{i}),i\in\mathrm{V}_{diff}\), preserves \((\epsilon,\delta)\)-differential privacy. We now examine the probability \(\mathbb{P}(\widehat{\mathcal{M}}(\mathbf{v})\in\widehat{\mathcal{M}})=\prod_{i \in\mathrm{V}_{perturbed}}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i})\in \widehat{\mathcal{M}_{i}})\). There are at most \(|\mathrm{N}_{i_{0}}|+1\) different elements indexed in \(\mathrm{V}_{diff}\) between \(\mathbf{v}\) and \(\mathbf{v}^{\prime}\), while the remaining elements indexed in the set \(\mathrm{V}_{same}:=(\mathrm{V}_{perturbed}-\mathrm{V}_{diff})\) are the same. 
Also, combining the fact that each component \(\widehat{\mathcal{M}}_{i}(v_{i}),i\in\mathrm{V}_{diff}\), is \((\epsilon,\delta)\)-differentially private, we can bound

\[\prod_{i\in\mathrm{V}_{perturbed}}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i})\in\widehat{\mathcal{M}_{i}})\]
\[=\prod_{i\in\mathrm{V}_{same}}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i})\in\widehat{\mathcal{M}_{i}})\prod_{i\in\mathrm{V}_{diff}}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i})\in\widehat{\mathcal{M}_{i}})\]
\[\leq\prod_{i\in\mathrm{V}_{same}}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i}^{\prime})\in\widehat{\mathcal{M}_{i}})\prod_{i\in\mathrm{V}_{diff}}(e^{\epsilon}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i}^{\prime})\in\widehat{\mathcal{M}_{i}})+\delta).\]

If we focus on the second product term and look at the additive contribution of each of the \(\delta\) terms, of which there are \(|\mathrm{N}_{i_{0}}|+1\), we notice that they are only ever multiplied by probabilities that are at most one. Therefore, each contributes at most an additive \(\delta\):

\[\prod_{i\in\mathrm{V}_{diff}}(e^{\epsilon}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i}^{\prime})\in\widehat{\mathcal{M}_{i}})+\delta)\leq e^{(|\mathrm{N}_{i_{0}}|+1)\epsilon}\prod_{i\in\mathrm{V}_{diff}}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i}^{\prime})\in\widehat{\mathcal{M}_{i}})+(|\mathrm{N}_{i_{0}}|+1)\delta.\]

Then, we have

\[\prod_{i\in\mathrm{V}_{perturbed}}\mathbb{P}(\widehat{\mathcal{M}}_{i}(v_{i})\in\widehat{\mathcal{M}_{i}})\leq e^{(|\mathrm{N}_{i_{0}}|+1)\epsilon}\mathbb{P}(\widehat{\mathcal{M}}(\mathbf{v}^{\prime})\in\widehat{\mathcal{M}})+(|\mathrm{N}_{i_{0}}|+1)\delta.\]

Note that \(i_{0}\) can be any \(i\in\mathrm{V}\). Therefore, \(\widehat{\mathcal{M}}\) achieves \((p\epsilon,p\delta)\)-differential privacy with \(p=1+\max_{i\in\mathrm{V}}|\mathrm{N}_{i}|\).

**Remark 2**.: _Note that the privacy guarantee depends in a crucial way on how the notion of adjacency is defined. Although we only prove the differential privacy from \(\mathbf{f}\) to \(\hat{\mathbf{f}}\) for LQ games with \(\mu\)-adjacency in Definition 6, it serves as a tutorial example and can be applied to other monotone games as long as players' payoff functions are explicitly given._

## VI Numerical Examples

**Experiment 1**. Consider an LQ game with \(10\) players. Players are arranged in a ring lattice, with each player connected to \(|\mathrm{N}_{i}|=4,\forall i\in\mathrm{V}\), neighbors. Two parameter configurations (S1) and (S2), with Laplace parameters \((a_{1},\lambda_{1})\) and \((a_{2},\lambda_{2})\) selected according to Theorem 4, are used to guarantee differential privacy. Under each parameter configuration, we conduct \(500\) executions, in each of which we apply Algorithm 1. Each perturbed NE is calculated by \(\hat{\mathbf{x}}^{*}=(\mathbf{I}-\mathbf{G}+\mathbf{D})^{-1}(\mathbf{b}-\boldsymbol{\beta})\). We then compute \(\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\|\) and \(\gamma\) according to (9b).
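A minimal sketch of this computation is given below. The linkage intensities, marginal benefits, and the \((\epsilon,\delta,\mu)\) values are assumptions for illustration (the actual configurations (S1)/(S2) are not listed here), and the construction of \(\mathbf{D}\) is a simplified stand-in for Algorithm 1: truncated-Laplace draws are placed on the diagonal and neighbor entries, with the diagonal taken in absolute value so that \(\mathbb{E}(q_{ii})>0\); unlike Algorithm 1, this stand-in does not enforce positive semidefiniteness of \(\mathbf{D}\). The monotonicity constant is computed as the smallest eigenvalue of \(\mathbf{I}-(\mathbf{G}+\mathbf{G}^{\top})/2\), assuming Eq. (6) is the usual strong-monotonicity inequality for the pseudo-gradient.

```python
import numpy as np

def laplace_params(mu, eps, delta):
    """Lower bounds on (lam, a) from Theorem 4, Eqs. (14a)-(14b)."""
    lam = mu / (eps - np.log(1.0 - delta))
    a = max(mu, lam * np.log((np.exp(mu / lam) - 1.0) / (2.0 * delta) + 1.0))
    return lam, a

def truncated_laplace(a, lam, size, rng):
    """Rejection sampler for the truncated Laplace law on [-a, a]."""
    out = []
    while len(out) < size:
        d = rng.laplace(0.0, lam, size)
        out.extend(d[np.abs(d) <= a].tolist())
    return np.array(out[:size])

rng = np.random.default_rng(0)
n = 10                                                        # players on a ring lattice
neigh = [[(i + k) % n for k in (-2, -1, 1, 2)] for i in range(n)]   # |N_i| = 4

# Illustrative game data (assumed, not the paper's): weak uniform linkage, unit benefits.
G = np.zeros((n, n))
for i, Ni in enumerate(neigh):
    G[i, Ni] = 0.1
b = np.ones(n)
x_star = np.linalg.solve(np.eye(n) - G, b)                    # original NE of the LQ game

lam, a = laplace_params(mu=0.1, eps=0.5, delta=0.1)           # one illustrative configuration

# Simplified stand-in for Algorithm 1: truncated-Laplace perturbation coefficients.
D = np.zeros((n, n))
beta = truncated_laplace(a, lam, n, rng)
for i, Ni in enumerate(neigh):
    D[i, i] = abs(truncated_laplace(a, lam, 1, rng)[0])       # keep E(q_ii) > 0
    D[i, Ni] = truncated_laplace(a, lam, len(Ni), rng)

x_hat = np.linalg.solve(np.eye(n) - G + D, b - beta)          # perturbed NE, as in the text
l_m = np.min(np.linalg.eigvalsh(np.eye(n) - 0.5 * (G + G.T)))
gamma_9b = (np.linalg.norm(beta) + np.linalg.norm(D, 2) * np.linalg.norm(x_star)) / l_m

print("||x* - x_hat|| =", np.linalg.norm(x_star - x_hat))
print("gamma from (9b) =", gamma_9b)
```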
In Fig. 1, we plot the comparison of \(\gamma\) and \(\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\|\) between the two parameter configurations (S1) and (S2). The result of Fig. 1 is consistent with Theorem 3 in that the distance between the original NE and the perturbed NE is bounded by \(\gamma\) while differential privacy is preserved.

Fig. 1: The comparison of \(\gamma\) and \(\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\|\) between two parameter configurations (S1) and (S2).

**Experiment 2** (Benchmark with Existing Methods). Based on Experiment 1, we further plot each player's original NE and the distribution of the perturbed NE under the two parameter configurations (S1) and (S2) among \(500\) executions. The result of Fig. 2 shows that most perturbed NE under (S1) and (S2) are located to the left of the original NE. The parameter configuration (S2) has a weaker requirement for differential privacy. The perturbed NE under (S2) are closer to the original NE, implying that one has to sacrifice the differential privacy of payoff functions for the accuracy of the NE. To further show the relation among the accuracy of the NE, the privacy requirement \(\epsilon\), and the Laplace parameter \(a\), we fix \(\delta=0\) and plot \(\|\mathbf{x}^{*}-\hat{\mathbf{x}}^{*}\|/\|\mathbf{x}^{*}\|\) versus \(\epsilon\) and \(a\), each for \(50\) executions, in Fig. 3. Roughly speaking, it shows that as \(\epsilon\) decreases (stricter privacy), the Laplace parameter \(a\) increases and thus the accuracy of the NE decreases. Compared with existing methods of state/communication perturbation [20, 32, 23], we likewise show a tradeoff between the accuracy of the optimal points/NE and the privacy of objective functions/payoff functions. However, they prove that \(\lim_{k\to\infty}\mathbb{E}(\|\mathbf{x}(k)-\mathbf{x}^{*}\|^{2})\) has an upper bound depending on the convergence rate and the privacy level. According to the structure of their privacy algorithms, they might produce an asymptotically unbiased perturbed NE. Unlike those results, Algorithm 1 always produces a biased perturbed NE. This arises from \(\mathbb{E}(q_{ii})>0,\forall i\in\mathrm{V}\), which is necessary to ensure the concavity of the perturbed payoff function. However, those methods have to add perturbation to communications at all time steps. For example, in [32], it takes roughly \(4000\) steps to reach a close distance from \(\mathbf{x}^{*}\), in each of which perturbation is added to players' states. In contrast, we only add perturbation to the original payoff functions once, after which the computation of the distributed NE is deterministic. The number of linear-quadratic perturbation coefficients (non-zero \(q_{ij},\beta_{i},\forall i,j\in\mathrm{V}\)) generated by Algorithm 1 is only \(60\), which is far less than theirs.

**Experiment 3** (Tradeoff between Privacy and Payoffs).: Based on Experiment 1, we further compute the players' payoffs at the original NE and the perturbed NE under the two parameter configurations, \(\mathbf{f}(\mathbf{x}^{*})\) and \(\mathbf{f}(\hat{\mathbf{x}}^{*})\), and plot them in Fig. 4. From the result of Fig. 4, it is not surprising that players' payoffs at the perturbed NE under (S1) and (S2) are always lower than those at the original NE. Players' payoffs at the perturbed NE under (S1) are lower than those at the perturbed NE under (S2). This indicates that the sacrifice of NE accuracy for payoff functions' privacy leads to a decline in players' payoffs.

## VII Conclusion

In this work, we investigated network games in which players participate in information aggregation processes under a differential privacy requirement on players' payoff functions. The LLQFP mechanism was proposed. We turned to monotone games, demonstrating that the LLQFP mechanism preserves the concavity property and generates a bounded perturbed NE that is controllable by Laplace parameter tuning. We also looked at LQ games as a pedagogical example to explain under what Laplace parameter conditions differential privacy of the LLQFP mechanism can be ensured. Finally, numerical examples were presented to demonstrate the benefits of the LLQFP mechanism.
2308.13432
Dephasingless laser wakefield acceleration in the bubble regime
Laser wakefield accelerators (LWFAs) have electric fields that are orders of magnitude larger than those of conventional accelerators, promising an attractive, small-scale alternative for next-generation light sources and lepton colliders. The maximum energy gain in a single-stage LWFA is limited by dephasing, which occurs when the trapped particles outrun the accelerating phase of the wakefield. Here, we demonstrate that a single space-time structured laser pulse can be used for ionization injection and electron acceleration over many dephasing lengths in the bubble regime. Simulations of a dephasingless laser wakefield accelerator driven by a 6.2-J laser pulse show 25 pC of injected charge accelerated over 20 dephasing lengths (1.3 cm) to a maximum energy of 2.1 GeV. The space-time structured laser pulse features an ultrashort, programmable-trajectory focus. Accelerating the focus, reducing the focused spot-size variation, and mitigating unwanted self-focusing stabilize the electron acceleration, which improves beam quality and leads to projected energy gains of 125 GeV in a single, sub-meter stage driven by a 500-J pulse.
Kyle G. Miller, Jacob R. Pierce, Manfred V. Ambat, Jessica L. Shaw, Kale Weichman, Warren B. Mori, Dustin H. Froula, John P. Palastro
2023-08-25T15:25:17Z
http://arxiv.org/abs/2308.13432v1
# Dephasingless laser wakefield acceleration in the bubble regime ###### Abstract Laser wakefield accelerators (LWFAs) have electric fields that are orders of magnitude larger than those of conventional accelerators, promising an attractive, small-scale alternative for next-generation light sources and lepton colliders. The maximum energy gain in a single-stage LWFA is limited by dephasing, which occurs when the trapped particles outrun the accelerating phase of the wakefield. Here, we demonstrate that a single space-time structured laser pulse can be used for ionization injection and electron acceleration over many dephasing lengths in the bubble regime. Simulations of a dephasingless laser wakefield accelerator driven by a 6.2-J laser pulse show 25 pC of injected charge accelerated over 20 dephasing lengths (1.3 cm) to a maximum energy of 2.1 GeV. The space-time structured laser pulse features an ultrashort, programmable-trajectory focus. Accelerating the focus, reducing the focused spot-size variation, and mitigating unwanted self-focusing stabilize the electron acceleration, which improves beam quality and leads to projected energy gains of 125 GeV in a single, sub-meter stage driven by a 500-J pulse. Introduction In a laser wakefield accelerator (LWFA), the ponderomotive force of an ultrashort laser pulse propagating through plasma displaces electrons and excites a large-amplitude plasma wave.[1; 2] The fields of the plasma wave can exceed 100 GV/m and are orders of magnitude larger than those of conventional radio-frequency accelerators. The next generation of LWFAs may provide ultra-compact, high-energy colliders and advanced light sources.[3] To do so, however, these LWFAs will have to address three factors that can limit the maximum energy gain: diffraction, depletion, and dephasing. Of these three, dephasing--the advance of a high-energy electron from the accelerating to decelerating phase of the plasma wave--is typically the most difficult to address.[4; 5; 6; 7; 8; 9; 10; 11; 12] State-of-the-art, single-stage LWFAs operate at low density (10\({}^{17}\) cm\({}^{-3}\)) to achieve the highest electron energies (\(\lesssim\)10 GeV) over a single dephasing length (\(\sim\)20 cm).[12; 13; 7] Acceleration past these energies requires either multiple stages[13; 14] or some technique to circumvent dephasing.[15; 16; 17; 18] Spatiotemporal structuring of light can produce laser pulses that feature a programmable-trajectory "flying focus" that travels distances far greater than a Rayleigh range while maintaining a near-constant profile.[18; 19; 20; 21; 22] The ultrafast flying focus, in particular, uses an axiparabola to focus different near-field annuli of a laser pulse to different longitudinal locations and the radial group delay imparted by a radial echelon to control the timing of those foci.[18; 22; 23; 24; 25] The resulting ultrashort intensity peak can be made to travel at the vacuum speed of light inside a plasma, making it ideal for a dephasingless laser wakefield accelerator (DLWFA).[18; 26] As fresh light rays come into focus, they continually drive a luminal wake [Fig. 1(a)], simultaneously solving the issues of diffraction, depletion, and dephasing present in a traditional LWFA [Fig. 1(b)]. This allows DLWFAs to operate at high density (10\({}^{19}\) cm\({}^{-3}\)), where the accelerating fields are stronger. 
The DLWFA concept has been demonstrated in simulations of linear and quasi-linear wakes that used either an external beam or a density downramp to inject electrons into the wake.[17; 18; 26; 27; 28; 29] The first simulations of a DLWFA driven by an ultrafast flying focus showed energy gains of \(>\)1 GeV over \(\sim\)1 cm for externally injected beams.[18; 26] Further investigations yielded insight into the field stability[27] and used a 5-fold, 10-\(\upmu\)m density downramp to inject and accelerate 10 pC of charge to \(\sim\)400 MeV.[28] A DLWFA operating in the nonlinear bubble regime,[12; 30; 31] where plasma electrons are completely expelled from the path of the laser pulse, could take advantage of even larger accelerating fields. Operation in this regime would allow for self-injection from a uniform plasma or ionization injection, obviating the need for tailored density gradients. Ionization injection, in Figure 1: Comparison of dephasingless and traditional laser wakefield accelerators. (a) In a dephasingless laser wakefield accelerator, fresh light rays continually come into focus to produce a near-luminal intensity peak and wake, thereby preventing dephasing. (b) In a traditional laser wakefield accelerator, the trapped electrons eventually outrun the accelerating phase of the wakefield, limiting the maximum energy gain. Contours of laser intensity (red/yellow), electron density (gray), and accelerating/decelerating wakefield (teal/pink) are shown. The first bubble (white) trails the laser pulse and is devoid of all but the trapped electrons. particular, offers the potential for smaller-emittance beams with lower laser powers.[32; 33; 34; 35; 36; 37] Regardless of the injection mechanism, stable propagation of a flying focus has yet to be demonstrated in the bubble regime. In this work, we demonstrate ionization injection and stable acceleration of electrons in a bubble-regime dephasingless laser wakefield accelerator. Using a single 6.2-J pulse, 25 pC of charge are injected and accelerated over 20 dephasing lengths, or 1.3 cm, to a maximum (average) energy of 2.1 (1.7) GeV. Structuring the flying-focus pulse to control the motion of the bubble enables the generation of a high-quality electron beam with a 1.8% average energy spread and 2.2 mm-mrad normalized emittance. This is done by accelerating the focus to compensate for a changing spot size, masking the inner portion of the axiparabola-echelon pair to reduce the amount of light trapped in the bubble, and positioning the plasma to mitigate unwanted self-focusing. Scaling these results to near-term experiments with 500 J of laser pulse energy suggests that energy gains of 125 GeV over a distance of \(<\)1 m are possible. ## II Results To demonstrate ionization injection and acceleration in a DLWFA, particle-in-cell simulations were conducted for a flying-focus pulse generated by an axiparabola and a radial echelon [Fig. 2(a)]. The axiparabola produces an extended focal region, and the echelon imparts a radial group delay that provides control over the trajectory of the focus. Figure 2(b) shows a schematic of the nominal focal region, focal velocity, laser amplitude, and spot size produced by these optics along with the plasma density. 
Three key modifications to the original DLWFA design enable stable acceleration in the bubble regime: (i) accelerating the focus to maintain trapping and acceleration of the injected electron beam, (ii) masking the inner portion of the optics to eliminate laser light where the focused spot size is largest, and (iii) starting the plasma sufficiently upstream of the peak intensity to reduce unwanted self-focusing. In the simulations, a flying-focus pulse with wavelength \(\lambda_{0}=1.054\,\mathrm{\SIUnitSymbolMicro m}\), duration \(\tau=15\) fs matched to the plasma density (\(\pi/\tau\mathrm{\approx}\omega_{\mathrm{p}}\), where \(\omega_{\mathrm{p}}\) is the plasma frequency), and a peak vacuum intensity \(I_{0}=1.1\times 10^{19}\) W/cm\({}^{2}\) propagated through a preionized He\({}^{2+}\) plasma locally doped with Ar\({}^{8+}\). The pulse further ionized the argon and drove a nonlinear wake over 1.3 cm (20 dephasing lengths \(L_{\mathrm{d}}\)). The freed electrons were injected over 2.2 mm, and the resulting 25-pC beam was accelerated to a maximum (average) energy of 2.1 (1.7) GeV [Fig. 2(c)]. Near the end of the focal region, the normalized beam emittance was 2.2 mm-mrad with an average energy spread of 1.8%. The laser pulse had 6.2 J of energy, resulting in a laser-to-beam efficiency of 0.7%. The average accelerating gradient was approximately 1.5 GeV/cm (0.25 GeV/J) in terms of the accelerator length (pulse energy). This compares favorably to the record experimental result for traditional LWFA of 0.4 GeV/cm (0.25 GeV/J).[7] Ionization injection requires a sufficiently large laser electric field and bubble radius to trap electrons. Ionized electrons born near the peak of the laser pulse experience a drop in their potential energy and a corresponding increase in their longitudinal momentum as they drift to the rear of the bubble. The electrons are trapped if they move through a change in the wake potential \(\psi\) that satisfies[33; 38] \[\Delta\psi=\psi_{\mathrm{f}}-\psi_{\mathrm{i}}\lesssim-1, \tag{1}\] where \(\psi=e(\phi-\beta_{\mathrm{w}}A_{z})/mc^{2}\), \(\beta_{\mathrm{w}}=v_{\mathrm{w}}/c\) is the normalized wake speed, \(\phi\) is the electrostatic potential, and \(A_{z}\) is the longitudinal vector potential. In order to satisfy Eq. (1), ionization injection[33; 34; 37] typically requires laser pulses with amplitudes \(a_{0}\gtrsim 2\), spot sizes within the plasma \(k_{\mathrm{p}}w_{0}\gtrsim 2\), and powers \(P/P_{\mathrm{c}}\gtrsim 0.5\), where \(a_{0}\approx 8.55\times 10^{-10}\lambda_{0}\left(\upmu\mathrm{m}\right) \sqrt{I_{0}\left(\mathrm{W/cm^{2}}\right)}\) is the normalized vector potential of the laser pulse, \(k_{\mathrm{p}}=\omega_{\mathrm{p}}/c\), and \(P_{\mathrm{c}}\) is the critical power for relativistic self-focusing.[39] This contrasts evolving-bubble self-injection, where trapping is typically only observed if \(P/P_{\mathrm{c}}\gtrsim 1\).[30; 31] Producing a stable wake structure and controlling the focal trajectory enables the trapping, retention, and acceleration of ionized electrons with a single pulse. Although relativistic self-focusing can be used to guide a conventional laser pulse,[8; 9; 10; 11] the same process can disrupt the transverse structure of a flying-focus pulse and produce deleterious modulations in the spot size and on-axis intensity.[27] These modulations can perturb the electron sheath, change the bubble size and shape, and result in a loss of trapped charge or poor beam quality. 
In addition, self-focusing and refraction from the nonlinear plasma structure can cause the on-axis and first off-axis radial maxima of the pulse [Fig. 1(a)] to merge. This doubles the power in the radial core of the pulse and further exacerbates the effects of nonlinear propagation. The simulations performed in this work suggest that a condition for stable propagation of a flying-focus pulse is given by \[\frac{P_{0}}{P_{\mathrm{c}}}\approx\frac{\left(a_{0}k_{\mathrm{p}}w_{0}\right) ^{2}}{32}\lesssim 0.5, \tag{2}\] where \(P_{0}\) is the power integrated out to the first radial minimum of the intensity when the pulse first enters the plasma. To date, stable propagation in a DLWFA has only been demonstrated for \(a_{0}\leq 1.5\).[26; 27; 28] The remainder of this section describes the design of a stable DLWFA with \(\alpha_{0}\gtrsim 2.0\) and sufficient power for ionization injection. A subluminal and accelerating focal trajectory [Fig. 2(b)] prevents the back of the wake from eclipsing the trapped charge and positions the charge in the strongest accelerating field. The radially dependent focal length of the axiparabola, \(f(r)=f_{0}+(r/R)^{2}L\), focuses different near-field radii Figure 2: The ultrafast flying focus and electron acceleration in a bubble-regime dephasingless laser wakefield accelerator. (a) Schematic of the optical configuration for an accelerating focus, including the axiparabola and echelon. For illustrative purposes, the optics are shown in transmission, but experiments would likely be performed in reflection.[25] (b) The accelerator geometry showing the on-axis amplitude \(a_{0}\) and inner-core spot size \(w_{0}\) of the masked laser pulse—simulated in vacuum (solid) and plasma (dashed)—along with the designed focal velocity in the plasma \(\beta_{\rm f}\) (dot-dashed). (c) Energy gain of the ionization-injected electrons in the first bubble. After 20 dephasing lengths, 25 pC of charge was accelerated up to 2.1 GeV. The inset displays a snapshot of the He electron density along with the trapped (Ar) electron density. \(r\leq R\) to different longitudinal locations \(z=f(r)\) within the focal range \(L\). As a result, the inner core of the flying focus pulse has a vacuum spot size \(w_{\rm v}(z)=f(z)\lambda_{0}/\pi r(z)\) that decreases along the focal region: \[w_{\rm v}(z)\approx\frac{\lambda_{0}f_{0}}{\pi R}\sqrt{\frac{L}{z-f_{0}}}, \tag{3}\] where \(f_{0}\gg L>0\) is assumed. The radius of the bubble is approximately equal to this spot size. For a constant-velocity flying focus, the decreasing spot size causes the rear sheath of the bubble to accelerate and eventually overtake the trapped charge. This can be avoided by programming the focal trajectory so that the back of the bubble moves at the vacuum speed of light: \[\beta_{\rm f}(z)=\beta_{\rm v}\frac{v_{\rm g}}{c}=1+\alpha\frac{\mathrm{d}w_{ \rm v}}{\mathrm{d}z}=1-\alpha\frac{\lambda_{0}f_{0}}{2\pi R}\sqrt{\frac{L}{( z-f_{0})^{3}}}, \tag{4}\] where \(\beta_{\rm f}\equiv v_{\rm f}/c\) and \(\beta_{\rm v}\) are the normalized focal velocities in plasma and vacuum, respectively, \(v_{\rm g}=(1-\omega_{\rm p}^{2}/\omega_{0}^{2})^{1/2}\) is the group velocity of the laser pulse, \(\omega_{0}=2\pi c/\lambda_{0}\), and \(\alpha=0.6\) is a numerically determined factor that accounts for the reduction in spot size due to nonlinear propagation. The focal point accelerates so that \(\beta_{\rm f}(z)\) asymptotes to unity with increasing distance. 
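Equations (3) and (4) are easy to tabulate. The short sketch below uses the optic and laser parameters quoted in the Methods section (R = 5 cm, f0 = 70 cm, L = 2 cm, λ0 = 1.054 μm) together with the factor α = 0.6; the variable names are ours.

```python
import numpy as np

# Axiparabola and laser parameters from the Methods section.
lam0 = 1.054e-6              # laser wavelength [m]
f0, R, L = 0.70, 0.05, 0.02  # nominal focal length, optic radius, focal range [m]
alpha = 0.6                  # numerically determined spot-size factor in Eq. (4)

def w_v(z):
    """Vacuum spot size of the inner core, Eq. (3); z is measured from the optic."""
    return (lam0 * f0 / (np.pi * R)) * np.sqrt(L / (z - f0))

def beta_f(z):
    """Designed normalized focal velocity, Eq. (4)."""
    return 1.0 - alpha * (lam0 * f0 / (2.0 * np.pi * R)) * np.sqrt(L / (z - f0) ** 3)

# Sample the focal region, skipping the first 30% removed by masking the optics.
for zi in f0 + np.linspace(0.3 * L, L, 5):
    print(f"z - f0 = {1e3 * (zi - f0):5.1f} mm   "
          f"w_v = {1e6 * w_v(zi):5.2f} um   beta_f = {beta_f(zi):.6f}")
```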
Accounting for the evolution of the bubble when specifying the focal trajectory [as in Eq. (4)] prevents dephasing and a loss of trapped charge. Figure 3 shows the on-axis, longitudinal electric field of the wake \(E_{z}\) for three different focal trajectories: in (a), the focal velocity from Eq. (4) produced a luminal wake that is optimal for electron acceleration and maintaining the trapped charge [Fig. 2(c)]; in (b), a subluminal focus drove a subluminal wake that resulted in dephasing [Fig. 3(e)]; and in (c), a luminal focus drove a superluminal wake that overtook and lost the trapped charge [Fig. 3(f)]. The total trapped charge in the first bubble for these cases is displayed in Fig. 3(d). Only the accelerating focus, as specified by Eq. (4), both maintained and accelerated the trapped charge over the entire focal region. Masking an inner portion of the axiparabola and echelon reduces the spot-size variation of the focused pulse and stabilizes its propagation through the focal region. With the laser amplitudes \(a_{0}>2\) needed for ionization injection and operation in the bubble regime, stable propagation of the flying-focus pulse requires that \(k_{\rm p}w_{0}<2\) [Eq. (2)]. However, the spot size of an ultrashort flying-focus pulse varies significantly (often by a factor of 4) over the focal region,[22; 23; 24; 27] making it impossible to satisfy \(k_{\rm p}w_{0}<2\) everywhere. This can be resolved by eliminating the section of the focal region where the spot size is largest. For all results here, the axiparabola and echelon optics were masked from 0 to \(0.54R\) to remove the first 30% of the focal region. The resulting spot size varied by a factor of only \(\sim\)1.6 (\(\sim\)1.4) in vacuum (plasma) over the shortened region [Fig. 2(b)]. The position of the plasma relative to the focal region also plays a critical role in ensuring stable propagation and in meeting the requirement of Eq. (2). Figure 4 displays temporal snapshots of the laser pulse envelope and electron density for two plasma configurations. In (a)-(c), the plasma began 0.54 cm into the focal region (\(L\) = 2 cm). The large amplitude of the pulse as it entered the plasma resulted in strong self-focusing and the trapping of an intense sub-pulse that deformed the bubble [see (c)]. In (d)-(g), the plasma began earlier at 0.465 cm into the focal region. Starting Figure 3: Dependence of plasma wave and electron beam properties on the focal trajectory. The on-axis, longitudinal electric field of the wake \(E_{z}\) for normalized focal velocities (a) specified by Eq. (4), (b) set to 0.9995, and (c) set to 1.0. The resulting normalized velocity of the back of the wake \(\beta_{\mathrm{w}}\) (dashed-dot) and the vacuum speed of light trajectory (dashed) are also shown. (d) The total trapped charge in the first bubble for cases (a)–(c). (e) and (f) Energy gain of the ionization-injected electrons in the first bubble for the cases (b) and (c), respectively. Only for the accelerating focus was the trapped charge both accelerated and maintained over the entire focal region. the plasma at this location (or even earlier), where the amplitude of the pulse is smaller, mitigates self-focusing and the trapping of light within the bubble. This prevents significant deformation of the bubble, as demonstrated by comparing the two cases at equal distances [cf. (c) and (g)]. 
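The numbers above can be checked directly. The sketch below evaluates the normalized vector potential from the quoted peak vacuum intensity (the \(a_{0}\) relation given with Eq. (1)) and the stability parameter of Eq. (2); the entrance values in the last line are assumptions, included only to illustrate how the criterion discriminates between configurations.

```python
import math

def a0_from_intensity(I0_w_cm2, lam0_um):
    """Normalized vector potential, a0 ~= 8.55e-10 * lambda0[um] * sqrt(I0[W/cm^2])."""
    return 8.55e-10 * lam0_um * math.sqrt(I0_w_cm2)

def stability_parameter(a0, kp_w0):
    """P0/Pc ~= (a0 * kp * w0)^2 / 32 from Eq. (2); stable propagation needs <~ 0.5."""
    return (a0 * kp_w0) ** 2 / 32.0

print("a0 at peak vacuum intensity :", a0_from_intensity(1.1e19, 1.054))   # ~3

# At the full vacuum-focus values (a0 ~ 3, kp*w0 ~ 3.5) the criterion is badly violated,
# which is why the plasma must begin where the pulse amplitude is still small.
print("P0/Pc at vacuum-focus values:", stability_parameter(3.0, 3.5))
# Illustrative entrance values (assumed) that would satisfy Eq. (2):
print("P0/Pc for a0=1.5, kp*w0=2.0 :", stability_parameter(1.5, 2.0))
```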
## III Discussion The laser-to-beam energy efficiency of 0.7% quoted in the Results section could be improved upon by increasing the amount of trapped charge or its energy gain. In the simulations, the accelerating field varied longitudinally along the electron beam: the field was stronger near the rear of the bubble and weaker closer to the center [Fig. 3(a)]. Loading the wake would produce a flat accelerating field and potentially reduce the final electron energy spread.[40; 41; 42] The amount of trapped charge could be increased by extending the argon-doped region, enlarging the normalized spot size \(k_{\mathrm{p}}w_{0}\), or using a density downramp. Fine-tuning the focal velocity to position the beam closer to the back of the bubble could increase the energy gain [see Fig. 3(f), where electrons were accelerated to nearly 3 GeV with a faster focal velocity]. Experimentally, the beam charge and energy could be optimized in real time by adjusting the focal trajectory using a deformable mirror and spatial light modulator pair instead of an echelon.[43; 44] The efficiency could also be increased by structuring the transverse profile of the laser pulse. Figure 4: Laser pulse and bubble evolution in a dephasingless laser wakefield accelerator. Snapshots of the laser intensity (bottom) and plasma density (top) at various distances for the schematic in Fig. 2(b). In (a)–(c), the plasma begins 0.54 cm into the focal region. In (d)–(g), the plasma begins 0.465 cm into the focal region. Panels (c) and (g) correspond to the same spatial location, but the deformation of the bubble due to trapped light is only observed in (c). The radial intensity profile incident on the axiparabola could be shaped to reduce the value of \(a_{0}\) after the ionization-injection region. This would place the accelerator in a more-linear regime and increase the efficiency at the cost of a longer accelerator.[45] Nonlinear propagation and transverse structures in the plasma density reduce the laser pulse amplitude and spot size relative to their vacuum values [Fig. 2(b)]. The axiparabola maps different annuli in the near field to different longitudinal locations in the far field. The resulting interference produces a radial intensity profile with concentric maxima [Fig. 1(a)]. When the flying-focus pulse has sufficient amplitude, these maxima can ponderomotively drive ring-like plasma waves that channel, trap, and deplete some of the laser light. For the case considered here, this reduced the on-axis value of \(a_{0}\) from 3 in vacuum to \(\sim\)2.2 in the plasma. In addition, relativistic self-focusing and channeling in the ring-like structures caused a rapid and sustained decrease in the spot size of each maximum. A flying-focus pulse with an amplitude \(a_{0}\ll 1\) did not produce these structures and propagated identically to the vacuum case, but with the focal velocity reduced by a factor of \(v_{\mathrm{g}}/c\). The stability of the accelerating structure is expected to improve with accelerator length. The on-axis intensity modulations visible in the vacuum \(a_{0}\) shown in Fig. 2(b) restrict the rear positioning of the electron beam in the wake. The modulations cause the bubble to oscillate, which can result in the rear sheath overtaking and detrapping the electron beam. The amplitude of the intensity modulations decreases for longer focal regions (see Methods section), meaning that a more-optimal beam placement should be possible for larger accelerator lengths. 
Radial masking of the optics enhances the efficiency and stability of a DLWFA but is not strictly required to accelerate over many dephasing lengths. When the optics were left unmasked for the case shown in Fig. 2, the laser pulse and bubble exhibited highly nonlinear evolution that was isolated to the beginning of the focal region. The large spot size at the beginning of the focal region resulted in substantial self-focusing followed by stochastic trapping, acceleration, and the loss of ionized electrons. Any self-focused light that was trapped within the bubble [as in Fig. 4(c)] propagated slightly slower than the speed of light and was eventually left behind. Thus, stable propagation, ionization injection, and acceleration still occurred farther into the focal region, albeit with lower charge and energy gains than in the masked case. More experimentally feasible alternatives to a fully preionized plasma could also be investigated. The simulations presented here assumed a preionized plasma with a transverse and longitudinal extent of 2.6 mm and 1.5 cm, respectively. For a meter-long accelerator (\(L=1\) m) using an axiparabola with the same f-number \(f_{\#}=f_{0}/2R=7\), the plasma would have to be preionized over a 15 cm diameter, which may be experimentally infeasible. A DLWFA could instead be designed so that the flying-focus pulse itself ionizes the plasma. However, this would require adjustments to the focal velocity and may change the nonlinear plasma response, which will require further investigation via simulation. Finally, other realizations of a flying focus or use of structured light may allow for additional optimization.[46; 47; 48; 49]. In conclusion, ionization injection and stable acceleration over 20 dephasing lengths in a bubble-regime DLWFA has been demonstrated. A stable accelerating structure was attained by (i) prescribing an accelerating focal trajectory to compensate for the changing spot size produced by the axiparabola, (ii) masking the interior of the optics to reduce the variation of the spot size within the plasma, and (iii) beginning the plasma farther into the focal region to mitigate self-focusing. With the same accelerating gradient, a 500-J laser pulse driving a DLWFA over \(\sim\)80 cm could produce an energy gain of 125 GeV. Further optimization of the focal trajectory could result in even higher acceleration gradients and efficiencies. ## IV Methods ### Dephasing length In a traditional laser wakefield accelerator, the laser pulse travels slower than the vacuum speed of light. Near-luminal electrons trapped in the wake can advance relative to the pulse and outrun the accelerating phase of the wakefield, a process known as dephasing. In the bubble regime, the accelerating field changes sign near the center of the bubble.[50] Thus, the electrons reach their maximum energy after advancing approximately one bubble radius relative to the laser pulse, i.e., upon moving from the back to the center of the bubble. The distance over which this occurs--the dephasing length--depends on the velocity of the front edge of the laser pulse and the bubble radius.[12] Specifically, \[L_{\mathrm{d}}=\frac{2}{3}\frac{\omega_{0}^{2}}{\omega_{\mathrm{p}}^{2}}w_{0}, \tag{5}\] where it has been assumed that the bubble radius is approximately equal to the spot size. In Ref.[12], a matching condition \(k_{\mathrm{p}}w_{0}=2\sqrt{a_{0}}\) was determined that leads to stable propagation for a traditional LWFA in the bubble regime. 
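Evaluating Eq. (5) together with this matching condition for the parameters used in this work (λ0 = 1.054 μm, a0 = 3, and the plasma density n_e = 7×10^18 cm^-3 listed in the Particle-in-cell simulations subsection) gives the dephasing length used for the comparisons below; a short numerical check:

```python
import numpy as np
from scipy.constants import c, e, m_e, epsilon_0

# Parameters used in this work.
lam0 = 1.054e-6          # laser wavelength [m]
n_e = 7e18 * 1e6         # plasma density [m^-3]
a0 = 3.0                 # vacuum normalized vector potential

omega_p = np.sqrt(n_e * e**2 / (epsilon_0 * m_e))   # plasma frequency
omega_0 = 2.0 * np.pi * c / lam0                     # laser frequency
k_p = omega_p / c

w0 = 2.0 * np.sqrt(a0) / k_p                         # matched spot size, kp*w0 = 2*sqrt(a0)
L_d = (2.0 / 3.0) * (omega_0 / omega_p) ** 2 * w0    # dephasing length, Eq. (5)

print(f"k_p * L_d = {k_p * L_d:.0f}")                # ~331
print(f"L_d       = {1e3 * L_d:.3f} mm")             # ~0.665 mm
```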
If this condition is met, the dephasing length is then given by \(k_{\mathrm{p}}L_{\mathrm{d}}=\frac{4}{3}(\omega_{0}^{2}/\omega_{\mathrm{p}}^{ 2})\sqrt{a_{0}}\). When comparing the DLWFA [Fig. 2(b)] to a traditional LWFA, various choices can be made in determining an equivalent dephasing length. The vacuum values of \(a_{0}=3\) and \(k_{\mathrm{p}}w_{0}=3.5\) could be used (which are approximately matched), yielding a dephasing length of \(L_{\mathrm{d}}\approx 331k_{\mathrm{p}}^{-1}\approx 0.665\,\mathrm{mm}\). If the value of \(a_{0}=2.2\) in the plasma is used instead, then \(L_{\mathrm{d}}\approx 284k_{\mathrm{p}}^{-1}\approx 0.569\,\mathrm{mm}\) assuming a matched spot size. Alternatively, the value of \(k_{\mathrm{p}}w_{0}=2.2\) in the plasma could be used to obtain \(L_{\mathrm{d}}\approx 210k_{\mathrm{p}}^{-1}\approx 0.422\,\mathrm{mm}\). All comparisons between the DLWFA and a traditional LWFA made in this work use the first and most-conservative choice, \(L_{\mathrm{d}}\approx 0.665\,\mathrm{mm}\), which corresponds to the simulation shown in Fig. 1(b). ### Design of the axiparabola, radial group delay, and radial chirp The initial laser fields used in the PIC simulation were obtained by propagating the laser pulse from the flying-focus optical assembly to the start of the simulation domain using a frequency-domain Fresnel integral.[51] The optical assembly applied three modifications to the laser pulse: (i) the phase from an axiparabola to focus each annulus of the pulse to a different longitudinal location; (ii) the radial group delay from an echelon to control the focal trajectory; and (iii) a chirp that varied with radius to preemptively invert group-velocity dispersion in the plasma. The axiparabola essentially uses spherical aberration to extend the focal region.[23; 27] Here, a positive focal range (\(L>0\)) was used so that the largest spot size occurs at the beginning of the focal region to better facilitate ionization injection. The echelon consisted of concentric rings of half-wavelength (\(\lambda_{0}/2\)) depth and variable widths determined by the desired focal trajectory.[22; 18] The radial chirp can be introduced by applying a variable-thickness coating to the surface of the echelon. For more-adaptive control over the focal trajectory, the echelon can be replaced by a deformable mirror and spatial light modulator.[22] The lineouts of vacuum \(a_{0}\) and spot size \(w_{0}\) shown in Fig. 2(b) were computed by evaluating the Fresnel integral at an initial point in the far field, then using the unidirectional pulse propagation equation[52; 53; 54] to model the laser propagation. For all results shown, the modeled axiparabola had a radius \(R=5\) cm, a nominal focal length \(f_{0}=70\) cm, and a nominal focal range of \(L=2\) cm. The laser pulse had a wavelength of \(\lambda_{0}=1.054\,\mathrm{\SIUnitSymbolMicro m}\) and a Gaussian temporal profile with an intensity FWHM of 15 fs. ### Particle-in-cell simulations All PIC simulations were performed using the quasi-3D geometry of Osiris [55; 56; 57], where modes 0 and 1 were retained in the azimuthal expansion. A customized field solver that mitigates errors from the numerical dispersion relation and the time-staggering of the electromagnetic fields was employed [58; 59]. As a result, no extraneous numerical corrections had to be made to the focal trajectory of the pulse, as has been done in prior simulations of dephasingless laser wakefield accelerators [26; 27; 28]. For the simulation pictured in Fig. 
2, the preionized background plasma was simulated with 32 particles per cell (\(2\times 2\times 8\)) out to a radius of \(40\,c/\omega_{\rm p}\) and 8 particles per cell (\(1\times 1\times 8\)) thereafter. The 9-14 levels of unionized argon electrons were simulated with a possible 8 particles per cell per level out to a radius of \(15\,c/\omega_{\rm p}\). The preionized plasma had an 80-\(\upmu\)m upramp (results were insensitive to the size of all upramps) followed by a uniform density of \(7\times 10^{18}\,{\rm cm}^{-3}\). In the ionization-injection region, this density was obtained via a 90% He\({}^{2+}\)/10% Ar\({}^{8+}\) mix (resulting in a 69%/31% respective contribution to the preionized electron background). The grid had \(4106\times 6570\) cells in \(z\times r\), with 30 points per laser wavelength and 10 points per plasma period, respectively. The time step was \(0.0102\,\omega_{\rm p}^{-1}\). Altogether, the simulation used a \(143\,\upmu{\rm m}\times 1.32\,{\rm mm}\) box and a total propagation distance (time) of 1.51 cm (50.2 ps). ### On-axis intensity modulations of the flying-focus pulse The on-axis electric field of a laser pulse focused by an axiparabola can be expressed as \[E(z,\omega_{0})=\frac{\omega_{0}}{icz}e^{-iA^{2}/4B}\int_{0}^{R}\exp\left[iB \left(r^{\prime 2}+\frac{A}{2B}\right)^{2}\right]r^{\prime}\,dr^{\prime}, \tag{6}\] where \(A=\omega_{0}/2c(z^{-1}-f_{0}^{-1})\) and \(B=\omega_{0}L/4cf_{0}^{2}R^{2}\). Integrating this expression and taking the squared norm (see Appendix C of Ref. [22]) results in an expression that exhibits modulations of scale length \[L_{\rm m}\approx\frac{2f_{0}}{R}\left(\frac{cL}{\omega_{0}}\right)^{1/2}. \tag{7}\] The modulation length increases with the focal length and focal range and decreases with the axiparabola radius. Suppressing the amplitude of the modulations requires a focal range \(L\gg L_{\rm m}\sqrt{2\pi}\) or \[L\gg 16f_{\rm ff}^{2}\lambda_{0}, \tag{8}\] where \(f_{\#}\) is the f-number. In the simulations presented here, \(L=20\,\mathrm{mm}\) and \(16f_{\#}^{2}\lambda_{0}=0.8\,\mathrm{mm}\). For the same \(f_{\#}\), longer acceleration lengths would reduce the amplitude of the modulations even further. ###### Acknowledgements. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof. This material is based upon work supported by the Office of Fusion Energy Sciences under Award Numbers DE-SC00215057 and DE-SC0010064, the Department of Energy National Nuclear Security Administration under Award Numbers DE-NA0003856 and DE-NA0004131, the National Science Foundation under Award Number 2108970, the University of Rochester, and the New York State Energy Research and Development Authority. Simulations were performed at NERSC under m4372. 
## Data availability statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2304.14088
Hyperparameter optimization of orthogonal functions in the numerical solution of differential equations
This paper considers the hyperparameter optimization problem of mathematical techniques that arise in the numerical solution of differential and integral equations. The well-known approaches grid and random search, in a parallel algorithm manner, are developed to find the optimal set of hyperparameters. Employing rational Jacobi functions, we ran these algorithms on two nonlinear benchmark differential equations on the semi-infinite domain. The configurations contain different rational mappings along with their length scale parameter and the Jacobi functions parameters. These trials are configured on the collocation Least-Squares Support Vector Regression (CLS-SVR), a novel numerical simulation approach based on spectral methods. In addition, we have addressed the sensitivity of these hyperparameters on the numerical stability and convergence of the CLS-SVR model. The experiments show that this technique can effectively improve state-of-the-art results.
Alireza Afzal Aghaei, Kourosh Parand
2023-04-27T11:00:00Z
http://arxiv.org/abs/2304.14088v1
Hyperparameter optimization of orthogonal functions in the numerical solution of differential equations ###### Abstract This paper considers the hyperparameter optimization problem of mathematical techniques that arise in the numerical solution of differential and integral equations. The well-known approaches grid and random search, in a parallel algorithm manner, are developed to find the optimal set of hyperparameters. Employing rational Jacobi functions, we ran these algorithms on two nonlinear benchmark differential equations on the semi-infinite domain. The configurations contain different rational mappings along with their length scale parameter and the Jacobi functions parameters. These trials are configured on the collocation Least-Squares Support Vector Regression (CLS-SVR), a novel numerical simulation approach based on spectral methods. In addition, we have addressed the sensitivity of these hyperparameters on the numerical stability and convergence of the CLS-SVR model. The experiments show that this technique can effectively improve state-of-the-art results. _Keywords--_ Nonlinear differential equations, Hyperparameter optimization, Jacobi polynomials, Machine learning ## 1 Introduction Estimating the unknown dynamics of a given physical system is an essential task in science and engineering. Mathematicians simulate these systems after expressing them in functional equations such as differential and integral equations. The failure of the analytical approaches in solving these problems, which mainly contain nonlinear terms, has led researchers to develop various numerical techniques. Finite Elements (FEM), Finite Volume (FVM), Meshless, and Spectral methods are some well-known efforts. These techniques consider a simple mathematical model defined by some parameters and hyperparameters. The former are internal variables learned during the training process, whereas the latter are external configurations that should be optimized to find the best estimator. These terms are initially back in the machine learning literature. For example, in a supervised machine learning task, the Support Vector Machine (SVM) algorithm considers a hyperplane for separating the given data into different classes. The vector that defines the hyperplane is a parameter, whereas the kernel function is a hyperparameter. Likewise, a linear combination of unknown weights and basis functions is usually considered for approximating the solution to a differential equation. Here the unknown weights are parameters, while the basis functions are a hyperparameter. Choosing the best combination of hyperparameters is critical and may significantly affect the result. Scientists take advantage of prior knowledge to choose sub-optimal ones. As an example, it is common to use logarithmic-spaced values for the regularization parameter of a machine learning model. In a mathematics setting, the intrinsic nature of the problem is utilized to choose appropriate hyperparameters. For instance, the periodic problems are usually approximated by the Fourier basis functions, and the rational functions may approximate the problems that reach their steady state, and so on. Regarding the development of artificial intelligence, hyperparameter optimization or simply tuning and finding the optimal hyperparameters, has evolved. Various approaches and techniques are proposed to handle this issue. Exhaustive search [1], probabilistic algorithms [2], gradient-based [3] and meta-heuristic algorithms [4] are some of these efforts. 
In a similar manner, mathematicians tried to develop some routines to find optimal hyperparameters that arises in the numerical simulation techniques. To name a few, Boyd [5] proved some theorems for rational Chebyshev functions, Sanyasiraju et al. [6] used a local optimization algorithm for Radial Basis Functions (RBFs), Cavoretto et al. [7] combined leave-one-out cross-validation with univariate global optimization techniques for RBFs, Tanguy et al. [8] presented a quadratic minimization procedure for optimal placement of poles in rational approximations by Muntz-Laguerre functions and Mi at al. [9] developed two algorithms for the adaptive selection of continuous rational orthogonal basis functions. However, it is valuable to provide methods for solving problems in a broader range rather than just specific cases. A wide range of physical problems involving initial and boundary value problems are defined on the infinite or semi-infinite domain. Approximating an accurate solution on the corresponding domain is essential. These problems are mostly solved with orthogonal polynomials or rational functions with completeness and orthogonality properties on the problem domain. The Hermite, Laguerre, and rational Jacobi functions are the most widely-used functions applied to these problems. Computational and mathematical properties of the rational functions encouraged scientists to choose them as the basis functions [10, 11, 12, 13, 14, 15, 16, 17, 18]. However, these functions suffer of hyperparameters such as rational mapping and length scale parameter. These can disturb the numerical solution in some cases [19]. In this research, we develop some algorithms and investigate the applications of machine learning algorithms to optimize the hyperparameters that appear during the numerical simulation of differential equations arising on semi-infinite domains. In the continuation of the article, we will discuss the preliminaries (section 2), the proposed method (section 3), and the state-of-the-art numerical results (section 4). Finally, the concluding remarks will be discussed in the last section. ## 2 Preliminaries In this section, we explain some prerequisites needed in the following sections. To do so, we first explain the Jacobi polynomials, then the CLS-SVR method will be recalled. The hyperparameter optimization techniques used in the rest of the work will be discussed. ### Orthogonal Polynomials Hermite, Laguerre, and Jacobi polynomials are the most well-known orthogonal polynomials defined on infinite, semi-infinite, and finite intervals. They can be used to approximate functions in corresponding domains. However, approximating rational functions may not be very accurate using polynomials. Therefore, researchers proposed various rational mappings to handle this issue. 
In general, Jacobi polynomials with hyperparameters \(\alpha,\beta\) are defined on the interval \([-1,1]\) by the recursive expression

\[\begin{split}& J_{i}^{\alpha,\beta}(x)=-\frac{(\alpha+i-1)(\beta+i-1)(\alpha+\beta+2i)}{i(\alpha+\beta+i)(\alpha+\beta+2i-2)}J_{i-2}^{\alpha,\beta}(x)\\ &+\frac{(\alpha+\beta+2i-1)\left\{\alpha^{2}-\beta^{2}+x(\alpha+\beta+2i)(\alpha+\beta+2i-2)\right\}}{2i(\alpha+\beta+i)(\alpha+\beta+2i-2)}\\ &\quad\times J_{i-1}^{\alpha,\beta}(x),\quad i=2,3,\ldots,\end{split} \tag{1}\]

where

\[J_{0}^{\alpha,\beta}(x)=1,\quad J_{1}^{\alpha,\beta}(x)=\frac{\alpha+\beta+2}{2}x+\frac{\alpha-\beta}{2}.\]

Their orthogonality is defined using the \(L^{2}\) inner product:

\[\int_{-1}^{1}(1-x)^{\alpha}(1+x)^{\beta}J_{m}^{(\alpha,\beta)}(x)J_{n}^{(\alpha,\beta)}(x)\,dx=\frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+1)\,n!}\delta_{nm}.\]

The Gegenbauer, Chebyshev, and Legendre polynomials are some special cases of Jacobi polynomials. For Legendre polynomials, the equation (1) with \(\alpha=\beta=0\) reduces to:

\[\begin{split}& P_{0}(x)=1,\quad P_{1}(x)=x,\\ &(n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x),\quad n\geq 1,\end{split} \tag{2}\]

and for Chebyshev with \(\alpha=\beta=-\nicefrac{{1}}{{2}}\) we have

\[\begin{split}& T_{0}(x)=1,\quad T_{1}(x)=x,\\ & T_{n+1}(x)=2x\,T_{n}(x)-T_{n-1}(x),\quad n\geq 1.\end{split} \tag{3}\]

Although these polynomials are defined on a bounded domain, researchers have used nonlinear maps \(\phi\) with the property \(\phi:[0,\infty)\rightarrow[-1,1]\) to transform the orthogonality to the semi-infinite domain. To the best of our knowledge, these three mappings are the most widely used among researchers [20]:

* Algebraic mapping: \(\phi(x)=(x-\theta)/(x+\theta)\).
* Exponential mapping: \(\phi(x)=1-2\mathrm{e}^{-x/\theta}\).
* Logarithmic mapping: \(\phi(x)=2\tanh\left(x/\theta\right)-1\).

The rational Jacobi functions are defined by the simple transformation \(\tilde{J}(x)=J(\phi(x))\), where \(\phi(x)\) is a nonlinear rational mapping. Therefore, the orthogonality takes the form

\[\int_{0}^{\infty}\tilde{J}_{m}^{(\alpha,\beta)}(x)\tilde{J}_{n}^{(\alpha,\beta)}(x)w(x)\,dx=\frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+1)\,n!}\delta_{nm}, \tag{4}\]

where \(w(x)\) is the corresponding weight function of the inner product.

### Collocation Least-Squares Support Vector Regression

Collocation Least-Squares Support Vector Regression (CLS-SVR) is a novel formulation of support vector machines for solving functional equations [21, 22, 23]. In this machine learning model, the unknown solution of a differential equation is approximated by a linear combination of unknown coefficients and some known basis functions. The basis functions, also known as feature maps, transform the input data into a nonlinear space in which we hope the approximation accuracy will increase. In the following, we recall the formulation of CLS-SVR, which is based on the paper [24]. Suppose that we want to solve an arbitrary differential equation in the form of \(\mathcal{N}(u)=f\).
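The basis functions that serve as the feature map in the approximation below can be taken to be these rational Jacobi functions. The following sketch evaluates the recurrence (1) and composes it with the three mappings above; the helper names are ours and not part of any library.

```python
import numpy as np

def jacobi(n, alpha, beta, x):
    """Evaluate J_n^{(alpha,beta)}(x) with the three-term recurrence (1)."""
    x = np.asarray(x, dtype=float)
    J_prev = np.ones_like(x)                                         # J_0
    if n == 0:
        return J_prev
    J_curr = 0.5 * (alpha + beta + 2.0) * x + 0.5 * (alpha - beta)   # J_1
    for i in range(2, n + 1):
        a1 = (alpha + i - 1) * (beta + i - 1) * (alpha + beta + 2 * i)
        a2 = i * (alpha + beta + i) * (alpha + beta + 2 * i - 2)
        b1 = (alpha + beta + 2 * i - 1) * (alpha**2 - beta**2
              + x * (alpha + beta + 2 * i) * (alpha + beta + 2 * i - 2))
        b2 = 2 * i * (alpha + beta + i) * (alpha + beta + 2 * i - 2)
        J_prev, J_curr = J_curr, -(a1 / a2) * J_prev + (b1 / b2) * J_curr
    return J_curr

def rational_jacobi(n, alpha, beta, x, theta, mapping="algebraic"):
    """Rational Jacobi basis J_n(phi(x)) on [0, inf) for the three mappings above."""
    x = np.asarray(x, dtype=float)
    if mapping == "algebraic":
        phi = (x - theta) / (x + theta)
    elif mapping == "exponential":
        phi = 1.0 - 2.0 * np.exp(-x / theta)
    else:  # logarithmic
        phi = 2.0 * np.tanh(x / theta) - 1.0
    return jacobi(n, alpha, beta, phi)

# Example: a Legendre-type rational basis (alpha = beta = 0) with length scale theta = 1.
x = np.linspace(0.0, 20.0, 5)
print(rational_jacobi(3, 0.0, 0.0, x, theta=1.0))
```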
Approximating the solution by \(m\) basis functions, we have: \[\tilde{u}(x)=w^{T}\varphi(x)+b=\sum_{i=1}^{m}w_{i}\varphi_{i}(x)+b.\] The primal optimization form of CLS-SVR takes the form: \[\begin{split}\min_{w,e}&\quad\frac{1}{2}w^{T}w+ \frac{\gamma}{2}e^{T}e\\ \mathrm{s.t.}&\quad\mathcal{N}(\tilde{u})(x_{k})-f( x_{k})=e_{k},\quad k=1,\ldots,n,\\ &\quad\tilde{u}(c_{j})=u_{j},\quad j=1,\ldots,d.\end{split} \tag{5}\] where \(n\) is the number of training points, \(e_{k}\) is the value of the residual function at \(k\)-th training point, and \(\tilde{u}(c_{j})=u_{j}\) is the set of initial and boundary conditions. The regularization hyperparameter \(\gamma\) controls the fluctuations of the learned solution and reduces the overfitting on the training data. However, this hyperparameter may not be optimized in the case of problems with unique solutions. The dual form of this optimization problem leads to a linear or nonlinear system of equations. This system can be obtained using the Lagrangian function for (5): \[\mathscr{L}(w,e,\alpha)=\frac{1}{2}w^{T}w+\frac{\gamma}{2}e^{T}e-\sum_{k=1}^{ n}\alpha_{k}\left[\mathcal{N}(\tilde{u})(x_{k})-f(x_{k})-e_{k}\right]-\sum_{j=1}^{ d}\beta_{j}\left[\tilde{u}(c_{i})-u_{i}\right].\] The saddle point of this function satisfies the solution of the problem: \[\left\{\frac{\partial\mathscr{L}}{\partial w_{i}}=0,\frac{\partial\mathscr{L }}{\partial e_{k}}=0,\frac{\partial\mathscr{L}}{\partial\alpha_{k}}=0,\frac{ \partial\mathscr{L}}{\partial\beta_{j}}=0\right\},\] where \(i=1,\cdots,m\), \(j=1,\cdots,d\), and \(k=1,\cdots,n\). After solving this system, we can use the obtained weights \(w\) to approximate the equation or use the dual variables in a kernel method sense. ### Hyperparameter optimization The problem of hyperparameter optimization is usually expressed as a non-convex minimization or maximization problem. Various algorithms proposed to solve this problem can be categorized into gradient-based and gradient-free sets. The former uses the gradient of a loss function to find an optimum value. At the same time, by generating some candidates, the latter tries to approximate the search space to find a local or possibly global optimum. Grid search, random search, Bayesian optimization, and meta-heuristic algorithms are some examples of gradient-free methods. Grid and random search have been widely used because of their simplicity and acceptable performance. Moreover, they can be massively run in parallel. Briefly, the grid search seeks the Cartesian product of given parameter sets, whereas the random search samples a fixed number of parameters from user-defined distributions. For categorical cases, the hyperparameter is chosen uniformly. In both methods, the best parameter set would be measured on a test dataset usually generated by the K-Fold cross-validation algorithm. Figure 1 compares these two algorithms. A key difference between grid and random search is their computational complexity. The grid search seeks all possible values in \(O(n^{k})\) time, having \(k\) hyperparameters and \(n\) different values for each of them, while for a random search, the user can define a budget based on the available resources so that the algorithm will run on the time complexity of \(O(n)\). For a more precise comparison, we refer to the authors [25, 26]. ## 3 Method and Results In this section, we explain the proposed algorithm and then provide examples to show the methods' efficiency. 
Here, we focus on the hyperparameter optimization of orthogonal rational Jacobi functions, although the presented algorithms can easily be extended to other mathematical models. The optimal set of hyperparameters should be assessed on a test dataset to prevent overfitting. In machine learning, this is easily done using cross-validation techniques. In the setting of mathematical equations, however, there are no data to split into training and test sets. There are three alternative options to handle this issue. The first is to use a set of new nodes (different from the training points) in the domain and calculate the prediction error on these nodes. The second is to employ known physical properties of the model. The last is to use criteria obtained by mathematical techniques, either analytical or highly accurate numerical ones; this option can be seen as a combination of the previous two. Most of the criteria used in the literature have clear physical meanings, which makes them suitable accuracy tests. Therefore, we use the absolute difference between the exact (or state-of-the-art) value and the one predicted by CLS-SVR. The proposed grid and random search procedures are presented in Algorithms 1 and 2. We obtain optimal numerical results for some well-known benchmark nonlinear differential equations using the proposed method in the following sections. The configuration used to find an accurate numerical solution for these problems is reported in Tables 1 and 2. The grid search evaluates 600 different configurations to find the optimal set. To have a fair comparison, we set the maximum number of iterations for the random search equal to the number of grid nodes. In addition, the roots of the Legendre and Chebyshev polynomials are used as the training data.

```
Data: the differential equation ODE
  S_i <- set of all desired values for the i-th hyperparameter
  CriteriaList <- []
  Parameter_Set <- CartesianProduct({S_i : for all i})
  for param_set in Parameter_Set, in parallel, do
      u~ <- CLS-SVR(ODE) with hyperparameters param_set
      compute the criterion for u~
      push the criterion to CriteriaList
  end for
Result: the param_set associated with the best criterion
```
**Algorithm 1** Grid search algorithm

```
Data: the differential equation ODE
  S_i <- a suitable distribution for the i-th hyperparameter
  CriteriaList <- []
  for iter from 1 to MAX_ITER, in parallel, do
      param_set <- a sample drawn from the distributions {S_i}
      u~ <- CLS-SVR(ODE) with hyperparameters param_set
      compute the criterion for u~
      push the criterion to CriteriaList
  end for
Result: the param_set associated with the best criterion
```
**Algorithm 2** Random search algorithm

Figure 1: A comparison between random and grid searches for a simple search space with only two different parameters. It can be seen that grid search may fail to explore the search space efficiently. The figure is adapted from Bergstra et al. [1]. ### Volterra's population model Volterra's population model is a nonlinear integro-differential equation that describes the population growth of a species within a closed system [24]. It is common to express this problem as an equivalent nonlinear differential equation [16]: \[\begin{split}&\kappa u^{\prime\prime}(x)=u^{\prime}(x)-u^{\prime}(x)^{2}-u(x)u^{\prime}(x),\\ & u(0)=0,\quad u^{\prime}(0)=0.1.\end{split} \tag{6}\] Here \(\kappa\) is a non-dimensional parameter.
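The two tuning loops of Algorithms 1 and 2 can be organized in code roughly as follows before being applied to this benchmark (a sketch of ours with hypothetical helper names: `cls_svr_solve` stands for the CLS-SVR fit and `criterion` for the problem-specific accuracy measure, here assumed to be an error to minimize; the loop iterations are independent and can be run in parallel):

```python
import itertools, random

def grid_search(ode, search_space, cls_svr_solve, criterion):
    """Algorithm 1: exhaustively score the Cartesian product of the value sets."""
    best = None
    for values in itertools.product(*search_space.values()):
        params = dict(zip(search_space.keys(), values))
        score = criterion(cls_svr_solve(ode, **params))
        if best is None or score < best[0]:
            best = (score, params)
    return best

def random_search(ode, distributions, cls_svr_solve, criterion, max_iter=600, seed=0):
    """Algorithm 2: score max_iter configurations sampled from user-given distributions."""
    rng = random.Random(seed)
    best = None
    for _ in range(max_iter):
        params = {name: sample(rng) for name, sample in distributions.items()}
        score = criterion(cls_svr_solve(ode, **params))
        if best is None or score < best[0]:
            best = (score, params)
    return best

# Search spaces mirroring Table 1 (grid) and a continuous analogue for random search.
grid_space = {"kernel": ["Legendre", "Chebyshev"],
              "mapping": ["algebraic", "exponential", "logarithmic"],
              "theta": [0.1 * k for k in range(1, 101)]}
random_space = {"kernel": lambda r: r.choice(["Legendre", "Chebyshev"]),
                "mapping": lambda r: r.choice(["algebraic", "exponential", "logarithmic"]),
                "theta": lambda r: r.uniform(0.0, 10.0)}
```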
The criterion for the prediction correctness of this problem is the maximum value of the approximated solution. TeBeest [27] showed that the maximum peak is: \[u_{max}=1+\kappa\ln\left(\frac{\kappa}{1+\kappa-u^{\prime}(0)}\right), \tag{7}\] where \(u^{\prime}(0)=0.1\) is the initial population value in formulation (6). Considering (7) as the exact value for (6), the absolute error for this equation is defined as \[\text{Absolute Error}:=\mid u_{max}-\max_{t\in(0,\infty)}\tilde{u}(t)\mid. \tag{8}\] To find a reasonable range for the length scale parameter and the effect of the other hyperparameters, we first ran a sensitivity analysis over a large domain. The results are reported in Figure 2. It can be seen that large values of the length scale do not yield a good approximation; the maximum reasonable value for this task is about 10. In addition, the choice of basis functions does not impose any significant difference on this parameter. Moreover, the nonlinear mappings can affect the computational procedure of the optimization problem (5); this issue results in discontinuities in these figures. Figure 3 plots some of the successfully learned approximate solutions. It is seen that the equation may not be accurately simulated with improper hyperparameters. After the sensitivity analysis, we simulate this problem for five commonly used values of the non-dimensional parameter \(\kappa\) using the grid and random search algorithms. As reported in Tables 1 and 2, the interval \((0,10]\) is chosen for the length scale parameter. Tables 3 and 4 report the best-obtained hyperparameters. From these, it can be inferred that the algebraic mapping is the best choice for small values of \(\kappa\), while for larger values the exponential mapping obtains better approximations. Moreover, the choice of kernel function and nonlinear mapping leads to slightly different accuracies. To show the effectiveness of the proposed algorithm, we compare in Table 5 the absolute errors found by various authors. The rational Legendre (RL) and rational Chebyshev (RC) pseudospectral methods [17], the Sinc collocation method (SCM) [16], and the fractional rational Legendre (FRL) method [24] are compared. The length scale values used by Parand et al. [24] were optimized in a manner similar to that proposed by Boyd [19]. ### Kidder equation The unsteady isothermal flow of a gas through a micro-nano porous medium can be expressed as a nonlinear differential equation [18]. This equation, defined on a semi-infinite domain, is modeled as: \[\begin{split}& u^{\prime\prime}(x)+\frac{2x}{\sqrt{1-\kappa u(x)}}u^{\prime}(x)=0,\\ & u(0)=1,\quad u(\infty)=0.\end{split} \tag{9}\] \begin{table} \begin{tabular}{l l} \hline \hline Parameter & Values \\ \hline Kernel & \{Legendre, Chebyshev\} \\ Mapping & \{Algebraic, Exponential, Logarithmic\} \\ \(\theta\) & \(\{0.1,0.2,\cdots,9.9,10\}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The search space of the grid search. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(\kappa\) & Kernel & Mapping & \(\theta\) & Exact [27] & Approximate & Error \\ \hline 0.02 & Legendre & \((x-\theta)/(x+\theta)\) & 0.10000 & 0.92342717207022 & 0.92342711545307 & \(5.662\times 10^{-08}\) \\ 0.04 & Legendre & \((x-\theta)/(x+\theta)\) & 0.70000 & 0.873719983000000 & 0.873719980000000 & \(3.090\times 10^{-09}\) \\ \hline \hline \end{tabular} \end{table} Table 3: The obtained results for the Volterra population equation using the grid search algorithm (only the rows recoverable from the source are shown).
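As a quick sanity check of the criterion (8), the closed-form peak (7) can be evaluated directly for the values of \(\kappa\) used in the experiments (a short verification script of ours, taking \(0.1\) as the initial population value; the printed values can be compared against the "Exact" column of Tables 3-5):

```python
import math

def volterra_peak(kappa, u0=0.1):
    # TeBeest's closed-form maximum of the population, cf. equation (7).
    return 1.0 + kappa * math.log(kappa / (1.0 + kappa - u0))

def absolute_error(kappa, approx_peak, u0=0.1):
    # Accuracy criterion of equation (8).
    return abs(volterra_peak(kappa, u0) - approx_peak)

for kappa in (0.02, 0.04, 0.10, 0.20, 0.50):
    print(f"kappa = {kappa:4.2f}  ->  u_max = {volterra_peak(kappa):.14f}")
```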
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(\kappa\) & Kernel & Mapping & \(\theta\) & Exact [27] & Approximate & Error \\ \hline 0.02 & Chebyshev & \((x-\theta)/(x+\theta)\) & 0.539501186666072 & 0.92342717207022 & 0.92342733722160 & \(1.652\times 10^{-07}\) \\ 0.04 & Chebyshev & \((x-\theta)/(x+\theta)\) & 0.318328463774207 & 0.87371998315400 & 0.87371998508417 & \(1.930\times 10^{-09}\) \\ 0.10 & Chebyshev & \(1-2\exp(-x/\theta)\) & 1.626117351946306 & 0.769741490706060 & 0.76974149073275 & \(3.216\times 10^{-11}\) \\ 0.20 & Chebyshev & \(1-2\exp(-x/\theta)\) & 2.510838579760311 & 0.65905038155232 & 0.65905038153414 & \(1.818\times 10^{-11}\) \\ 0.50 & Legendre & \(2\tanh(x/\theta)-1\) & 6.797026768536748 & 0.48519029140942 & 0.4851902914142 & \(2.007\times 10^{-12}\) \\ \hline \hline \end{tabular} \end{table} Table 4: The obtained results for the Volterra population equation using the random search algorithm. Figure 3: A set of learned solutions with different values of the length scale \(\theta\) for Volterra's population model with \(\kappa=0.5\). \begin{table} \begin{tabular}{l l l l l l} \hline \hline \(\kappa\) & RL [17] & RC [17] & FRL [24] & SCM [16] & Presented Method \\ \(m\) & 50 & 50 & 40 & 35 & 40 \\ \hline 0.02 & \(3.72\times 10^{-07}\) & \(7.51\times 10^{-07}\) & \(6.33\times 10^{-06}\) & \(7.00\times 10^{-08}\) & \(5.66\times 10^{-08}\) \\ 0.04 & \(1.43\times 10^{-08}\) & \(5.27\times 10^{-08}\) & \(6.75\times 10^{-08}\) & \(3.00\times 10^{-08}\) & \(1.93\times 10^{-09}\) \\ 0.10 & \(1.07\times 10^{-10}\) & \(2.13\times 10^{-10}\) & \(2.73\times 10^{-08}\) & \(8.00\times 10^{-08}\) & \(7.89\times 10^{-12}\) \\ 0.20 & \(3.53\times 10^{-11}\) & \(2.33\times 10^{-10}\) & \(8.57\times 10^{-10}\) & \(3.30\times 10^{-07}\) & \(1.82\times 10^{-11}\) \\ 0.50 & \(2.44\times 10^{-09}\) & \(4.87\times 10^{-09}\) & \(2.73\times 10^{-10}\) & \(3.40\times 10^{-07}\) & \(2.01\times 10^{-12}\) \\ \hline \hline \end{tabular} \end{table} Table 5: A comparison among mathematical methods for solving Volterra's population model on the semi-infinite domain. Figure 2: The effect of the length scale parameter \(\theta\) on a large domain for Volterra's population model with \(m=25\) and \(\kappa=0.5\). The initial slope of the approximate solution is an essential measure of accuracy for this problem. To date, no exact solution or initial slope has been found. However, some researchers have developed advanced techniques to compute highly accurate solutions. In this research, we use the values obtained by Parand et al. [20] as the exact reference.
That work obtained the exact initial slope up to 38 digits of precision using software that supports arbitrary-precision arithmetic. Here, we focus only on the problem of choosing the best hyperparameters; thus, we compare our results using a small number of basis functions, similar to the number used in that work. Furthermore, they utilized the Quasi-Linearization Method (QLM) to convert the problem of approximating solutions of nonlinear Ordinary Differential Equations (ODEs) into a sequence of dependent linear ODEs, which multiplies the computational cost by the number of QLM iterations. Here we solve the original nonlinear problem directly, which is more computationally efficient. As in the previous example, we first analyze the effect of the hyperparameters. Figure 4 shows the absolute error obtained for different sets of hyperparameters. It is seen that the Legendre kernel reaches better results than the Chebyshev functions. Furthermore, some of the hyperparameters lead to numerical issues with the Legendre basis functions, and hence the plot is discontinuous. As before, the interval \((0,10]\) for the length scale contains the best results; therefore we focus on this range in the subsequent experiments. Tables 6 and 7 report the best results obtained by the grid and random search, respectively. From these, it can be seen that the Legendre kernel with the algebraic mapping is the best choice across all of the tested configurations. Furthermore, the random search algorithm outperformed the grid search in all of the experiments. A comparison between the presented method and other related works is given in Table 8. Some of the simulated approximations are plotted in Figure 5. ## 4 Conclusion This paper developed two machine learning techniques for increasing the accuracy of the numerical simulation of functional equations. The presented Algorithms 1 and 2 are general tuning procedures capable of hyperparameter optimization for various mathematical approaches such as spectral methods, RBFs, and wavelets. In this research, however, we focused on the spectral method for approximating the solution of nonlinear differential equations on the semi-infinite domain. To do so, we configured the search space over the hyperparameters of the rational Jacobi functions, namely the basis functions, the nonlinear mappings, and the length scale parameter. Finally, in the numerical results, various experiments were conducted to measure the search capability of the proposed algorithms. We discussed the role of the length scale parameter in the stability and convergence of the method. In addition, comparisons with related works were carried out to show the superiority of these algorithms over traditional mathematical procedures. This, together with the small computational complexity handled with parallel programming approaches, makes the process efficient and easy to use for other researchers. Modern gradient-free global optimization techniques, such as Bayesian optimization and the Tree-structured Parzen Estimator, could further be developed to obtain better approximations.
2308.05250
Discovery of a variable multi-phase outflow in the X-ray-emitting tidal disruption event ASASSN-20qc
Tidal disruption events (TDEs) are exotic transients that can lead to temporary super-Eddington accretion onto a supermassive black hole. Such accretion mode is naturally expected to result in powerful outflows of ionized matter. However, to date such an outflow has only been directly detected in the X-ray band in a single TDE, ASASSN-14li. This outflow has a low velocity of just a few 100 km/s, although there is also evidence for a second, ultra-fast phase. Here we present the detection of a low-velocity outflow in a second TDE, ASASSN-20qc. The high-resolution X-ray spectrum reveals an array of narrow absorption lines, each blueshifted by a few 100 km/s, which cannot be described by a single photo-ionization phase. For the first time, we confirm the multiphase nature of a TDE outflow, with at least two phases and two distinct velocity components. One highly ionized phase is outflowing at $910^{+90}_{-80}$ km/s, while a lower ionization component is blueshifted by $400_{-120}^{+100}$ km/s. We perform time-resolved analysis of the X-ray spectrum and detect that, surprisingly, the mildly ionized absorber strongly varies in ionization parameter over the course of a single 60 ks observation, indicating that its distance from the black hole may be as low as 400 gravitational radii. We discuss these findings in the context of TDEs and compare this newly detected outflow with that of ASASSN-14li.
P. Kosec, D. Pasham, E. Kara, F. Tombesi
2023-08-09T22:50:04Z
http://arxiv.org/abs/2308.05250v1
Discovery of a variable multi-phase outflow in the X-ray-emitting tidal disruption event ASASSN-20qc ###### Abstract Tidal disruption events (TDEs) are exotic transients that can lead to temporary super-Eddington accretion onto a supermassive black hole. Such accretion mode is naturally expected to result in powerful outflows of ionized matter. However, to date such an outflow has only been directly detected in the X-ray band in a single TDE, ASASSN-14li. This outflow has a low velocity of just a few 100 km/s, although there is also evidence for a second, ultra-fast phase. Here we present the detection of a low-velocity outflow in a second TDE, ASASSN-20qc. The high-resolution X-ray spectrum reveals an array of narrow absorption lines, each blueshifted by a few 100 km/s, which cannot be described by a single photo-ionization phase. For the first time, we confirm the multiphase nature of a TDE outflow, with at least two phases and two distinct velocity components. One highly ionized phase is outflowing at \(910^{+90}_{-80}\) km/s, while a lower ionization component is blueshifted by \(400^{+100}_{-120}\) km/s. We perform time-resolved analysis of the X-ray spectrum and detect that, surprisingly, the mildly ionized absorber strongly varies in ionization parameter over the course of a single 60 ks observation, indicating that its distance from the black hole may be as low as 400 gravitational radii. We discuss these findings in the context of TDEs and compare this newly detected outflow with that of ASASSN-14li. Accretion (14), Supermassive black holes (1663), Tidal disruption (1696) 0000-0002-4818-8082]P. Kosec 0000-0002-3072-888X]D. Pasham 0000-0002-4883-0887]E. Kara 0000-0002-4883-0887]F. Tombesi ## 1 Introduction A tidal disruption event (TDE) is an exotic transient during which a star is disrupted as it ventures too close to a supermassive black hole (Rees, 1988). A significant fraction of the star's mass is accreted, which can lead to temporary super-Eddington accretion rates onto the black hole. In recent years, dozens of TDEs were discovered, in the optical band (van Velzen et al., 2021; Hammerstein et al., 2023) as well as in the X-rays (Sazonov et al., 2021). For recent reviews, see Gezari (2021) and Saxton et al. (2020). These events open a unique window into the lives of supermassive black holes in galaxies, the majority of which are inactive and thus challenging to study through other means. The violent, supercritical nature of this phenomenon is naturally expected to result in massive and high-velocity (\(\sim 0.1c\)) outflows of ionized matter from the accretion flow (Shakura and Sunyaev, 1973). Such outflows are observed in simulations of supercritical flows (Ohsuga et al., 2009; Ohsuga and Mineshige, 2011; Takeuchi et al., 2013) and were observationally confirmed in other super-Eddington or highly accreting systems such as ultraluminous X-ray sources (Pinto et al., 2016; Kosec et al., 2018; Pinto et al., 2021) and active galactic nuclei (e.g. Pounds et al., 2003; Tombesi et al., 2010). Physical models and simulations of super-Eddington accretion predict the existence of a geometrically and optically thick accretion flow (Ohsuga et al., 2005). The appearance and emission pattern of such a flow is thus inherently strongly non-isotropic. This is a possible explanation for why some TDEs are primarily bright in the optical band, and some others instead in the X-rays (Dai et al., 2018). 
It is thus of great importance to detect and un derstand the properties of outflows launched by TDEs, as they may strongly modify the inner accretion flow properties. Observational evidence for these outflows in TDEs has however been sparse so far. The first TDE with a confirmed outflow in the X-rays was the nearby event ASASSN-14li (Miller et al., 2015). The properties of this detected outflow phase are rather puzzling - Miller et al. (2015) found that the outflow had a low systematic velocity of just \(200-300\) km/s, in contrast with the expected velocities in excess of 5% of the speed of the light, as seen in other supercritical systems mentioned above. Kara et al. (2018) later found evidence for a second, high-velocity component of the outflow at \(\sim 0.2\)c in ASASSN-14li, and Kara et al. (2016) found evidence for a high-velocity outflow originating from the inner accretion flow of the jetted TDE Swift J1644+57 through X-ray reverberation. However, it is unclear what is the driving mechanism of any of these X-ray outflows, and what is their relationship with the TDE. Clearly, the detection of outflows in further TDEs is necessary to understand their nature, physics, and impact on the accretion flow as well as the black hole surroundings. Outside the X-ray band, an outflow was detected in the TDE AT2019qiz using the optical line shape evolution (Nicholl et al., 2020), and likely corresponds to the expanding TDE photosphere, which is the dominant source of the optical and UV radiation. Here we study ASASSN-20qc (z=0.056, Stanek, 2020; Hinkle, 2022), previously a low-luminosity AGN, which turned into an X-ray bright TDE (Pasham et al. submitted), sharing a number of similarities with ASASSN-14li. ASASSN-20qc was the target of a large multi-wavelength campaign in 2021. It was detected in the X-rays by the _eROSITA_ survey and observed by _XMM-Newton_ and _NICER_ observatories. Its X-ray spectrum reveals a soft X-ray continuum, which can be broadly described by a disk blackbody with a temperature of \(\sim\)90 eV. Similar to ASASSN-14li (Kara et al., 2018), the spectrum appears to show a significant broad dip around the Wien tail. This feature, if modeled as an absorption line, suggests an outflow velocity of 0.3c. We refer to Pasham et al. (submitted) for a detailed analysis of this ultra-fast outflow component. In this paper, we instead focus on the high-resolution _XMM-Newton_ RGS spectra of ASASSN-20qc, which reveal an array of absorption lines, similar to those found in ASASSN-14li. ASASSN-20qc is only the second TDE to exhibit such narrow lines. The structure of this paper is as follows. Our data preparation and reduction is summarized in Section 2. The spectral modelling and the results are described in Section 3. We discuss the results and their implications in Section 4 and conclude in Section 5. ## 2 Data Reduction and Preparation _XMM-Newton_(Jansen et al., 2001) observed ASASSN-20qc six times to date. Here we analyze observation 0852600301, the only observation which occurred when the source was in a high flux state, allowing us to use the high-spectral resolution data from the Reflection Grating Spectrometers (RGS). During the remaining five _XMM-Newton_ observations, ASASSN-20qc reached an order of magnitude lower count rates. Observation 0852600301 occurred on March 14 2021 and had a duration of \(\approx\)60 ks. The data were downloaded from the XSA archive and reduced using a standard pipeline with sas v20, caldb as of 2022 April. 
We use data from RGS (den Herder et al., 2001) and from European Photon Imaging Camera (EPIC) pn (Struder et al., 2001). The RGS data were reduced following standard routines using the rgspproc procedure, centering the extraction regions on the coordinates of ASASSN-20qc. We filtered for any periods of high background, but the RGS detectors were not significantly affected by any major flares, and the clean exposure of each detector is about 58.5 ks. RGS 1 and 2 data were not stacked, but were fitted simultaneously in all spectral fits using a cross-calibration constant. The value of this parameter was always close to 1, indicating \(<5\%\) calibration difference between the two instruments. We binned the RGS spectra by a factor of 3 to achieve only mild oversampling of the instrumental spectral resolution. This was achieved with the 'bin' command in the SPEX fitting package (Kaastra et al., 1996). We use the RGS data in the 15 A (0.83 keV) to 36 A (0.34 keV) wavelength range. The lower limit is set by the data quality - there is no signal in RGS below 15 A due to the softness of the ASASSN-20qc X-ray spectrum. To constrain the continuum model, we also examined the EPIC PN data that extends to higher energies. The EPIC PN instrument was operated in the Small Window mode. The data were reduced using the epproc procedure, and only events of PATTERN\(\leq\)4 (single/double) were accepted. We screened for background flares with a threshold of 0.15 ct/s in the 10-12 keV lightcurve, keeping in mind the small area of the active CCD during Small Window mode operation. This resulted in a clean exposure time of 35.5 ks. The source region was a circle with a radius of 15 arcsec centered on ASASSN-20qc position. We specifically chose a small source region size to maximize the signal-to-noise ratio and decrease the background importance at higher energies (\(>1\) keV) considering the extreme softness of ASASSN-20qc. The background region was a polygon on the other side of the pn CCD as the source, at least 150 arcsec away. The source has a count rate of 3 ct/s, and so pile-up should not be an issue during the observation. We confirmed this by assessing the pile-up plots produced by the epatplot routine. The data were grouped using the specgroup procedure to at least 25 counts per bin and at the same time to oversample the instrumental resolution by at most a factor of 3. We use EPIC pn in the wavelength range between 8 A (1.55 keV, limited by data quality) and 15 A (0.83 keV, RGS data available above this limit). We do not use any EPIC pn data beyond 1.55 keV as the spectrum is strongly background-dominated in that range. We fit the spectra in the specx(Kaastra et al., 1996) fitting package. All reduced spectra were converted from ogip format into spec format using the trafo routine. We use Cash statistic (Cash, 1979) to analyze the spectra. All uncertainties are provided at 1\(\sigma\) significance. ## 3 Spectral Modelling and Results ### RGS analysis The RGS spectrum of ASASSN-20qc, shown in Fig. 1 reveals a plethora of absorption lines which cannot be resolved with EPIC pn due to its poor spectral resolution below 1 keV. Many of the lines appear highly significant - we particularly note the absorption feature at 26 A. Most of the lines are narrow, showing widths of the order of a few 100s km/s (full width at half maximum) at most, indicating an ionized absorber with low velocity width. We begin the spectral modelling with a baseline continuum fit. 
The broadband continuum is described with a disk blackbody, the dbb component within specx, with a temperature of around 0.18 keV. The definition of the disk blackbody temperature is different in specx and xspec, resulting in specx dbb temperatures being higher by roughly a factor of 2. The disk blackbody is redshifted by z=0.056 using the reds model. Finally, Galactic absorption is applied using the hot component. We fix the neutral column density to \(1.2\times 10^{20}\) cm\({}^{-2}\)(HI4PI Collaboration et al., 2016). The final fit statistic of the continuum model is C-stat=1900.68 with 1190 degrees of freedom (DoF). To determine the absorber properties, we add a pion photo-ionized absorption component. pion(Miller et al., 2015; Mehdipour et al., 2016) self-consistently calculates absorption line strengths using the ionizing balance determined from the currently loaded spectral continuum model. The ionizing balance is calculated on the fly as the continuum changes during spectral fitting. The pion component allows us to recover the ionized absorber properties such as its column density, ionization parameter \(\log(\xi/\mathrm{erg\ cm\ s^{-1}})\), outflow velocity and velocity width. The plasma elemental abundances are fixed to Solar values. The addition of one pion component is highly significant (\(>>5\sigma\) using F-test) and improves the fit quality to C-stat=1748.85 (\(\Delta\)C-stat=151.83 over the baseline continuum for 4 additional DoF). It requires an outflow with a mild velocity of \(\sim 800\) km/s, and an ionization parameter \(\log(\xi/\mathrm{erg\ cm\ s^{-1}})\) of 3.5. However, many of the absorption lines in the RGS data are still not well fitted in this spectral fit with a single absorber. This could indicate that a single photoionized absorber is insufficient to fit the observed absorption lines. In other words, the outflow is likely multi-phase. To test this hypothesis, we add a second pion component to the previous spectral fit. This step again improves the spectral fit, now to C-stat=1669.58, a further \(\Delta\)C-stat=79.27 over the single phase absorber fit (for 4 extra DoF), and \(\Delta\)C-stat=231.1 over the baseline continuum fit (for 8 extra DoF in total). The best-fitting absorber and continuum parameters are listed in Table 1, and the spectral fit is shown in Fig. 1. We find one highly ionized absorber with an ionization parameter \(\log(\xi/\mathrm{erg\ cm\ s^{-1}})\) of \(3.75^{+0.15}_{-0.22}\), and a high column density of \(0.25^{+0.14}_{-0.09}\times 10^{24}\) cm\({}^{-2}\). The second component is much more mildly ionized at \(\log(\xi/\mathrm{erg\ cm\ s^{-1}})\)\(\sim 1.39^{+0.25}_{-0.26}\), and has a much lower column density of \(1.5^{+0.4}_{-0.3}\times 10^{21}\) cm\({}^{-2}\). Both show low velocity widths, and outflow velocities comparable to those of warm absorbers in regular AGN, but also comparable to the ionized absorber found in the TDE ASASSN-14li (with an outflow velocity of 100-500 km/s, Miller et al., 2015). The highly ionized component is significantly faster at \(910^{+80}_{90}\) km/s, while the second one has a velocity of \(380^{+120}_{-100}\) km/s. We note that the column density of the highly ionized absorber is very high (above \(10^{23}\) cm\({}^{-2}\)), and has significant uncertainties. This value is more than 10\(\times\) higher than the column density found by Miller et al. (2015) in ASASSN-14li. It is possible that this column density is incorrectly determined from the limited RGS spectrum. 
Specifically, the spectrum is lacking any information below 15 A. This spectral region would be important in placing strong upper limits on the outflow column density and the ionization parameter, thanks to the many Fe transitions in the 8-12 A band. Unfortunately, ASASSN-20qc is spectrally very soft and RGS does not offer sufficient collecting area at higher energies. In the following section, we will use simultaneous EPIC pn coverage at higher energies (0.8-1.5 keV) to obtain a more reliable measurement of the highly ionized absorber properties. We further test the properties of the absorbers by freeing their covering fractions (fcov parameter in spex). No significant evidence is found for the covering fraction being lower than 1 for any of the two absorbing phases. We also tried adding a third photoionization phase to the spectral fit. This improves the statistic moderately to C-stat=1645.75, a \(\Delta\)C-stat=23.83 fit improvement (for 4 extra degrees of freedom) over the previous spectral fit. The best-fitting absorber has a column density of \(2.6^{+0.8}_{-0.6}\times 10^{20}\) cm\({}^{-2}\), \(\log(\xi/\)erg cm s\({}^{-1})\) of \(-1.15^{+0.23}_{-0.17}\), outflow velocity of \(630^{+130}_{-140}\) km/s and a velocity width of \(200^{+170}_{-80}\) km/s. It improves the spectral fit particularly around 24 A (Fig. 1). The evidence for the third phase indicates that the ionized outflow in ASASSN-20qc is likely highly complex and strongly multi-phase. However, only the first two phases strongly modify the high-resolution spectrum and are unambiguously detected at high significance. For this reason, in all our following spectral fits we only include two low-velocity absorber phases. Finally, we test for the presence of a broad ultra-fast outflow, found in the EPIC spectrum (Pasham et al. \begin{table} \begin{tabular}{c c c} \hline \hline Component & Parameter & Value \\ \hline disk & norm & \((320\pm 40)\times 10^{16}\) m\({}^{2}\) \\ blackbody & kT & \(0.197^{+0.004}_{-0.003}\) keV \\ \hline highly ionized & N\({}_{H}\) & \(2.5^{+1.4}_{-0.9}\times 10^{23}\) cm\({}^{-2}\) \\ absorber & \(\log(\xi/\)erg cm s\({}^{-1})\) & \(3.75^{+0.15}_{-0.22}\) \\ & outflow velocity & \(910^{+90}_{-80}\) km/s \\ & velocity width & \(80\pm 20\) km/s \\ & \(\Delta\)C-stat & \(151.83\) \\ \hline mildly ionized & N\({}_{H}\) & \(1.5^{+0.4}_{-0.3}\times 10^{21}\) cm\({}^{-2}\) \\ absorber & \(\log(\xi/\)erg cm s\({}^{-1})\) & \(1.39^{+0.25}_{-0.26}\) \\ & outflow velocity & \(380^{+120}_{-120}\) km/s \\ & velocity width & \(240^{+70}_{-80}\) km/s \\ & \(\Delta\)C-stat & \(79.27\) \\ \hline \end{tabular} \end{table} Table 1: Best-fitting properties of the continuum and the two ionized absorbers in the spectrum of ASASSN-20qc, determined from an RGS-only analysis. Figure 1: _XMM-Newton_ RGS spectrum of ASASSN-20qc. RGS 1 and RGS 2 spectra are stacked and over-binned for visual purposes only. The best-fitting ionized absorption model, consisting of two photo-ionized components is shown in red. The most notable elemental transitions are shown with green labels. Fe UTA denotes the unresolved transition array of Fe absorption lines. submitted), in the high-resolution RGS spectra. We take the baseline double slow absorber spectral fit and add a third pion component, now strongly broadened with an FWHM velocity width of about 70500 km/s (30000 km/s at 1\(\sigma\), same as used in Pasham et al.), outflowing with a large velocity (\(<0.1\)c). 
The addition of such a component is not highly significant in the RGS data alone, with a fit statistic C-stat=1651.90, a \(\Delta\)C-stat=17.68 fit improvement. This outcome is not surprising since the residual attributed to the UFO in EPIC pn data extends between 0.7 and 1.0 keV. RGS statistics are low in the 0.7-0.8 keV range and data are completely missing above 0.8 keV. ### Simultaneous RGS and EPIC pn spectral modelling The spectral analysis of the RGS data alone reveals many narrow absorption lines indicating the presence of a low velocity, multi-phase outflow in ASASSN-20qc. At the same time, the RGS results are potentially limited - we are unable to reliably measure the outflow properties due to the lack of RGS data above 0.8 keV. To perform the best possible measurement, we need to combine the RGS and EPIC pn datasets and fit them simultaneously. Unfortunately, it is impossible to simply fit these spectra simultaneously over the full energy band. The pn data dominate the fitting statistic, with a count rate \(>10\times\) higher than that of both RGS detectors summed. At the same, the pn data offer a much poorer spectral resolution, and all the individual narrow absorption lines are blended together. Therefore, the fit is driven by broad continuum-like shapes instead of the individual line positions, shapes and optical depths. This can lead to incorrect results, especially considering that the outflow is multi-phase. Furthermore, there are residual calibration differences between the RGS and EPIC pn instruments, which vary across the overlapping energy band of the instruments (Detmers et al., 2010). These can be important in bright sources such as ASASSN-20qc (with small Poisson errorbars on individual EPIC pn data points) and can systematically skew the best-fitting outflow properties. A common way to avoid this issue is to ignore the EPIC pn data in the energy band where RGS has sufficient statistics (around 0.8-0.85 keV in this study), and use a cross-calibration constant to account for any residual calibration uncertainties between the two instruments at the contact energy where the two datasets meet, fitting for the value of this constant. However, this cannot be done in the case of ASASSN-20qc. The EPIC pn spectrum shows an absorption residual extending from 0.7 to 1.0 keV, i.e. the residual extends across the band where RGS loses statistics, and continues into the EPIC pn-only band. Therefore, it is possible that the cross-calibration constant in fact might fit the shape of the absorption residual at the point of contact between the two datasets, instead of broad instrumental normalization differences. For this reason it is challenging to use the two datasets simultaneously. On the one hand, we do not want to ignore RGS data below the absorption residual (\(<0.7\) keV), thus losing all the spectral resolution in the important 0.7-0.8 keV region, but on the other hand the unknown cross-calibration difference between the two instruments at the contact point can introduce systematic error in the spectral fit. Thankfully, the cross-calibration differences between RGS and EPIC pn across the energy band are unlikely to be too large, and are at most 15 % (Detmers et al., 2010). We can therefore explore the parameter space of these possible differences, to see how much our uncertainty in understanding the instrument cross-calibration affects our outflow modelling and its best-fitting properties. 
We repeat the same spectral fit for different values of the cross-calibration constant, kept frozen in the range between 0.85 and 1.15, with a step size of 0.05 (7 spectral fits in total for each spectral model). As mentioned in Section 1, a single blackbody is insufficient to describe the broadband X-ray (0.3-1.5 keV) spectrum of ASASSN-20qc. To improve the description of the continuum, we use two different spectral models describing the combined dataset, which have previously been used to fit TDE X-ray spectra. The first model includes a disk blackbody, two slow absorbers producing the narrow absorption lines, and a fast UFO phase with a large velocity broadening (30000 km/s at 1\(\sigma\) = 70500 km/s full width half maximum). This model (blackbody + UFO absorption) model was previously used by Kara et al. (2018) and is also employed to describe the EPIC-pn spectrum of ASASSN-20qc by Pasham et al. (submitted). The model represents standard TDE disk blackbody emission from a compact accretion disk, modified by absorption from a high-velocity outflow, launched by the extreme mass accretion rate during the tidal disruption. We also use an alternative model which does not contain any UFO absorption, and instead applies a continuum consisting of two regular blackbodies, obscured by the two low-velocity absorbers. A disk blackbody and additional warmer continuum has been employed in several recently discovered nuclear transients, including ASASSN-14li (Kara et al., 2018), ASASSN-18el (Ricci et al., 2020; Masterson et al., 2022) and in two TDE candidates later found to also exhibit quasi-periodic eruptions (Miniutti et al., 2019; Chakraborty et al., 2021). This model represents a physical scenario where two different physical components are responsible for the observed spectral continuum. One of these components could be a regular blackbody from an accretion disk, the second one could originate from shocks, or from Comptonization within a corona. This model is motivated by the broad spectral shape of the UFO model (with a large velocity broadening producing no discrete narrow features), which could in principle be reproduced by a more complex emission continuum spectral shape. By applying these two different spectral models we are as agnostic as possible to the interpretation of the underlying spectral continuum (which is discussed elsewhere), and show that the determined low-velocity absorber properties are not dependent on this interpretation. The absorber properties versus the value of the cross-calibration constant are shown in Fig. 2. We find no large steps in the absorber properties over the explored range of cross-calibration constants. All parameters are varying smoothly, with no sudden jumps. Similarly, the fit quality C-stat does not vary significantly with the cross-calibration constant, spanning at most \(\Delta\)C-stat\(\sim 15\) across the studied range of cross-calibration constant values (Fig. 3). In the UFO spectral model, the best-fitting UFO component outflow velocity is around 0.32c, fully consistent with the results of Pasham et al. (submitted). It does not vary significantly with the RGS-PN cross-calibration constant. These investigations give us confidence that our results on the properties of the multi-phase warm absorber are robust to uncertainties in the continuum model and the cross calibration constant. 
We find that the best-fitting parameters of the highly ionized absorber are consistent between the two spectral models, and do not vary very strongly with the RGS-PN cross-calibration constant. Conservatively, we can therefore conclude that the best-fitting highly ionized absorber column density is in the range of \((0.9-3.3)\times 10^{23}\) cm\({}^{-2}\), and its ionization parameter \(\log(\xi/\)erg cm s\({}^{-1})\) in the range of \(3.0-3.4\). We note that these limits are not to be taken at \(1\sigma\) confidence given the unknown exact value of cross-calibration between RGS and pn. These results, especially the ionization parameter, are somewhat lower than the absorber properties recovered from RGS alone, confirming that RGS over-estimates the highly ionized absorber, most likely due to its lack of signal in the important wavelength band below 15 A. The outflow velocity and the velocity width is consistent with the RGS-only analysis, and does not vary with the RGS-PN cross-calibration constant. We find larger differences between the two models when comparing the mildly ionized absorber. The UFO model results in a somewhat lower column density as well as the ionization parameter. We conclude that the column density of the absorber is most likely in the range of \((1.3-2.6)\times 10^{21}\) cm\({}^{-2}\), and its ionization parameter \(\log(\xi/\)erg cm s\({}^{-1})\) is in the range of \(1.0-1.8\). We find that the best-fitting parameters do not vary with the RGS-PN cross-calibration constant. The best-fitting Figure 2: The best-fitting properties of the low-velocity absorbers (assuming photo-ionization balance) versus the value of the cross-calibration constant between RGS and PN instruments. The left panels show the results for the UFO model, while the right panels show the double blackbody model. The top two panels of each column show the column density and ionization parameter for the highly ionized absorber, while the lower two panels show the column density and the ionization parameter for the mildly ionized absorber phase. outflow velocity and velocity width are again fully consistent with the RGS-only spectral analysis, and do not vary with the cross-calibration constant. The total C-stat for both spectral models is between 1689 and 1711. The UFO model has 1190 degrees of freedom (DoF), while the double blackbody model has 1192 DoF. We find that the UFO model is preferred to the double blackbody model by about \(\Delta\)C-stat\(\sim 8-9\) depending on the exact value of the cross-calibration constant. Simply comparing the difference in DoF and in C-stat values, the UFO model is preferred by our _XMM-Newton_ data by \(\sim 2.5\sigma\). However, this simplified comparison does not take into account the great number of DoFs in both of these models and is thus only a very rough estimate. ### Time-resolved spectral analysis Miller et al. (2015) found that the ionized absorption in ASASSN-14li is variable on the timescale of a single \(\sim 100\) ks _XMM-Newton_ observation. The detection of such fast variability puts strong upper limits on the distance of the ionized outflow from the black hole. Below we investigate whether variability can be detected on single observation timescales in ASASSN-20qc as well. As a first order approach, we split the _XMM-Newton_ observation into two segments of equal exposure and perform the spectral fit again, tying certain parameters (which are unlikely to vary) together. 
Given the difficulties discussed above in the spectral model choice and the cross-normalization value when analyzing RGS and EPIC pn data together, we perform the outflow variability test for the RGS data alone. Because we are only fitting the RGS data, the parameters recovered for the highly ionized outflow phase might be less trustworthy than in the analysis above, however the variability (in the narrow absorption lines) would be detected nevertheless. We fit both observation segments simultaneously, employing the two phase outflow spectral model from Section 3.1. We allow the disk blackbody properties to vary between the segments, as well as the absorber column densities and ionization parameters. We couple the outflow velocities and velocity widths as they are unlikely to significantly change between the two segments, and since we do not visually observe any apparent line shifts between the segments. The fitting results are shown in Fig. 4 and in Table 2. We find that some of the absorber parameters change significantly between the two segments. The largest variation (\(>3\sigma\)) is surprisingly seen in the mildly ionized absorber ionization parameter \(\log(\xi/\rm{erg~{}cm~{}s^{-1}})\), which changes from \(1.02^{+0.13}_{-0.12}\) to \(1.89^{+0.19}_{-0.20}\) over the course of the 60 ks _XMM-Newton_ observation. We also detect possible variability in the mildly ionized absorber column density (\(\sim 1.5\sigma\) significance), and in the highly ionized absorber column density (\(\sim 2\sigma\) significance). Finally, a large difference is also observed in the disk blackbody normalization, however this is necessarily correlated with the apparent variation in the highly ionized absorber column density (change in continuum absorption is significant as the \(N_{H}\) value is more than \(10^{23}\) cm\({}^{-2}\) in one of the segments). The observation with the higher highly ionized absorber column density (and greater blackbody luminosity) also shows a slightly greater ionization parameter \(\log(\xi/\rm{erg~{}cm~{}s^{-1}})\), however its variation is not statistically significant. Finally, we consider the variation of the absorber outflow velocities and velocity widths between the two observation segments. We untie all the spectral parameters of the two phase model and re-fit. We found that both the outflow velocities and velocity widths of both absorbers are consistent at \(1\sigma\) confidence between the two segments. ## 4 Discussion We study the RGS and PN spectra of the TDE ASASSN-20qc and significantly detect a multi-phase, low-velocity ionized absorber. Two distinct velocity and ionization components are confirmed, but with further evidence for a third ionization phase, the true nature of this outflow is likely much more complex. The highly Figure 3: The C-stat fit statistic for each model versus the assumed value of the RGS-PN cross-calibration constant. The UFO models are shown in red, while the double blackbody models are in blue. The right Y-axis shows the relative C-stat difference (in %) from the best-fitting UFO model assuming perfect RGS-PN cross-calibration. ionized phase is faster at 900 km/s, while the mildly ionized phase is outflowing with a velocity of 400 km/s. Both of these values are similar to the outflow detected in ASASSN-14li with a velocity of \(100-500\) km/s (Miller et al., 2015). 
The ionization parameter of the highly ionized phase, \(\log(\xi/\mathrm{erg~{}cm~{}s^{-1}})\) of \(3.0-3.4\) is comparable with the ionized outflow of ASASSN-14li, but it has a much higher column density of \(\sim 10^{23}\) cm\({}^{-2}\) versus \((0.5-1.3)\times 10^{22}\) cm\({}^{-2}\) in ASASSN-14li. Such a high column density is surprising, and more similar to the column densities measured in ionized obscurers (e.g. Partington et al., 2023) and UFOs (e.g. Tombesi et al., 2011) in AGN. However, those outflows show much higher systematic velocities than observed in ASASSN-20qc, with obscurers reaching thousands km/s, and UFOs exceeding 0.05c. Nevertheless, the absorber of ASASSN-20qc is relatively highly ionized, and so it is still transparent to even soft X-rays, as opposed to obscurers seen in AGN, which tend to absorb most of the source soft X-ray continua. The mildly ionized absorber phase (alongside with the possible third absorption phase) with an ionization parameter of around 1.5 and a column density of around \(10^{21}\) cm\({}^{-2}\) is novel and has not been previously detected in TDEs. In its properties, it is very similar to warm absorbers in AGN (Blustin et al., 2005; Laha et al., 2014). The best-fitting time-averaged properties of this phase might suggest that it is a remnant outflow from previous black hole activity, only re-illuminated by the current TDE outburst. However, the fast time variability of this component argues against such interpretation. We can use the best-fitting ionization properties (assuming photo-ionization equilibrium) of the two phases to estimate their distance from the black hole. Following Kosec et al. (2020), we can use the definition of the ionization parameter \(\log(\xi/\mathrm{erg~{}cm~{}s^{-1}})\) and the definition of the outflow column density \(N_{\mathrm{H}}\) to get the following expression for the distance \(R\) of the outflow from the ionizing source: \[R=\frac{L_{\mathrm{ion}}}{N_{\mathrm{H}}\xi}\frac{\Delta R}{R} \tag{1}\] where \(L_{\mathrm{ion}}\) is the 1-1000 Ryd ionizing luminosity, and \(\Delta R\) is the thickness of the absorbing layer. By taking \(\frac{\Delta R}{R}=1\) as the relative thickness of the absorbing layer cannot be larger than unity, we can estimate the maximum distance of the absorber from the black hole. The ionization luminosity of ASASSN-20qc, from our X-ray spectral fits, is about \(5\times 10^{44}\) erg/s. The maximum distance for the highly ionized phase is thus \(2\times 10^{18}\) cm = 0.6 pc and for the mildly ionized phase is around \(1.1\times 10^{22}\) cm = 4000 pc. We convert these into gravitational radii (R\({}_{\mathrm{G}}\)) units assuming a black hole mass of \(3\times 10^{7}\)\(M_{\odot}\)(Pasham et al. submitted, we note that Hinkle, 2022, estimated a similar black hole mass of \(2\times 10^{7}\)\(M_{\odot}\)). For the highly ionized phase, we calculate a maximum distance of \(4\times 10^{5}\) R\({}_{\mathrm{G}}\) from the black hole, and for the mildly ionized phase we estimate \(2\times 10^{9}\) R\({}_{\mathrm{G}}\). These are pc-scale distances, and serve as an absolute upper limit on the location of the absorbers given their ionization properties. If the absorbers have low relative thicknesses (\(\frac{\Delta R}{R}<<1\), i.e. low volume filling factors), they will be located much closer to the black hole than our estimates. 
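The maximum-distance estimates from equation (1) can be reproduced with a few lines of arithmetic (a sketch of ours; the column densities and ionization parameters below are representative values chosen from within the ranges quoted above, so the results agree with the quoted distances only at that level of precision):

```python
PC_CM = 3.086e18                      # centimetres per parsec
M_SUN_G = 1.989e33                    # solar mass in grams
G, C = 6.674e-8, 2.998e10             # cgs units

L_ion = 5.0e44                        # erg/s, ionizing luminosity quoted in the text
M_bh = 3.0e7 * M_SUN_G                # assumed black hole mass
R_g = G * M_bh / C**2                 # gravitational radius in cm

def max_distance(N_H, log_xi):
    """Equation (1) with Delta_R / R = 1: R = L_ion / (N_H * xi)."""
    return L_ion / (N_H * 10**log_xi)

# Representative values within the quoted ranges for the two absorption phases.
for label, N_H, log_xi in [("highly ionized", 1.0e23, 3.4),
                           ("mildly ionized", 1.5e21, 1.5)]:
    R = max_distance(N_H, log_xi)
    print(f"{label}: R_max = {R:.1e} cm = {R / PC_CM:.2f} pc = {R / R_g:.1e} R_g")
```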
Instead, we could assume that the outflow velocity of each phase is comparable with the escape velocity at its \begin{table} \begin{tabular}{c c c c} \hline \hline Component & Parameter & Segment 1 & Segment 2 \\ \hline disk & normalization & \((390\pm 50)\times 10^{16}\) m\({}^{2}\) & \((280\pm 30)\times 10^{16}\) m\({}^{2}\) \\ blackbody & kT & \(0.196^{+0.005}_{-0.004}\) keV & \(0.194\pm 0.004\) keV \\ highly ionized & N\({}_{H}\) & \(2.4^{+1.4}_{-0.9}\times 10^{23}\)cm\({}^{-2}\) & \(0.7^{+0.8}_{-0.4}\times 10^{23}\) cm\({}^{-2}\) \\ absorber & \(\log(\xi/\mathrm{erg~{}cm~{}s^{-1}})\) & \(3.79^{+0.16}_{-0.22}\) & \(3.67^{+0.21}_{-0.28}\) \\ & outflow velocity & \(910\pm 90\) km/s \\ & velocity width & \(90\pm 20\) km/s \\ mildly ionized & N\({}_{H}\) & \(2.1^{+0.5}_{-0.4}\times 10^{21}\)cm\({}^{-2}\) & \(1.3^{+0.5}_{-0.4}\times 10^{21}\)cm\({}^{-2}\) \\ absorber & \(\log(\xi/\mathrm{erg~{}cm~{}s^{-1}})\) & \(1.02^{+0.13}_{-0.12}\) & \(1.89^{+0.19}_{-0.20}\) \\ & outflow velocity & \(410\pm 90\) km/s \\ & velocity width & \(270\pm 50\) km/s \\ \hline \end{tabular} \end{table} Table 2: Time-resolved analysis of the ionized absorption in ASASSN-20qc. Best-fitting properties of the continuum and the two ionized absorbers, recovered by fitting RGS data only. The full _XMM-Newton_ observation was split into two segments with roughly equal exposure. location. By making this assumption, the distance of the outflow from the black hole is: \[R=\frac{2GM}{v^{2}}=2\frac{c^{2}}{v^{2}}R_{\rm G} \tag{2}\] where \(R_{\rm G}\) is the gravitational radius of the black hole. This assumption results in a distance of \(\sim 2\times 10^{5}\)\(R_{\rm G}\) (0.4 pc) for the highly ionized phase, and a distance of \(\sim 10^{6}R_{\rm G}\) (2 pc) for the mildly ionized absorber phase. The velocity distance estimate is similar to the upper limit from the ionization parameter for the highly ionized component, but wildly different for the mildly ionized component. This result suggests that the mildly ionized component indeed has a very low relative thickness \(\frac{\Delta R}{R}<10^{-3}\), and thus a low volume filling factor. Importantly, we also detect significant time variability over the course of the single _XMM-Newton_ observation. Similar variation was previously detected also in ASASSN-14li. Surprisingly, the statistically strongest variation is detected in the mildly ionized absorber properties, rather than in the highly ionized absorber. However, this may be due to the current dataset quality, where the variation in the mildly ionized absorber lines is more easily detected than the variation in the highly ionized absorber. This variation puts an important upper limit on where the outflow can physically reside. If the outflow transverse velocity (with respect to the X-ray source) is too low, it is unable to cross the X-ray source in the limited time (here \(\sim\)30 ks) between the two segments of our observation. Recent results on the size of X-ray emitting regions show that the region is very compact, in most cases with a radius smaller than 10 \(R_{\rm G}\)(Morgan et al., 2008; Dai et al., 2010; Sanfrutos et al., 2013; Chartas et al., 2016). We note that if the emitting region is in fact larger, the resulting distance of the absorber from the black hole is even smaller than the estimate below. Conversely, if the X-ray emitting region is smaller, the absorber can be located farther from the black hole. 
For the ASASSN-20qc black hole mass estimate (\(3\times 10^{7}\)\(M_{\odot}\)), the radius of \(10R_{\rm G}\) is \(4\times 10^{13}\) cm. To cross this radius in 30 ks (or 60 ks for the full diameter) and introduce time variability in the ionized absorption, the absorber needs to move with a transverse velocity of at least \(1.5\times 10^{9}\) cm/s = 15000 km/s. To estimate a rough distance of this absorber from the black hole, we assume that the transverse velocity of the absorber is comparable with the Keplerian velocity at its location. Then its distance from the black hole is roughly: \[R=\frac{GM}{v^{2}}=\frac{c^{2}}{v^{2}}R_{\rm G}=400R_{\rm G} \tag{3}\] The time variability puts a much stronger upper bound on the location of the absorber than the ionization balance and the outflow velocity estimates. It indicates that the absorber cannot be a remnant warm absorber from the previous black hole activity, located at pc scales away from the X-ray source. The outflow hence most likely originates by some launching mechanism from the TDE. Miller et al. (2015) reached a similar conclusion based on the properties of the outflow in ASASSN-14li. We note that given the black hole mass and the X-ray emitting region size estimates, the outflow has unusual Figure 4: Variation of the best-fitting properties of the low-velocity absorbers during the 60 ks _XMM-Newton_ observation, investigated by splitting it into two segments. The top two panels show the column density and the ionization parameter for the highly ionized absorber phase, while the lower two panels show the column density and the ionization parameter for the mildly ionized phase. velocity components - a very high toroidal component (15000 km/s) and a very low line-of-sight velocity (400-900 km/s). If the assumptions of our calculation hold, our finding likely indicates that the absorber has not reached an escape velocity, and is thus not a true outflow. Its kinematic properties (low line of sight velocity, high toroidal velocity component) are then more similar to Broad Line Region clouds in regular AGN (Peterson, 2006), although the ionization parameter of the absorber in ASASSN-20qc is much higher. Alternatively, perhaps this suggests that the black hole mass or emitting region size (in \(R_{\rm G}\)) is significantly smaller than we assumed, thus decreasing the necessary transverse velocity requirement and increasing the estimate of the maximum absorber distance from the X-ray source. Given this uncertain estimate of the emitting region size, we caution against using this result as a hard upper bound on the absorber location. The ionized absorber could be launched directly from the newly-formed accretion disk of the TDE. It would probably have to originate in its outer part given the low projected outflow velocity of just a few 100s km/s. The launching mechanism is unclear but could be similar to the mechanism powering warm absorbers in regular AGN - possibly radiation line pressure (Proga et al., 2000), magnetic fields (Fukumura et al., 2018) or thermal driving (Waters et al., 2021). Alternatively, the absorber could originate from shocked plasma in the stream-stream collisions of the TDE (Jiang et al., 2016; Lu and Bonnerot, 2020). If this is the case, the photo-ionization calculation based on the assumption of photo-ionization equilibrium would not hold. However, the important limit on the location inferred from the time variability remains. At this time, the origin of the ionized absorber remains unknown. 
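The escape-velocity and transverse-velocity arguments of equations (2) and (3) amount to similarly simple arithmetic (a short sketch of ours using the velocities quoted in the text):

```python
C_KMS = 2.998e5                       # speed of light in km/s

def escape_distance_rg(v_kms):
    """Equation (2): R = 2 (c/v)^2 R_g, assuming v equals the local escape velocity."""
    return 2.0 * (C_KMS / v_kms) ** 2

def keplerian_distance_rg(v_transverse_kms):
    """Equation (3): R = (c/v)^2 R_g, assuming v equals the local Keplerian velocity."""
    return (C_KMS / v_transverse_kms) ** 2

print(escape_distance_rg(910.0))      # ~2e5 R_g, highly ionized phase
print(escape_distance_rg(400.0))      # ~1e6 R_g, mildly ionized phase
print(keplerian_distance_rg(15000.0)) # ~400 R_g from the 30 ks variability argument
```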
Similar absorber detections in further TDEs are needed for a population study to resolve this issue. To our knowledge, ASASSN-20qc is the first TDE to show a multi-phase low-velocity outflow in absorption. In particular, the low-ionization component has not been observed elsewhere. We note that Miller et al. (2015) found 3\(\sigma\) evidence for a second, redshifted ionized component in ASASSN-14li, but that component is seen in emission and may form a P-Cygni profile with the primary ionized absorber. Low-velocity ionized absorbers could be common among the TDE population, but no systematic, sample ionized outflow searches have been published to date. Such studies are challenging due to the data quality of the present high-spectral resolution TDE observations. Currently, only _XMM-Newton_ and _Chandra_ gratings are capable of performing these studies. ASASSN-20qc (\(\sim 8000\) source counts) and ASASSN-14li (\(>20000\) source counts) are two of the highest-quality datasets among the small number (\(\lesssim 10\)) of TDEs with usable _XMM-Newton_ or _Chandra_ grating spectra. Long exposure observations of bright sources, yielding at least a few thousand source counts in the gratings instruments, are necessary to perform a search for ionized outflow signatures. Low-velocity outflow signatures in TDEs may also be recognized through spectral curvature in lower resolution CCD spectra, between 0.5 and 1.0 keV. However, it is challenging to confirm this interpretation with CCD-quality data alone. Similar spectral curvature can also be produced by more complex emission continua (double blackbody or blackbody+powerlaw versus a single blackbody), and other spectral features such as high-velocity absorbers (UFOs) in absorption. New X-ray telescopes, with higher effective area and better spectral resolution in the soft X-ray band (\(<1\) keV) are required. Two mission concepts are particularly well suited for observations of soft X-ray TDEs: the proposed X-ray probes Light Element Mapper (LEM, Kraft et al., 2022) and Arcus (Smith, 2020). Either one would allow us to study the ionized absorption in TDEs in much greater detail (improved effective area and spectral resolution), and at greater distances (improved effective area), expanding the presently small population of TDEs with X-ray detected ionized absorbers. ## 5 Conclusions We analyze _XMM-Newton_ RGS and PN spectra of the tidal disruption event ASASSN-20qc. The RGS spectrum reveals an array of narrow absorption lines, indicating the presence of an ionized absorber. Our conclusions are as follows: * The absorption lines cannot be described by a single photo-ionization phase, confirming a multi-phase nature of this plasma. There are at least 2 distinct phases: a highly ionized component with a column density of \(\sim 10^{23}\) cm\({}^{-2}\) and log(\(\xi\)/erg cm s\({}^{-1}\)) of 3.2, outflowing at 900 km/s, and a mildly ionized component with a column density of \(\sim 10^{21}\) cm\({}^{-2}\), an ionization parameter of \(\sim 1.5\) and a velocity of 400 km/s. * The ionized absorption varies in time during the single 60 ks _XMM-Newton_ exposure. The statistically strongest variation is observed in the ionization parameter of the mildly ionized component, but tentative variability is also detected in the highly ionized component. * From the best-fitting parameters of the absorbers and their variability, we constrain the location of the ionized absorption to be as low as \(\sim\)400 \(R_{\rm G}\) from the black hole. 
Consequently, we cannot be observing a (pc-scale) remnant outflow launched during previous black hole activity. The origin of the absorbers can be in a disk wind driven from the outer part of the TDE accretion disk, or in the shocked plasma created by stream-stream collisions of the tidally disrupted star. Support for this work was provided by the National Aeronautics and Space Administration through the Smithsonian Astrophysical Observatory (SAO) contract SV3-73016 to MIT for Support of the Chandra X-Ray Center and Science Instruments. PK and EK acknowledge support from NASA grants 80NSSC21K0872 and DD0-21125X. This work is based on observations obtained with _XMM-Newton_, an ESA science mission funded by ESA Member States and USA (NASA).
2301.09161
Multiparametric robust solutions for combinatorial problems with parameterized locally budgeted uncertainty
In this paper we study combinatorial problems with parameterized locally budgeted uncertainty. We look for a set of solutions such that, for any parameter vector, the set contains a solution whose robustness is near optimal. The algorithm applies a multiparametric algorithm to obtain a near-optimal multiparametric solution, relative to the objective function, for the combinatorial problem that finds a robust solution for fixed parameters. As far as we know, this is the first algorithm proposed for this task. Computational experience is reported for shortest path and $p$-medians problems.
Alejandro Crema
2023-01-22T17:03:46Z
http://arxiv.org/abs/2301.09161v2
# Multiparametric robust solutions for combinatorial problems with parameterized locally budgeted uncertainty ###### Abstract In this paper we study combinatorial problems with parameterized locally budgeted uncertainty. We look for a set of solutions such that, for any parameter vector, the set contains a solution whose robustness is near optimal. The algorithm applies a multiparametric algorithm to obtain a near-optimal multiparametric solution, relative to the objective function, for the combinatorial problem that finds a robust solution for fixed parameters. As far as we know, this is the first algorithm proposed for this task. Computational experience is reported for shortest path and \(p\)-medians problems. keywords: Combinatorial optimization, Locally budgeted uncertainty, Robust solutions, Multiparametric programming ## 1 Introduction Let \(X\subseteq\left\{0,1\right\}^{n}\) with \(X\neq\emptyset\). Let us suppose that \(X\) is 0-1-Mixed Integer Linear Programming (0-1-MILP) representable and let \(P(c)\) be a combinatorial problem in \(x\), parameterized in \(c\in\mathbb{R}_{+}^{n}\), defined as follows: \[\min_{x\in X}\ c^{t}x\] Data uncertainty appears in many optimization problems. There are several options to model the uncertainty of the cost vector of a combinatorial optimization problem. Many examples may be seen in [1],[2],[3],[4]. Let \(\Omega\subseteq\mathbb{R}_{+}^{K}\) and let us suppose that \(\Omega\) is 0-1-MILP-representable, let \(\mathcal{U}\in\mathbb{R}_{+}^{K}\) be such that \(\Gamma\leq\mathcal{U}\) for all \(\Gamma\in\Omega\), let \(\underline{c},d\in\mathbb{R}_{+}^{n}\), let \([q]=\left\{1,\cdots,q\right\}\) for all \(q\in\mathbb{N}\) and let \(\left\{P_{1},\cdots,P_{K}\right\}\) be a partition of \([n]\). In this paper we consider locally budgeted uncertainty sets parameterized with \(\Gamma\in\Omega\) as follows ([5]): \[\Lambda(\Gamma)=\left\{c\in\mathbb{R}_{+}^{n}:c=\underline{c}+\lambda,\ \sum_{j\in P_{k}}\lambda_{j}\leq\Gamma_{k}\ \forall k\in[K],\ \lambda_{j}\in[0,d_{j}]\ \forall j\in[n]\right\}\] Some examples of practical interest for \(\Omega\) are the interval case, the line segment case and the budgeted case, as follows: * Interval case: let \(\mathcal{L}\in\mathbb{R}_{+}^{K}\) with \(\mathcal{L}\leq\mathcal{U}\) and let \(\Omega=[\mathcal{L},\mathcal{U}]=\{\Gamma\in\mathbb{R}_{+}^{K}:\mathcal{L}\leq \Gamma\leq\mathcal{U}\}\) * Line segment case: let \(\Gamma^{0}\in\mathbb{R}_{+}^{K}\), let \([\underline{\alpha},\overline{\alpha}]\subseteq[0,1]\) and let \(\Omega=\{\Gamma\in\mathbb{R}_{+}^{K}:\Gamma=\alpha\Gamma^{0},\ \alpha\in[ \underline{\alpha},\overline{\alpha}]\}\) with \(\mathcal{U}=\overline{\alpha}\Gamma^{0}\) * Budgeted case: let \(\underline{\Gamma},\mathcal{D}\in\mathbb{R}_{+}^{K}\), let \(\Delta\in\mathbb{R}_{+}\) and let \(\Omega=\{\Gamma\in\mathbb{R}_{+}^{K}:\Gamma=\underline{\Gamma}+\beta,\ \sum\limits_{k\in[K]}\beta_{k}\leq \Delta,\ \beta_{k}\in[0,\mathcal{D}_{k}]\ \forall k\in[K]\}\) with \(\mathcal{U}=\underline{\Gamma}+\mathcal{D}\) In recent decades, Robust ([1], [3]), Stochastic ([6]), Multiparametric ([7]) and Fuzzy programming ([8]) approaches have been developed to deal with such uncertainties. In this paper we use the Robust approach. There are several robust optimization concepts to choose from.
Some examples are ([1]): classic robustness, absolute or relative regret robustness, adjustable robustness, recoverable robustness, light robustness, soft robustness, lexicographic \(\alpha\)-robustness, recovery-to-optimality, or similarity-based robustness. In this paper we consider the classic robustness. Let \(\Gamma\in\Omega\) and let \(x\in X\). Let \(W(x,\Gamma)\) be a Linear Programming (LP) problem in \(c\) defined to compute the _robustness of \(x\) for \(\Gamma\)_, as follows: \[\max_{c\in\Lambda(\Gamma)}\ c^{t}x W(x,\Gamma)\] In the rest of the paper, if \(S\) is an optimization problem then \(v(S)\) is its optimal value. The classical robust approach is to find \(x\in X\) with minimal robustness for \(\Gamma\) by solving the following problem in \(x\): \[\min_{x\in X}\ v(W(x,\Gamma))=\min_{x\in X}\ \left(\max_{c\in\Lambda( \Gamma)}c^{t}x\right) R(\Gamma)\] If \(x_{R}(\Gamma)\) is an optimal solution for \(R(\Gamma)\) then \(x_{R}(\Gamma)\) is a _robust solution for \(\Gamma\)_. In this paper we consider the question of how robust solutions change when the uncertainty set changes. In our study the structure of the uncertainty is always the same, but some parameters change. Our goal is to compute a set \(\{x^{i}\}_{1}^{r}\subseteq X\) such that for any \(\Gamma\in\Omega\), there exists \(i(\Gamma)\in[r]\) such that \(x^{i(\Gamma)}\) is near optimal for \(R(\Gamma)\). A pioneering work with a single parameter \(\lambda\) controlling the size of the uncertainty set, defined as \(\Lambda(\lambda)=\underline{c}+\lambda B\) with \(B\) a convex set containing the origin, may be seen in [2]. Beyond the theoretical interest of a work like the one presented, the practical idea behind this approach is to find a set of solutions and choose the best one each time a new scenario \(c\) appears. As an application, imagine a public service system designed based on the shortest path (or \(p\)-medians) problem. Each time the current situation changes, a new path (or a new set of medians) could be computed. Even if the computational effort is not large, an excessive number of solutions may be unacceptable for human users. Instead, with this approach a set of solutions is computed once, and then the best one can be chosen in real time from a relatively small set of solutions. The parameters of the uncertainty set may change over time, and the scenarios appear accordingly. In that case a solution with robustness close to \(v(R(\Gamma))\) will be available at any time for the _true_ \(\Gamma\). In practice, neither the true parameters nor a corresponding robust solution need to be known by the decision maker at any time. Obviously, we need to estimate \(\Omega\). The approach to be presented is complementary to the min-max-min approach that finds \(k\) solutions that work well for a fixed \(\Gamma\) ([9],[10],[11]). Formally: let \(\left\{x^{i}\right\}_{1}^{r}\subseteq X\) and let \(\epsilon\geq 0\). We say that \(\left\{x^{i}\right\}_{1}^{r}\) _is an \(\epsilon\)-optimal multiparametric robust solution for \(\Omega\)_ (\(\epsilon,\Omega\)-mprs) if \[\min_{i\in[r]}v(W(x^{i},\Gamma))-v(R(\Gamma))\leq\epsilon\ \forall\Gamma\in\Omega\] Some problems will be presented several times with equivalent formulations. In some cases the name of the problem will be the same and the specific formulation used will be clear from the context. In order to clarify the exposition, in some cases we will use different names for the same problem according to the formulation used.
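To make the objects defined above concrete before presenting the algorithm: for a fixed 0-1 vector \(x\), the inner maximization in \(W(x,\Gamma)\) decouples over the partition classes, so its value can be evaluated directly as \(\underline{c}^{t}x+\sum_{k}\min(\Gamma_{k},\sum_{j\in P_{k}}d_{j}x_{j})\). The sketch below only illustrates this evaluation of the robustness of a given candidate; the toy data are our own and it is not part of the algorithm of Section 2.

```python
# Illustrative sketch: evaluating the robustness v(W(x, Gamma)) of a fixed
# 0-1 solution x under the locally budgeted uncertainty set Lambda(Gamma).
# Since every selected item enters the objective with coefficient 1, the
# adversary spends at most min(Gamma_k, sum of d_j over selected j in P_k)
# in each partition class P_k. All data below are made-up toy values.

def robustness(x, c_lo, d, partition, gamma):
    """v(W(x, Gamma)) = c_lo^t x + sum_k min(Gamma_k, sum_{j in P_k} d_j x_j)."""
    nominal = sum(c_lo[j] * x[j] for j in range(len(x)))
    worst_case_increase = 0.0
    for k, P_k in enumerate(partition):
        selected_deviation = sum(d[j] * x[j] for j in P_k)
        worst_case_increase += min(gamma[k], selected_deviation)
    return nominal + worst_case_increase

# Toy instance: n = 4 items, K = 2 partition classes.
c_lo = [3.0, 2.0, 4.0, 1.0]          # nominal costs (c underbar)
d    = [1.5, 1.0, 2.0, 0.5]          # maximal deviations
partition = [[0, 1], [2, 3]]         # P_1 = {1, 2}, P_2 = {3, 4}
x = [1, 0, 1, 1]                     # some feasible solution of X
gamma = [1.0, 1.8]                   # per-partition budgets

print(robustness(x, c_lo, d, partition, gamma))
# 8 + min(1.0, 1.5) + min(1.8, 2.5) = 10.8
```

Computing a robust solution for a fixed \(\Gamma\), i.e. solving \(R(\Gamma)\), still requires optimizing over \(X\); the function above only evaluates a given candidate, which is how a precomputed \(\epsilon,\Omega\)-mprs would be used once a scenario arrives.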
The paper is organized as follows: in section 2 we present an algorithm to find an \(\epsilon,\Omega\)-mprs. The algorithm consists of applying an algorithm to obtain an \(\epsilon,\Omega\)-_optimal_ multiparametric solution relative to the objective function ([12],[13]) for a 0-1-Integer Linear Programming (0-1-ILP) problem equivalent to \(R(\Gamma)\). The case in which the matrix that represents \(X\) is totally unimodular is considered in a subsection. In section 3 we present the details for the cases by interval, linear segment and budgeted. Computational experience is presented in section 4 for Shortest Path and \((l,p)\)-Medians problems by using ILOG-Cplex 12.10 from a DOcplex Python code ([14]). In section 5 we present a 0-1-MILP problem equivalent to \(R(\Gamma)\) for the variant for the uncertainty set defined as follows ([5]): \[\Lambda^{+}(\Gamma)=\{c\in\mathbb{R}_{+}^{n}:c_{j}=\underline{c}_{j}+\lambda_{ j}d_{j}\ \forall j\in[n],\ \sum_{j\in P_{k}}\lambda_{j}\leq\Gamma_{k}\ \forall k\in[K],\ \lambda\in[0,1]^{n}\}\] in such a manner that a multiparametric analysis may be performed to find an \(\epsilon,\Omega\)-mprs. Conclusions and further extensions may be seen in section 6. Appendices may be seen after references. In Appendix A we present a summary of a multiparametric approach relative to the objective function for 0-1-ILP problems. In Appendix B we present a toy example to show that with low uncertainty a large number of solutions may be necessary to define a \(0,\Omega\)-mprs and with large uncertainty a single solution may be enough to define a \(0,\Omega\)-mprs. In Appendix C we present a proof of properties for an auxiliary function that appears in section 5. A remark about the computational complexity is presented in Appendix D. Tables may be seen after appendices. ## 2 Algorithm to find an \(\epsilon,\Omega\)-multiparametric robust solution Let \(x\in X\) and let \(\Gamma\in\Omega\), the dual problem of \(W(x,\Gamma)\) is defined as a LP problem in \((\pi,\rho)\) as follows \[\underline{c}^{t}x+\min \Gamma^{t}\pi+d^{t}\rho\] \[s.t. \pi_{k}+\rho_{j}\geq x_{j} \forall k\in[k]\ \forall j\in P_{k}\] \[\pi\geq 0,\ \rho\geq 0\] \[\pi\in\mathbb{R}^{K},\ \rho\in\mathbb{R}^{n}\] There exists an optimal solution for \(DW(x,\Gamma)\) with \(\pi\in\left\{0,1\right\}^{K}\) ([5]), therefore there exists an optimal solution for \(DW(x,\Gamma)\) with \((\pi,\rho)\in\left\{0,1\right\}^{K}\times\left\{0,1\right\}^{n}\), hence \(R(\Gamma)\) may be rewritten as a 0-1-ILP problem in \((\pi,\rho,x)\) as follows: \[\min \Gamma^{t}\pi+d^{t}\rho+\underline{c}^{t}x\] \[s.t. \pi_{k}+\rho_{j}-x_{j}\geq 0 \forall k\in[k]\ \forall j\in P_{k}\] \[\pi\in\left\{0,1\right\}^{K},\ \rho\in\left\{0,1\right\}^{n},\ x\in X\] Let \(\epsilon\geq 0\). We may use a multiparametric algorithm (see Appendix A) to find an \(\epsilon,\Omega\)-optimal multiparametric solution for \(R(\Gamma)\), that is we may find \(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r}\subseteq\left\{0,1\right\}^{K }\times\left\{0,1\right\}^{n}\times X\) such that: \(\pi_{k}^{i}+\rho_{j}^{i}-x_{j}^{i}\geq 0\ \forall k\in[k]\ \forall j\in P_{k},\ \forall i\in[r]\) and \(\min\limits_{i\in[r]}\ \left\{\Gamma^{t}\pi^{i}+d^{t}\rho^{i}+\underline{c}^{t}x^{i} \right\}-v(R(\Gamma))\leq\epsilon\ \ \forall\Gamma\in\Omega\). 
In that case because of \(v(W(x^{i},\Gamma))=v(DW(x^{i},\Gamma))\leq\Gamma^{t}\pi^{i}+d^{t}\rho^{i}+ \underline{c}^{t}x^{i}\) for all \(i\in[r]\) we have that \(\left\{x^{i}\right\}_{1}^{r}\) is an \(\epsilon,\Omega\)-mprs Next we present the multiparametric algorithm following Appendix A applied to \(R(\Gamma)\). Let \(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r}\subseteq\left\{0,1\right\}^{K }\times\left\{0,1\right\}^{n}\times X\) and let \(Q(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r})\) be a problem in \((\Gamma,\pi,\rho,x)\) defined as follows: \[\max\left(\min_{i\in[r]}\left\{\Gamma^{t}\pi^{i}+d^{t}\rho^{i}+ \underline{c}^{t}x^{i}\right\}-\left(\Gamma^{t}\pi+d^{t}\rho+\underline{c}^{t}x \right)\right) Q(\left\{((\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r})\] \[s.t. \pi_{k}+\rho_{j}-x_{j}\geq 0 \forall k\in[k]\;\forall j\in P_{k}\] \[\Gamma\in\Omega,\;\pi\in\{0,1\}^{K},\;\rho\in\{0,1\}^{n},\;x\in X\] **Algorithm \(Q\) (A-\(Q\)) to find an \(\epsilon,\Omega\)-multiparametric robust solution** Let \(\epsilon\geq 0\) and let \(\Gamma\in\Omega\). Solve \(R(\Gamma)\), let \((\pi^{1},\rho^{1},x^{1})\) be an optimal solution and let \(r=1\). 1. Solve \(Q(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r})\) and let \((\Gamma^{*},\pi^{*},\rho^{*},x^{*})\) be an optimal solution 2. If \(v(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r}))\leq\epsilon\) STOP 3. Let \((\pi^{r+1},\rho^{r+1},x^{r+1})=(\pi^{*},\rho^{*},x^{*})\), let \(r=r+1\) and return to step 1 If \((\Gamma,\pi,\rho,x)\) is an optimal solution for \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) then \((\pi,\rho,x)\) is an optimal solution for \(R(\Gamma)\). Therefore with \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) we are looking for \(\Gamma\in\Omega\) that maximizes \(\min_{i\in[r]}\left\{\Gamma^{t}\pi^{i}+d^{t}\rho^{i}+\underline{c}^{t}x^{i} \right\}-v(R(\Gamma))\). If the difference is less or equal to \(\epsilon\) we are done. Otherwise we updated \(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r}\) by introducing \((\pi,\rho,x)\). Since \(\left\{0,1\right\}^{K}\times\left\{0,1\right\}^{n}\times X\) is a finite set **A-\(Q\)** stops in a finite number of iterations. If \(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r}\) is the output then \(\left\{x^{i}\right\}_{1}^{r}\) is an \(\epsilon,\Omega\)-mprs. In order to solve \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) we can rewrite it as a 0-1-MILP problem in \((\Gamma,\pi,\rho,x,w,\sigma)\) as follows: \[\max \sigma-\left(\sum_{k\in[K]}\!w_{k}+d^{t}\rho+\underline{c}^{t}x\right) Q(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r})\] \[s.t. \sigma-\Gamma^{t}\pi^{i}\leq d^{t}\rho^{i}+\underline{c}^{t}x^{i} \forall i\in[r]\] \[\pi_{k}+\rho_{j}-x_{j}\geq 0 \forall k\in[k]\;\forall j\in P_{k}\] \[w_{k}-\Gamma_{k}-\mathcal{U}_{k}\pi_{k}\geq-\mathcal{U}_{k} \forall k\in[K]\] \[\Gamma\in\Omega,\;\pi\in\{0,1\}^{K},\;\rho\in\{0,1\}^{n},\;x\in X\] \[w\in\mathbb{R}_{+}^{K},\;\sigma\in\mathbb{R}\] Let \((\Gamma,\pi,\rho,x,w,\sigma)\) be an optimal solution for \(Q(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r})\). If \(\pi_{k}=1\) then \(w_{k}\geq\Gamma_{k}\) and because of the maximization criterium we have \(w_{k}=\Gamma_{k}=\Gamma_{k}\pi_{k}\). If \(\pi_{k}=0\) then \(w_{k}\geq\Gamma_{k}-\mathcal{U}_{k}\) and because of the maximization criterium we have \(w_{k}=0=\Gamma_{k}\pi_{k}\). Therefore we have \(w_{k}=\Gamma_{k}\pi_{k}\). Since maximization is the criterium we have that \(\sigma=\min_{i\in[r]}\;\left\{\Gamma^{t}\pi^{i}+d^{t}\rho^{i}+\underline{c}^{t }x^{i}\right\}\). 
Therefore, the reformulation of the \(Q\) problem is valid. ### Totally unimodular case Let \(c(\pi)_{j}=(\underline{c}_{j}\pi_{k})+(\underline{c}_{j}+d_{j})(1-\pi_{k})\)\(\forall j\in P_{k}\)\(\forall k\in[K]\). After some algebraic manipulations we have that \(R(\Gamma)\) may be rewritten as follows ([5]): \[\min \Gamma^{t}\pi+v(P(c(\pi))) \hat{R}(\Gamma)\] \[s.t. \pi\in\left\{0,1\right\}^{K}\] and we know that if \((\pi,\rho,x)\) is an optimal solution for \(R(\Gamma)\) then \(\pi\) is an optimal solution for \(\hat{R}(\Gamma)\) and \(x\) is an optimal solution for \(P(c(\pi))\). By the other hand if (i) \(\pi\) is an optimal solution for \(\hat{R}(\Gamma)\), (ii) \(x\) is an optimal solution for \(P(c(\pi))\) and (iii) \(\rho_{j}=(1-\pi_{k})x_{j}\)\(\forall j\in P_{k}\)\(\forall k\in[K]\) then \((\pi,\rho,x)\) is an optimal solution for \(R(\Gamma)\). Let \(\overline{P}(c)\) be the linear relaxation of \(P(c)\) with \(\overline{X}\) instead of \(X\) by using \(x\in\left[0,1\right]^{n}\) instead of \(x\in\left\{0,1\right\}^{n}\). Let \(\overline{R}(\Gamma)\) be a 0-1-MILP problem in \((\pi,\rho,x)\) defined from \(R(\Gamma)\) with \((\rho,x)\in[0,1]^{n}\times[0,1]^{n}\) instead of \((\rho,x)\in\left\{0,1\right\}^{n}\times\left\{0,1\right\}^{n}\). We have that \(\overline{R}(\Gamma)\) may be rewritten as follows: \[\min \Gamma^{t}\pi+v(\overline{P}(c(\pi))) \hat{\overline{R}}(\Gamma)\] \[s.t. \pi\in\left\{0,1\right\}^{K}\] and we know that if \((\pi^{*},\bar{\rho},\bar{x})\) is an optimal solution for \(\overline{R}(\Gamma)\) then \(\pi^{*}\) is an optimal solution for \(\hat{\overline{R}}(\Gamma)\) and \(\bar{x}\) is an optimal solution for \(\overline{P}(c(\pi^{*}))\). By the other hand if (i) \(\pi^{*}\) is an optimal solution for \(\hat{\overline{R}}(\Gamma)\), (ii) \(\bar{x}\) is an optimal solution for \(\overline{P}(c(\pi^{*}))\) and (iii) \(\bar{\rho}_{j}=(1-\pi_{k}^{*})\bar{x}_{j}\)\(\forall j\in P_{k}\)\(\forall k\in[K]\) then \((\pi^{*},\bar{\rho},\bar{x})\) is an optimal solution for \(\overline{R}(\Gamma)\). Let us suppose that \(P(c)\) is defined as follows: \[\min c^{t}x \quad P(c)\] \[s.t. Ax=b\] \[x\in\left\{0,1\right\}^{n}\] with \(b\in\mathbb{Z}^{m}\) and \(A\in\mathbb{R}^{m\times n}\). Let us suppose that \(A\) is totally unimodular. We know that in the totally unimodular case we have that \(v(P(c))=v(\overline{P}(c))\) for all \(c\) and if \(\bar{x}\) is an optimal solution for \(\overline{P}(c)\) then \(\bar{x}\in\left\{0,1\right\}^{n}\) and \(\bar{x}\) is an optimal solution for \(P(c)\). Therefore, in the totally unimodular case if \((\pi^{*},\bar{\rho},\bar{x})\) is an optimal solution for \(\overline{R}(\Gamma)\) we have that \(\bar{x}\) is an optimal solution for \(\overline{P}(c)\) with \(\bar{x}\in\left\{0,1\right\}^{n}\). If \(\rho_{j}^{*}=(1-\pi_{k}^{*})\bar{x}_{j}\) for all \(j\in P_{k}\) and for all \(k\in[K]\) then \(\rho^{*}\in\left\{0,1\right\}^{n}\), therefore : \((\pi^{*},\rho^{*},\bar{x})\) is an optimal solution for \(R(\Gamma)\) Let \(\overline{Q}(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r})\) be a 0-1-MILP problem in \((\Gamma,\pi,\rho,x,w,\sigma)\) defined from \(Q(\left\{(\pi^{i},\rho^{i},x^{i})\right\}_{1}^{r})\) with \((\rho,x)\in[0,1]^{n}\times\overline{X}\) instead of \((\rho,x)\in\left\{0,1\right\}^{n}\times X\). If \((\Gamma,\pi,\rho,x,w,\sigma)\) is an optimal solution for \(\overline{Q}(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) then \((\pi,\rho,x)\) is an optimal solution for \(\overline{R}(\Gamma)\). 
Since we have the totally unimodular case \((\pi,\rho^{*},x)\) is an optimal solution for \(R(\Gamma)\) with \(\rho_{j}^{*}=(1-\pi_{k})x_{j}\) for all \(j\in P_{k}\) and for all \(k\in[K]\), hence \((\Gamma,\pi,\rho^{*},x,w,\sigma)\) is an optimal solution for \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\). ## 3 Specific cases for \(\Omega\) Next we present the \(Q\) problem formulation for three \(\Omega\)-cases of practical interest. We use standard formulation procedures and some algebraic manipulations. ### Interval case Let \(\mathcal{L}\in\mathbb{R}_{+}^{K}\) with \(\mathcal{L}\leq\mathcal{U}\) and let \(\Omega=[\mathcal{L},\mathcal{U}]=\{\Gamma\in\mathbb{R}_{+}^{K}:\mathcal{L}\leq \Gamma\leq\mathcal{U}\}\) Let \(\Gamma^{+}(\pi)_{k}=\mathcal{L}_{k}\pi_{k}+\mathcal{U}_{k}(1-\pi_{k})\) for all \(k\in[K]\), for all \(\pi\in\{0,1\}^{K}\). According to appendix A: \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) is equivalent to \(Q^{+}(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) defined as follows: \[\max \min_{i\in[r]}\left\{\Gamma^{+}(\pi)^{t}\pi^{i}+d^{t}\rho^{i}+ \underline{c}^{t}x^{i}\right\}-\left(\mathcal{L}^{t}\pi+d^{t}\rho+\underline{c }^{t}x\right)\qquad Q^{+}(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\] \[s.t. \pi_{k}+\rho_{j}-x_{j}\geq 0\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\forall k\in[k]\ \forall j\in P_{k}\] \[\pi\in\{0,1\}^{K},\ \rho\in\{0,1\}^{n},\ x\in X\] In order to solve \(Q^{+}(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) we may rewrite it as a 0-1-ILP in \((\pi,\rho,x,\sigma)\) as follows: \[\max \sigma-\left(\mathcal{L}^{t}\pi+d^{t}\rho+\underline{c}^{t}x\right) Q^{+}(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\] \[s.t. \sigma+{f^{i}}^{t}\pi\leq d^{t}\rho^{i}+\underline{c}^{t}x^{i}+ \mathcal{U}^{t}\pi^{i}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\forall i\in[r]\] \[\pi_{k}+\rho_{j}-x_{j}\geq 0\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\forall k\in[k]\ \forall j\in P_{k}\] \[\pi\in\{0,1\}^{K},\ \rho\in\{0,1\}^{n},\ x\in X,\ \sigma\in\mathbb{R}\] with \(f_{k}^{i}=(\mathcal{U}_{k}-\mathcal{L}_{k})\pi_{k}^{i}\ \forall k\in[K],\ \forall i\in[r]\) Since maximization is the criterium we have that \(\sigma=\min_{i\in[r]}\ \left\{d^{t}\rho^{i}+\underline{c}^{t}x^{i}+\mathcal{U}^{t} \pi^{i}-{f^{i}}^{t}\pi\right\}=\min_{i\in[r]}\ \left\{\Gamma^{+}(\pi)^{t}\pi^{i}+d^{t}\rho^{i}+ \underline{c}^{t}x^{i}\right\}\) for all optimal solution. Therefore, the reformulation of the \(Q^{+}\) problem is valid. ### Line segment case Let \(\Gamma^{0}\in\mathbb{R}_{+}^{K}\), let \([\underline{\alpha},\overline{\alpha}]\subseteq[0,1]\) and let \(\Omega=\{\Gamma\in\mathbb{R}_{+}^{K}:\Gamma=\alpha\Gamma^{0},\ \alpha\in[\underline{\alpha},\overline{\alpha}]\}\) with \(\mathcal{U}=\overline{\alpha}\Gamma^{0}\) \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) becomes a 0-1-MILP problem in \((\alpha,\pi,\rho,x,w,\sigma)\) defined as follows: \[\max \sigma-\left(\sum\limits_{k\in[K]}w_{k}+d^{t}\rho+\underline{c}^{t}x\right) Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\] \[s.t. 
\sigma-\alpha\Gamma^{0^{t}}\pi^{i}\leq d^{t}\rho^{i}+\underline{c }^{t}x^{i}\] \[\pi_{k}+\rho_{j}-x_{j}\geq 0 \forall k\in[K]\ \forall j\in P_{k}\] \[w_{k}-\alpha\Gamma_{k}^{0}-\overline{\alpha}\Gamma_{k}^{0}\pi_ {k}\geq-\overline{\alpha}\Gamma_{k}^{0} \forall k\in[K]\] \[\alpha\in\left[\underline{\alpha},\overline{\alpha}\right],\ \pi\in\left\{0,1\right\}^{K},\ \rho\in\left\{0,1\right\}^{n},\ x\in X\] \[w\in\mathbb{R}_{+}^{K},\ \sigma\in\mathbb{R}\] Let \((\alpha,\pi,\rho,x,w,\sigma)\) be an optimal solution for \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\). If \(\pi_{k}=1\) then \(w_{k}-\alpha\Gamma_{k}^{0}\geq 0\) and because of the maximization criterium we have \(w_{k}=\alpha\Gamma_{k}^{0}=\alpha\Gamma_{k}^{0}\pi_{k}=\Gamma_{k}\pi_{k}\). If \(\pi_{k}=0\) then \(w_{k}\geq(\alpha-\overline{\alpha})\Gamma_{k}^{0}\) and because of the maximization criterium we have \(w_{k}=0=\Gamma_{k}\pi_{k}\). Therefore we have \(w_{k}=\Gamma_{k}\pi_{k}\) for all \(k\in[K]\). Since maximization is the criterium we have that \[\sigma=\min\limits_{i\in[r]}\ \Big{\{}\alpha\Gamma^{0^{t}}\pi^{i}+d^{t}\rho^{i}+ \underline{c}^{t}x^{i}\Big{\}}=\min\limits_{i\in[r]}\ \big{\{}\Gamma^{t}\pi^{i}+d^{t}\rho^{i}+ \underline{c}^{t}x^{i}\big{\}}.\] Therefore, the reformulation of the \(Q\) problem is valid. ### Budgeted case Let \(\underline{\Gamma},\mathcal{D}\in\mathbb{R}_{+}^{K}\), let \(\Delta\in\mathbb{R}_{+}\) and let \(\Omega=\{\Gamma\in\mathbb{R}_{+}^{K}:\Gamma=\underline{\Gamma}+\beta,\ \ \sum\limits_{k\in[K]} \beta_{k}\leq\Delta,\ \beta_{k}\in[0,\mathcal{D}_{k}]\ \forall k\in[K]\}\) with \(\mathcal{U}=\underline{\Gamma}+\mathcal{D}\) \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\) becomes a 0-1-MILP problem in \((\beta,\pi,\rho,x,w,\sigma)\) defined as follows: \[\max \sigma-\left(\sum\limits_{k\in[K]}w_{k}+d^{t}\rho+\underline{c}^{t }x\right) Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{r})\] \[s.t. \sigma-\beta^{t}\pi^{i}\leq d^{t}\rho^{i}+\underline{c}^{t}x^{i}+ \underline{\Gamma}^{t}\pi^{i} \forall i\in[r]\] \[\pi_{k}+\rho_{j}-x_{j}\geq 0 \forall k\in[K]\ \forall j\in P_{k}\] \[w_{k}-\beta_{k}-(\underline{\Gamma}+\mathcal{D})_{k}\pi_{k}\geq- \mathcal{D}_{k} \forall k\in[K]\] \[\beta_{k}\leq\mathcal{D}_{k} \forall k\in[K]\] \[\sum\limits_{k\in[K]}\beta_{k}\leq\Delta\] \[\beta\in\mathbb{R}_{+}^{K},\ \pi\in\left\{0,1\right\}^{K},\ \rho\in\left\{0,1\right\}^{n},\ x\in X\] \[w\in\mathbb{R}_{+}^{K},\ \sigma\in\mathbb{R}\] Let \((\beta,\pi,\rho,x,w,\sigma)\) be an optimal solution for \(Q(\{(\pi^{i},\rho^{i},x^{i})\}_{1}^{\intercal})\). If \(\pi_{k}=1\) then \(w_{k}\geq\underline{\Gamma}_{k}+\beta_{k}\) and because of the maximization criterium we have \(w_{k}=\underline{\Gamma}_{k}+\beta_{k}=(\underline{\Gamma}_{k}+\beta_{k})\pi_ {k}\) for all \(k\in[K]\). If \(\pi_{k}=0\) then \(w_{k}\geq\beta_{k}-\mathcal{D}_{k}\) and because of the maximization criterium we have \(w_{k}=0=(\underline{\Gamma}_{k}+\beta_{k})\pi_{k}\). Therefore we have \(w_{k}=\Gamma_{k}\pi_{k}\) for all \(k\in[K]\). Since maximization is the criterium we have that \[\sigma=\min_{i\in[r]}\;\big{\{}(\underline{\Gamma}+\beta)^{t}\pi^{i}+d^{t}\rho ^{i}+\underline{c}^{t}x^{i}\big{\}}=\min_{i\in[r]}\;\big{\{}\Gamma^{t}\pi^{i}+d ^{t}\rho^{i}+\underline{c}^{t}x^{i}\big{\}}.\] Therefore, the reformulation of the \(Q\) problem is valid. 
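Independently of which of the three \(\Omega\)-cases above is used, the overall procedure of **A**-\(Q\) has the same structure. The following sketch shows only this control flow; `solve_R` and `solve_Q` are placeholder callables standing for the corresponding 0-1-(M)ILP formulations given in Sections 2 and 3 (solved in the paper with ILOG-CPLEX through DOcplex), not functions of any actual library.

```python
# Structural sketch of algorithm A-Q (not a full implementation).
# solve_R(gamma) returns an optimal (pi, rho, x) of R(gamma).
# solve_Q(candidates) returns (gap, (gamma*, pi*, rho*, x*)), where gap is
# the optimal value of Q({(pi^i, rho^i, x^i)}_1^r).

def a_q(gamma_start, epsilon, solve_R, solve_Q):
    # Initialization: a robust solution for one fixed Gamma in Omega.
    pi, rho, x = solve_R(gamma_start)
    candidates = [(pi, rho, x)]
    while True:
        # Q searches for the Gamma in Omega with the largest remaining gap
        # between the best candidate bound and v(R(Gamma)).
        gap, (gamma_star, pi_star, rho_star, x_star) = solve_Q(candidates)
        if gap <= epsilon:
            break
        candidates.append((pi_star, rho_star, x_star))
    # The x-parts of the candidates form an (epsilon, Omega)-mprs.
    return [x_i for (_, _, x_i) in candidates]
```

Since \(\left\{0,1\right\}^{K}\times\left\{0,1\right\}^{n}\times X\) is finite, the loop terminates; the number of iterations and of distinct \(x^{i}\) retained is what the tables in the next section report.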
## 4 Computational experience Our algorithms have been performed on a personal computer as follows: * Intel(R)Core(TM) i7-9750H CPU, @ 2.60 GHz Lenovo ThinkPad X1 Extreme Gen 2, 32.00 GB Ram and Windows 10 Pro Operating System * All the instances have been processed through ILOG-Cplex 12.10 from a DOcplex Python code * All the parameters of ILOG-Cplex 12.10 are in their default values ### Shortest path problem #### 4.1.1 Data generation Graphs \(G=(V,E)\) are generated following [15] as follows: in each problem instance, the nodes correspond to \(|V|\) points with coordinates that are chosen uniformly at random in the square \([0,10]\times[0,10]\). We choose the pair of nodes with the largest Euclidean distance as \(v_{0}\) and \(w_{0}\) (the start and terminal nodes). In order to generate \(E\), we begin with a fully connected graph and remove 70% of the arcs in order of decreasing Euclidean distance, that is, starting with the longest arcs. If \(e=(v,w)\in E\) then \(\underline{c}_{e}\) is the euclidean distance from \(v\) to \(w\) and \(d_{e}=0.5\underline{c}_{e}\). Let \(b_{v_{0}}=1,\;b_{w_{0}}=1\) and \(b_{v}=0\;\forall v\in V-\{v_{0},w_{0}\}\). For each \(v\in V\) let \(E^{+}(v)=\{(v,w):(v,w)\in E\}\) and let \(E^{-}(v)=\{(w,v):(w,v)\in E\}\). The shortest path (SP) problem for \(c\) is defined as follows: \[\min \sum_{e\in E} c_{e}x_{e} SP(c)\] \[s.t. \sum_{e\in E^{+}(v)}x_{e}-\sum_{e\in E^{-}(v)}x_{e}=b_{v} \forall v\in V\] \[x\in\{0,1\}^{|E|}\] #### 4.1.2 Partitions definitions * Random generation of the partitions (marked with \(r\)): for each \(e\in E\) an index is randomly selected in \([K]\), if the selected index is \(k\) then \(e\in P_{k}\) * Generation of the partitions according values of a minimal path to destination (marked with \(p\)): let \(cost(v)\) the cost of a minimum path from \(v\) to \(w_{0}\) for all \(v\in V\). Let \(\hat{cost}=\max\limits_{v\in V}\;cost(v)\) and let \(\lambda=\hat{cost}/K\). If \(cost(v)\in[(k-1)\lambda,k\lambda]\) then \(e\in P_{k}\) for all \(e\in\delta^{+}(v)\) * Generation of the partitions according distances to destination (marked with \(d\)): let \(dist(v)\) the euclidean distance from \(v\) to \(w_{0}\) for all \(v\in V\). Let \(\hat{dist}=\max\limits_{v\in V}\;dist(v)\) and let \(\lambda=\hat{dist}/K\). If \(dist(v)\in[(k-1)\lambda,k\lambda]\) then \(e\in P_{k}\) for all \(e\in\delta^{+}(v)\) #### 4.1.3 \(\Omega\) definition: * Interval case: let \([\mathcal{L}_{k}(\delta),\mathcal{U}_{k}(\delta)]=\max\limits_{e\in P_{k}}\{d _{e}\}\times[\delta,\delta+1]\;\;\forall k\in[K]\) with \(\delta\geq 0\) a parameter to be presented in the tables. Note that the volumes of the \(\Omega\) sets used are equals \((\prod\limits_{k\in[K]}(\mathcal{U}_{k}(\delta)-\mathcal{L}_{k}(\delta))= \prod\limits_{k\in[K]\in P_{k}}\{d_{e}\}\;\forall\delta\geq 0)\) * Line segment case: \(\Gamma^{0}_{k}=\max\limits_{e\in P_{k}}\{d_{e}\}\;\;\forall k\in[K],\;\alpha \in[\underline{\alpha},\overline{\alpha}]=[0,1]\) * Budgeted case: \(\underline{\Gamma}_{k}=\beta_{1}\times\max\limits_{e\in P_{k}}\{d_{e}\}\;\; \forall k\in[K]\), \(\mathcal{D}=\beta_{2}\underline{\Gamma}\), \(\Delta=\delta\max\limits_{k\in[K]}\{\mathcal{D}_{k}\}\) with \(\beta_{1}\geq 0,\beta_{2}\geq 0,\delta\geq 0\) parameters to be presented in the tables ### (l,p)-Medians problem #### 4.2.1 Data generation Let \([l]\) a set of demand locations. Each demand location is a candidate to be a service location (a median). 
If the demand location \(j\) is assigned to the median \(i\) the cost is \(c_{ij}\). Let \(p\) the number of medians to be selected. The data were generated at random as follows: locations \(j\) were taken from \(U((0,100)\times(0,100))\), let \(D_{j}\) be the demand of location \(j\) taken from \(U(0,100)\), let \(dist_{ij}\) be the euclidean distance from location \(i\) until location \(j\) and let \(\underline{c}_{ij}=dist_{ij}D_{j}\;\;\forall i,j\in[l]\). If \((i,j)\in[l]\times[l]\) then \(d_{ij}=0.5\underline{c}_{ij}\). Let \(y_{i}\in\{0,1\}\;\;(i\in[l])\) with \(y_{i}=1\) if and only if the demand location \(i\) is selected to be a median. Let \(x_{ij}\in\{0,1\}\;\;(i,j)\in[l]\times[l])\) with \(x_{ij}=1\) if and only if the demand location \(j\) is assigned to a median located at \(i\). The \((l,p)\)-medians (\((l,p)\)M) problem with cost \(c\) is a problem in \((y,x)\) defined as follows ([16]): \[\min \sum_{i\in[l]}\sum_{j\in[l]}c_{ij}x_{ij} (l,p)M(c)\] \[s.t. x_{ij}\leq y_{i} \forall i,j\in[l]\] \[\sum_{i\in[l]}y_{i}=p\] \[\sum_{i\in[l]}x_{ij}=1 \forall j\in[l]\] \[y_{i}\in\{0,1\},\ x_{ij}\in\{0,1\} \forall i,j\in[l]\] #### 4.2.2 Partitions definition * Generation of the partitions guided by locations (marked with \(lo\)): for each \(i\in[l]\) an index is randomly selected in \([K]\), if the selected index is \(k\) then \((i,j)\in P_{k}\) for all \(j\in[l]\) * Generation of the partitions according the sum of perturbations (marked with \(g\)): let \(cost(i)=\sum\limits_{j\in[l]}d_{ij}\). Let \(\hat{cost}=\max\limits_{i\in[l]}cost(i)\) and let \(\lambda=\hat{cost}/K\). If \(cost(i)\in[(k-1)\lambda,k\lambda]\) then \((i,j)\in P_{k}\) for all \(j\in[l]\) #### 4.2.3 \(\Omega\) definition: * Interval case: \([\mathcal{L}_{k}(\delta),\mathcal{U}_{k}(\delta)]=\max\limits_{(i,j)\in P_{k}} \{d_{ij}\}\times[\delta,\delta+1]\ \ \forall k\in[K]\) with \(\delta\geq 0\) a parameter to be presented in the tables. Note that the volumes of the \(\Omega\) sets used are equals \((\mathop{\Pi}\limits_{k\in[K]}(\mathcal{U}_{k}(\delta)-\mathcal{L}_{k}( \delta))=\mathop{\Pi}\limits_{k\in[K]}\max\limits_{(i,j)\in P_{k}}\{d_{ij}\}\ \forall\delta\geq 0)\) * Line segment case: \(\Gamma^{0}_{k}=n\times\max\limits_{(i,j)\in P_{k}}\{d_{ij}\}\ \ \forall k\in[K],\ \alpha\in[\underline{\alpha},\overline{\alpha}]=[0,1]\) * Budgeted case: \(\underline{\Gamma}_{k}=\beta_{1}\times\max\limits_{(i,j)\in P_{k}}\{d_{ij}\} \ \ \ \forall k\in[K]\), \(\mathcal{D}=\beta_{2}\underline{\Gamma}\), \(\Delta=\delta\max\limits_{k\in[K]}\{\mathcal{D}_{k}\}\) with \(\delta\geq 0,\beta_{1}\geq 0,\beta_{2}\geq 0\) parameters to be presented in the tables. 
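As an illustration of the data-generation protocol just described, the sketch below builds a random \((l,p)\)-medians instance following Section 4.2.1 together with the location-guided partition (\(lo\)). It is only a sketch under the stated parameters; seeds, helper names and the returned data layout are our own choices, and \(p=l/10\) follows the experimental setup used in the tables.

```python
import math
import random

def generate_lpm_instance(l, K, seed=0):
    """Random (l,p)-medians instance (Section 4.2.1) with the lo-partition."""
    rng = random.Random(seed)
    locations = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(l)]
    demand = [rng.uniform(0, 100) for _ in range(l)]

    # Nominal costs c_ij = dist_ij * D_j and deviations d_ij = 0.5 * c_ij.
    c_lo = {(i, j): math.dist(locations[i], locations[j]) * demand[j]
            for i in range(l) for j in range(l)}
    d = {key: 0.5 * val for key, val in c_lo.items()}

    # Partition (lo): draw one index k per candidate median i and put all
    # pairs (i, j) into P_k.
    partition = {k: [] for k in range(K)}
    for i in range(l):
        k = rng.randrange(K)
        partition[k].extend((i, j) for j in range(l))

    p = l // 10   # number of medians used in the experiments (p = l/10)
    return locations, demand, c_lo, d, partition, p
```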
### Conditions to start and stop the algorithm * Interval case: solve \(R(\mathcal{L})\) and let \((x^{1},\pi^{1},\rho^{1})\) be an optimal solution * Budgeted case: solve \(R(\underline{\Gamma})\) and let \((x^{1},\pi^{1},\rho^{1})\) be an optimal solution * Line segment case: solve \(R(0)\) and let \((x^{1},\pi^{1},\rho^{1})\) be an optimal solution We use \(\epsilon\) as follows: * Interval case: \(\epsilon=0.01v(R(\mathcal{L}))\) * Budgeted case: \(\epsilon=0.01v(R(\underline{\Gamma}))\) * Line segment case: \(\epsilon=0.01v(R(0))=0.01v(P(\underline{c}))\) Note that if \(\left\{x^{i}\right\}_{1}^{\tau}\) is the output generated by the algorithm then we have: \[v(R(\Gamma))\leq\min_{i\in[r]}v(W(x^{i},\Gamma))\leq(1+0.01)v(R(\Gamma))\; \forall\Gamma\in\Omega\] Note that in each iteration \(v(Q(\left\{\pi^{i},\rho^{i},x^{i}\right\}_{1}^{\tau}))/v(R(\mathcal{L}))\) is an upper bound of the maximal relative error defined as \(\max\limits_{\Gamma\in\Omega}\frac{\min\limits_{i\in[r]}v(W(x^{i},\Gamma))-v(R( \Gamma))}{v(R(\Gamma))}\) ### Values to be presented Each line in the tables refer to a set of problems under conditions presented and we present: * \(\overline{t_{*}}\) and \(\hat{t_{*}}\): the average and maximun time in seconds (s.) to find an \(\epsilon,\Omega\)-mprs when the partitions are generated according \(*\in\{r,p,d,lo,g\}\), including the time to find a robust solution (\(x^{1}\)) * \(\underline{s_{*}},\;\overline{s_{*}}\) and \(\hat{s}_{*}\): the minimum, average and maximun number of different solutions (paths to SP problem and \(p\)-medians for \((l,p)\)M-problem) to define an \(\epsilon,\Omega\)-mprs when the partitions are generated according \(*\in\{r,p,d,lo,g\}\). Do not confuse these values with the corresponding minimum, average and maximum number of iterations executed by the algorithm, which can be considerably higher in some cases. The reason is that if \((\pi^{1},\rho^{1},x^{1})\) and \((\pi^{2},\rho^{2},x^{2})\) are different solutions generated by the algorithm, it often happens that \(x^{1}=x^{2}\). The decision maker is only interested in bailing out \(x=x^{1}=x^{2}\) in cases like that. For \((l,p)\)M-problems we may find a lot of cases with different solutions with the same \(p\)-medians set. For the decision maker the hard decision is to choice a \(p\)-medians set because of the assignment of locations to medians is very simple in real time. Hence, we report minimum, average and maximun number of different \(p\)-medians set to define an \(\epsilon,\Omega\)-mprs * \(\underline{s_{d,q}}\), \(\overline{s_{d,q}}\) and \(\hat{s_{d,q}}\): the minimum, average and maximun number of different solutions (paths) to define a \(\epsilon(q),\Omega\)-mprs with \(\epsilon(q)=(q/100)\times v(R(\mathcal{L}))\) when the \(P_{k}\) definition corresponds to \(d\) for \(q\in\{1,2,3,5\}\) * \(\overline{t_{d,q}}\): the average time in seconds to find \(\epsilon(q),\Omega\)-mprs with \(\epsilon(q)=(q/100)\times v(R(\mathcal{L}))\) when the \(P_{k}\) definition corresponds to \(d\) for \(q\in\{1,2,3,5\}\), including the time to find a robust solution (\(x^{1}\)) * We present, according the case: 1. \(\overline{\epsilon_{\mathcal{L}}}\): the average of \(100\times\frac{v(Q(\pi^{1},\rho^{1},x^{1}))}{v(R(\mathcal{L}))}\) 2. \(\overline{\epsilon_{\Gamma}}\): the average of \(100\times\frac{v(Q(\pi^{i},\rho^{i},x^{i}))}{v(R(\Gamma))}\) 3. \(\overline{\epsilon_{0}}\): the average of \(100\times\frac{v(Q(\pi^{i},\rho^{i},x^{i}))}{v(R(0))}\) 4. 
\(\overline{\epsilon_{0}^{(2)}}\): the average of \(100\times\frac{v(Q(\{\pi^{i},\rho^{i},x^{i}\}_{1}^{2}))}{v(R(0))}\) All values were rounded to one decimal place. ### Tables definition Tables 1 through 4 refer to SP problems. Tables 1,2 and 3 refers to sets of 30 graphs with \(|V|\in\{50,75,100,150\}\). Table 4 refers to sets of 30 graphs with \(|V|\in\{50,75,100\}\) generated independently of tables 1,2 and 3. There is one exception: in table 2 the line with \((|V|,K,\delta)=(100,20,0.25)\) refers to 5 problems. We are considering the interval case (tables 1 and 2), the line segment case (table 3) and the budgeted case (table 4) according the parameters presented. Partitions generation procedures are identified with \(r,p\) or \(d\). Tables 5 trough 7 refer to \((l,p)M\) problems with 30 cases for each \(l\in\{50,60,70,80,90\}\). In each case we are looking for \(p=l/10\) medians. There are some exceptions: in table 5 lines with \(((l,p),\delta)=((80,8),0.25)\) and \(((l,p),\delta)=((90,9),0.25)\) refers to 10 problems, in table 6 line with \((l,p)=(90,9)\) refers to 10 problems and in table 7 lines in the right side with \(((l,p),K)=((80,8),15)\) refers to 15 cases. We are considering the interval case (table 5), the line segment case (table 6) and the budgeted case (table 7) according the parameters presented. Partitions generation procedures are identified with \(lo\) or \(g\). ### Performance of the algorithm The experimental results are far from being exhaustive. There are many factors involved: the dimensions of the problems (\(|V|\) and \((l,p)\)), the dimension of the partition (\(K\)), the definitions used for \(P_{k}\) (\(r,p,d\) for SP problems and \(lo,g\) for \((l,p)\)M problems), the definitions used for \(\Omega\) (interval,line segment and budgeted) and the parameters that we need to define them \((\delta,\underline{\alpha},\overline{\alpha},\beta_{1},\beta_{2})\). We try to show that the multiparametric analysis to find an \(\epsilon,\Omega\)-mprs is computationally possible for moderate \(|V|,l,p,K\) values and for this we define a variety of reasonable situations for the data used to define the cases. Remember that the algorithm will run offline and the decision maker will choose a solution from the generated set when a new scenario appears. Thus, a tolerable execution time may be hours and not minutes as usual. Also, the trade off analysis between the number of generated solutions and the relative error used defined with \(\epsilon\) depends on the decision maker. #### 4.6.1 General comments * As we can expect the computational effort and the number of generated solutions increase if either \((|V|,K)\) or \(((l,p),K)\) increase. * The known upper bound for the relative error at the first (or second) iteration may be very high which justify the use of more solutions. * Execution time seems to be tolerable (from seconds up to 6 hours). The number of generated solutions were low in some cases and tolerable in general although the trade off analysis between the number of solutions and the relative error depends on decision maker. #### 4.6.2 Remarks * We can see in tables 1,2 and 5 for the interval case that the computational effort and the number of generated solutions increase if \(\delta\) decreases. We can see in tables 2 and 5 that the known upper bound for the relative error on the first iteration increases if \(\delta\) decreases which is consistent with that behavior. 
In Appendix B we present an easy problem with \(n+1\) variables called \(Toy_{n}(c)\) to illustrate that behavior. For \(Toy_{n}(c)\) we have that with low uncertainty (see Appendix B) we need \(n\) solutions to find a \(0,\Omega\)-mprs and with high uncertainty (see Appendix B) one solution is enough to find a \(0,\Omega\)-mprs. That only suggests that finding \(\epsilon,\Omega\)-mprs may be easy with high uncertainty and may be hard with low uncertainty. We are not comparing the goodness of both situations. Doing so would surely lead to a redefinition of the objective: with low uncertainty it is most likely not necessary to be so picky about the value of \(\epsilon\) and although we might be relatively far from robust solutions, in practice we might have reasonable solutions. * In table 2 we show that the decision maker can observe the increasing of the number of necessary solutions (\(\overline{s_{d,q}}\)) and the time (\(\overline{t_{d,q}}\)) to find an \((q/100)\times v(R_{\mathcal{L}}),\Omega\)-mprs while it is more demanding for stopping the algorithm (\(q\in\{5,3,2,1\}\)). * Note that for the previously defined budgeted case for \((l,p)M\) problems, if \(\delta\) increases the volume of \(\Omega\) increases with the interval case as its limit, hence the generated solutions must increase. However, the computational effort can increase and then decrease and, as expected, there are some cases with that behavior. Problems \(W(x,\Gamma)\) and \(R(\Gamma)\) are now defined as follows: \[\max_{c\in\Lambda^{+}(\Gamma)}\ c^{t}x W(x,\Gamma)\] \[\min_{x\in X}\ v(W(x,\Gamma))=\min_{x\in X}\ \left(\max_{c\in\Lambda^{+}(\Gamma)}c^{t}x\right) R(\Gamma)\] As before, dualizing the inner problem in \(R(\Gamma)\) lead us to a 0-1-MILP problem in \((\pi,\rho,x)\) defined as follows ([5]): \[\min \ \Gamma^{t}\pi+\sum_{j\in[\pi]}\rho_{j}+\underline{c}^{t}x R(\Gamma)\] \[s.t. \ \pi_{k}+\rho_{j}-d_{j}x_{j}\geq 0 \forall k\in[k]\ \forall j\in P_{k}\] \[\pi\in\mathbb{R}_{+}^{K},\ \rho\in\mathbb{R}_{+}^{n},\ x\in X\] Unfortunately the parameters in \(\Gamma\) are affecting a vector of continuous variables \((\pi)\) and the multiparametric analysis by using **A-\(Q\)** (see Appendix A) is not possible with the formulation presented. Next we present a new formulation. Let \((\pi,\rho,x)\) be an optimal solution for \(R(\Gamma)\) then because of the minimization criterium we have \(\rho_{j}=\max\{0,d_{j}x_{j}-\pi_{k}\}\ \forall j\in P_{k}\ \forall k\in[K]\). Hence \(R(\Gamma)\) may be rewritten as a problem in \((\pi,x)\) as follows: \[\min \ \sum_{k\in[k]}\left(\Gamma_{k}\pi_{k}+\sum_{j\in P_{k}}\max\{0, d_{j}x_{j}-\pi_{k}\}\right)+\underline{c}^{t}x R(\Gamma)\] \[s.t. \ \pi\in\mathbb{R}_{+}^{K},\ x\in X\] Let \((\pi^{*},x)\) be an optimal solution for \(R(\Gamma)\) then \(\pi_{k}^{*}\) is an optimal solution for the following problem in \(\pi_{k}\): \[\min \ \Gamma_{k}\pi_{k}+\sum_{j\in P_{k}}\max\{0,d_{j}x_{j}-\pi_{k}\} R(\Gamma_{k},x)\] \[s.t. \ \pi_{k}\in\mathbb{R}_{+}\] If \(d_{j}x_{j}=0\ \forall j\in[n]\) then 0 is an optimal solution for \(R(\Gamma_{k},x)\). Otherwise the objective function in \(R(\Gamma_{k},x)\) is a convex, continuous and piecewise linear function on \([0,\infty)\) (see Appendix C) and then we have that there exists an optimal solution with \(\pi_{k}\in\{0\}\cup\{d_{j}x_{j}:j\in P_{k}\}\). 
Therefore \(R(\Gamma)\) may be rewritten as a problem in \((\rho,x,\alpha)\) as follows: \[\min \sum\limits_{k\in[K]}\left(\Gamma_{k}\sum\limits_{s\in P_{k}}d_{s} \alpha_{s}x_{s}\right)+\sum\limits_{j\in[n]}\rho_{j}+\underline{c}^{t}x R(\Gamma)\] \[s.t. \sum\limits_{s\in P_{k}}d_{s}\alpha_{s}x_{s}+\rho_{j}-d_{j}x_{j}\geq 0 \forall k\in[K]\;\forall j\in P_{k}\] \[\sum\limits_{s\in P_{k}}\alpha_{s}\leq 1 \forall k\in[K]\] \[\alpha\in\{0,1\}^{n},\;\rho\in\mathbb{R}_{+}^{n},\;x\in X\] Let \((\rho,x,\alpha)\) be an optimal solution for \(R(\Gamma)\). Let \(\pi_{k}=\sum\limits_{s\in P_{k}}d_{s}\alpha_{s}x_{s}\;\forall k\in[K]\); then \(\pi_{k}\in\{0\}\cup\{d_{j}x_{j}:j\in P_{k}\}\) and the reformulation of \(R(\Gamma)\) is valid. \(R(\Gamma)\) may be rewritten as a 0-1-MILP problem in \((\rho,x,\alpha,w)\) as follows: \[\min \sum\limits_{k\in[K]}\left(\Gamma_{k}\sum\limits_{s\in P_{k}}d_{s} w_{s}\right)+\sum\limits_{j\in[n]}\rho_{j}+\underline{c}^{t}x R(\Gamma)\] \[s.t. \sum\limits_{s\in P_{k}}d_{s}w_{s}+\rho_{j}-d_{j}x_{j}\geq 0 \forall k\in[K]\;\forall j\in P_{k}\] \[\sum\limits_{s\in P_{k}}\alpha_{s}\leq 1 \forall k\in[K]\] \[w_{j}-\alpha_{j}-x_{j}\geq-1 \forall j\in[n]\] \[w\in\{0,1\}^{n},\;\alpha\in\{0,1\}^{n},\;\rho\in\mathbb{R}_{+}^ {n},\;x\in X\] Let \((\rho,x,\alpha,w)\) be an optimal solution. If \(x_{j}=0\) then \(w_{j}\geq\alpha_{j}-1\) and because of the minimization criterion we have \(w_{j}=0=\alpha_{j}x_{j}\). If \(x_{j}=1\) then \(w_{j}\geq\alpha_{j}\) and because of the minimization criterion we have \(w_{j}=\alpha_{j}=\alpha_{j}x_{j}\), and the reformulation of \(R(\Gamma)\) is valid. Now the parameters in \(\Gamma\) affect only a vector of 0-1 variables \((w)\), and then the multiparametric analysis may be performed by using **A**-\(Q\) with problem \(Q\) rewritten appropriately. ## 6 Conclusions and further extensions In this paper we studied combinatorial problems with locally budgeted uncertainty parameterized with \(\Gamma\). We presented an algorithm to find \(\{x^{i}\}_{1}^{r}\) such that for any parameter vector \(\Gamma\) there exists \(i(\Gamma)\in[r]\) such that the robustness of \(x^{i(\Gamma)}\) is near optimal. As far as we know, this is the first algorithm proposed for this task. The algorithm consists of applying a multiparametric algorithm to obtain a near-optimal multiparametric solution, relative to the objective function, for the combinatorial problem defined to find a robust solution for a fixed parameter vector (\(R(\Gamma)\)). The case in which the matrix defining the nominal problems is totally unimodular is considered in particular. Three cases for the parameters were considered for the computational experience: the interval case, the line segment case and the budgeted case. Two problems were considered for the computational experience: the first one with the totally unimodular property, the shortest path problem, and the second one without that property, the \(p\)-medians problem. The parameter used to stop the algorithm was chosen in such a manner that, when the algorithm stops, we have a near-optimal solution in the relative sense. The experience shows that we may find \(\left\{x^{i}\right\}_{1}^{r}\) with a tolerable computational effort for problems of moderate size, as can be seen in the tables. The decision maker can observe how the number of necessary solutions and the computational effort increase with the dimensions and the uncertainty level.
For large problems the experience strongly suggests that we will need to solve the \(Q\) problems (see **A**-\(Q\)) more efficiently than using a standard branch and cut algorithm directly, either by using particular properties for the nominal problem or by using general properties for \(Q\). The formulation for \(R(\Gamma)\) that uses only the \(\pi\) variables (see subsection 2.1) suggests that either a relax (solve \(\overline{Q}\) instead of \(Q\) to obtain \(\pi\) and a lower bound) and fix (solve \(P(c(\pi))\) to obtain \((\rho,x)\) and an upper bound) with the framework of a branch and cut algorithm or a branch and cut algorithm with the branching scheme guided with the \(\pi\) variables are plausible options to be studied. For low uncertainty, at least in the interval case, it may be necessary to redefine the approach if the number of solutions is not tolerable for the decision maker and perhaps a set with few solutions is enough if for any parameters vector under consideration some solution in the set has a tolerable value. Note that to use a high \(\epsilon\) value is not the unique option to be considered: _compromise solutions_ to take to account the robustness relative error and the values at the same time may be a plausible option. If the uncertainty set is parameterized either with \(d\) instead of \(\Gamma\) or \(d\) and \(\Gamma\) instead of \(\Gamma\) the approach remains valid with a straightforward redefinition of \(Q\), however we can expect a computational effort and a number of generated solutions not tolerable and then a redefinition of the objective may be necessary. Our approach remains valid for all uncertainty sets such that there exists a 0-1-MILP formulation for the problem to find a robust solution with the fixed parameters affecting only 0-1-variables. That is the case for the variant of the uncertainty set presented. ## Declaration of interest The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgments Research supported by Universidad Central de Venezuela
2305.12908
Language Models for German Text Simplification: Overcoming Parallel Data Scarcity through Style-specific Pre-training
Automatic text simplification systems help to reduce textual information barriers on the internet. However, for languages other than English, only little parallel data exists to train these systems. We propose a two-step approach to overcome this data scarcity issue. First, we fine-tuned language models on a corpus of German Easy Language, a specific style of German. Then, we used these models as decoders in a sequence-to-sequence simplification task. We show that the language models adapt to the style characteristics of Easy Language and output more accessible texts. Moreover, with the style-specific pre-training, we reduced the number of trainable parameters in text simplification models. Hence, less parallel data is sufficient for training. Our results indicate that pre-training on unaligned data can reduce the required parallel data while improving the performance on downstream tasks.
Miriam Anschütz, Joshua Oehms, Thomas Wimmer, Bartłomiej Jezierski, Georg Groh
2023-05-22T10:41:30Z
http://arxiv.org/abs/2305.12908v1
# Language Models for German Text Simplification: Overcoming Parallel Data Scarcity through Style-specific Pre-training ###### Abstract Automatic text simplification systems help to reduce textual information barriers on the internet. However, for languages other than English, only little parallel data exists to train these systems. We propose a two-step approach to overcome this data scarcity issue. First, we fine-tuned language models on a corpus of German Easy Language, a specific style of German. Then, we used these models as decoders in a sequence-to-sequence simplification task. We show that the language models adapt to the style characteristics of Easy Language and output more accessible texts. Moreover, with the style-specific pre-training, we reduced the number of trainable parameters in text simplification models. Hence, less parallel data is sufficient for training. Our results indicate that pre-training on unaligned data can reduce the required parallel data while improving the performance on downstream tasks. ## 1 Introduction Automatic text simplification (ATS) is the task of simplifying a text's lexical and structural complexity while preserving its original meaning. Easy-to-read texts can help people with learning deficiencies or non-native speakers gain access to texts that they could not understand otherwise. On the one hand, ATS can be used to create assisting tools for people with reading disabilities or professional translators Suarez-Figueroa et al. (2022). On the other hand, ATS can be applied as a preprocessing step for other natural language processing tasks such as machine translation or information retrieval to improve their performances Stajner and Popovic (2016), making it an important field of study. In German, there exist multiple levels of simplified language. In contrast to the underspecified simple language, the so-called Leichte Sprache (Easy Language) enforces a very strong simplification level and follows predefined structural rules Netzwerk Leichte Sprache (2013). These rules include conveying only one message per sentence (structural simplification), restriction to common words (lexical simplification), and usage of simplified grammar (syntactical simplification). This simplified grammar breaks with standard German grammar, for example, by using dative instead of genitive to indicate possession. We consider Easy Language as a standalone language style. Therefore, we refer to Easy Language data as monolingual data in the further course of the paper, even though it is German as well. This work shows the benefits of fine-tuning language models for specific styles and characteristics. We publish and discuss a collection of causal language models fine-tuned for German Easy Language. As shown in previous work Gururangan et al. (2020), pre-training language models for specific domains can benefit the performances of downstream tasks in the respective domain. We extend this analysis to the language style of Easy Language. In addition, the fine-tuned models can be used to generate text with the specificities of Easy Language, for example, in data augmentation applications. Finally, we present how these models can serve as plug-in-decoders in BART-like architectures Lewis et al. (2020) to speed up and improve the training on sequence-to-sequence (seq2seq) tasks. Therefore, our contributions are the following: * We publish five German Easy Language causal language models and extensively evaluate their language style adaptations.
* We assess the models' performance on the two downstream tasks of text complexity prediction and text simplification. * We suggest an ATS training process that exploits our pre-trained language models. This process reduces the number of trained param eters by over 90% while preserving state-of-the-art performance. With the reduction of trainable parameters, less aligned data is needed to train an ATS system. Especially for languages other than English, where aligned data is sparse, pre-trained causal language models can improve ATS performance. We publish our code and results for further research and application1. Footnote 1: [https://github.com/MiriUll/Language-Models-German-Simplification](https://github.com/MiriUll/Language-Models-German-Simplification) ## 2 Related work Causal language models can complete text based on a prompt. In contrast to masked language models, where the models know about the context before and after a specific token, these causal language models rely only on the input and the previously outputted tokens. Therefore, they are called autoregressive models. The Generative Pre-trained Transformer (GPT) (Radford et al., 2019) is a prominent example of such an autoregressive language model. It was trained on a collection of web data and, thus, outputs text for general purposes. Previous work has fine-tuned GPT for multiple domains and tasks, such as the task of quest generation in games (**?**) or the medical domain (Schneider et al., 2021). In addition to domain adaption, GPT was tailored to specific text styles and characteristics. These style transfer approaches include fine-tuning for poem generation (Liao et al., 2019) or the reduction of non-normative clauses (Peng et al., 2020). Li et al. (2022) trained a GPT model to mimic the language of people with dementia. By calculating the perplexities of texts with the fine-tuned and original version, they could distinguish samples from healthy and diseased people. Sun and Wan (2022) adapted a language model for simple language by only masking easy-to-understand words in training. However, this model is a masked language model that can only fill in blanks and not generate text from scratch. Most similar to our work is the TransformerLM by Maruyama and Yamamoto (2019) trained for Japanese text simplification. The authors used a parallel corpus to directly fine-tune a GPT model for simplification. In contrast, our models are fine-tuned on monolingual Easy Language data. Therefore, they do not require alignments and can be used for a broader range of tasks. ### German Text simplification In contrast to the English language, automatic text simplification in German has seen little research. The first system for Easy Language was proposed by Suter et al. (2016) and consisted of a collection of hand-crafted rules, including sentence splitting and paraphrasing. Sauberli et al. (2020) published the first neural simplification approach based on the transformer architecture, together with an aligned corpus. They discussed multiple data augmentation strategies, but their results lacked fluency and content preservation. Based on an extended version of this dataset, Spring et al. (2021) built a controllable simplification system that can output different simplification levels based on the Common European Framework of References for Languages (CEFR), but not specifically Easy Language. Finally, Rios et al. (2021) proposed a modified mBART architecture for document-level simplification. 
In our paper, we adopted their architecture to evaluate our language models on the downstream task of ATS. ## 3 Datasets Several sources are available in Easy Language; however, they mostly encompass news websites, and only a few are aligned with articles in standard German. In the following sections, we detail the information on the data used in our training, including the Easy Language monolingual corpus utilized for fine-tuning German language models and the parallel corpus for the downstream task of text simplification. The dataset utilized for the downstream task of text complexity prediction is publicly available as a part of the GermEval 2022 shared task (Mohtaj et al., 2022) (refer to Subsection 5.4). We published scrapers to recreate our sources for the use of the academic community2. We also provide an overview of available monolingual and parallel data sources for simplified German beyond our training data in Appendix A. Footnote 2: [https://github.com/brjezierski/scrapers](https://github.com/brjezierski/scrapers) ### Monolingual corpus An overview of the available monolingual data can be found in Table 1. The publicly available Easy Language datasets are very limited: The Simple German corpus published by Toborek et al. (2022) contains texts on health and medication, public administration, politics, information texts for disabled people, and news articles. The second publicly available resource is a small corpus published by Siegel et al. (2019). It contains election programs, excerpts from the Bible, children's stories, and Red Cross documents. Kurier, InfoEasy, and NDR are public broadcasting services in Austria, Switzerland, and northern Germany, respectively, and have specific columns in Easy Language. In addition, Hurraki and Lebenshilfe offer online dictionaries in Easy Language, while Einfachstars contains news articles about celebrities. These three data sources diversify our covered domains and styles of writing. More details about the data sources can be found in Table 8 in Appendix A. Our fine-tuning data combines all sources included in Table 1. The combined data was shuffled and randomly split into a training set containing 90% of the data and a validation set with 10% of the total. ### Parallel corpus For training the text simplification model, we used the publicly available 20 Minuten dataset3. The dataset consists of full articles paired with shortened, simplified summaries from the Swiss news magazine 20 Minuten. It comprises 17,905 article pairs in the training dataset and 200 pairs in the validation and test set each (Rios et al., 2021). The dataset's compression ratio (the reduction in the word count of simplified summaries) was estimated at 11%. Footnote 3: [https://github.com/ZurichNLP/20Minuten](https://github.com/ZurichNLP/20Minuten) ### Preprocessing pipeline Analyzing the outputs of publicly available language models in standard German, we noticed that in many cases, especially for the news headline-like input, the output contained noise, such as HTML tags or URLs. For this reason, coupled with the fact that we obtained data from multiple sources using various formats, we built a shared preprocessing pipeline to standardize the input for the fine-tuning of the language models as well as the simplified parts in the aligned dataset. Our pipeline removed redundant tags and characters. Some Easy Language texts use bullet points to break down sentences. 
Since most of the data did not follow this guideline, we converted the existing bullet points into comma-separated phrases. Another feature of Easy Language is the hyphenation of compound nouns. We compiled a list of hyphenated nouns in the monolingual dataset and used it to replace equivalent non-hyphenated compound nouns. ## 4 Methodology Our approach is divided into two parts. First, we fine-tuned generative language models for German Easy Language. Then, we used these models as plug-in decoders in a BART-based simplification task. ### Fine-tuning language models We selected five different pre-trained GPT-based models from Huggingface (Wolf et al., 2020) as the base for our language models, four German models, and one multilingual model. As shown in Table 2, the models differ in their original training data, initialization, and size. All German models use an embedding size of 1024, while mGPT has a size of 2048. To fine-tune the models, we used an NVIDIA A100 GPU. We trained for one epoch, with a learning rate of \(1e^{-4}\), a weight decay of \(0.01\), and a batch size of eight together with a gradient accumulation of four. However, due to the large model size, we had to decrease the batch size to one for mGPT. The dropout parameters for the embedding, the attention mechanism, and the fully connected layers were set to \(0.1\) each. Su et al. (2022) proposed a new learning objective for generative language models, the contrastive loss. This loss adds a similarity regularization to the cross entropy loss to enforce discriminative token representations. We used this loss function together with an AdamW optimizer for our fine-tuning. \begin{table} \begin{tabular}{l r l} \hline \hline **Dataset** & **Sentences** & **Domain** \\ \hline Hurraki & 56,785 & lexicon \\ Lebenshilfe & 7,144 & lexicon \\ Einfachstars & 129,674 & news \\ Nachrichtenleicht & 122,842 & news \\ Kurier & 67,827 & news \\ NDR & 60,749 & news \\ InfoEasy & 10,310 & news \\ Siegel et al. (2019) & 4,210 & misc. \\ Toborek et al. (2022) & 28,356 & misc. \\ \hline **Total** & **544,467** & \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of the monolingual data used for language model fine-tuning. ### Text simplification The simplification task can be considered a translation-like seq2seq problem. Thus, we used an encoder-decoder architecture based on the mBART architecture (Liu et al., 2020). It consists of a BERT-like encoder and a GPT-like decoder. Additionally, mBART was pre-trained on multilingual data (including German) with a denoising objective and forms the current baseline for transformer-based German ATS (Rios et al., 2021). The baseline's mBART-encoder was modified to use sliding attention so that it can be applied to article inputs. Thus, it was possible to use long input sequences efficiently. We adapted this architecture and replaced the mBART-decoder with our fine-tuned GPT models. For the target text, we used the same preprocessing used for fine-tuning the decoder models. As our language models already output text in the desired style, no further training of the decoder was necessary. Therefore, we only trained the encoder-decoder cross attention to align the encoding of the complex articles with our language models. This was proven successful for machine translation with pre-trained language models by Gheini et al. (2021). Training only the cross attention reduced the number of parameters to be updated, making the training of the simplification more efficient.
In addition, the language models were not updated, and thus, we avoided catastrophic forgetting (Goodfellow et al., 2013) of their German language comprehension. We trained with the same hyperparameters as the baseline, except we set label smoothing to zero and added a contrastive part to the loss function (Su et al., 2022). We trained on a single NVIDIA TITAN X. Similar to the baseline, the training converged after 3 to 4 days according to validation loss, which means training for about 20 epochs. Due to hardware limitations, we trained with a batch size of one and a gradient accumulation of 32. ## 5 Evaluation This section describes four experiments to compare our fine-tuned (FT) models with their original (O) versions. First, we measured the models' perplexities on easy and normal texts and analyzed the readability of their outputs. In addition, the models were evaluated on two downstream tasks: text complexity prediction and automatic text simplification. ### Perplexity scores The perplexity describes how likely a specific model is to produce a given text. A lower perplexity score indicates a better match between the model and text. We evaluated how well our models adapt to the style of Easy Language. Therefore, the fine-tuned and original models' perplexities on easy and normal texts were compared. The data was collected from the MDR, a public broadcasting service in Germany that publishes news articles in Easy Language. We manually aligned 100 paragraphs from the easy and original articles. To calculate the perplexity of the data, we used the tutorial code from Huggingface transformers (2022) that implements perplexity as a sliding window over the input data. We adapted the code for a sample-wise calculation and averaged the perplexity over all samples. Perplexity is highly dependent on the tokenization and the length of the samples (Wang et al., 2022). Therefore, we cannot determine the best fine-tuned models by selecting the model with the lowest perplexity. However, the fine-tuned and original versions of the models use the same tokenizers. Thus, we can compare their perplexities and assess the effects of fine-tuning. \begin{table} \begin{tabular}{l l l l} \hline \hline **Model** & **Training data** & **Initialization** & **\#Params** \\ \hline GerPT2 Minixhofer (2020) & CC-100 Corpus & English GPT2 & 163M \\ german-gpt2 Schweter (2020) & Wikipedia dump, EU Bookshop corpus, Open Subtitles, Common Crawl, ParaCrawl and News Crawl & & 124M \\ GPT2 Wechsel Minixhofer et al. (2022) & OSCAR corpus, MUSE & English GPT2 & 124M \\ Oscar fine-tune ml6team (2021) & OSCAR corpus & _no info_ & 354M \\ mGPT Shliazhko et al. (2022) _(multilingual)_ & Wikipedia, Colossal Clean Crawled Corpus & from scratch & 1417M \\ \hline \hline \end{tabular} \end{table} Table 2: Training setup and number of parameters for different German GPT2 models. These models were used as the base for our Easy Language fine-tuning. Table 3 shows the average perplexity values for the easy and normal texts. No model has seen any of the data before in training. All fine-tuned models show a lower perplexity for the Easy Language samples. In contrast, except for one model, the original models perform better on the normal texts. This suggests that the fine-tuned models match the specificities and structure of Easy Language better and, thus, that they are more likely to produce similar texts.
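For reference, the sample-wise sliding-window perplexity described above can be computed roughly as follows. This is a condensed sketch adapted from the Huggingface perplexity tutorial; the model identifier and the example paragraph are placeholders rather than the actual fine-tuned checkpoints and aligned MDR data.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

def sample_perplexity(model, tokenizer, text, max_length=1024, stride=512):
    """Sliding-window perplexity of a single text sample (lower = better match)."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    seq_len = input_ids.size(1)
    nll_sum, n_scored, prev_end = 0.0, 0, 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        trg_len = end - prev_end                 # tokens newly scored in this window
        ids = input_ids[:, begin:end]
        labels = ids.clone()
        labels[:, :-trg_len] = -100              # ignore tokens already scored earlier
        with torch.no_grad():
            loss = model(ids, labels=labels).loss  # mean NLL over the scored tokens
        nll_sum += loss.item() * trg_len
        n_scored += trg_len
        prev_end = end
        if end == seq_len:
            break
    return math.exp(nll_sum / n_scored)

# Placeholder checkpoint and data; the aligned MDR paragraphs are not distributed here.
name = "my-org/gerpt2-easy-language"             # hypothetical fine-tuned model id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).to(device).eval()
easy_paragraphs = ["Das ist ein Beispiel in Leichter Sprache."]
avg_ppl = sum(sample_perplexity(model, tokenizer, p) for p in easy_paragraphs) / len(easy_paragraphs)
print(f"average perplexity: {avg_ppl:.2f}")
```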
### Readability and Easy Language characteristics To evaluate the readability of the models' outputs, we compared the Flesch Reading Ease (FRE) scores [1] of sample outputs. We prompted the models with six different inputs: "Das"(_This_), "Heute"(_Today_), "Wir"(_We_), "Die Türkei"(_Turkey_), "Dieses Haus"(_This house_), and "Mein Vater"(_My father_). The models had to output 100 new tokens, and we set a repetition penalty to enforce novel content in the output. Moreover, three different decoding strategies (contrastive search, sampling, and beam search) were used, resulting in 18 output texts per model. Finally, the FRE score was calculated for each of the model outputs. This score considers the average sentence length and the average number of syllables per word, which favors concise sentences with short words. Therefore, a higher score indicates a more accessible text. Table 4 shows each model's average FRE score. The fine-tuned models achieve a higher score, which implies that their output is more readable than their originals'. In addition, we counted the number of suggested newline (\n) tokens. As presented in Table 4, the fine-tuned models output this token more often. This shows that they adapted to the Easy Language characteristic of only writing one thought per line. To further investigate this conformity with Easy Language, we gave the models the input sentence "Heute scheint die Sonne" (_Today the sun is shining_) and let them predict the next token. As highlighted in Table 5, most of the fine-tuned models proposed to end the sentence, i.e., predicted a full stop or a modifier. In contrast, the original models added further information by continuing the sentence with a comma or an "and". \begin{table} \begin{tabular}{l c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Average FRE**} & \multicolumn{2}{c}{**\n tokens**} \\ & **FT** & **O** & **FT** & **O** \\ \hline gerpt2 & **65.17** & 51.09 & **67** & 34 \\ german\_gpt & **75.09** & 70.89 & **79** & 74 \\ wechsel & **70.72** & 55.86 & **69** & 18 \\ oscar & **68.21** & 49.32 & **61** & 0 \\ mGPT & **72.16** & 55.30 & **106** & 29 \\ \hline \hline \end{tabular} \end{table} Table 4: Flesch Reading Ease score averaged over different prompts and decoding strategies, and total number of \n tokens suggested. The fine-tuned models output simpler texts. \begin{table} \begin{tabular}{l c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Easy text**} & \multicolumn{2}{c}{**Normal text**} \\ & **FT** & **O** & **FT** & **O** \\ \hline gerpt2 & **25.35** & 51.31 & **53.74** & 56.42 \\ german\_gpt & **31.81** & 47.19 & 77.76 & **31.49** \\ wechsel & **25.99** & 38.98 & 69.29 & **34.80** \\ oscar & **34.24** & 59.31 & 112.75 & **66.22** \\ mGPT & **24.93** & 25.05 & 99.53 & **19.18** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of perplexity scores between easy and normal texts. Lower score means better match. The fine-tuned models fit easy German text better, while the original models favor normal texts. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Suggested next token**} \\ & **FT** & **O** \\ \hline gerpt2 & **.** & **,** \\ german\_gpt & sehr (_very_) & **,** \\ wechsel & **.** & und (_and_) \\ oscar & **.** & **,** \\ mGPT & auf (_on_) & bei (_at_) \\ \hline \hline \end{tabular} \end{table} Table 5: Suggested next token for the input sentence “Heute scheint die Sonne” (_Today the sun is shining_). The original models propose to continue the sentence, while the fine-tuned models only put one thought per sentence. ### Human grammar evaluation Fine-tuning language models to a specific style can result in catastrophic forgetting [1]. To test if our fine-tuning for Leichte Sprache influences the output quality of the models, we asked human reviewers to rate the models' grammaticality. The reviewers were not paid for their review but participated voluntarily. We selected the outputs of the prompt "Dieses Haus"(_This house_) with the contrastive decoding strategy from Section 5.2. Then, we presented the output of each original and its respective fine-tuned model side by side and asked the participants to select the candidate with fewer grammatical errors. Participants could also state that both models were equal. Overall, seven native speakers and one non-native speaker participated in the survey. The distribution of answers is shown in Figure 1. While most participants preferred the fine-tuned version of gerpt2 and mGPT, the fine-tuning of oscar decreased its grammar score. When averaging over all responses and models, the worsening of the grammaticality caused by fine-tuning the models on Leichte Sprache is negligible. ### Text complexity prediction Fine-tuning models for a specific domain improves their performance on different tasks within this domain (Gururangan et al., 2020). To test if this applies to our models, we evaluated them on the downstream task of text complexity prediction. Therefore, we added a linear layer on top of the language model heads and fine-tuned the models for the respective task. The data for this task came from the GermEval 2022 shared task on text complexity assessment (Mohtaj et al., 2022). This shared task's goal was to predict a sentence's complexity on a continuous scale between 1 and 7. We split the shared task's training data into train, evaluation, and test subsets with a ratio of 80:10:10 and fine-tuned our models for ten steps with a batch size of eight, i.e., on 80 samples total. Table 6 reports the mean squared errors on the unseen test set after the few-shot fine-tuning. The first two models have a high error for both the fine-tuned and original models. As the model only performed ten training steps, the results highly depend on the initialization. For the other three models, however, the fine-tuned models clearly outperform the original models. This gives evidence that with the fine-tuning on Easy Language data, the models get a better understanding of text complexity and, thus, can better discriminate easy from normal texts. ### Text simplification We used our pre-trained language models as plug-in decoders in an mBART simplification model. As the decoders already know how to output Easy Language, we only trained the encoder-decoder cross attention. Due to computational limitations, we could not test all our language models on the text simplification downstream task. Therefore, we selected the two most promising ones, gerpt2 and german_gpt. Table 7 shows how our simplification models perform on the 20 Minuten test dataset compared to the baseline by Rios et al. (2021). To generate the simplifications, we used a beam size of four and calculated the metrics with Huggingface evaluate. Our models outperform the baseline on the SARI metric; however, they fall behind when comparing ROUGE-L and BLEU scores.
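For reference, scores of this kind can be obtained with the Huggingface evaluate library along the following lines. This is only a sketch with toy inputs; the exact metric configuration behind the reported numbers is not specified here, so treat the calls and their defaults as assumptions.

```python
import evaluate

# Toy example; in the experiments, sources are the original 20 Minuten articles,
# predictions are the model outputs, and references the simplified summaries.
sources = ["Die Turbinen produzieren Strom für rund zwanzigtausend Haushalte."]
predictions = ["Die Turbinen machen Strom für etwa 20000 Haushalte."]
references = [["Die Turbinen liefern Strom für ungefähr 20000 Haushalte."]]

sari = evaluate.load("sari").compute(sources=sources, predictions=predictions,
                                     references=references)
rouge = evaluate.load("rouge").compute(predictions=predictions,
                                       references=[r[0] for r in references])
bleu = evaluate.load("bleu").compute(predictions=predictions, references=references)

print(sari["sari"], rouge["rougeL"], bleu["bleu"])
```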
All of these metrics assess how well the proposed output overlaps with a reference simplification and do not consider synonyms. SARI is a score explicitly tailored to the task of simplification, while BLEU and ROUGE-L are general translation/seq2seq metrics. Therefore, a better SARI score may be an indication that our models do more rephrasing than the baseline model and, thus, yield better simplifications. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Mean squared error**} \\ & **FT** & **O** \\ \hline gerpt2 & **2.36** & 4.17 \\ german\_gpt & 6.22 & **4.25** \\ wechsel & **0.81** & 1.79 \\ oscar & **0.83** & 1.65 \\ mGPT & **0.92** & 1.11 \\ \hline \hline \end{tabular} \end{table} Table 6: Mean squared error after fine-tuning for continuous text complexity prediction on 80 sentences. Most of the fine-tuned models outperform their originals. Figure 1: Human grammar evaluation with a ranking task. Participants selected which model output of the fine-tuned and original versions showed fewer grammatical mistakes. To achieve this result, our models needed training on only 7% of the trainable parameters of the baseline while preserving state-of-the-art performance. ## 6 Conclusion With this paper, we have published a collection of causal language models for German Easy Language. These models mimic the style of Easy Language and favor short and precise sentences. In addition, they adapt to the conventions of only conveying one thought per sentence and putting a line break after every sentence. We exploited these pre-trained models in a sequence-to-sequence text simplification task. As the models were already fine-tuned to the desired output style, we only had to train the encoder-decoder cross attention and, thus, reduced the number of trainable parameters by 93%. With this, training a style-transfer system becomes feasible for settings with little aligned data or a lack of computational power. ### Limitations This paper focuses on the style transfer of Easy Language for German. Due to their word inflections and high average word length, languages like German are harder to learn for language models (Mielke et al., 2019). Therefore, the proposed approach may work even better on easier-to-model languages, but we did not test any other language. In addition, the style transfer of simplified language uses the same vocabulary as the original language and only reduces its diversity. Our approach has yet to be evaluated on other styles, for example, ones that introduce new words. When evaluating the influence of fine-tuning on the grammaticality of the model outputs, we found that even the original models were not perfect and produced grammatical errors. One possible reason is relying on GPT2-based models that are relatively small and, thus, perform worse than state-of-the-art language models like PaLM (Chowdhery et al., 2022). In addition, the German base models are often already fine-tuned versions of English models and, thus, may already suffer from catastrophic forgetting due to fine-tuning. ### Ethics Statement ATS systems can provide more accessible versions of texts; however, a good text simplification is targeted to the knowledge and language level of its audience. Therefore, to utilize these systems for the target group directly, the systems need to be deployed in a controllable setting where the user can set the level of simplification or ask for additional explanations if necessary.
Nevertheless, there are also applications where ATS systems can increase the amount of accessible information on the internet without being used by the target group directly. For example, these systems can yield a draft simplification for professional translators or can be helpful for public state authorities that are forced by law to offer online information in Easy Language. Another problem is the possible stigmatization of users if they request a simplified version of the data (Hansen-Schirra, 2020). Finally, the availability of information in Easy Language is very sparse; thus, it is hard to fact-check material on the internet with other sources. This makes the target group of Easy Language highly vulnerable to misinformation and fake news. Hence, our generative models must be used with care as they do not provide hallucination control. Among the sources of our dataset, there is a significant bias towards news articles as well as some regional bias due to the large proportion of articles related to Austria, Switzerland, and northern Germany. As all sources are official website articles, and the dataset does not include user comments, we expect the data to be inoffensive and of high quality. Nevertheless, we find topical biases such as the COVID-19 pandemic due to the years from which the articles were scraped. Out of respect for intellectual property laws, we published the scrapers used to obtain the data but not the data itself. \begin{table} \begin{tabular}{l c c c} \hline \hline **Score** & **Baseline*** & \begin{tabular}{c} **gerpt2** \\ **FT** \\ \end{tabular} & \begin{tabular}{c} **german\_gpt** \\ **FT** \\ \end{tabular} \\ \hline **ROUGE-L** & **19.96** & 18.52 & 17.93 \\ **SARI** & 33.29 & 42.25 & **42.74** \\ **BLEU** & **6.29** & 4.95 & 4.80 \\ \hline **\#Params trained** & 416M & **29M** & **29M** \\ \hline \hline \end{tabular} \end{table} Table 7: Text simplification performance on the 20 Minuten test set. For our models, only the cross attention was trained, which greatly reduced the number of trained parameters; *: copied from the baseline paper (Rios et al., 2021).
2301.06252
Estimating the feasibility of `standard speed-gun' distances
In a previous paper, we demonstrated a single-rung method for measuring cosmological distances in active galactic nuclei (AGN) that can be used from low redshift (z < 0.1) to high redshift (z > 3). This method relies on the assumption that the variability seen in AGN is constrained by the speed of light during a flare event and can therefore be used to estimate the size of an emitting region. A limitation of this method is that previously, the Doppler factor was required to be known. In this paper, we derive an extension of the `standard speed-gun' method for measuring cosmological distances that depends on the maximum intrinsic brightness temperature that a source can reach, rather than the Doppler factor. If the precise value of the intrinsic brightness temperature does not evolve with redshift and flares are statistically independent, we can in principle improve the errors in measurements of the matter content of the universe (in a flat LambdaCDM model) statistically. We then explored how well a future observing program would constrain cosmological parameters. We found that recovering the input cosmology depends critically on the uncertainty of the intrinsic brightness temperature and the number of flares observed.
Jeffrey A. Hodgson, Benjamin L'Huillier, Ioannis Liodakis, Sang-Sung Lee, Arman Shafieloo
2023-01-16T04:22:01Z
http://arxiv.org/abs/2301.06252v1
# Estimating the feasibility of 'standard speed-gun' distances. ###### Abstract In a previous paper, we demonstrated a single-rung method for measuring cosmological distances in active galactic nuclei (AGN) that can be used from low redshift (\(z<0.1\)) to high redshift (\(z>3\)). This method relies on the assumption that the variability seen in AGN is constrained by the speed of light during a flare event and can therefore be used to estimate the size of an emitting region. A limitation of this method is that previously, the Doppler factor was required to be known. In this paper, we derive an extension of the 'standard speed-gun' method for measuring cosmological distances that depends on the maximum intrinsic brightness temperature that a source can reach, rather than the Doppler factor. If the precise value of the intrinsic brightness temperature does not evolve with redshift and flares are statistically independent, we can in principle improve the errors on measurements of the matter content of the universe (in a flat \(\Lambda\)CDM model) statistically. We then explored how well a future observing program would constrain cosmological parameters. We found that recovering the input cosmology depends critically on the uncertainty of the intrinsic brightness temperature and the number of flares observed. keywords: methods: observational - techniques: high angular resolution - techniques: interferometric - galaxies: active - cosmology: observations ## 1 Introduction In our previous paper (Hodgson et al., 2020) (hereafter Paper I), we demonstrated a new single-rung method for measuring cosmological distances that relies on the speed of light to calibrate a standard ruler in active galactic nuclei (AGN), which we call the 'standard speed-gun'. A common feature of AGN that have their jets pointed toward the observer is bright jets that exhibit special relativistic effects such as apparent superluminal motions, increased variability, and increased observed flux densities (e.g. Lister et al., 2016; Jorstad et al., 2017; Weaver et al., 2022). These effects are normally accounted for by estimating the relativistic Doppler factor, which is itself a function of the viewing angle to the source and its intrinsic Lorentz factor. In our previous paper, we made a well-justified assumption that the Doppler factor was of order unity in the source 3C 84 (e.g. Liodakis et al., 2017; Jorstad et al., 2017). While this assumption was likely reasonable in 3C 84, for sources at higher redshifts, relativistic effects must be taken into account. Unfortunately, it would be expected that the Doppler factor will evolve with redshift, with more distant sources likely to be more Doppler boosted. This is because Doppler boosting makes sources appear brighter than they are intrinsically, and they will therefore be more likely to be detected at greater distances for a given instrumental sensitivity. A common way of estimating the Doppler factor is to assume that there is a maximum intrinsic brightness temperature \(T_{\rm B,int}\) that the source can reach. Several theoretical estimates have been made for this limit. The most commonly cited are the equipartition limit of \(\sim 5\times 10^{10}\) K (Readhead, 1994), \(<5\times 10^{11}\) K (Singal, 1986) and the inverse Compton limit \(<10^{12}\) K (Kellermann and Pauliny-Toth, 1969). Additionally, if the limit follows the equipartition limit, it can be calculated theoretically (Readhead et al., 2021).
Several observational studies have been performed in order to constrain the true value of \(T_{\rm B,int}\). Estimates vary widely, including \(\lesssim 10^{11}\) K (Lahteenmaki et al., 1999), \(\sim 2\times 10^{10}\) K Cohen et al. (2003), \(2.78\pm 0.72\times 10^{11}\) K (Liodakis et al., 2018) and \(4.1(\pm 0.6)\times 10^{10}\) K (Homan et al., 2021). In the case of Liodakis et al. (2018), only the brightest flares were used in the analysis and found that the \(T_{\rm B,int}\) distribution was best described by a Gaussian: suggesting that there is not a bias towards lower or higher intrinsic brightness temperatures when using the brightest flares. In this paper, we derived a new expression for the angular diameter distance that depends on the intrinsic brightness temperature, rather than the Doppler factor. The exact value of \(T_{\rm B,int}\) will affect measurements of the Hubble parameter (effectively the absolute scaling of the distance-redshift relationship), but so long as the value itself does not evolve with redshift and the observed flares are statistically independent, we can measure the matter content of the universe (\(\Omega_{\rm m}\), in a flat \(\Lambda\)CDM cosmology) and improve the precision of the measurement statistically. We begin by defining the brightness temperature under the Rayleigh-Jeans approximation (e.g. Kovalev et al., 2005; Kang et al., 2021): \[T_{\rm B}=\frac{I_{\nu}c^{2}}{2k_{\rm B}\nu^{2}}, \tag{1}\] where \(I_{\nu}\) is the intensity at a given frequency \(\nu\) and \(k_{\rm B}\) is Boltzmann's Constant. In the case of VLBI observations, the intensity can be observed as \(S/\Omega\), where \(\Omega\) is the solid angle of the emitting region on the sky and \(S\) is the observed flux density. Typically in VLBI, we fit a circular Gaussian to the image. Therefore, the solid angle can be expressed as \(\Omega=\pi\theta^{2}/4\ln 2\), where \(\theta_{\rm VLBI}\) is the FWHM of the circular Gaussian. The observed quantities in the source frame (primed) are affected by cosmological redshift and in the following ways: \(\nu=\nu^{\prime}/(1+z)\); \(S=S^{\prime}/(1+z)\) and \(\theta_{\rm VLBI}=\theta^{\prime}_{\rm VLBI}\). Relativistic corrections may also be required. They are transformed in the following ways (denoted by \({}^{*}\)) \(\nu=\nu^{*}\delta\); \(S=S^{*}\delta^{3}\) and \(\theta_{\rm VLBI}=\theta^{\prime}_{\rm VLBI}\). This leads to the observed source-frame brightness temperature from VLBI observations (assuming a single resolved component with a flat spectrum): \[T^{\prime*}_{\rm B,VLBI}=\frac{2\ln 2c^{2}\delta S(1+z)}{\pi k_{\rm B}\nu^{ 2}\theta^{2}_{\rm VLBI}}. \tag{2}\] This means that the observed VLBI brightness temperature is related to the true intrinsic brightness temperature, \(T_{\rm B,int}\), as \(T^{\prime}_{\rm B,VLBI}=\delta T_{\rm B,int}\) or \(T_{\rm B,VLBI}=T_{\rm B,int}\delta/(1+z)\). An alternative way to measure the brightness temperature is to assume that the size of an emitting region is constrained by the speed of light. This allows us to estimate a "variability size" (as derived in Hodgson et al. (2020)), and including \(\Delta t\) relativistically transformed as \(\Delta t=\Delta t^{*}/\delta\): \[\theta^{\prime*}_{\rm var}=\frac{c\delta\Delta t}{(1+z)D_{\rm A}}, \tag{3}\] where \(\Delta t\) is the variability timescale and \(D_{\rm A}\) is the angular diameter distance to the source. We can then substitute Eq. 3 into Eq. 
2 (including the relativistic corrections) to derive the observed source-frame variability brightness temperature (e.g. Liodakis et al., 2017): \[T^{\prime*}_{\rm B,var}=\frac{2\ln 2D_{\rm A}^{2}\delta^{3}(1+z)^{3}\Delta S}{\pi k_{\rm B}\nu^{2}\Delta t^{2}}, \tag{4}\] where \(\Delta S\) is the peak flux density of a flare. This should have the same value as \(S\) if the variability information is derived from the same VLBI data. Furthermore, because \(\Delta S\) is a measure of the difference in flux density measured from the beginning to the peak of a flare, it follows that if \(\Delta S=S\), then the VLBI flux density measurement should be taken at the peak of a flare. This also means that the observed _variability_ brightness temperature is related to the true intrinsic brightness temperature, \(T_{\rm B,int}\), as \(T^{\prime}_{\rm B,var}=\delta^{3}T_{\rm B,int}\) or \(T_{\rm B,var}=T_{\rm B,int}\delta^{3}/(1+z)^{3}\). Due to the differing dependencies on \(\delta\), we can therefore combine these observations to derive direct estimates of both the relativistic Doppler factor (in either the source or observer frames): \[\delta=\sqrt{\frac{T^{\prime}_{\rm B,var}}{T^{\prime}_{\rm B,VLBI}}} \tag{5}\] and the intrinsic brightness temperature: \[T_{\rm B,int}=\frac{T^{\prime 3/2}_{\rm B,VLBI}}{T^{\prime 1/2}_{\rm B,var}}. \tag{6}\] Since there is a dependence on the distance, these expressions can be rearranged to measure the distance in terms of the Doppler factor: \[D_{\rm A}=\frac{c\Delta t\delta}{\theta_{\rm VLBI}(1+z)} \tag{7}\] or equivalently in terms of the intrinsic brightness temperature: \[D_{\rm A}=\frac{2\ln 2c^{3}S\Delta t}{\pi k_{\rm B}T_{\rm B,int}\nu^{2}\theta_{\rm VLBI}^{3}}. \tag{8}\] A scaling factor of either 1.6x or 1.8x should be included in Eq. 7 in order to convert the measured Gaussian from the VLBI observations to either a disk-like or sphere-like geometry respectively. As in Hodgson et al. (2020), we adopted a compromise 1.7x scaling, with the uncertainty included in the error budget. In the case of Eq. 8, no scaling is included as the equation still includes the Gaussian assumption. Figure 1: Top: Simulated observations, assuming 30 sources with 10 flares observed per source and a 25% uncertainty on the intrinsic brightness temperature. The fractional error on \(\Omega_{\rm m}\) from the recovered cosmology is shown in the figures. Input cosmology in green. Recovered cosmology in orange. Simulated data in blue. Four redshift distributions for the simulated observations were made: i) half the sources between \(0<z<0.5\) and the remaining half between \(0.5<z<4\); ii) half the sources between \(0<z<1\) and the remaining half between \(1<z<4\); iii) as in case (ii) except pivoting on \(z=2\) (i.e. equally distributed) and iv) as before except pivoting on \(z=3\). Bottom: residuals (simulated data - input cosmology). ## 2 Discussion There are some interesting features of Eq. 8. The first is that it depends on the intrinsic brightness temperature \(T_{\rm B,int}\) rather than the Doppler factor. So long as \(T_{\rm B,int}\) does not evolve with redshift, we can improve measurements of the curve of the distance-redshift relation statistically. The exact value of \(T_{\rm B,int}\) is expected to either not vary with redshift, or only weakly if the limit follows the equipartition limit. We note that if \(T_{\rm B,int}\) follows the theoretical equipartition limit, we can calculate the expected value (Readhead, 1994; Readhead et al., 2021).
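To make the role of the observables explicit, Eq. 8 can be evaluated directly for a single flare. The following sketch uses astropy for unit handling; the numbers are purely illustrative and are not measurements from any real source.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const

def angular_diameter_distance(S, dt, nu, theta_vlbi, T_b_int):
    """Eq. (8): D_A from the observed flux density S, variability timescale dt,
    observing frequency nu, fitted circular-Gaussian FWHM theta_vlbi and an
    assumed intrinsic brightness temperature T_b_int."""
    theta = theta_vlbi.to(u.rad).value          # angle as a dimensionless number
    d_a = (2 * np.log(2) * const.c**3 * S * dt
           / (np.pi * const.k_B * T_b_int * nu**2 * theta**3))
    return d_a.to(u.Mpc)

# Illustrative numbers only:
print(angular_diameter_distance(S=3 * u.Jy, dt=60 * u.day, nu=43 * u.GHz,
                                theta_vlbi=0.04 * u.mas, T_b_int=2.8e11 * u.K))
```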
Another potential problem could be flare-to-flare variations in \(T_{\rm B,int}\), leading to biases in the \(T_{\rm B,int}\) estimates. However, so long as the brightest flares are used, this may not be an issue (Liodakis et al., 2018). Additionally, a bias like this would not be expected to evolve with redshift, therefore affecting \(H_{0}\) measurements and not \(\Omega_{\rm m}\). Another interesting feature is that there is no redshift dependence. This means that we could roughly estimate redshifts for sources without redshift measurements if a cosmology is assumed. Using the published observational parameters for 3C 84 from Hodgson et al. (2020), we calculated the Doppler factor and \(T_{\rm B,int}\) assuming both of the most recent estimates of the Hubble Constant from SH0ES (Riess et al., 2021) and from Planck (Planck Collaboration et al., 2020). These values are shown in Table 1. The Doppler factor appears to be consistent with unity, within errors, in both cases. The \(T_{\rm B,int}\) estimates are consistent with the value determined by Liodakis et al. (2018). It is important to note that while it would be expected that the Doppler factor evolves with redshift, we believe it is unlikely that \(T_{\rm B,int}\) would, or at least not in the same way. In order to explore how this would potentially affect our observations, we made some simple simulations of a future observing program. The key assumptions are that the flares are statistically independent and that \(T_{\rm B,int}\) does not evolve with redshift. Blazar flaring is known to be a stochastic process and well modeled by damped random walk processes (e.g. Kozlowski, 2016). While this doesn't necessarily mean that the flares are statistically independent, it does hint in that direction. In Fig. 1, we first explored how the redshift distribution affects the fractional uncertainty on \(\Omega_{\rm m}\). We assumed an initial sample of 30 sources over a redshift range of \(0<z<4\). We then distributed them over the redshift range in four ways. They were i) half the sources between \(0<z<0.5\) and the remaining half between \(0.5<z<4\); ii) half the sources between \(0<z<1\) and the remaining half between \(1<z<4\); iii) as in case (ii) except pivoting on \(z=2\) (i.e. equally distributed) and iv) as before except pivoting on \(z=3\). Assuming a fractional uncertainty of 25% on \(T_{\rm B,int}\) (Liodakis et al., 2018), we found that having more sources at higher redshift tends to improve the fractional error on \(\Omega_{\rm m}\), although the uniformly distributed sources scenario recovered \(\Omega_{\rm m}\) roughly as well as when the sources were concentrated at high redshift. For context, the fractional error on \(\Omega_{\rm m}\) from the Planck Collaboration is \(\sim\)2.2% (Planck Collaboration et al., 2020), and the error from the latest supernovae catalogs is \(\sim\)5% (Brout et al., 2022). Figure 2: Residuals of simulated observations, exploring the effect of the number of sources, the number of flares observed per source and the uncertainty on \(T_{\rm B,int}\). The fractional error on \(\Omega_{\rm m}\) from the recovered cosmology is shown in the figures. Input cosmology in orange. Recovered cosmology in green. Simulated data in blue. Four scenarios are: i) 30x sources, 10x flares (as above); ii) 100x sources, 10x flares; iii) 10x sources, 100x flares and iv) 1000x sources, 100x flares. Top panel assumes 25% uncertainty on \(T_{\rm B,int}\), bottom panel assumes 100% uncertainty on \(T_{\rm B,int}\).
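A toy version of such a forecast can be set up as sketched below. The Gaussian per-flare scatter, the fixed \(H_{0}\), and the simple \(\chi^{2}\) fit are simplifying assumptions made for illustration and are not necessarily the exact simulation used to produce the figures.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
truth = FlatLambdaCDM(H0=70, Om0=0.3)

n_sources, n_flares, frac_err = 30, 10, 0.25        # 25% scatter per flare from T_B,int
z = rng.uniform(0.05, 4.0, n_sources)                # roughly uniform redshifts (case iii)
d_true = truth.angular_diameter_distance(z).value    # Mpc

# Each flare gives an independent D_A estimate; average the flares of each source.
d_obs = np.array([np.mean(d * (1 + frac_err * rng.standard_normal(n_flares)))
                  for d in d_true])
sigma = d_obs * frac_err / np.sqrt(n_flares)

def chi2(om):
    model = FlatLambdaCDM(H0=70, Om0=om).angular_diameter_distance(z).value
    return np.sum(((d_obs - model) / sigma) ** 2)

fit = minimize_scalar(chi2, bounds=(0.05, 0.95), method="bounded")
print(f"recovered Omega_m = {fit.x:.3f}")
```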
Additionally, we note that in this case, we would be more interested in observing deviations from the expected cosmology at high-z (e.g. Zhao et al., 2017; Risaliti & Lusso, 2019). We then explored how the number of observations and the uncertainty on \(T_{\rm B,int}\) affect our recovered cosmology. We tested four observational scenarios (all with sources equally distributed over the redshift range): i) 30x sources, 10x flares (as above); ii) 100x sources, 10x flares; iii) 10x sources, 100x flares and iv) 1000x sources, 100x flares. Observing 100x flares per source is likely not realistic, but allows us to explore whether it is better to have more sources or simply more observations. We included the last case as an example of a very large observing program. We then compared these scenarios for two uncertainties on \(T_{\rm B,int}\) of 25% (as above) and 100% (a "worst-case scenario"). The results are shown in Fig. 2. It can be seen that it is the total number of flares observed that is important. It does not appear to matter whether they come from a smaller number of sources with more flares per source or simply from more sources. Given that there may only be \(\sim\)1 flare per source, per year, this suggests that observing more sources would be a more efficient way of improving the error budget. We can also see that the uncertainty on \(T_{\rm B,int}\) critically affects the recovered cosmology. We can see that if \(T_{\rm B,int}\) is known to 25%, we are competitive with other methods (e.g. Brout et al., 2022), with realistic observing programs, such as in scenarios (ii) and (iii). If \(T_{\rm B,int}\) is not known well, we would need a much larger observing program in order to be competitive. Angelakis et al. (2019) suggest that 1-2 flares per year per source is reasonable, and therefore the method could be competitive within 5-10 years of observations. In principle, we can also fit \(T_{\rm B,int}\) as a free parameter with the other cosmological parameters. But \(D_{\rm A}\propto 1/T_{\rm B,int}\) and also \(D_{\rm A}\propto 1/H_{0}\), which means we can only constrain the ratio. However, with a cosmology-independent measure of the Doppler factor, the degeneracy could be broken. A good candidate for a cosmology-independent measure of the Doppler factor is the inverse-Compton Doppler factor (Ghisellini et al., 1993; Liodakis et al., 2017). Note that this does not have to be estimated for every source, but a smaller subset would be sufficient. That the Doppler factor evolves with redshift is not necessarily a problem, so long as the _correction_ does not evolve with redshift. Furthermore, an observational program such as this would provide an excellent sample to investigate AGN evolution. ## 3 Conclusions In this paper, we have derived a new expression for the angular diameter distance for blazars that depends on a maximum intrinsic brightness temperature (\(T_{\rm B,int}\)), rather than the Doppler factor. In the case of 3C 84, we assumed two cosmologies and derived estimates of the Doppler factor and \(T_{\rm B,int}\). The Doppler factor was found to be consistent with unity, and \(T_{\rm B,int}\) was consistent with the independent estimate of Liodakis et al. (2018). We explored how well a future observing program could constrain cosmological parameters in a flat \(\Lambda\)CDM model. We found that recovering the input cosmology depends critically on the uncertainty on \(T_{\rm B,int}\) and the number of flares observed. In our next paper, we will investigate if \(T_{\rm B,int}\) evolves with redshift.
2309.02616
Generative AI-aided Joint Training-free Secure Semantic Communications via Multi-modal Prompts
Semantic communication (SemCom) holds promise for reducing network resource consumption while achieving the communications goal. However, the computational overheads in jointly training semantic encoders and decoders-and the subsequent deployment in network devices-are overlooked. Recent advances in Generative artificial intelligence (GAI) offer a potential solution. The robust learning abilities of GAI models indicate that semantic decoders can reconstruct source messages using a limited amount of semantic information, e.g., prompts, without joint training with the semantic encoder. A notable challenge, however, is the instability introduced by GAI's diverse generation ability. This instability, evident in outputs like text-generated images, limits the direct application of GAI in scenarios demanding accurate message recovery, such as face image transmission. To solve the above problems, this paper proposes a GAI-aided SemCom system with multi-model prompts for accurate content decoding. Moreover, in response to security concerns, we introduce the application of covert communications aided by a friendly jammer. The system jointly optimizes the diffusion step, jamming, and transmitting power with the aid of the generative diffusion models, enabling successful and secure transmission of the source messages.
Hongyang Du, Guangyuan Liu, Dusit Niyato, Jiayi Zhang, Jiawen Kang, Zehui Xiong, Bo Ai, Dong In Kim
2023-09-05T23:24:56Z
http://arxiv.org/abs/2309.02616v1
# Generative AI-aided Joint Training-free Secure Semantic Communications via Multi-modal Prompts ###### Abstract Semantic communication (SemCom) holds promise for reducing network resource consumption while achieving the communications goal. However, the computational overheads in jointly training semantic encoders and decoders--and the subsequent deployment in network devices--are overlooked. Recent advances in Generative artificial intelligence (GAI) offer a potential solution. The robust learning abilities of GAI models indicate that semantic decoders can reconstruct source messages using a limited amount of semantic information, e.g., prompts, without joint training with the semantic encoder. A notable challenge, however, is the instability introduced by GAI's diverse generation ability. This instability, evident in outputs like text-generated images, limits the direct application of GAI in scenarios demanding accurate message recovery, such as face image transmission. To solve the above problems, this paper proposes a GAI-aided SemCom system with multi-model prompts for accurate content decoding. Moreover, in response to security concerns, we introduce the application of covert communications aided by a friendly jammer. The system jointly optimizes the diffusion step, jamming, and transmitting power with the aid of the generative diffusion models, enabling successful and secure transmission of the source messages. Generative AI, semantic communications, prompt engineering, covert communications ## I Introduction The continuous evolution of wireless communication networks has led to an exponential growth in data volume. One potential solution to this challenge is the semantic communications (SemCom) technique [1]. The basic architecture of SemCom involves joint training of a semantic encoder and decoder. After passing through the semantic encoder, the source message is transformed into semantic information suitable for wireless transmission. The semantic decoder can then decode this information to recover the original source message or fulfill specific task requirements. Yet, there are inherent challenges in jointly training and distributing the semantic encoders and decoders, making the process both complex and energy-demanding [2]. Specifically, the semantic encoder-decoder pair demands co-training with the channel to achieve optimal performance [3]. Furthermore, the trained semantic models are often tailored to specific tasks, necessitating the training and distribution of multiple semantic encoder-decoder pairs to diverse internet-of-things (IoT) devices. Absent this joint training, SemCom systems face difficulties in communication tasks demanding precise message reconstruction. For example, text descriptions extracted from images, such as knowledge graphs [4], can serve as the semantic information of the image. A semantic decoder can employ these graphs to extract information fit for tasks like Q&A about image content [4]. Conversely, this method proves ineffectual for tasks such as transmitting face images where accuracy is paramount, as shown in Part A of Fig. 1. Therefore, avoiding joint training while ensuring accurate image transmission is a significant challenge in SemCom. The rise of Generative AI (GAI) introduces a potential solution to achieve the accurate transmission goal. GAI proves especially advantageous for designing decoders, enabling efficient source information retrieval without joint encoder training. 
In the text domain, advanced language models such as ChatGPT can craft detailed articles from simple prompts, which might serve as concise semantic information about the transmitter's message. For images, conveying a photo might involve transmitting its text description. Several image-generation GAI models, e.g., DALLE [5] and Stable Diffusion [6], can perform semantic decoding after receiving prompts. Specifically, a prompt is a succinct representation that instructs models to generate specific outputs, whether in text, images, or other digital content forms [7]. However, implementing GAI-aided SemCom poses an inherent challenge: _a single prompt can result in varied image interpretations_ as shown in Part B of Fig. 1. This variability is attributable to the dynamic nature of GAI models [8]. While this adaptability is advantageous in many scenarios, it becomes especially problematic for SemCom tasks that demand message accuracy, such as transmitting human face images. Hence, the design of prompts is imperative, entering the realm of prompt engineering. Fig. 1: Diverse semantic communication schemes. **Part A** represents SemCom based on knowledge graphs. **Part B** shows GAI-aided SemCom using only the textual prompt. The decoder reconstructs an image that is not accurate. **Part C** shows GAI-powered SemCom using multi-modal prompts. The decoder leverages GAI models to generate an accurate image. This paper introduces the concept of multi-modal prompts to address the challenges in SemCom tasks that require accurate image reconstruction. The multi-modal prompts incorporate _visual prompts_, which are aimed at restoring the image's structural fidelity, and _textual prompts_, which capture the semantic information of the image. In addition, data security is a significant issue in the transmission process of multi-modal prompts [9]. First, transmitting prompts within the confines of an open wireless environment necessitates robust protective measures. Covert communication is a promising potential solution. Unlike traditional physical layer security approaches [9], covert communication operates by concealing the communication activity itself, effectively making it indiscernible to potential eavesdroppers or external attackers. Second, visual information leakage may happen when messages are intercepted. The reason is that common visual prompts, such as the contours of objects in the transmitted image, inadvertently lead to information leakage [10]. Consequently, an optimal visual prompt should not overtly reveal information about the original image. Simultaneously, it should effectively aid the GAI model in reconstructing the structure information of the original image. To address this challenge, we propose a GAI-aided secure SemCom framework as shown in Part C of Fig. 1. The key contributions are as follows: * We introduce a new GAI-aided SemCom framework without necessitating joint training. This approach offers a reduction in both computational complexity and energy cost compared to conventional SemCom methods. * Our novel approach leverages multi-modal prompts, allowing for the accurate reconstruction of the source message. This innovation addresses the challenge of unstable data recovery when GAI models are used. * We use covert communication techniques to safeguard the transmission of multi-modal prompts within the open wireless environment.
The optimal resource allocation scheme is generated using the generative diffusion model (GDM)-based method, achieving accurate image regeneration under energy constraints. ## II System Model and Problem Formulation In this section, we discuss the system model and formulate the optimization problem. ### _Covert Communications_ The network consists of a transmitter, a receiver, a friendly jammer, and a warden in an open wireless environment. The transmitter's objective is to transmit images to the receiver while evading detection by the warden. To enhance data security, we use the covert communication technique. Instead of depending on encryption, covert communication hides the transmission behavior [9]. Specifically, the warden evaluates two potential scenarios: the null hypothesis, \(\mathcal{H}_{0}\), representing the transmitter's inactivity, and the alternative hypothesis, \(\mathcal{H}_{1}\), indicating active transmission. This can be mathematically represented as: \[y_{w}=\left\{\begin{array}{ll}\kappa^{2}+D_{jw}^{-\alpha_{jw}}P_{j}h_{jw}^{2},&\mathcal{H}_{0},\\ D_{tw}^{-\alpha_{tw}}P_{t}h_{tw}^{2}+\kappa^{2}+D_{jw}^{-\alpha_{jw}}P_{j}h_{jw}^{2},&\mathcal{H}_{1},\end{array}\right. \tag{1}\] where \(P_{t}\) is the transmit power, \(P_{j}\) is the jamming power, and \(\kappa^{2}\) characterizes the Gaussian noise. Distances between the jammer and the warden, and the transmitter and the warden are given by \(D_{jw}\) and \(D_{tw}\), respectively. \(\alpha_{jw}\) and \(\alpha_{tw}\) are the path loss exponents for their respective links, while \(h_{jw}\) and \(h_{tw}\) reflect the small-scale fading effects. Decision-making by the warden, denoted as \(\mathcal{D}_{1}\) and \(\mathcal{D}_{0}\), is grounded in the aforementioned hypotheses, adhering to a threshold rule [9]. Detection inaccuracies occur in two situations: _false alarm_, where \(\mathcal{D}_{1}\) is selected during \(\mathcal{H}_{0}\), and _miss detection_, where \(\mathcal{D}_{0}\) is chosen during \(\mathcal{H}_{1}\). The detection error probability (DEP), which quantifies the likelihood of inaccurate warden decisions, is defined as \[\xi=\mathbb{P}_{FA}+\mathbb{P}_{MD}=\Pr\left(\kappa^{2}+D_{jw}^{-\alpha_{jw}}P_{j}h_{jw}^{2}>\varepsilon\right)+\Pr\left(D_{tw}^{-\alpha_{tw}}P_{t}h_{tw}^{2}+\kappa^{2}+D_{jw}^{-\alpha_{jw}}P_{j}h_{jw}^{2}<\varepsilon\right), \tag{2}\] where \(\varepsilon\) denotes the detection threshold, \(\mathbb{P}_{FA}\) denotes the false alarm probability, and \(\mathbb{P}_{MD}\) denotes the miss detection probability. Covert communication is successful when the DEP exceeds a threshold \(\xi_{\rm th}\) that approximates \(1\). ### _Problem Formulation_ We propose the GAI-aided SemCom framework. In this framework, a transmitter processes a source image, i.e., \(\rm Img_{s}\), extracting multi-modal prompts. The prompts are transmitted over wireless channels. The receiver regenerates the image, i.e., \(\rm Img_{r}\), with the GAI model. Given that diverse resource allocation strategies influence the signal-to-noise ratio (SNR), which subsequently alters the bit error probability (BEP) and impacts the final reconstruction of the image at the receiver, we employ the structural similarity (SSIM) metric as our objective function. Let \(T\) represent the number of diffusion steps in the image generation process. The terms \(\beta_{t}\), \(\beta_{j}\), and \(\beta_{T}\) denote the energy costs per unit for transmit power, jamming power, and diffusion step, respectively.
We formulate the optimization problem as: \[\begin{array}{ll}\max\limits_{\{P_{t},P_{j},T\}}&\mathrm{SSIM}\left(\rm Img_{s},\rm Img_{r}\right),\\ \mathrm{s.t.}&\xi(P_{t},P_{j})>\xi_{\rm th},\\ &\beta_{t}P_{t}+\beta_{j}P_{j}+\beta_{T}T\leq E,\end{array} \tag{3}\] where the first constraint ensures the communications remain covert, and the second constraint bounds the total energy \(E\), which introduces a natural trade-off. On the one hand, when \(P_{t}\) is too low or \(P_{j}\) is too high, it becomes simpler to maintain covert communications. Yet, this configuration increases the BEP because of the low SNR, reducing the SSIM value. On the other hand, while the image generation process with high \(T\) can improve the robustness against noise [11], excessive energy consumption towards this could limit the energy available for \(P_{j}\) and \(P_{t}\). As such, joint optimization is essential to balance covert communications and the quality of the regenerated image. Fig. 2: The proposed GAI-aided secure SemCom system with the covert communications technique. Next, we introduce the GAI-aided SemCom in Section III and then give the optimization problem solution in Section IV. ## III Multi-modal Prompt Mechanism In this section, we introduce the multi-modal prompt mechanism in the GAI-aided SemCom. ### _Semantic Encoder_ Let us consider \(\mathbf{x}_{0}\) as the original image, i.e., \(\mathrm{Img_{s}}\). #### III-A1 Textual Prompt Textual prompts for image generation typically align with image-to-text tasks. The BLIP method [12] innovatively utilizes noisy web data refined through bootstrapping. The fundamental workflow can be outlined as follows: 1. **Multi-modal Mixture of Encoder-Decoder (MED):** This has three operational modes. The _Unimodal Encoder_ functions for images and text independently, using a classification token for text summarization. The _Image-grounded Text Encoder_ merges visual data with text, resulting in a multi-modal representation through an encoder token. Lastly, the _Image-grounded Text Decoder_ transforms images to text using causal self-attention and special tokens for sequence demarcation. 2. **Pre-training Objectives:** MED is tailored with three key objectives. The _Image-Text Contrastive Loss_ aligns image and text features, distinguishing congruent from incongruent pairs. _Image-Text Matching Loss_ acts as a binary classifier to ascertain the alignment of visual and textual inputs. _Language Modeling Loss_ is tasked with producing coherent text from images. 3. **Captioning and Filtering:** A twofold approach to manage web data noise. The _Captioner_ creates synthetic captions for online images, while the _Filter_ removes unreliable original and generated captions. 4. **Data Integration:** The refined image-text combinations merge with human-annotated content, forming an exhaustive dataset for model training. Given an image \(\mathbf{x}_{0}\), the textual description \(\mathbf{t}_{\mathrm{sem}}\) of \(\mathbf{x}_{0}\) is extracted through the MED's _Image-grounded Text Decoder_ model, i.e., \(\mathcal{T}\left(\cdot\right)\), as \(\mathbf{t}_{\mathrm{sem}}=\mathcal{T}\left\{\mathbf{x}_{0}\right\}\). #### III-A2 Visual Prompt GDMs have showcased their prowess in modeling target distributions by mastering a denoising procedure over a spectrum of noise levels [13].
From an arbitrary Gaussian noise map, drawn from the prior \(\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\), an ideal GDM can transform this noisy map into an image sample after \(T\) denoising iterations [8]. A pioneering effort was made by the authors in [13], who introduced a function \(\varepsilon_{\theta}^{t}\left(x_{t}\right)\). This function ingests a noisy image \(\mathbf{x}_{t}\) and predicts the corresponding noise. The GDM optimization involves the loss function \(\left|\varepsilon_{\theta}^{t}\left(\mathbf{x}_{t}\right)-\varepsilon_{a}\right|\), where \(\varepsilon_{a}\) symbolizes the actual noise that was added to \(\mathbf{x}_{0}\) to produce \(\mathbf{x}_{t}\). A significant stride in the realm of denoising is the Denoising Diffusion Implicit Model (DDIM) [14], which stands out due to its deterministic generative process: \[\mathbf{x}_{t-1}\!=\!\sqrt{\alpha_{t-1}}\left(\frac{\mathbf{x}_{ t}-\sqrt{1\!-\!\alpha_{t}}\varepsilon_{\theta}^{t}\left(\mathbf{x}_{t} \right)}{\sqrt{\alpha_{t}}}\!\right)\!+\!\sqrt{1\!-\!\alpha_{t-1}}\varepsilon_ {\theta}^{t}\left(\mathbf{x}_{t}\right), \tag{4}\] and \[q(\mathbf{x}_{t-1}|\;\mathbf{x}_{t},\mathbf{x}_{0})\!=\!\mathcal{N}\!\left(\! \sqrt{\alpha_{t-1}}\mathbf{x}_{0}\!+\!\sqrt{1\!-\!\alpha_{t-1}}\frac{\mathbf{x }_{t}\!-\!\sqrt{\alpha_{t}}\mathbf{x}_{0}}{\sqrt{1\!-\!\alpha_{t}}},\mathbf{0 }\!\right). \tag{5}\] An intriguing aspect of DDIM is the capacity to run its generative procedure in reverse, deterministically retrieving the noise map \(\mathbf{x}_{T}\)[14]. This map can be perceived as the latent encoding for the image \(\mathbf{x}_{0}\). Though the reconstruction accuracy is commendable, the resultant \(\mathbf{x}_{T}\) lacks higher-level semantics expected of a meaningful representation. This observation, combined with insights from [15], led to the exploration of treating \(\mathbf{x}_{T}\) as a visual prompt \(\mathbf{v}_{\mathrm{sem}}\). Thus, with the \(\mathbf{t}_{\mathrm{sem}}\) to catch the high-level semantic information, the conditional DDIM can be employed to encode an image \(\mathbf{x}_{0}\) into the visual prompt \(\mathbf{v}_{\mathrm{sem}}\) to catch the image structure information [15], as demonstrated in (4) as \[\mathbf{x}_{t+1}\!=\!\sqrt{\alpha_{t+1}}\mathbf{f}_{\theta}\left(\mathbf{x}_{ t},t,\mathbf{t}_{\mathrm{sem}}\right)\!+\!\sqrt{1\!-\!\alpha_{t+1}}\varepsilon_{ \theta}\left(\mathbf{x}_{t},t,\mathbf{t}_{\mathrm{sem}}\right), \tag{6}\] where \[\mathbf{f}_{\theta}\left(\mathbf{x}_{t},t,\mathbf{t}_{\mathrm{sem}}\right)= \frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\sqrt{1-\alpha_{t}} \varepsilon_{\theta}\left(\mathbf{x}_{t},t,\mathbf{t}_{\mathrm{sem}}\right) \right). \tag{7}\] Then, an image can be regenerated accurately by using both textual and visual prompts. The visual prompt \(\mathbf{v}_{\mathrm{sem}}\) is defined as \(\mathbf{v}_{\mathrm{sem}}=\mathcal{V}_{T}\left\{\mathbf{t}_{\mathrm{sem}}, \mathbf{x}_{0}\right\}\), where \(\mathcal{V}_{T}\) denotes the diffusion process, i.e., (6), with \(T\) steps. ### _Semantic Decoder_ The purpose of the GDM-based semantic decoder is to use the textual and visual prompts, i.e., \(\mathbf{t}_{\mathrm{sem}}\) and \(\mathbf{v}_{\mathrm{sem}}\), to generate the source image \(\mathbf{x}_{0}\). 
This decoder is a conditional DDIM that models \(p_{\theta}\left(\mathbf{x}_{t-1}\;|\;\mathbf{x}_{t},\mathbf{t}_{\mathrm{sem}}\right)\) to match the noising distribution \(q\left(\mathbf{x}_{t-1}\;|\;\mathbf{x}_{t},\mathbf{x}_{0}\right)\) defined in (5), with the following reverse (generative) process: \[p_{\theta}\left(\mathbf{x}_{0:T}\;|\;\mathbf{t}_{\mathrm{sem}}\right)=p\left( \mathbf{x}_{T}\right)\prod_{t=1}^{T}p_{\theta}\left(\mathbf{x}_{t-1}\;|\; \mathbf{x}_{t},\mathbf{t}_{\mathrm{sem}}\right), \tag{8}\] which, using the predicted clean image \(\mathbf{f}_{\theta}\) from (7) in place of \(\mathbf{x}_{0}\), can be further expressed as \[p_{\theta}\left(\mathbf{x}_{t-1}\;|\;\mathbf{x}_{t},\mathbf{t}_{\mathrm{sem}}\right)=q\left(\mathbf{x}_{t-1}\;|\;\mathbf{x}_{t},\mathbf{f}_{\theta}\left(\mathbf{x}_{t},t,\mathbf{t}_{\mathrm{sem}}\right)\right). \tag{9}\] Training is done by optimizing \[L_{\mathrm{simple}}=\sum_{t=1}^{T}\mathbb{E}_{\mathbf{x}_{0},\varepsilon_{t}} \left[\left\|\varepsilon_{\theta}\left(\mathbf{x}_{t},t,\mathbf{t}_{\mathrm{ sem}}\right)-\varepsilon_{t}\right\|_{2}^{2}\right], \tag{10}\] where \(\varepsilon_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\mathbf{x}_{t}=\sqrt{\alpha_{t}}\mathbf{x}_{0}+\sqrt{1-\alpha_{t}}\varepsilon_{t}\). ## IV GDM-based Resource Allocation Scheme In this section, we present the GDM-based resource allocation scheme for the optimization problem (3). ### _GDM in Optimization_ The application of a conditional GDM facilitates the derivation of an optimal resource allocation scheme [16], i.e., \(\mathbf{r}=\left\{P_{t},P_{j},T\right\}\), in (3). Distinct from traditional backpropagation techniques in neural networks or direct model parameter optimization using deep reinforcement learning, diffusion models incrementally refine the initial distribution by denoising. We introduce the vector \(\mathbf{c}\) to encapsulate the multiple factors that influence the optimal resource allocation scheme, i.e., the condition, as \[\mathbf{c} =\{D_{tw},D_{tr},D_{jw},D_{jr},\alpha_{tw},\alpha_{tr},\alpha_{jw },\alpha_{jr},\] \[\quad\kappa^{2},\varepsilon,\xi_{\mathrm{th}},h_{tw},h_{tr},h_{ jw},h_{jr}\}. \tag{11}\] ### _Scheme Evaluation and Generation Networks_ Introducing the _scheme evaluation network_, denoted as \(Q_{v}\), we can assign a Q-value indicative of the predicted objective function, i.e., \(\mathrm{SSIM}\left(\mathrm{Img}_{\mathrm{s}},\mathrm{Img}_{\mathrm{r}}\right)\), to a condition-resource allocation scheme pair, i.e., \(\mathbf{c}\) and \(\mathbf{r}\). The \(Q_{v}\) network serves as an informative reference for the training of the GDM-based _scheme generation network_, denoted by \(\mathbf{\eta}_{\theta}\). The ideal \(\mathbf{\eta}_{\theta}\) is designed to generate a resource allocation scheme \(\mathbf{r}_{0}\) with the maximal predicted Q-value by progressive denoising with \(\mathbf{c}\) as the condition, commencing from Gaussian noise as in (8). However, the training loss function differs, and can be represented as: \[\operatorname*{arg\,min}_{\mathbf{\eta}_{\theta}}\mathcal{L}_{\mathbf{\eta}}(\theta) =-\mathbb{E}_{\mathbf{r}_{0}\sim\mathbf{\eta}_{\theta}}\left[Q_{v}\left(\mathbf{c},\mathbf{r}_ {0}\right)\right]. \tag{12}\] The scheme evaluation network \(Q_{v}\), in turn, aims to minimize the difference between predicted and actual Q-values.
Therefore, the loss function for \(Q_{v}\) can be formulated as: \[\operatorname*{arg\,min}_{Q_{v}}\mathcal{L}_{Q}(v)=\mathbb{E}_{\mathbf{r}_{0}\sim \mathbf{\eta}_{\theta}}\left[\left\|r(\mathbf{c},\mathbf{r}_{0})-Q_{v}\left(\mathbf{c},\mathbf{r} _{0}\right)\right\|^{2}\right], \tag{13}\] where \(r\) is the actual objective value, \(\mathrm{SSIM}\left(\mathrm{Img}_{\mathrm{s}},\mathrm{Img}_{\mathrm{r}}\right)\), obtained when the resource allocation scheme \(\mathbf{r}_{0}\) is implemented under the condition \(\mathbf{c}\). ## V Numerical Analysis In this section, we demonstrate the feasibility of the proposed GAI-aided secure SemCom system and the effectiveness of the proposed GDM-based resource allocation scheme. In Fig. 3, the impact of increased jamming power on the covert rate (defined as the data rate achieved during covert communications), the DEP, and the BEP is depicted, with \(P_{t}=20\)\(\mathrm{dBW}\) and \(\varepsilon=50\). Positioned within a Cartesian coordinate system with units in meters, the transmitter, warden, receiver, and jammer have coordinates \((3,8)\), \((3,14)\), \((7,10)\), and \((6,8)\), respectively. The path loss exponents are \(\alpha_{tr}=1\), \(\alpha_{tw}=1.2\), \(\alpha_{jw}=\alpha_{jr}=1.7\). The small-scale channel fading coefficients \(h_{tw}\), \(h_{tr}\), \(h_{jw}\), and \(h_{jr}\) follow the \(\alpha\)-\(\mu\) fading model, with parameters \(\alpha=2\) and \(\mu=4\). The modulation scheme adopted is Binary Phase-Shift Keying (BPSK). From Fig. 3, we can observe that as the jamming power increases, there is a consistent decrease in the covert rate and an increase in the BEP. Given the current environmental conditions and the warden's detection threshold, the covert communication requirement is satisfied, i.e., the DEP exceeds the threshold, once the jamming power exceeds approximately \(24\) dBW. However, this results in a BEP greater than or equal to \(10^{-5}\). The test reward curves of the GDM-based and deep reinforcement learning (DRL)-based resource allocation schemes are presented in Fig. 4. The well-trained resource allocation scheme generation network \(\mathbf{\eta}_{\theta}\) determines the number of diffusion steps, transmit power, and jammer power values that enable covert communications under the given condition \(\mathbf{c}\), resulting in a certain BEP. Subsequently, image regeneration is performed. It is shown that the GDM method outperforms the DRL method. Fig. 5 shows the regeneration process using the received multi-modal prompts with different diffusion steps under varying BEPs. From Fig. 5, we can observe that when the BEP is \(10^{-5}\), approximately \(50\) diffusion steps are sufficient to achieve a reasonable image reconstruction quality. However, as the BEP degrades to \(10^{-3}\), the reconstructed images consistently exhibit noise, leading to relatively lower SSIM scores. This further underscores the importance of a well-designed resource allocation scheme. ## VI Conclusion We presented a GAI-aided secure SemCom system to address challenges in computational overhead and data security in semantic communication. By eliminating joint training and employing multi-modal prompts, our approach ensures accurate message reconstruction. With the integration of covert communication techniques, our system enhances the secure transmission of prompts, offering improvements for wireless communication scenarios. Fig. 3: The covert rate, DEP, and the BEP versus the jamming power.
Fig. 5: The regeneration process when the receiver uses the received multi-modal prompts with different BEPs and diffusion steps.
2303.01350
Securing Verified IO Programs Against Unverified Code in F*
We introduce SCIO*, a formally secure compilation framework for statically verified partial programs performing input-output (IO). The source language is an F* subset in which a verified program interacts with its IO-performing context via a higher-order interface that includes refinement types as well as pre- and post-conditions about past IO events. The target language is a smaller F* subset in which the compiled program is linked with an adversarial context that has an interface without refinement types, pre-conditions, or concrete post-conditions. To bridge this interface gap and make compilation and linking secure we propose a formally verified combination of higher-order contracts and reference monitoring for recording and controlling IO operations. Compilation uses contracts to convert the logical assumptions the program makes about the context into dynamic checks on each context-program boundary crossing. These boundary checks can depend on information about past IO events stored in the state of the monitor. But these checks cannot stop the adversarial target context before it performs dangerous IO operations. Therefore linking in SCIO* additionally forces the context to perform all IO actions via a secure IO library, which uses reference monitoring to dynamically enforce an access control policy before each IO operation. We prove in F* that SCIO* soundly enforces a global trace property for the compiled verified program linked with the untrusted context. Moreover, we prove in F* that SCIO* satisfies by construction Robust Relational Hyperproperty Preservation, a very strong secure compilation criterion. Finally, we illustrate SCIO* at work on a simple web server example.
Cezar-Constantin Andrici, Stefan Ciobaca, Catalin Hritcu, Guido Martínez, Exequiel Rivas, Éric Tanter, Théo Winterhalter
2023-03-02T15:30:25Z
http://arxiv.org/abs/2303.01350v3
# Securely Compiling Verified F\({}^{\star}\) ###### Abstract. We propose a secure compilation chain for statically verified partial programs with input-output (IO). The source language is an F\({}^{\star}\) subset in which a verified IO-performing program interacts with its IO-performing context via a higher-order interface that includes refinement types as well as pre- and post-conditions about past IO events. The target language is a smaller F\({}^{\star}\) subset in which the compiled program is linked with an adversarial context via an interface without refinement types or pre- and post-conditions. To bridge this interface gap and make compilation and linking secure we propose a novel combination of higher-order contracts and reference monitoring for recording and controlling IO operations. During compilation we use contracts to convert the logical assumptions the program makes about the context into dynamic checks on each context-program boundary crossing. These boundary checks can depend on information about past IO events stored in the monitor's state, yet these checks cannot stop the adversarial target context _before_ it performs dangerous IO operations. So, additionally, our linking forces the context to perform all IO via a secure IO library that uses reference monitoring to dynamically enforce an access control policy before each IO operation. We propose a novel way to model in F\({}^{\star}\) that the context cannot directly access the IO operations and the monitor's internal state, based on F\({}^{\star}\)'s recent support for flag-based effect polymorphism. We prove in F\({}^{\star}\) that enforcing the access control policy on the context in combination with static verification of the program soundly enforces a global trace property. Moreover, we prove in F\({}^{\star}\) that our secure compilation chain satisfies by construction Robust Relational Hyperproperty Preservation, a very strong secure compilation criterion. Finally, we illustrate our secure compilation chain at work on a simple web server example.
Securely linking a verified program against unverified code is challenging, since not all assumptions made during verification are expressed explicitly, or are even easy to express (e.g., higher-order functions like map over a list in \(\mathrm{F}^{\star}\) or Coq implicitly assume that the passed argument is a pure function, without side effects, which is not something one can enforce for an OCaml caller; and neither are all the OCaml side effects trivial to express in Coq or even \(\mathrm{F}^{\star}\)). This problem remains challenging even if, as we assume in this paper, one is willing to (re)write a used unverified library in \(\mathrm{F}^{\star}\), since strengthening the weak interface of the library by statically verifying it would require a lot more effort and take away the simplicity of just using it, as verification in \(\mathrm{F}^{\star}\) involves user interaction and expertise. An alternative is to add enough dynamic checks to bridge the gap between the strong interface of the verified program and the weak interface of the library. This can still be tedious and error prone though, and as we explain below also challenging. Wouldn't it be great if we could instead systematically insert all the required dynamic checks to formally bridge this interface gap? In this paper we achieve this for verified \(\mathrm{F}^{\star}\) programs with IO (e.g., reading and writing files and network sockets), by looking at this problem through the lens of secure compilation [2, 37]. In particular, we build a secure compilation chain (i.e., a compiler and a linker) that compiles a _partial source program_ with a strong higher-order interface such that it can be securely linked against an arbitrary adversarial _target context_ with a weak interface (e.g., an untrusted library). We see this as an important first step towards a full-fledged formally secure compilation chain from \(\mathrm{F}^{\star}\) to a safe subset of OCaml. The partial source program and the target context can only be linked securely when their interfaces match, which is not the case in our setting because the strong interface of the program contains refinement types and pre- and post-conditions, while the weak interface of the context doesn't.
Refinement types are used in \(\mathrm{F}^{\star}\) to constrain the values of a base type with a logical formula (e.g., the value of an integer is larger than zero). \(\mathrm{F}^{\star}\) is also a dependently-typed language in which the pre- and post-conditions of a function can depend on its arguments and can also specify the function's result. For our verified IO programs the pre- and post-conditions also specify the IO behavior of the function by also considering the trace of past IO events at the time the function was called--each time an IO operation is performed, an event containing the arguments and the result is appended to the trace (e.g., a pre-condition could specify that the file corresponding to a given file descriptor is currently open by looking at the trace for an open event for the file descriptor and making sure that no close event happened afterwards for this file descriptor). Moreover, post-conditions can also take into account the IO events produced by the function itself when it returns (e.g., the function closed all file descriptors it opened) and can also specify that there is a relation between the result value and the return-time trace (e.g., the returned value was read from a file). Intuitively, there are two (non-exclusive) ways of adding dynamic checks to make the partial source program and target context have the same interface: we can weaken the interface of the program or we can strengthen the interface of the context. For the first way, we can weaken the interface of the program by using _higher-order contracts_[17] to wrap the program in new functions with weaker types and add dynamic checks before each function call and after each function return crossing the boundary between the program and the context. Higher-order contracts are fit for enforcing refinement types on arguments and results, as well as pre- and post-conditions of pure functions. Moreover, in our setting the partial program is statically verified, thus we only need to add dynamic checks when the untrusted context passes control and values to the program [50]. Putting higher-contracts on the partial program's interface is, however, not enough for enforcing the pre- and post-conditions specifying the IO behavior of functions, since for a start the contracts would lack information about the IO events that happened before the function was called and during the execution of the function. One can instead use stateful contracts [15, 45, 51] not only on the interface of the partial program but also that of the IO library. Such stateful contracts can record information about the past IO operations, which enables dynamically checking the pre-conditions of functions performing IO. Even if this already goes in the direction of reference monitoring, it would still not be enough, however, to dynamically enforce some of the post-conditions that specify the IO behavior of the context, since checking some post-conditions when the context returns would be too late to enforce that certain IO events should not have happened (e.g., the post-condition of the context may specify that it should not access the passwords file or the network, but if we only detect a violation of this once the context returns, the damage may already be done and the contents of the passwords file may have already been accessed and leaked over the network). 
For the second way to make the two interfaces match, we can strengthen an interface by using a _reference monitor_[7, 24, 43] to enforce an _access control policy_, by performing a dynamic check before each IO operation. The way this works is that the monitor records in its state information about the trace of IO events that happened so far during the execution, and it uses this state to dynamically check whether performing the next IO operation would satisfy the policy and can be allowed or not (e.g., prevent accessing a file descriptor associated with a previously opened password file or network socket). To ensure efficiency our reference monitor needs to be aware of the distinction between program and context, since in our setting the partial program is statically verified, meaning that performing some of the dynamic checks also before the IO operations of the program would be redundant (e.g., the program is statically verified to only write to files that are currently open). In fact, since in our setting only the context is untrusted and adversarial we only want to enforce the access control policy on the context, not on the program, and we generally want the flexibility to choose a more restrictive policy to dynamically enforce on the context than what we statically verify for the program (e.g., in the extreme case we may want to prevent the untrusted context to perform any IO operations without going through the program). While such a reference monitor can be built and is helpful for immediately blocking bad IO operations by the untrusted context and strengthen its interface, reference monitoring is not enough on its own for solving our problem, since the strong interface contains refinement types and some pre- and post-conditions that cannot be enforced at the level of IO operations, but have to be enforced on the boundary between the partial program and the context (e.g., the pre-condition of a callback sent to the context can require that its argument is an open file descriptor). Since this boundary is higher-order, this cannot be handled by the usual reference monitors. We propose a new secure compilation chain that combines reference monitoring for recording and controlling IO events with higher-order contracts that have access to the monitor's state. Together they bridge the interface gap between the verified program and the untrusted context: we use contracts during compilation to weaken the interface of the program and we use monitoring during linking to strengthen the interface of the target context by enforcing an access control policy on its IO operations. We achieve this by linking the target context with a secure IO library that dynamically enforces the policy, while the partial program gets a version of the IO library that only records the IO events in the monitor's state, but performs no dynamic checks. Our compiler also takes advantage of static verification to perform no dynamic checks when the verified program passes control to the context. Another key ingredient of our solution is that we require the post-conditions of the context to be split so that each post-condition is dynamically checked either by the reference monitor on each IO operation of the context, or by the contracts when the context returns to the program. We require the user to decide on this split, and since specifications in F\({}^{\star}\) can be non-executable, to also to provide the access control policy and the extra executable checks needed by the contracts. 
Our framework uses static verification to ensure that the user-provided policy and contract checks are correct, and more generally that the framework adds enough dynamic checks so that the program and the context can interoperate securely. Since this verification is done in F\({}^{\star}\) it takes advantage of its support for SMT automation and interactive proofs [30]. Formally, we defined our source and target language by using shallow embeddings in \(\mathrm{F}^{\star}\). As explained above, the main difference is that in the source the program and the context interact via a strong higher-order interface, while in the target they interact via a weak interface. Compared to the full \(\mathrm{F}^{\star}\) language that has a type-and-effect system supporting user-defined effects, functions in our languages use only one custom effect we defined to model terminating IO computations. We represent our custom effect as a monad indexed by a specification monad, known as a Dijkstra Monad [28]. Our effect is called \(\mathtt{MIO}\), which stands for "Monitored IO", because in addition to the usual IO operations (e.g., reading and writing files and network sockets) it contains a new \(\mathtt{GetTrace}\) operation that returns the reference monitor's state--i.e., the current trace of events. The \(\mathtt{GetTrace}\) operation allows us to implement the dynamic checks inserted by our compiler and to model the enforcement of the access control policy directly in \(\mathrm{F}^{\star}\). We, however, need to prevent the untrusted context from directly calling \(\mathtt{GetTrace}\), which is a powerful reflection construct that reveals secret information about previous IO events, and also from directly calling any IO operations, which would circumvent the access control policy we are trying to enforce. This paper makes the following **contributions**: * We introduce a new secure compilation chain in \(\mathrm{F}^{\star}\) for compiling a statically verified partial IO program and linking it against a context with a weak higher-order interface. We bridge the interface gap between program and context by using a novel combination of higher-order contracts and a reference monitor, which share state recording information about prior IO events. Our compilation chain takes advantage of the static verification guarantees and performs no dynamic checks when the verified program performs IO or when it passes control to the context. One novel idea for combining higher-order contracts and reference monitoring is that we require the post-conditions of the context to be split so that each of them is dynamically checked either by the reference monitor on each IO operation of the context, or by the contracts when the context returns to the program. Finally, we use static verification in \(\mathrm{F}^{\star}\) to make sure that all the parts fit well together and enough dynamic checks are added so that the program and the context can interoperate securely. * We propose the new \(\mathtt{MIO}\) monadic effect, which is at its core a way to statically verify terminating IO programs in \(\mathrm{F}^{\star}\), engineered to take advantage of SMT automation. In addition, the \(\mathtt{GetTrace}\) operation provides a simple abstract model of a reference monitor that hides the implementation details, while still allowing us to implement in \(\mathrm{F}^{\star}\) the dynamic checks done by the contracts and before the context's IO operations. 
At the specification level we distinguish between events produced by the program and those produced by the context, which enables us to write more expressive specifications and to dynamically enforce stronger access control policies on the untrusted context. * To prevent the untrusted context from directly calling the IO operations and \(\mathtt{GetTrace}\) we index the \(\mathtt{MIO}\) monad by an extra flag that controls which operations a computation can access and propose a novel use of flag-based effect polymorphism in \(\mathrm{F}^{\star}\) to model the context. * We prove in \(\mathrm{F}^{\star}\) that enforcing an access control policy on the context in combination with static verification of the program soundly enforces a global trace property. This proof is done entirely by \(\mathrm{F}^{\star}\) typing, a form of program verification that heavily benefits from SMT automation [28; 48]. * We also provide a machine-checked proof in \(\mathrm{F}^{\star}\) that our secure compilation chain satisfies by Robust Relational Hyperproperty Preservation (RrHP), which is the strongest secure compilation criterion of Abate et al. [2], and which is in particular stronger than full abstraction [2; 37]. While proofs of such secure compilation criteria are generally very challenging [2; 37], we have carefully set things up so that our proof is very simple by construction. Such a simple proof is possible in our setting since (1) our languages are shallowly embedded in \(\mathrm{F}^{\star}\); (2) we get back-translation for free from the way we compile higher-order functions; and (3) our compilation and back-translation satisfy a syntactic cancellation law that immediately implies RrHP. * We illustrate that all this works on a case study in which a simple web server is statically verified in F* and linked against potentially adversarial request handlers. We furthermore extract both the web server and the handlers to OCaml in order to execute them. First, our case study illustrates that using our MO monadic effect simplifies the static verification process by taking advantage of SMT automation. Second, while only the first and most interesting compilation step is formalized and proved secure as described above, the case study still offers empirical evidence that several attacks attempted by adversarial request handlers written in our target language are blocked by the dynamic checks of either the higher-order contracts or the reference monitor, even at the OCaml level. So while securing and formally verifying the remaining steps (including e.g., the extraction mechanism of F*) is out of scope for this paper, the part we do secure and verify here works as expected also at the OCaml level, and should provide a good base for building a larger secure compilation chain that will protect verified programs against arbitrary contexts in a safe subset of OCaml (see SS9). Finally, we also implement a non-adversarial request handler that serves files, which also works as intended. **Outline.** We start by illustrating our key ideas (SS2). The following three sections present the main technical pieces of our work: the MO monadic effect (SS3), our implementation of higher-order contracts (SS4), and our reference monitor (SS5). We then put these pieces together to define our compilation chain and prove it soundly enforces a global safety property and it satisfies RrHP (SS6). We then present our web server case study (SS7) before discussing related (SS8) and future work (SS9). 
**Artifact.** This paper comes with an artifact in F* that contains a formalization of the contributions above. The artifact contains the compilation chain, the mechanized proofs of sound enforcement of a global trace property and of RrHP, as well as the web server case study. The artifact is available at the following URL: [https://github.com/andricicezar/fstar-io](https://github.com/andricicezar/fstar-io) **F* syntax primer.** F*'s syntax roughly follows that of OCaml (val, let, match, etc). Binding occurrences b take the form x:t, declaring a variable x at type t; or #x:t indicating an implicit argument. The syntax \(\lambda b_{1}\ldots b_{n}\!\!\rightarrow\!\!\)t introduces a lambda abstraction (where t ranges over both types and terms), whereas \(b_{1}\!\!\rightarrow\!\!\ldots\!\rightarrow\!\!\!\)b\({}_{n}\!\!\!\rightarrow\!\!\!\)C is the shape of a curried function type with computation type C, describing the effect, result, and specification of the computation. Contiguous binders of the same type are abbreviated by (v\({}_{1}\ldots v_{n}\): t). Refinement types are written b{t}, e.g., x:int{x\(\geq 0\)} represents natural numbers. The squash t type is defined as the refinement _unit{t}, and can be seen as the type of computationally irrelevant proofs of t. As usual, we omit the type in a binding when it can be inferred; and for non-dependent function types, we omit the variable name, e.g., #a:Type \(\rightarrow\) #m:nat \(\rightarrow\) #n:nat \(\rightarrow\) vec a m \(\rightarrow\) vec a n \(\rightarrow\) vec a (m+n) represents the append function on vectors, where the two explicit arguments and the return type depend on the three implicit arguments. A type-class constraint {| d : c t1 ... tn |} is a special kind of implicit argument, solved by a tactic during elaboration. They can also be provided explicitly. We generally omit universe annotations. Type0 is the lowest universe of F*; we also use it to write propositions, including \(\top\) (True) and \(\bot\) (False). Inl? t tests whether t is of the shape Inl u. Binary operations can be used in infix notation by using backticks: x `op` y stands for op x y. ## 2. Key ideas in action We illustrate the key ideas on a running example inspired by our case study (§7). The example consists of a simple verified web server that takes as argument an untrusted request handler. In Figure 1, one can see the type and parts of the implementation of this simple web server in F*, written and verified using the MIO monadic effect. We highlight the following interesting specifications: 1. The post-condition of the web_server ensures that it responds to all accepted clients. It does this either via the handler or, if that fails, by responding itself with an error (400). 2. The strong higher-order type of the request handler (req_handler) is particularly interesting. The first two arguments of the handler are a file descriptor client and a buffer req that is guaranteed to contain a valid HTTP request, which we express using a refinement type. The handler also has as pre-condition that no response has been sent yet to the most recently accepted client. 3. The third argument the handler receives from the web_server is a callback send that expects as argument a buffer that is a valid HTTP response and requires that no response has been sent yet to the most recently accepted client--i.e., the same pre-condition as the handler. The concrete callback we pass in for send is statically verified and simply writes the response to the client. 4.
The handler ensures that it wrote at least once to the file descriptor it got as argument or it otherwise returns an exception value (tagged with Inr). 5. The handler also ensures that it only opens files from a specific folder, reads only from its own opened files, and closes only them. It also ensures that during its execution only the trusted web_server writes files (so only when the handler calls the send callback). Any other IO operation attempted by the handler or the web_server is not permitted by the post-condition.

Figure 1: Running example in \(\mathsf{F}^{\star}\): web server that takes as argument a request handler. The types are simplified.

These specifications describe how the web server and the handler should interact. It would be insecure to naively compile this web server by just erasing all specifications and to naively link it with an untrusted handler with the following weak type: \(\mathsf{type\,weak\_req\_handler}=\mathsf{file\_descr}\rightarrow\mathsf{buffer} \rightarrow(\mathsf{buffer}\rightarrow\mathsf{resexn\,unit})\rightarrow\mathsf{resexn\,unit}\) An untrusted handler with this type could write an invalid HTTP response directly to the client without using send, thus violating specification 5. Therefore, in the presence of an untrusted handler with a weak type the web server has to be protected. Our secure compilation chain uses static verification to enforce specifications 1 and 2, higher-order contracts to enforce specifications 3 and 4, and an access control policy to enforce specification 5. In what follows we present some high-level ideas about our compilation chain; then in §2.1 we present how we weaken the logical assumptions the web server--i.e., the partial program--makes about the handler--i.e., the untrusted context--by using higher-order contracts. Finally, in §2.2 we present how we strengthen the type of a target handler using reference monitoring to enforce the access control policy. First, thanks to static verification we do not have to enforce all five specifications dynamically. Since the web server is verified in \(\mathsf{F}^{\star}\), this ensures that its post-condition is satisfied (1)--provided the handler satisfies its own post-condition (4-5), see next paragraph. It also means that we do not have to enforce anything when the web server passes control to the handler--i.e., the pre-condition of the handler, or that the web server passes a valid HTTP request to the handler (2). Also when the handler calls the callback send, we do not have to enforce the post-condition of send, because send is passed by the web server, meaning it is also statically verified. The post-condition of the handler (4-5) is the most interesting and challenging to enforce because it requires both higher-order contracts and reference monitoring of the IO operations. The first part of the post-condition--i.e., checking whether the handler wrote to the client (4)--can be soundly enforced by a contract when the handler returns, but not using reference monitoring at the level of the IO operations.
The second part of the post-condition 5 specifies the IO behavior of the handler, which cannot be securely checked by a contract only when the handler returns.The post-condition 5 for instance specifies that the handler does not open files outside of a specific folder: if the untrusted handler opens a password file from a different folder we could detect that when it gives back control to the web server, however, this still breaks our post-condition, since the bad event has already happened and it is too late to do anything about it--thus, one has to prevent the violation from happening by using reference monitoring (see SS2.2). So, to be able to enforce post-conditions, we require that they can be split into a dynamic check to be enforced at the end of the function using contracts and the access control policy enforced by the reference monitor. Since specifications in \(\mathbf{F^{\star}}\) are not directly executable, to be able to weaken the web server's assumptions, one has to provide to our compilation chain the dynamic checks to be used inside the higher-order contracts and the access control policy to be enforced by the reference monitor. The provided dynamic checks and the access control policy have to satisfy a few constraints, in particular that they can soundly enforce the logical assumptions the program makes about the context--e.g., if the post-condition is correctly split between a dynamic check and the access control policy (we give more details in SS4). While one can make any arbitrary assumptions about the context, not all of them have sound dynamic checks that are both _precise enough_ and _efficiently enforceable_, thus it becomes a design decision on what assumptions are made and what dynamic checks are picked. For our example, we have to provide a dynamic check that implies the pre-condition of send (3), a dynamic check that implies part of the post-condition of the handler (4), and an access control policy that implies the other part (5). ### Higher-order contracts After deciding which dynamic checks are going to be used, one can start weakening some of the assumptions by wrapping the web server in higher-order contracts. We enforce higher-order contracts by using two dual functions: export that converts strongly-typed source values into weakly-typed target values, and import that converts weakly-typed target values into strongly-typed source values. The import and export functions are based on the types of the values and defined in terms of each other, which is needed for supporting higher-order types (Hanan and Han, 2017). We can weaken the assumption made by the web server using export, which wraps the web server in a new function with no pre- or post-condition, and where the handler is expected to have the specification given by the reference monitor, denoted by \(\pi\) and explained in SS2.2 below (the reason for this \(\pi\) is that, as explained in SS1, we are bridging the interface gap from both sides). 
The type of the exported web server is listed below, where by \(\mathsf{MIO}\)\(\alpha\top\pi\) we denote a computation with no pre-condition respecting the specification \(\pi\): type exported_web_server \(\pi\) = handler:(file_descr \(\rightarrow\) (buffer \(\rightarrow\) \(\mathsf{MIO}\) (resexn unit) \(\top\)\(\pi\)) \(\rightarrow\) \(\mathsf{MIO}\) (resexn unit) \(\top\)\(\pi\)) \(\rightarrow\) \(\mathsf{MIO}\) (resexn unit) \(\top\)\(\pi\) Inside the exported web server, the weakly-typed handler taken as argument is imported to the strong type req_handler. Inside this imported handler, one of the first things we do is to export the strongly-typed arguments to weakly-typed arguments. The export wraps the strongly-typed send into a new weakly-typed function, which enforces the pre-condition of send (3)--the only pre-condition we have to enforce dynamically in the example. Not enforcing the pre-condition of send would be insecure, because it would enable the handler to call the function without satisfying the pre-condition. The pre-condition of send, did_not_respond h, is in fact a bool, so we can directly use it as a dynamic check, and the same is also true for the valid_http_response refinement on the res argument of send. The exported send first runs valid_http_response and did_not_respond: in case the checks succeed, the function send is called and its result is exported and returned, otherwise Inr Contract_failure is returned. This is why the return type of send is resexn unit, meaning that it can fail, as resexn \(\alpha\) stands for the sum type either \(\alpha\) exn. Thus, when a contract fails or when the monitor prevents an IO operation from happening, an error is returned, which allows the partial program and the context to handle the errors. Now that the strongly-typed arguments of the imported handler are exported, one can call the weakly-typed handler with them and obtain a weakly-typed result. In our case, the result is just resexn unit, thus the import is just the identity function. However, before returning this as the result of the imported handler, we have to check if the post-condition holds--i.e., if the handler wrote at least once to the client--by using the following dynamic check: let check_handler_post (client,req) h res lt : bool = if Inl? res then wrote_at_least_once client lt else true Since this dynamic check implies post-condition 4 and the weakly-typed handler already satisfies post-condition 5, it means that the imported handler will satisfy both of them. This concludes the dynamic checks added by the higher-order contracts in this example. ### The reference monitor The only specification that we have to enforce using reference monitoring is post-condition 5 of the handler. The post-condition 5 states that the handler can only open, read, and close its own files, and that the web server is allowed to write during the execution (when the handler calls send). The traces of the MIO monadic effect are informative and one can distinguish between the events produced by the web server and those produced by the handler. This is because the IO operations have an extra argument that can be set to P or C, a bit of information that exists on every event. This bit of information is used when writing specifications and it allows us to enforce a stronger specification on the handler than on the web server--e.g., the context _cannot_ write to any file descriptors, but the web server can.
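To make the shape of such a wrapper concrete, here is a minimal sketch of the exported send in the style of the paper's listings; the names exported_send and strong_send, the use of get_trace, and the exact effect annotation are our own simplifications, not the artifact's precise definition.

```
(* Sketch: the wrapper obtained by exporting send runs the two dynamic checks
   described above and calls the verified callback only if both succeed. *)
let exported_send (send:strong_send) (res:buffer) : MIO (resexn unit) ⊤ π =
  let h = get_trace () in                        (* history recorded by the monitor *)
  if valid_http_response res && did_not_respond h
  then send res                                  (* checks passed: call the verified send *)
  else Inr Contract_failure                      (* checks failed: no IO is performed *)
```

The else branch performs no IO at all, which is what makes it safe to hand control back to the untrusted handler after a failed check.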
Here we give an example of an access control policy \(\phi\) and its specification \(\pi\) that enforces the post-condition of the handler (5). The difference between the specification and the access control policy is that the access control policy is enforced _only_ on the IO calls of the handler, while the specification characterizes the IO calls of the handler _and_ the IO calls of the web server inside the handler (when the handler calls send). The intuition is that the policy \(\phi\) allows the handler only to open, read, and close its own files, while the specification \(\pi\) says that the trace produced by the handler contains the events allowed by the policy plus the writes done by the web server.
```
val π : policy_spec
let π h caller cmd arg : Type0 =
  match caller, cmd, arg with
  | C, Openfile, fnm → if fnm = "/temp" then ⊤ else ⊥
  | C, Read, fd → is_opened_by_C h fd
  | C, Close, fd → is_opened_by_C h fd
  | P, Write, fd → ⊤
```
```
val φ : policy π
let φ h cmd arg : bool =
  match cmd, arg with
  | Openfile, fnm → if fnm = "/temp" then true else false
  | Read, fd → is_opened_by_C h fd
  | Close, fd → is_opened_by_C h fd
  | _ → false
```
To model that the target context does not call the IO operations directly, but rather calls our secure variants that enforce the access control policy, we can use the flag index of the MIO effect. A flag value is simply an inhabitant of a variant type tflag with four constructors: NoActions, GetTraceActions, IOActions and AllActions. By indexing a computation with an element of this type we can restrict the operations that the computation can perform: (i) NoActions means that no operation can be used in the computation, (ii) GetTraceActions means that the computation can only use the GetTrace operation and no other operation, (iii) IOActions means that the computation can only use IO operations (open, read, etc.) but no GetTrace, (iv) AllActions means that the computation can access any of the operations. We make the target context parametric in the flag, preventing calls to any of the actions of the underlying Dijkstra Monad--as the flag could take the value NoActions--and we refer to this form of effect polymorphism as _flag-based effect polymorphism_. This was inspired by work on effect polymorphism where each action is treated as a separate effect [10; 42], however we do not need that fine level of granularity. Because the target context is parametric in the flag it cannot call the default IO operations, and instead, to perform any IO, it has to call our secure IO operations, which we pass to it as an argument. Because the target context uses the secure IO operations, which enforce the access control policy, we can give the target context a post-condition reflecting this. Having this post-condition around works really well with our shallow embeddings and makes our proofs about the compiler easy. We also have to make the target context parametric in the specification of the access control policy.
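The secure IO operations handed to the context can be pictured as thin wrappers that consult the monitor's state and the policy before dispatching to the underlying operation. The following is a minimal sketch in the spirit of the enforce_policy function used during linking below; the wrapper's body and the loose acts type are our own illustration, not the artifact's definition.

```
(* Sketch: check φ against the trace recorded so far before performing an IO
   operation on behalf of the context; block the call with an error otherwise. *)
let enforce_policy (io_acts:acts) (φ:policy π) : acts =
  λ cmd arg →
    let h = get_trace () in     (* the monitor's state: the trace of past events *)
    if φ h cmd arg              (* does this call respect the access control policy? *)
    then io_acts cmd arg        (* allowed: perform it (its event gets recorded) *)
    else Inr Contract_failure   (* forbidden: return an error, the operation never runs *)
```

This is how the monitor can stop a dangerous IO operation of the context before it happens, rather than only detecting it once the context returns.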
Now we can give as an example the type of the target handler: \[\begin{array}{l}\texttt{type tgt\_handler}=\texttt{fl:erased tflag}\rightarrow\pi\texttt{:erased policy\_spec}\rightarrow\texttt{acts fl }\pi\ \texttt{C}\rightarrow\\ \texttt{file\_descr}\rightarrow\texttt{buffer}\rightarrow(\texttt{buffer}\rightarrow\texttt{MIO (resexn unit) fl }\top\ \pi)\rightarrow\texttt{MIO (resexn unit) fl }\top\ \pi\end{array}\] where \(\texttt{fl}\) is the flag (labeled with erased making it unusable in the computation) and where acts \(\texttt{fl}\ \pi\ C\) is the signature of the IO operations in MIO that use only actions under flag \(\texttt{fl}\) and that enforce an access control policy which has the specification \(\pi\). Note that the GetTrace operation is not part of acts, thus it is not passed to the context. During linking, the target handler is instantiated by passing the flag AllActions, the specification of the access control policy \(\pi\) that the web server expects the handler to satisfy, and IO operations that enforce the access control policy \(\phi\). We can put together what happens during linking with what happens inside the exported web server when importing the handler, and we can define the following back_translate function specialized for the handler. Thus, if one defines a target handler, one can back-translate it and pass it to the source web server. \[\begin{array}{l}\texttt{let back\_translate}(\texttt{handler:tgt\_handler})\ \pi\ \phi\ \texttt{cks}:\texttt{req\_handler}=\\ \texttt{let handler}^{\prime}=\texttt{handler AllActions}\ \pi\ (\texttt{enforce\_policy io\_acts}\ \phi)\ \texttt{in}\\ \texttt{import handler}^{\prime}\ (\texttt{make\_checks\_eff cks})\end{array}\] ## 3. The MIO Dijkstra Monad and Static Verification in F\({}^{\star}\) F\({}^{\star}\) is a functional programming language with a sophisticated type system aimed at program verification. It differs from other verification-enabled languages because in F\({}^{\star}\) an expression has a type _and_ a computational effect. For example, an expression whose inferred type is Tot int is an expression that always terminates, does not perform any effects and returns an integer. F\({}^{\star}\) comes with other primitive effects, but also with multiple mechanisms to define new effects. We will use these mechanisms to write our own effect that allows us to express IO computations. Computations will be terms of the type MIO \(\alpha\) fl pre post, where MIO stands for _monitored IO_. The type arguments can be summarized as follows: \(\alpha\) is the return type of the computation, fl is an index representing which IO actions the computation can use, pre is a pre-condition over the history of events that must be satisfied to be able to call the computation, and post is a post-condition over the result and the trace produced by the current computation. In the following subsections we describe in detail how to obtain the effect \(\mathtt{MIO}\) in \(\mathrm{F}^{\star}\), and give some examples involving static verification. ### The essence of the \(\mathtt{MIO}\) Dijkstra monad To define our effect \(\mathtt{MIO}\), we use a combination of Dijkstra monads and layered effects. We follow Maillard et al.
(2017), and construct a Dijkstra monad by picking three ingredients: i) a computational monad, ii) a specification monad, and iii) a monad morphism from the former to the latter; obtaining thus an effect in which computations are indexed by specifications. We refine this Dijkstra monad by defining a layered effect on top of it. Specifically, we incorporate a flag that restricts the operations that the computation can invoke. In the following paragraphs, we provide a concise summary of the ingredients above, and elaborate them in more detail in the subsequent subsections. For the computational monad, we use a free monad that is parametric in the underlying primitive operations. For simplicity, we define operations for file management and for socket communication, but these account only for one possible instantiation of the monad. Our approach can be extended to other IO operations as needed. Moreover, the free monad can be used to implement other effects such as exceptions, state and non-determinism by extending the signature of effects (Bordes, 2017), and it is future work to integrate these effects in our account. The \(\mathtt{MIO}\) monadic effect uses a specification monad that captures the behavior of a computation as a predicate transformer, i.e., a function that, given a post-condition, returns a pre-condition that is strong enough to guarantee the post-condition after execution of the computation. As explained in the introduction, in our setting a pre-condition is a property of the current trace, and a post-condition is a property of the result and the new trace. When we write computations, we will work directly with the pre- and post-condition, and not with the predicate transformer, but these two approaches are equivalent (Kang et al., 2017). Finally, we need to define a monad morphism from the computation monad to the specification monad. This is done using a translation from computations written in the free monad to a predicate transformer written in the specification monad, following a recipe widely employed in \(\mathrm{F}^{\star}\). Up to this point, we focused on defining a Dijkstra monad that allows statically verifying IO programs via specifications, but we have not mentioned any mechanism to support dynamic checks. To enable dynamic verification, we include a new silent operation called get_trace that returns the trace computed up to the point where it is called. This operation allows writing computations that reflect on past events, which in turn allows writing dynamic checks based on them. ### The computational monad For our \(\mathtt{MIO}\) monadic effect, we chose a computational monad that is parametric in the underlying primitive operations by using a free monad (Bordes, 2017) that can accept any signature of effects (Kang et al., 2017). A signature is described by a type of operations together with the type of arguments (or input) and result type (or output) for each operation: \[\mathtt{type\ op\_sig}\left(\mathtt{op:Type}\right)=\{\mathtt{args:op\to Type;\ res:(cmd:op)\to args\ cmd\to Type}\}\] Typically, computations in free monads are represented as trees whose nodes are operation calls, and whose leaves are the values returned by a computation. For our computations, we additionally allow an extra variant of nodes which contain a pre-condition.
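The concrete definition of the free type is not reproduced in this excerpt; a minimal sketch that is consistent with the description before and after it (the constructor and field names, in particular PartialCall, are our own guesses) could look as follows.

```
(* Sketch of the free monad over a signature s: Return leaves carry values,
   Call nodes perform an operation on behalf of a caller (P or C), and
   PartialCall nodes record a pure pre-condition that must hold at that point. *)
type free (op:Type) (s:op_sig op) (a:Type) =
| Return : a → free op s a
| Call : caller → cmd:op → arg:s.args cmd → cont:(s.res cmd arg → free op s a) → free op s a
| PartialCall : pre:Type0 → cont:(squash pre → free op s a) → free op s a
```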
These (pure) pre-conditions are used to capture extra logical constraints we enforce during computation. The Call constructor comes with a parameter c of the variant type caller that we use to mark whether the computation making the call is the partial program (P) or the context (C). The type constructor free comes with combinators free_return and free_bind which can be shown to form a monad structure on free, independently of the operations and signature. For MIO, the signature we are interested in is given by a type of operations: \(\mathtt{type\ cmds}=|\ \mathtt{Openfile}\ |\ \mathtt{Read}\ |\ \mathtt{Write}\ |\ \mathtt{Close}\ |\...\ |\ \mathtt{GetTrace}\) The first four are natural IO operations (captured by a sub-type io_cmds), while the last one is the operation that allows performing dynamic verification by retrieving the trace history. The three dots indicate the existence of more IO operations, such as those related to network communication that we use in §2, but which we do not write here for the sake of concision. Our implementation enables all IO operations to return errors, as the output type of these operations is defined to be either a result type or an error. For example, the Read operation has as input type file_descr, and as output type either string exn (which we abbreviate as resexn string). The left case of either describes a successful read from the file descriptor, while the right case uses the type exn (of exceptions) to signal an error. The signature for cmds is called mio_sig, and it is defined as the sum of the signatures m_sig (of GetTrace) and io_sig (of commands in io_cmds). The computational monad for MIO is obtained by instantiating free with the signature of these effects: type mio a = free cmds mio_sig a ### The specification monad The role of the specification monad is to give a logical account of the semantics of the computation. We use a predicate transformer semantics, and for that we need to specify how traces are encoded. Events are defined following the signature of the operations. For each IO operation, we have an event constructor capturing the operation name, caller, argument and result of the operation call: \[\mathtt{type\ event}=|\ \mathtt{EOpenfile}:(\mathtt{c:caller})\to\mathtt{a:io\_sig.args}\ \mathtt{Openfile}\to(\mathtt{r:io\_sig.res}\ \mathtt{Openfile}\ \mathtt{a})\to\mathtt{event}\ |\...\] The non-IO operation GetTrace does not have an event associated with it. A trace is a list of events: type trace = list event. We define a type constructor wp that captures transformers from post-conditions to pre-conditions: type wp a = (wp_post : lt:trace \(\to\) r:a \(\to\) Type0) \(\to\) (h:trace \(\to\) Type0) Predicate transformers can be naturally organized as continuation-like monads (Han et al., 2017). Indeed, as in the case of free, the type wp comes equipped with a monadic structure given by combinators wp_return and wp_bind. In addition, a specification monad also needs to come equipped with an order between computations of the same type. For wp, we define this order as follows \[\mathtt{let\ wp\_ord\ wp1\ wp2}=\forall\ \mathtt{h\ post.\ wp1\ post\ h}\implies\mathtt{wp2\ post\ h}\] This order is a form of refinement (Kang and Bettmann, 2017) that allows us to compare specifications by precision. Monadic combinators wp_return and wp_bind are monotone with respect to this order. To form an ordered monad it is also required that we restrict to only those predicate transformers that are monotonic, i.e., which map stronger post-conditions to stronger pre-conditions.
We enforce this last condition by refining the type wp. The monad wp captures a logical semantics of computations (specifically, trivial computations using return and sequencing using bind). However, it remains to specify how the operations (both IO and GetTrace) are understood in this semantics. For that, we define a function \[\mathtt{let\ mio\_wps}\ (\mathtt{c:caller})\ (\mathtt{cmd:mio\_cmds})\ (\mathtt{arg:mio\_sig.args}\ \mathtt{cmd}):\ wp\ (\mathtt{mio\_sig.res}\ \mathtt{cmd}\ \mathtt{arg})=\lambda\mathtt{post\ h}\to\] if GetTrace? cmd then post [] h else (\(\forall\) (r:mio_sig.res cmd arg). post [convert_call_to_event c cmd arg r] r) that maps an operation and its arguments to a specification term. For GetTrace, this function captures the history, and passes it as the result to the post-condition. For IO operations, the post-condition receives a single-event trace corresponding to the IO operation performed, which is converted with the help of the auxiliary function convert_call_to_event. ### The monad morphism, the Dijkstra monad and the layered effect As the third ingredient for constructing a Dijkstra monad, we need a monad morphism from the computation monad mio to the specification monad wp. We call this morphism \(\theta\), as is customary in the literature on Dijkstra monads, and it has type (\(\#\)a:Type) \(\rightarrow\) mio a \(\rightarrow\) wp a. Its definition proceeds mostly by induction on the monadic computation, and uses the term mio_wps we defined above. We have shown that \(\theta\) is indeed a lax monad morphism. As an example of static verification with the resulting effect, consider the following unspecified computation, which reads from a file descriptor and then closes it: let unspecified_read_from_fd (fd:file_descr) : IO (option string) = let data = read fd in close fd; if Inl? data then Some (Inl?.v data) else None The first step to specify our code is to write our computation in MIO. For that, we need to change the calls from read and close to calls that simply wrap io_acts: \[\text{val read : }(\text{fd :file\_descr})\rightarrow\text{MIO}\ (\text{resexn buffer})\ \text{IOActions}\] (requires (\(\lambda\) h \(\rightarrow\) \(\top\))) (ensures (\(\lambda\) h r lt \(\rightarrow\) lt == [ERead true fd r])) \[\text{val close : }(\text{fd :file\_descr})\rightarrow\text{MIO}\ (\text{resexn unit})\ \text{IOActions}\] (requires (\(\lambda\) h \(\rightarrow\) \(\top\))) (ensures (\(\lambda\) h r lt \(\rightarrow\) lt == [EClose true fd r])) We can write predicates that express conditions of interest. As an example, we use the predicate is_opened_by_C (introduced in §2) as pre-condition, which takes a file descriptor and a trace and tells whether the file descriptor is left open by the context at the end of the trace: let rec is_opened_by_C (h:trace) (fd:file_descr) : bool = match h with | [] \(\rightarrow\) false | EOpenfile C _ (Inl fd') :: tl \(\rightarrow\) if fd = fd' then true else is_opened_by_C tl fd | EClose _ fd' _ :: tl \(\rightarrow\) if fd = fd' then false else is_opened_by_C tl fd | _ :: tl \(\rightarrow\) is_opened_by_C tl fd Its definition works by doing recursion on the trace, taking into account the events that signal when file descriptors were opened or closed.
For the post-condition, we use a predicate that ensures that no open or write operations were made:

let rec no_open_or_write (h:trace) : bool = match h with
  | [] → true
  | EOpenfile _ _ _ :: _ | EWrite _ _ _ :: _ → false
  | _ :: tail → no_open_or_write tail

The final code for read_from_fd together with its proposed specification is the following:

let read_from_fd (fd:file_descr) : MIO (option string) IOActions
  (requires (λ h → is_opened_by_C h fd))
  (ensures (λ h r lt → no_open_or_write lt)) =
  let data = read fd in
  close fd;
  if Inl? data then Some (Inl?.v data) else None

The computation body of read_from_fd is exactly the same as that of unspecified_read_from_fd; however, read_from_fd has a specification that provides information on the behavior of the computation. Notice that no usage of tactics or involved terms is required to make read_from_fd typecheck successfully. F* uses SMT automation to establish that the post-condition holds assuming the pre-condition, and no intervention from the user is needed in this case. Keeping the history backwards is one of the factors that help the SMT solver discharge these statements.

## 4. Higher-order contracts

Because the verified partial program has a strong interface while the unverified context has a weak interface, they cannot simply be linked together, as there would be no guarantee that the context respects the invariants of the partial program. For instance, it could break "value-level" invariants by returning a negative number where a positive one is expected (a common problem solved by higher-order contracts). Furthermore, in our setting, it could fail on its promises related to IO behavior, such as by returning a closed file descriptor where an open one is expected, by not closing all of its temporary file descriptors, or by returning some random string while stating that it comes from a given file. None of these failures can be caught by an IO reference monitor: the context performs completely legitimate IO operations, yet fails to satisfy its intended specification. We implement a variant of higher-order contracts that weakens the strong interface of the verified partial program by dynamically enforcing the assumptions made about the context. Our contract mechanism allows for a higher-order interface between the partial program and the context, and inserts dynamic checks to enforce refinements, pre- and post-conditions as needed. Crucially, these checks have the same structure as our pre- and post-conditions, so that they can also distinguish between the history and the local trace of a computation, and they can also distinguish the events of the partial program from the events of the context. Since the interface is higher-order, the trace of a computation can contain events generated by both the partial program and the context. Because we can access the trace of IO events from the monitor's state, inside the contracts we only have to enforce the necessary dynamic checks; keeping the IO trace updated is the responsibility of the monitor. The dynamic checks are added at the interface level--i.e., only at the boundary between the partial program and the context. Moreover, because the partial program is statically verified, we only have to add checks when the context passes control to the partial program. We now provide some details on the mechanism.
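Before the details, the first-order version of the boundary problem can be illustrated with a hedged OCaml sketch: a context-supplied function promises a non-negative result, and the boundary must check that promise dynamically. The names resexn, Contract_failure, import_nat, and wrap_untrusted are stand-ins we introduce for illustration, not the paper's definitions.

```
(* Results are either a value (Inl) or an exception (Inr), as in the paper's resexn. *)
type 'a resexn = Inl of 'a | Inr of exn
exception Contract_failure

(* The context promises a non-negative int; importing checks that promise. *)
let import_nat (n : int) : int resexn =
  if n >= 0 then Inl n else Inr Contract_failure

(* Wrap an untrusted callback so the partial program only ever sees results
   that satisfy the value-level invariant. *)
let wrap_untrusted (ctx_fun : int -> int) : int -> int resexn =
  fun x -> import_nat (ctx_fun x)
```

The IO-related promises discussed above need the same treatment, except that the checks must additionally consult the trace, which is what the rest of this section sets up.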
### Weak types, exporting, and importing To begin, we must define the set of types that are present in the target language, i.e. the _weak_ types. Since the target language is shallowly embedded into F\({}^{\star}\), these are then a subset of F\({}^{\star}\) types, which allows us to determine them by using a (methodless,trivial) type class (see Figure 2). This type class is indexed by a flag and an access control policy to guarantee that its instances are invariant with respect to them (see e.g., the last instance). The weak type class instances are manually written to reflect our assumptions on the target language. As we are considering an ML-like language, we provide instances for every base (unrefined) type, as well as pairs, sums, options, and (non-dependent) functions. Given the set of target types, we define new type classes for _exporting_ a value from the source to the target and for _importing_ in the opposite direction. The classes are indexed by the _strong_ source type \(\mathsf{styp}\) and contain a _weak_ target type \(\mathsf{wtyp}\). The type \(\mathsf{resen}\) a is defined as either a \(\mathsf{exn}\), hence importing a value may fail if it does not satisfy the proper requirements. These two classes are indexed not only by the source type, but also by the specification \(\pi\) of the access control policy \(\phi\), the actions flag 'fl', and a (structured) collection of dynamic checks 'cks'. The first two indices serve to ensure that higher-order functions are invariant over flags and access control policies. That is, when translating a function that itself receives a function or returns a function, all the arrows must operate over the same fl and \(\pi\). This is enforced by only defining instances that respect this property. This guarantees we do not import functions that are not parametric in these two indices, which would allow the context to break the access policy when doing higher-order. The cks index is a _check-tree_ for the type in question. A check-tree is a binary tree of dynamic checks that encodes how pre- and post-conditions of functions are enforced. Weak types simply use a Leaf here, as there is no interesting check to perform. Formulas in refinement types must be decidable, and do not make use of this structure. For a function with pre- and post-conditions, of type \(\mathsf{a}\rightarrow\mathsf{MIO}\) b pre post, this tree takes the shape 'Node rc left right', where rc is a dynamic check for this arrow (i.e. a function checking whether post indeed holds) and left and right are check-trees for \(\mathsf{a}\) and \(\mathsf{b}\) respectively. For structured, non-arrow types such as tuples or sums, we instead use an 'EmptyNode left right' node, as there is no immediate pre- and post-condition to enforce, but there may be some deeper within the type. Figure 2. The weak type class representing target language types. class exportable (styp:Type) (\(\pi:\text{policy\_spec}\)) (cks:checks) (fl:erasedtflag) = { wtyp:Type; weak_wtyp:weakfl\(\pi\)wtyp; export:eff_checksflcks\(\rightarrow\)styp\(\rightarrow\)wtyp;} class importable (styp:Type) (\(\pi:\text{policy\_spec}\)) (cks:checks) (fl:erasedtflag) = { wtyp:Type; weak_wtyp:weakfl\(\pi\)wtyp; import:wtyp\(\rightarrow\)eff_checksflcks\(\rightarrow\)resexnstyp;} As usual, _exporting_ a (first-order) value from the program into the context mostly amounts to forgetting its properties (such as a refinement). Hence, a nat is exported into type int without modification. 
To import a nat from a context int, a dynamic check must be added. If this check fails, then importing fails with a Contract_failure exception. We use **solve** below to explicitly invoke type class resolution for the c_wtyp fields. The check-tree is unused. instance exportable_natfl:exportablenatLeaffl={ wtyp=int; c_wtyp=solve; export= (\(\lambda\)_cks(n:nat)\(\rightarrow\)n);} instance importable_natfl:importablenatLeaffl={ wtyp=int; c_wtyp=solve; import= (\(\lambda\)(n:int)_cks\(\rightarrow\)ifn\(\geq\)0thenInl(n<:nat)elseInrContract_failure);} As expected, the situation is more complicated for higher-order functions, where exporting and importing make use of each other. The basic idea is that to export a function f, we create a target-level function that imports its argument, calls f, and exports the result back to the target. Essentially, the composition export\(\cdot\)f\(\cdot\)import. Dually, a target function g is imported roughly as import\(\cdot\)g\(\cdot\)export. However, as import can fail (it returns an option type), we also require the codomain of each imported and exported function to "include" exceptions. Below is the instance for exporting simple functions, i.e., of type t1\(\rightarrow\)MOfl(resexnt2) without pre- and post-conditions: instance exportable_simple_arrow (#\(\pi\):policy_spec) (#rcs:(treepck_rc)(EmptyNode?rcs)) (#f:erasedtflag) (t1:Type) { d1 :importable t1 \(\pi\) (leftrcs) fl } (t2:Type) { d2 : exportable t2 \(\pi\) (rightrcs) fl } : exportable (t1 \(\rightarrow\)MOf(resexnt2)fl\(\top\pi\)) \(\pi\)rcs fl = { wtyp=d1.wtyp\(\rightarrow\)MIO(resexnd2.wtyp)fl\(\top\pi\); c_wtyp=solve; export= (\(\lambda\)cks(f:(t1 \(\rightarrow\)MOf(resexnt2)fl\(\top\pi\))) (x:d1.wtyp)\(\rightarrow\) matchd1.importx(leftcks) with |Inferr\(\rightarrow\)Inferr |Infx'\(\rightarrow\)matchfx'with |Inferr\(\rightarrow\)Inferr |Infy'\(\rightarrow\)Inl(d2.export(rightcks)y))))))))))))))))))))))}} The function is exported into the type w1\(\(\)\(\rightarrow\)MIOfl((\)resexnw2)\) where w1,w2\(\) are the weak types corresponding to t1,t2\(\) according to their instances. We let F\({}^{\star}\) automatically check that this type is weak too (using the weak_arrow instance shown above). Exporting produces a function that imports its argument, applies f, and exports the result with lnl. If either importing or the function itself fail (returning lnr), the exception is returned instead. The check-tree for this instance must always be an EmptyNode, as, since this function does not have pre-/post-conditions, there is no check attached to it. When importing/exporting its components, the appropriate check-trees are passed by using left and right. ### Enforcing the post-condition when importing To import a function with non-trivial pre- and post-conditions, we must first adjust them through the use of dynamic checks in order to enforce the post-condition before the function returns, strengthening its specification (the same thing we did in SS2.1 when we explained how we import the handler). Here is where the cks check-tree becomes important. The programmer must provide a procedure to check whether that each pre- and post-condition in fact hold. Hence, import and export both receive this dynamic check, alongside a proof that it implies the actual (propositional) pre- or post-condition. 
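Before looking at how these effectful checks are defined, the export·f·import composition described above can be sketched in OCaml. The import_arg, export_res, export_arg, and import_res parameters stand in for the per-type conversions of the corresponding instances; the flag and policy indices and the check-trees of the real definitions are omitted, so this is only an approximation of the mechanism.

```
type 'a resexn = Inl of 'a | Inr of exn

(* export f ~ export_res . f . import_arg, with failure propagation *)
let export_arrow
    (import_arg : 'w1 -> 't1 resexn)
    (export_res : 't2 -> 'w2)
    (f : 't1 -> 't2 resexn) : 'w1 -> 'w2 resexn =
  fun x ->
    match import_arg x with
    | Inr e -> Inr e                      (* the context passed an invalid argument *)
    | Inl x' ->
      (match f x' with
       | Inr e -> Inr e                   (* the function itself failed *)
       | Inl y -> Inl (export_res y))     (* forget the strong typing of the result *)

(* Dually, importing a context function g is roughly import_res . g . export_arg. *)
let import_arrow
    (export_arg : 't1 -> 'w1)
    (import_res : 'w2 -> 't2 resexn)
    (g : 'w1 -> 'w2 resexn) : 't1 -> 't2 resexn =
  fun x ->
    match g (export_arg x) with
    | Inr e -> Inr e
    | Inl y -> import_res y
```

As the text above and below explains, the actual import and export additionally receive the dynamic checks for pre- and post-conditions, together with proofs relating them to the propositional specifications.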
Crucially, the type of this check is also indexed by the flag, which allows the import/export mechanism to be completely parametric in the flag, thus disallowing to import or export functions that are specialized to a particular flag. Let us recall the type req_handler and focus only on the pre- and post-conditions: \begin{tabular}{l l} type req_handler = (client:file_descr) \(\rightarrow\) (req:buffer{valid_http_request req}) \(\rightarrow\) (send: (res:buffer \(\rightarrow\)...)) \(\rightarrow\) \\ MIO (resexn unit) (requires (\(\lambda\) h \(\rightarrow\) did_not_respond h)) \\ (ensures (\(\lambda\) h r l t \(\rightarrow\) (wrote_at_least_once client It \(\vee\) lnr? r) \(\wedge\) handler_only_opens_and_reads_files_from_folder It \(\wedge\) web_server_only_writes It) \\ \end{tabular} As handlers will be written in the target language, their type does not guarantee anything about their IO behavior. Hence, in order to import them into the source language, the post-condition (4) needs to be enforced via a dynamic check. The pre-condition does not need to be dynamically checked, because one can always strengthen the pre-condition of a function using wrapping. To define a pure, specification-level, dynamic check, we use the following type: \begin{tabular}{l l l} type rc_typ (argt rett : Type) = argt \(\rightarrow\) trace \(\rightarrow\) rett \(\rightarrow\) trace \(\rightarrow\) bool \\ \end{tabular} Given the type of the argument and return value of a function, a dynamic check is a predicate over the value of the argument, the history before the call was made, the return value, and the local trace (i.e., the events generated during the activation of the function). Crucially, the result is in bool, as this check must be computable. For the handler, the dynamic check in question is the one presented in SS2.1, check_handler_post. From this pure dynamic check, we must obtain a computation that performs it on the concrete trace. These are represented by the type eff_rc_typ below. Checking a post-condition must be done in two steps: a "setup" and and an actual check. First, for the setup, we must mark the point in history where the instrumented function was called, in order to able to determine its local trace when it returns. This is accomplished by calling the eff_rc_typ function itself: it will return a dependent pair composed of the captured current trace and a second function, an actual check for the post-condition. This second check must be called once the function returns, and will enforce that the post-condition holds. To call such a function on the result y, we must be in a history that is at least as advanced as the one captured, hence the pre-condition. The check then returns a boolean, and guarantees this boolean is exactly equal to the pure check over the correct initial history and local trace. (\(\star\) _Effectful dynamic check for the result, given initial value and initial history \(\star\))_ \begin{tabular}{l l} type eff_rc_typ\_cont (f:erased tflag) (t\({}_{1}\) z :Type) (rcrcrc_typ t\({}_{1}\) t\({}_{2}\)) (xt1) (initial_h:erased trace) = \\ y:t\({}_{2}\)\(\rightarrow\) MIO bool f (\(\lambda\) h \(\rightarrow\) initial_h'suffix_of' h) \\ (\(\lambda\) current_h b It \(\rightarrow\) \\ let the_lt = initial_h 'trace_subtract' current_h in (\(\star\) obtain local trace since initial_h \(\star\)) \\ It == [] (\(\star\) _no events generated by this check \(\star\)_) \\ \(\wedge\) (b \(\Longleftrightarrow\) rc x initial_h y the_lt)) (\(\star\) the result is the same as the pure rc. 
\(\star\)) \\ \end{tabular} (\(\star\) _Effectful dynamic check: given an x:t\({}_{1}\) returns an erased trace and an effectful check for the result (t\({}_{2}\) )_ ``` typeeff_rc_typ(f!erasedtflag)(#t1#t2:Type)(rc:rc_typt1t2)= x:t1\(\rightarrow\)MIO(initial_h-(erasedtrace)&eff_rc_typ_contf1t1t2rcxinitial_h) fl(\(\lambda\rightarrow\tau\)) (\(\lambda\tilde{h}\left(\lambda\text{initial\_h,\_}\right)\)lt=h==reveinitial_h\(\wedge\)lt==[]) ``` Converting a pure dynamic check into an effectful check is straightforward: ``` valmake_check_eff:(#argt*ret:Type)\(\rightarrow\)rc:rc_typargtret\(\rightarrow\)eff_rc_typAllActionsrc letmake_check_eff#argt#retrcx= letinitial_h=get_traceP in letcont:eff_rc_typ_contAllActionsargtretrcx(hideinitial_h)=\(\lambda\)(y:rett)\(\rightarrow\) letcurrent_h=get_traceP in letlt=initial_h'trace_subtract'current_h in rcxinitial_hyt in() hideinitial_h,cont() ``` We are now ready to strengthen a function by adding a check to enforce its post-condition. Given a function f (with no pre- nor post-condition), a dynamic check rc, and its effectful counterpart eff_rc, we create the following wrapped variant of f. The wrapped version first "sets up" the effectful check, obtaining the do_check that checks the result. It then calls f, and checks the result, returning an exception if the check fails. ``` letenforce_post(#t1#t2:Type)(#f!erasedtflag)(#:policy_spec) (pre:t1\(\rightarrow\)trace\(\rightarrow\)Type0) (post:t1\(\rightarrow\)trace\(\rightarrow\)rescant1\(\rightarrow\)trace\(\rightarrow\)Type0) (rc:rc_typt1(rescant2))(eff_rc:eff_rc_typf1rc) (c1post:squash(yxhft.prexh\(\wedge\)enforced_locally\(\pi\)hlt\(\Longrightarrow\)postxh(lnrContract_failure)lt)) (c2post:squash(yxhrt.lt.prexh\(\wedge\)enforced_locally\(\pi\)hlt\(\wedge\)rcxhrt\(\Longrightarrow\)postxh(lnrContract_failure)lt)) (f:t1\(\rightarrow\)MIO(rescant2)flT\(\pi\)) :(x:t1\(\rightarrow\)MIO(rescant2)fl(prex)(postx) =x\(\rightarrow\)let([h,do_check])=eff_rcxin letr:rescant2=fxin ifdo_checkrthenrelselnrContract_failure ``` Some constraints have to hold between the post-condition, the dynamic check and the access control policy, to be able to dynamically enforce the post-condition. First, given that the contract can fail--e.g., when enforcing the post-condition--we require that the chosen post-condition accepts as a possible return value lnrContract_failure, this is captured by the c1post constraint. Moreover, the c1post and c2post constraints check if the post-condition is split correctly between a dynamic check and the access control policy enforced by the monitor. They check if the dynamic check does not enforce a property that would be too late to enforce at the end of the computation and if the access control policy and the dynamic check imply the post-condition, provided with the static guarantees related to the pre-condition. ### Enforcing the pre-condition when exporting Let us now recall that, in fact, the argument send of a handler is also a function with a pre-condition, belonging to the partial program. To export a function of this type into the context, we must add a check for its pre-condition, to ensure that it is only called on valid HTTP responses and when no response has been sent to the most recently accepted client (valid_http_responsereq\(\wedge\)did_not_respondh). The mechanism to enforce pre-conditions is similar to the one for post-conditions shown above. The main difference is that checking pre-conditions does not require a setup-check split, and can be done at once. 
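To make the setup/check split concrete, the following hedged OCaml sketch mimics the shape of enforce_post above: the wrapper captures the history at call time, runs the function, recomputes the local trace, and runs the pure check. The mutable trace, the string-typed events, and all names are simplifications we introduce for illustration only, assuming events are only ever prepended to the trace.

```
exception Contract_failure
type 'a resexn = Inl of 'a | Inr of exn

(* The monitor's trace, newest event first, as in the paper. *)
let the_trace : string list ref = ref []

let enforce_post
    (rc : 'a -> string list -> 'b resexn -> string list -> bool)  (* pure check *)
    (f : 'a -> 'b resexn) : 'a -> 'b resexn =
  fun x ->
    let initial_h = !the_trace in                      (* "setup": capture the history *)
    let r = f x in
    let current_h = !the_trace in
    let n = List.length current_h - List.length initial_h in
    let local_lt = List.filteri (fun i _ -> i < n) current_h in  (* events since the call *)
    if rc x initial_h r local_lt then r else Inr Contract_failure
```

The pre-condition case described next is the degenerate variant in which the check runs once, before calling the function, on the history alone.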
To denote a dynamic check for a pre-condition, we reuse the rc_typ type above, using unit for the codomain and providing an empty local trace to the checks. This can be done via the enforce_pre combinator shown below: let trivialize_new_post#a#b(pre:a->trace->bool)(x:a)(post:full_post(resexnb)):full_post(resexnb)=\(\lambda h\ r\ l\to\texttt{if}\ \texttt{pre}\,\texttt{x}\,\texttt{h}\) then posth\(r\)ltelser==(lnr Contract_failure)\(\wedge\)lt==[] let enforce_pre#1#12#1 (pre:trace->Type0)(rc:rc_typt1unit)(eff_rc:eff_rc_typflrc) (post:trace->resexn2->trace->Type0)(#c_pre:squash(Vxh.rcxh()[]==preh)) (f:(t1->MO(resexnt2)flprefpost)) :x:t1->MO(resexnt2)fl(A->T)(trivialize_new_post(A->xh->rcxh()[])x(A->post)) =\(\lambda(\texttt{ct1})\to\) let([h,do_check()]=eff_rcxin) ifdo_check()thenfxelselnr Contract_failure This combinator receives a functionf with pre- and post-conditions. As for post-conditions, we also require a pure, specification-level dynamic check'rc' and an effectful version of it. The c_pre argument represents a pre-condition of enforce_pre stating that, for any historyh, the dynamic check succeeding is a sufficient condition for the pre-condition holding (i.e., it is a correct, perhaps conservative, check). The resulting function's pre-condition is then trivial, and can be called unconditionally. However, its post-condition is relativized to the check being successful, as it is possible that the function will not execute. Operationally, the enforced function is not doing more than performing the check and returning an exception when it fails. Now, to apply this combinator, we need a concrete dynamic check for the pre-condition. Given that did_not_respond returns in fact a boolean, the dynamic check is straightforward, as explained in SS2.1. ## 5. The reference monitor We use reference monitoring to strengthen the weak interface of the target context. The reference monitor records the IO calls of _both_ the partial program and the context, however it enforces an access control policy _only_ on the context. In this section we explain how we use the fact that the context is monitored to strengthen its specification, what are the constraints when choosing the access control policy and how we enforce the access control policy. We can use the fact that the monitor enforces an access control policy on the target context to give a specification to the context. For example, if the access control policy prevents the context to open files, then we can say that "the context does not open files". However, at the specification level, one distinguishes between the events of the partial program and of the context, while the access control policy is called only when the context does an IO operation. Therefore, using the access control policy directly to give specifications would say only half of the story--i.e., it would not specify what the partial program does in case the context calls the partial program. Because of this, our compilation chain requires the access control policy (denoted by \(\phi\) of type policy) and its specification (denoted by \(\pi\) of type policy_spec). 
type policy_spec = trace → caller → cmd:io_cmds → io_sig.args cmd → Type0
type policy (#π:policy_spec) = h:trace → cmd:io_cmds → arg:io_sig.args cmd → r:bool{r ==> π h C cmd arg}

The main difference between the specification and the access control policy is that the access control policy is enforced _only_ on the IO calls of the context, thus it does not have to know who the caller is, because it is always the context, while the specification characterizes the IO calls of the context _and_ the IO calls of the partial program inside the context. The refinement on the returned value of the policy makes sure that the policy implies the specification. Since the specification \(\pi\) has this form, we need a function enforced_locally to encode it as a post-condition. This function takes the history (h) and the local trace (lt), destructs the local trace, and checks that each event satisfies \(\pi\) for the history at which it happened. One can encode \(\pi\) as a post-condition of a computation using this function as follows: MIO \(\alpha\) fl (\(\lambda\,\_\rightarrow\top\)) (\(\lambda\) h lt \(\rightarrow\) enforced_locally \(\pi\) h lt), which we shorten into MIO \(\alpha\) fl \(\top\) \(\pi\).

### Choosing the access control policy

As we saw with the running example in §2, the post-conditions assumed of the context are difficult to enforce dynamically. They have to be split between the access control policy enforced by the monitor and a dynamic check that is enforced by the higher-order contracts. The reference monitor cannot enforce the entire post-condition because it does not enforce the access control policy on function returns. When splitting, one has to pick an access control policy whose specification satisfies, for each post-condition of the context, the following condition, which makes sure that \(\pi\) implies the post-condition:

\(\forall\) x h lt. pre x h \(\wedge\) enforced_locally \(\pi\) h lt \(\Longrightarrow\exists\) r. post x h r lt

At the same time, each callback passed by the partial program to the context also has to satisfy this second condition, which makes sure that the post-condition of the callback does not break the specification \(\pi\):

\(\forall\) h lt r. pre h \(\wedge\) post h r lt \(\Longrightarrow\) enforced_locally \(\pi\) h lt

Looking back at the running example (Figure 1), we can see why it would be hard to use the access control policy directly to give a specification to the handler. The post-condition of the handler says that during its execution the web server only does writes (5), a logical assumption which is used by the web server to ensure its own post-condition (1). If one used the access control policy directly in the specifications and also allowed all IO operations of the web server, then the first constraint would be unsatisfiable for the post-condition of the handler. If one used the access control policy and instead blocked all IO operations of the web server, then the constraint would be unsatisfiable for the post-condition of send, because that post-condition states that the web server does one write. These two constraints are enforced by the importable and exportable type classes presented in §4.

### Enforcing the access control policy

To enforce the access control policy, we use the operation get_trace from §3. This operation provides a simple abstract model of a reference monitor that hides the details of an actual monitor implementation, while still allowing us to implement the secure IO operations in F* and reason about them.
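The F* definitions of get_trace and of the policy-enforcing wrapper are given next; as a preview, here is a hedged OCaml sketch of the same idea, with a mutable trace playing the role of the monitor's state. Commands, events, and the policy are simplified to strings and booleans, and all names are placeholders, not the paper's definitions.

```
exception Contract_failure
type 'a resexn = Inl of 'a | Inr of exn

let the_trace : string list ref = ref []          (* monitor state, newest event first *)
let get_trace () = !the_trace

(* Wrap a default IO action so that, on behalf of the context, the policy phi is
   checked against the current trace before the action runs, and the resulting
   event is recorded afterwards. *)
let enforce_policy
    (phi : string list -> string -> bool)          (* decidable access control policy *)
    (io_act : string -> string resexn)             (* default, unmonitored IO action   *)
    : string -> string resexn =
  fun cmd_arg ->
    if phi (get_trace ()) cmd_arg then begin
      let r = io_act cmd_arg in
      the_trace := ("C " ^ cmd_arg) :: !the_trace; (* record the event, caller = context *)
      r
    end
    else Inr Contract_failure
```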
The get_trace operation returns the current trace--i.e., the monitor's state. To generate the secure IO operations passed to the context we use the function enforce_policy, which takes the default IO operations (io_acts presented in SS3.5) and wraps them into a new function that first enforces the access control policy \(\phi\). This allows us to give to the secure IO operations the specification of the access control policy. The IO operations are defined using a single dependent function that takes as arguments the command (e.g., read) and the arguments (e.g., file descriptor) and returns the result of the IO operation (e.g., buffer). Since the secure IO operations are passed to the context, the caller is always set to be the context (denoted by \(C\)). type acts (f:erased tflag) (\(\pi\):policy_spec) (c:caller) = (cmd : io_cnds) \(\rightarrow\) (arg : io_sig.args cmd) \(\rightarrow\) MIO (io_sig.res cmd arg) fI (ensures (\(\lambda\) h r It \(\rightarrow\) enforced_locally \(\pi\) h lt \(\wedge\) (match r with Inr Contract_failure \(\rightarrow\) It == [] r' \(\rightarrow\) It == [convert_call_to_event c cmd arg r']))) val enforce_policy : acts T IOActions \(\rightarrow\)#\(\pi\):policy_spec \(\rightarrow\)\(\phi\):policy \(\pi\)\(\rightarrow\) acts \(\pi\) AllActions \(C\) let enforce_policy io_acts \(\phi\) cmd arg = if \(\phi\) (get_trace ()) cmd arg then io_acts \(C\) cmd arg else (Inr Contract_failure) ## 6. Secure compilation In this section we put all the pieces together and define our secure compilation chain (i.e., languages, compiler, and linker in SS6.1), give semantics to the source and target languages (SS6.2), and provide machine-checked proofs in F\({}^{\star}\) that our secure compilation chain soundly enforces a global trace property (SS6.3) and satisfies by construction a strong secure compilation criterion (SS6.4). ### Secure compilation chain In this subsection we use the definitions from the previous sections to formally define our source and target languages, compiler, and linker--to which we jointly refer as our secure compilation chain. We model our languages using a shallow embedding, which is a common way to express DSLs in F\({}^{\star}\)[40] and also simplifies our proofs. We represent the partial program and the context as functions (since, like in other proof assistants, F\({}^{\star}\) modules are not first-class objects and one cannot reason about them). In our setting, the partial program calls the context first, thus we made the partial program get the context as argument (after the context is instantiated with the flag, etc.). Figure 3. Secure compilation chain. Idealized mathematical notation. Our compilation chain is listed in Figure 3 and explained step by step below. As explained before, the partial program and the context share a higher-order interface. The difference between the source and the target language is that in the source language, the partial program and the context share a _strong_ interface, while in the target language they only share a _weak_ interface. The strong interface (type interface\({}^{S}\)) is a record that contains the type of the context (denoted by \(\mathsf{ctype}\)), the access control policy (denoted by \(\phi\)) and its specification \(\pi\) (explained in SS5), and the dynamic checks (denoted by \(\mathsf{cks}\) and explained in SS4) that are enforced by the higher-order contracts. 
The interface also contains the field \(\mathsf{importable\_type}\) that is a type class constraint which ensures that the specifications from the type of the context (\(\mathsf{ctype}\)) can be enforced using the access control policy \(\phi\) and the dynamic checks \(\mathsf{cks}\). The \(\mathsf{importable\_type}\) constraint also ensures that \(\mathsf{ctype}\) is parametric in the flag. The weak interface (type \(\mathsf{interface}^{T}\)) is a record that contains the type of the context (denoted again by \(\mathsf{ctype}\)), but this time, the field \(\mathsf{weak\_type}\) requires \(\mathsf{ctype}\) to be a weak type (e.g., a function that uses the \(\mathsf{MIO}\) monadic effect, with no pre-condition, that satisfies \(\pi\) and that is parametric in the flag). The interface also includes the access control policy \(\phi\) and its specification \(\pi\), which are used during linking. The type of the source partial program (\(\mathsf{prog}^{S}\)) and of the target context (\(\mathsf{ctx}^{T}\)) can also be found in Figure 3. We made the source partial program and the target context parametric in the flag such that they _cannot_ call \(\mathsf{GetTrace}\) directly. And as shown in SS2.2, we also use flag-based effect polymorphism to prevent the context from directly calling the IO operations. ### Trace-producing semantics We reason about whole programs using a trace-producing semantics. For this, we define a new type \(\mathsf{sem\_trace}\) of complete IO traces (i.e., of terminated executions) together with the final result of the whole program (which is an \(\mathsf{int}\), like in UNIX). We also define trace properties as sets of traces. \(\mathsf{type\_trace}=\mathsf{list\ event}\ast\mathsf{int}\) \(\mathsf{type\_trace}\mathsf{property}=\mathsf{sem\_trace}\rightarrow\mathsf{ Type}0\) The monad morphism used in our Dijkstra Monad, \(\theta\), gives a weakest-pre-condition semantics to the computational monad \(\mathsf{dm\_gmio}\), which we can easily adapt into a trace-producing semantics by applying first a backward predicate transformer and then the pre-/post-condition transformer [28], thus obtaining \(\mathsf{beh\_gmio}\). This allows us to define the following trace-producing semantics function \(\mathsf{beh}\), which gives us the behavior of a whole program, and which we can use in our security theorems below. Because our whole programs are computations in the \(\mathsf{MIO}\) monadic effect, we have to reveal their monadic representation using \(\mathsf{reify}\)[19]. \(\mathsf{val\_beh\_gmio}:\mathsf{dm\_gmio}\mathsf{int\ AllActions}\top\top \mathsf{sem\_trace\_property}\) \(\mathsf{let\_beh\_gmio}\mathsf{ws}\) (\(\mathsf{lt\_res}\)) = \(\forall\)p. \(\theta\)\(\mathsf{ws}\) p [] \(\Longrightarrow\) p \(\mathsf{lt}\)\(\mathsf{res}\) \(\mathsf{val\_beh}:\mathsf{whole}^{T}\rightarrow\mathsf{sem\_trace\_property}\) \(\mathsf{let\_beh}\mathsf{ws}=\mathsf{beh\_gmio}\mathsf{(reify}\mathsf{ws})\) ### Soundness of hybrid enforcement with respect to global trace property We first prove in \(\mathsf{F}^{\star}\) that dynamically enforcing the access control policy \(\pi\) on the context in combination with static verification of the partial program soundly enforces a global trace property \(\psi\), which serves as the post-condition of the program. Formally, we show that the compiled source program linked with the target context respects the trace property \(\psi\): **Theorem 6.1** (Soundness).: \(\forall\)_\(I^{S}\)_\(P^{S}_{I^{S}}\)_\(C^{T}_{I^{S}\downarrow}\). 
\(\mathsf{beh}(C^{T}_{I^{S}\downarrow}[P^{S}_{I^{S}\downarrow}])\subseteq I^{S}\psi\)_ Proof sketch.: The proof of this theorem is intuitive since the partial program is a function that takes the context as argument. The partial program is statically verified to satisfy \(\psi\) and our linking and compilation does not affect this guarantee. Since both languages are shallowly embedded, and since both the higher-order contracts and the reference monitor are internalized in \(\mathrm{F}^{\star}\), this theorem follows simply from linking (i.e., applying) the partial program to the monitored context. Thus, soundness is ensured by \(\mathrm{F}^{\star}\) typing. ### Robust Relational Hyperproperty Preservation (RrHP) We show that the compilation chain robustly preserves relational hyperproperties, which is the strongest secure compilation criterion of Abate et al. (Abate et al., 2018), and which is in particular stronger than full abstraction (Srivastava et al., 2017), as proved by Abate et al. (Abate et al., 2018) for a determinate setting with IO that closely matches ours. Relational hyperproperties (Abate et al., 2018) are a very broad class of security properties that includes trace properties (such as safety and liveness), hyperproperties (Bhattacharya et al., 2018) (such as noninterference), and also properties that relate the behaviors of multiple programs (such as trace equivalence). Robust Relational Hyperproperty Preservation (RrHP) states that for any relational hyperproperty that a collection of source programs robustly satisfy--i.e., the programs satisfy this relational hyperproperty when linked with any source context--then the compilation of these programs will also robustly satisfy the same relational hyperproperty with respect to arbitrary target contexts. Intuitively in order to achieve such a strong criterion, the various parts of the secure compilation chain have to work together to provide enough protection to the compiled program so that target contexts don't have more power to attack a compiled source program than a source context would have against the original source program. Stating RrHP thus also needs the definitions of source contexts (\(\mathrm{ctx}^{S}\)) and source linking (\(\mathrm{link}^{S}\)) from Figure3. The type of source contexts is similar to the ones in the target, but it has an extra argument for the effectful contracts: we have to manually pass the effectful contract to the source context to align the attack capabilities of the context between the target and the source language. In the target language, the higher-order contracts reveal information to the attacker by just calling the wrapped callbacks. In the source language, there is no need to wrap the callbacks the partial program and the context exchange, thus the same kind of attacks would not be possible if the effectful contracts would not be passed manually. In contrast to the target context that takes the effectful contracts from the compiled partial program, the source context takes the effectful contracts from the source linking. We are now ready to state RrHP and we use a property-free characterization that was proposed by Abate et al. 
(Abate et al., 2018) and that is generally better tailored for proofs: **Theorem 6.2** (Robust Relational Hyperproperty Preservation (RrHP)).: \[\forall\;I^{S}\;C^{T}_{I^{S}\downarrow}\;\exists C^{S}_{I^{S}}.\;\forall P^{ S}_{I^{S}}.\;\mathrm{beh}(C^{T}_{I^{S}\downarrow}[P^{S}_{I^{S}}])=\mathrm{beh}(C^{S}_{I^{S}}[P^{S}_{ I^{S}}])\] Proof sketch.: To prove this criterion, one has to create a source context by defining a back-translation that only takes the target context as argument. In our case, we can define back-translation by partially applying import (see Figure3), making it very similar to what compilation does to the target context. These simple definitions of compilation, back-translation, and source linking allow us to prove a syntactic equality (by unfolding the definitions) which makes the proof of the criterion immediate: \(\forall\;I^{S}\;C^{T}_{I^{S}\downarrow}\;P^{S}_{I^{S}}.\;C^{T}_{I^{S} \downarrow}[P^{S}_{I^{S}}]=C^{T}_{I^{S}}\uparrow[P^{S}_{I^{S}}]\) While RrHP is the strongest secure compilation criterion of Abate et al. (Abate et al., 2018), and such criteria are generally very difficult to prove, we have set things up so that it holds by construction, so our proof is very easy, which is not usually the case. This simplicity is possible since (1) our languages are shallowly embedded in \(\mathrm{F}^{\star}\); (2) we get back-translation for free from the way we compile the program, which is reminiscent of higher-order contracts; and (3) our compilation and back-translation satisfy a syntactic cancellation law that immediately implies RrHP. ## 7. Web Server Case Study In order to illustrate what we can achieve with our secure compilation chain, we present the case study of a simple, statically verified web server to be linked against an untrusted request handler. This is essentially a more complete version of our running example from SS2. To test and run our web server we extract it to Ocaml. While we cannot yet claim any security property about the extracted web server, it shows how our security mechanisms are not erased during extraction, which is encouraging for our long-term endeavor of a complete secure compilation chain. ### Static verification of the web server The web server opens a TCP socket and then inside a terminating loop waits for incoming clients. For each client it accepts, it reads the packet it received and then passes it to the linked request handler. To implement the web server we need several IO operations from the Unix library, such as network communication, in the MIO effect. The implementation of the web server is split into several functions which makes things more modular and eases verification by feeding smaller verification conditions to the SMT solver. Compared to the running example, our case study is more complete: the running example assumes the existence of some constants whereas the web server of the case study can even be extracted to OCaml. They are nonetheless very close and the few lemmas we need to develop to verify the running example and the case study are exactly the same, since we use the same specification. These lemmas are essentially properties about the every_request_gets_a_response post-condition that need to be proven by induction on the trace, something that the SMT solver cannot do on its own. Once these facts are proven however, the SMT solver is able to exploit them to prove the specifications of the various parts of our web server without further intervention from the user. 
### Monitoring the request handler

In its specification, the web server expects a request handler of type req_handler as shown in the running example, with very small changes in the specification. To monitor the request handler, we have to make sure that it uses the secure IO operations that enforce the access control policy. The reference monitor enforces the same access control policy \(\phi\) on the request handler as in the running example. As we already discussed in the paper, we do so by passing IO operations that are wrapped such that they enforce the access control policy and update the monitor's state. We give as example a series of adversarial request handlers that would break the specification of the web server if it weren't for the dynamic checks added by the higher-order contracts and the reference monitor. The simplest adversarial handler is the one that does nothing. It breaks specification (4) of the web server, which expects the handler to write to the client at least once.

let handler1 io_acts client req send = Inl 0

When we link the web server with this handler, we fall into the branch of the web server that checks whether the handler ran into a contract failure (i.e., the higher-order contract was violated) and responds with HTTP error 400, ensuring specification (1) of the web server stating that the client gets a response to every request. The second adversarial request handler tries to violate the pre-condition of send. In the case study, the pre-condition of send is simpler, and only requires the length of the message to be smaller than 500 characters. The pre-condition is checked by the higher-order contracts, and it will always fail.

let handler2 io_acts client req send = send (Bytes.create 501ul 10uy)

The following three examples of adversarial request handlers all try to violate the access control policy. handler3 tries to open a file outside of the /temp folder, which is the only folder authorized by the contract; handler4 tries to write directly to the client, bypassing the send function passed by the web server; handler5 tries to use IO operations outside of those authorized, by opening a socket. All of them are prevented from executing by the reference monitor.

let handler3 io_acts client req send = io_acts Openfile ("/etc/passwd",[O_RDWR],0x650)
let handler4 io_acts client req send = io_acts Write (client, (Bytes.create 501ul 10uy))
let handler5 io_acts client req send = io_acts Socket ()

As part of our case study, we also implement a request handler that is not adversarial. It serves files from the /temp folder. This shows that a handler that is not adversarial works as expected.

### Extraction of the web server

To run the web server written in F*, we extract it to OCaml. During extraction, an actual OCaml implementation of the IO operations and GetTrace must be provided. We use a simple implementation that for each IO operation appends an event to the trace after performing the IO, and for the GetTrace operation returns the entire trace. For the request handlers we additionally extract from F* a version of the Unix library that is wrapped such that it enforces the access control policy, does IO, and updates the monitor's state; we then use this library for the request handler instead of the standard Unix library. When running the extracted web server and linking against monitored handlers, we can see that handlers that try to violate the access control policy are indeed interrupted: the browser shows the HTTP error as intended.

## 8. Related Work
**Higher-order contracts.** Findler and Felleisen (Felleisen, 2017) pioneered higher-order contracts, which have been implemented in the Racket programming language (Feldner, 2017, Chapter 8). Higher-order contracts can also be stateful, and several works have explored adding special support for such stateful contracts, e.g., Disney et al. (Disney et al., 2018) propose temporal higher-order contracts, Scholliers et al. (Scholliers et al., 2018) propose computational contracts, and Tov and Pucella (Tov and Pucella, 2018) study stateful contracts for affine types. Higher-order contracts are also an important part of our solution, yet, as explained in §1, combining higher-order contracts with a form of reference monitoring is needed for solving our problem. Not only do we need a form of stateful higher-order contracts to record all prior IO events, but certain post-conditions of the context have to be turned into access control policies enforced on each IO operation by a reference monitor. Moore et al. (Moore et al., 2018) propose an expressive framework for implementing access control monitors for software components based on authorization contracts, which are contracts that manage authority environments associating rights with an execution context. Our setting is in a way simpler, since we only have two components, the verified program and the untrusted context, and our access control policies are enforced only on the IO actions of the context, in a relatively straightforward way. Our setting is, however, different from that of most work on higher-order contracts, since we use higher-order contracts for secure interoperability between a fully statically verified program and untrusted code, and we propose a solution that is itself formally verified. This setting is also different from that of soft contract verification (Tonon et al., 2019; Tonon and Tononon, 2020). Apart from the support given to refinement types and the pre- and post-conditions, our interfaces do not support other dependent types. Osera et al. [35] coined the idea of _dependent interoperability_, which refers to the sound interoperability between a dependently-typed language and a simply-typed language. Dagand et al. [14] proposed a dependent interoperability framework in which one can relate dependent and simple types. Then, one can use those relations to weaken a dependently-typed interface or to strengthen a simply-typed interface of a pure and total program. In comparison, our programs contain the IO effect. For our long-term goal, to have a secure compiler between F\({}^{\star}\) and a safe subset of OCaml, integrating their work into ours could make our strong interfaces more expressive by supporting dependent types--i.e., F\({}^{\star}\) has support for dependent types while OCaml does not.

**Secure compilation.** There is a variety of work on secure compilation [37], but the only work on securely compiling _formally verified_ programs seems to be that of Agten et al. [4] and Strydonck et al. [47], who protect programs verified with separation logic against adversarial contexts using protected module architectures [36; 4; 3] or linear capabilities [47]. They target full abstraction, while we show RrHP and our secure compilation proofs are also machine-checked in F\({}^{\star}\). A bigger difference is that in this work we focus on code that can perform IO, while Agten et al. [4] and Strydonck et al. [47] focus on stateful code.
**Runtime verification.** There is extensive prior work on how to monitor program executions, either by using reference monitoring or by doing instrumentation. Chen et al. [12] present a framework to enable Monitoring Oriented Programming (MOP) for software development and analysis that builds on the Aspect Oriented Programming (AOP) of Java [11; 23]. In MOP, the partial program and the context run inside the Java Virtual Machine, which enables MOP to monitor and instrument the whole program. In their framework, "monitors are automatically synthesized from formal specifications and integrated at appropriate places in the program" and they show how they can add dynamic checks on module boundaries for first-order interfaces. Their framework does not have a mechanized proof of soundness, and they do not prove a secure compilation criterion about their framework as we do. Moreover, the powerful Java Virtual Machine with AOP enabled is a massive and complex project, and for now there are no alternatives for the languages F\({}^{\star}\) extracts to. Nevertheless, we show that it is possible to have a secure compilation chain only by mixing standard higher-order contracts and a reference monitor.

**Static verification of IO.** There is a lot of work on statically verifying _whole_ programs that perform IO [1; 16; 20; 21; 25; 29; 38; 39]. The interaction trees [53] work is the most relevant to us, since later they managed to also define a Dijkstra monad [46]. Interaction trees were shown to be a great fit to verify non-terminating impure computations in Coq. Interaction trees were used to verify an HTTP Key-Value Server that is part of CertiKOS [54]. The web server is written in C and the trace properties were verified in Coq. Coq requires applying tactics manually to prove verification goals. Our work aims to simplify this kind of use case by taking advantage of the SMT automation, and we consider extending our MIO monadic effect with non-termination in future work. However, we also stress that none of these papers address the problem of secure compilation.

**Gradual verification.** Bader et al. [8] and Wise et al. [52] propose gradual program verification to easily combine dynamic and static verification in the same language at a very fine granularity of individual types. A main difference is that our work tries to give a model that combines dynamic and static verification in a source and a target language, at a much coarser granularity: program vs context. The other main difference is that we focus on code performing IO.

**Reasoning about robust safety.** Interoperability between trusted and untrusted code was also studied by Sammler et al. [44], by showing the benefits of low-level sandboxing. This method relies on affine types, and they also have a higher-order contract mechanism going between types and a universal type. They have a similar notion of exposing a wrapped version of the operation to the untrusted side, one that has dynamic checks, but they discuss only robust safety related to the memory model and they do not discuss trace properties.

## 9. Conclusions and Future Work

This paper showed how higher-order contracts and reference monitoring can be used together to securely compile a partial IO program verified in F* and enforce its specification against adversarial code. We see this as an important first step towards a full-fledged formally secure compilation chain from F* to a safe subset of OCaml.
A next step in this direction would be to extend MIO with the other OCaml effects such as non-termination, exceptions, and state. Another interesting future work would be to use parametricity to prove a noninterference theorem formalizing that our flag-based effect-polymorphic context cannot directly call GetTrace or the IO operations. There are, however, at least two challenges to overcome for achieving this: (1) our noninterference statement is significantly more complex than prior work in this space (Bauer and Pretranz, 2015; Bachthauser et al., 2020); and (2) the erased type, its interaction with the primitive Ghost effect of F*, and their parametricity properties would first need to be better formally understood.

###### Acknowledgements.

We thank Stefan Ciobaca and Eric Tanter for the many interesting discussions and helpful feedback, and the PriSC 2023 and ICFP SRC 2020 referees for their helpful reviews. This work was in part supported by the European Research Council under ERC Starting Grant SECOMP (715753), the German Federal Ministry of Education and Research BMBF (grant 16KISK038, project 6GEM), and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of the Excellence Strategy of the German Federal and State Governments - EXC 2092 CASA - 390781972.
2310.19427
Refining Diffusion Planner for Reliable Behavior Synthesis by Automatic Detection of Infeasible Plans
Diffusion-based planning has shown promising results in long-horizon, sparse-reward tasks by training trajectory diffusion models and conditioning the sampled trajectories using auxiliary guidance functions. However, due to their nature as generative models, diffusion models are not guaranteed to generate feasible plans, resulting in failed execution and precluding planners from being useful in safety-critical applications. In this work, we propose a novel approach to refine unreliable plans generated by diffusion models by providing refining guidance to error-prone plans. To this end, we suggest a new metric named restoration gap for evaluating the quality of individual plans generated by the diffusion model. A restoration gap is estimated by a gap predictor which produces restoration gap guidance to refine a diffusion planner. We additionally present an attribution map regularizer to prevent adversarial refining guidance that could be generated from the sub-optimal gap predictor, which enables further refinement of infeasible plans. We demonstrate the effectiveness of our approach on three different benchmarks in offline control settings that require long-horizon planning. We also illustrate that our approach presents explainability by presenting the attribution maps of the gap predictor and highlighting error-prone transitions, allowing for a deeper understanding of the generated plans.
Kyowoon Lee, Seongun Kim, Jaesik Choi
2023-10-30T10:35:42Z
http://arxiv.org/abs/2310.19427v1
Refining Diffusion Planner for Reliable Behavior Synthesis by Automatic Detection of Infeasible Plans ###### Abstract Diffusion-based planning has shown promising results in long-horizon, sparse-reward tasks by training trajectory diffusion models and conditioning the sampled trajectories using auxiliary guidance functions. However, due to their nature as generative models, diffusion models are not guaranteed to generate feasible plans, resulting in failed execution and precluding planners from being useful in safety-critical applications. In this work, we propose a novel approach to refine unreliable plans generated by diffusion models by providing refining guidance to error-prone plans. To this end, we suggest a new metric named _restoration gap_ for evaluating the quality of individual plans generated by the diffusion model. A restoration gap is estimated by a _gap predictor_ which produces _restoration gap guidance_ to refine a diffusion planner. We additionally present an attribution map regularizer to prevent adversarial refining guidance that could be generated from the sub-optimal gap predictor, which enables further refinement of infeasible plans. We demonstrate the effectiveness of our approach on three different benchmarks in offline control settings that require long-horizon planning. We also illustrate that our approach presents explainability by presenting the attribution maps of the gap predictor and highlighting error-prone transitions, allowing for a deeper understanding of the generated plans. ## 1 Introduction Planning plays a crucial and efficient role in tackling decision-making problems when the dynamics are known, including board games and simulated robot control (Tassa et al., 2012; Silver et al., 2016, 2017; Lee et al., 2018). To plan for more general tasks with unknown dynamics, the agent needs to learn the dynamics model from experience. This approach is appealing since the dynamics model is independent of rewards, enabling it to adapt to new tasks in the same environment, while also taking advantage of the latest advancements from deep supervised learning to employ high-capacity models. The most widely used techniques for learning dynamics models include autoregressive forward models (Deisenroth & Rasmussen, 2011; Hafner et al., 2019; Kaiser et al., 2020), which make predictions based on future time progression. Although an ideal forward model would provide significant benefits, there is a key challenge that the accuracy of the model directly affects the quality of the plan. As model inaccuracies accumulate over time (Ross & Bagnell, 2012; Talvitie, 2014; Luo et al., 2019; Janner et al., 2019; Voelcker et al., 2022), long-term planning using imprecise models might yield sub-optimal performances compared to those achievable through model-free techniques. Building upon the latest progress in generative models, recent studies have shown promise in transforming reinforcement learning (RL) problems into conditional sequence modeling, through the modeling of the joint distribution of sequences involving states, actions, and rewards (Lambert et al., 2021; Chen et al., 2021; Janner et al., 2021, 2022). For instance, Diffuser (Janner et al., 2022) introduces an effective framework for generating trajectories using a diffusion model with flexible constraints on the resulting trajectories through reward guidance in the sampling phase. 
Although these approaches have achieved notable performance on long-horizon tasks, they still face challenges in generating outputs with unreliable trajectories, referred to as artifacts, resulting in limited performance and unsuitability for deployment in safety-critical applications. This paper presents an orthogonal approach aimed at enhancing the plan quality of the diffusion model. We first propose a novel metric called _restoration gap_ that can automatically detect whether generated plans are feasible or not. We theoretically analyze that it could detect artifacts with bounded error probabilities under regularity conditions. The restoration gap directly evaluates the quality of generated plans by measuring their restorability through diffusion models in which plans are exposed to a certain degree of noise, as illustrated in Figure 1. A restoration gap is estimated by a function approximator which we name a _gap predictor_. The gap predictor provides an additional level of flexibility to the diffusion model, and we demonstrate its ability to efficiently improve low-quality plans by guiding the reduction of the estimated restoration gap through a process, which we call Restoration Gap Guidance (RGG). Furthermore, we propose a regularizer that prevents adversarial restoration gap guidance by utilizing an attribution map of the gap predictor. It effectively mitigates the risk of the plan being directed towards an unreliable plan, enabling further improvement in the planning performance. The main contributions of this paper are summarized as follows: **(1)** We provide a novel metric to assess the quality of individual plans generated by the diffusion model with theoretical justification. **(2)** We propose a new generative process, Restoration Gap Guidance (RGG), which utilizes a gap predictor that estimates the restoration gap. **(3)** We show the effectiveness of our approach across three different benchmarks in offline control settings.

Figure 1: Illustration of two plans with low/high restoration gaps with a specified start and goal. For each input plan, we first perturb it using Gaussian noise. We then remove the noise from the perturbed plan by simulating the reverse SDE which progressively transforms the perturbed plan into the initial plan by utilizing the score function (Section 2.2). The restoration gap is then computed as the expected \(L_{2}\) distance between the input plan and the plan restored from noise corruption (Section 3). The top example exhibits a smaller restoration gap because of its successful restoration close to the original plan, while the bottom example has a larger restoration gap due to its poor restoration performance. Plans restored from various noise corruptions are differentiated by distinct colors.

## 2 Background

### Planning with Diffusion Probabilistic Models

We consider the reinforcement learning problem which aims to maximize the expected discounted sum of rewards \(\mathbb{E}_{\pi}[\sum_{t=0}^{T}\gamma^{t}r(\mathbf{s}_{t},\mathbf{a}_{t})]\), where \(\pi\) is a policy that defines a distribution over actions \(\mathbf{a}_{t}\), \(\mathbf{s}_{t}\) represents the states that undergo transition according to unknown discrete-time dynamics \(\mathbf{s}_{t+1}=f(\mathbf{s}_{t},\mathbf{a}_{t})\), \(r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is a reward function, and \(\gamma\in(0,1]\) is the discount factor.
Trajectory optimization solves this problem by finding the sequence of actions \(\mathbf{a}_{0:T}^{*}\) that maximizes the expected discounted sum of rewards over planning horizon \(T\):
\[\mathbf{a}_{0:T}^{*}=\operatorname*{arg\,max}_{\mathbf{a}_{0:T}}\mathcal{J}(\mathbf{\tau})=\operatorname*{arg\,max}_{\mathbf{a}_{0:T}}\sum_{t=0}^{T}\gamma^{t}r(\mathbf{s}_{t},\mathbf{a}_{t}), \tag{1}\]
where \(\mathbf{\tau}=(\mathbf{s}_{0},\mathbf{a}_{0},\mathbf{s}_{1},\mathbf{a}_{1},...,\mathbf{s}_{T},\mathbf{a}_{T})\) represents a trajectory and \(\mathcal{J}(\mathbf{\tau})\) denotes the objective value of that trajectory. This trajectory can be viewed as a particular form of two-dimensional sequence data:
\[\mathbf{\tau}=\begin{bmatrix}\mathbf{s}_{0}&\mathbf{s}_{1}&\cdots&\mathbf{s}_{T}\\ \mathbf{a}_{0}&\mathbf{a}_{1}&\cdots&\mathbf{a}_{T}\end{bmatrix}. \tag{2}\]
Diffuser (Janner et al., 2022) is a trajectory planning model, which models a trajectory distribution by employing diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020):
\[p_{\theta}(\mathbf{\tau}^{0})=\int p(\mathbf{\tau}^{N})\prod_{i=1}^{N}p_{\theta}(\mathbf{\tau}^{i-1}|\mathbf{\tau}^{i})\,\mathrm{d}\mathbf{\tau}^{1:N} \tag{3}\]
where \(p(\mathbf{\tau}^{N})\) is a standard Gaussian prior, \(\mathbf{\tau}^{0}\) is a noiseless trajectory, and \(p_{\theta}(\mathbf{\tau}^{i-1}|\mathbf{\tau}^{i})\) is a denoising process which is the reverse of a forward process \(q(\mathbf{\tau}^{i}|\mathbf{\tau}^{i-1})\) that gradually deteriorates the data structure by introducing noise. The denoising process is often parameterized as Gaussian with fixed timestep-dependent covariances: \(p_{\theta}(\mathbf{\tau}^{i-1}|\mathbf{\tau}^{i})=\mathcal{N}(\mathbf{\tau}^{i-1}|\mathbf{\mu}_{\theta}(\mathbf{\tau}^{i},i),\mathbf{\Sigma}^{i})\). Diffuser recasts the trajectory optimization problem as conditional sampling with the conditional diffusion process, under a smoothness condition on \(p(\mathcal{O}_{1:T}=1|\mathbf{\tau})\) (Sohl-Dickstein et al., 2015):
\[\tilde{p}_{\theta}(\mathbf{\tau})=p(\mathbf{\tau}|\mathcal{O}_{1:T}=1)\propto p(\mathbf{\tau})p(\mathcal{O}_{1:T}=1|\mathbf{\tau}),\quad p_{\theta}(\mathbf{\tau}^{i-1}|\mathbf{\tau}^{i},\mathcal{O}_{1:T})\approx\mathcal{N}(\mathbf{\tau}^{i-1};\mathbf{\mu}+\mathbf{\Sigma}g,\mathbf{\Sigma}) \tag{4}\]
where \(\mathbf{\mu}\), \(\mathbf{\Sigma}\) are the parameters of the denoising process \(p_{\theta}(\mathbf{\tau}^{i-1}|\mathbf{\tau}^{i})\), \(\mathcal{O}_{t}\) is the optimality of timestep \(t\) of the trajectory with \(p(\mathcal{O}_{t}=1)=\exp(\gamma^{t}r(\mathbf{s}_{t},\mathbf{a}_{t}))\), and
\[g=\nabla_{\mathbf{\tau}}\log p(\mathcal{O}_{1:T}|\mathbf{\tau})|_{\mathbf{\tau}=\mathbf{\mu}}=\sum_{t=0}^{T}\gamma^{t}\nabla_{\mathbf{s}_{t},\mathbf{a}_{t}}r(\mathbf{s}_{t},\mathbf{a}_{t})|_{(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbf{\mu}_{t}}=\nabla\mathcal{J}(\mathbf{\mu}). \tag{5}\]
Therefore, a separate model \(\mathcal{J}_{\phi}\) can be trained to predict the cumulative rewards of trajectory samples \(\mathbf{\tau}^{i}\). By utilizing the gradients of \(\mathcal{J}_{\phi}\), trajectories with high cumulative rewards can be generated.
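To make the guided sampling of Equations 4–5 concrete, the sketch below shows one reverse step in which the posterior mean is shifted by the scaled return gradient before sampling. It is a minimal illustration rather than Diffuser's actual code; the `denoiser` and `return_model` interfaces and the guidance scale `alpha` are assumptions.

```python
import torch

def guided_denoise_step(tau_i, i, denoiser, return_model, alpha=1.0):
    """One reverse diffusion step with return guidance (cf. Eqs. 4-5).

    tau_i        : (batch, transition_dim, horizon) noisy trajectories at step i
    denoiser     : assumed interface returning the Gaussian posterior mean and
                   (diagonal) variance of p_theta(tau^{i-1} | tau^i)
    return_model : differentiable J_phi predicting the cumulative reward
    alpha        : guidance scale (an illustrative knob, not from the paper)
    """
    mu, var = denoiser(tau_i, i)                 # parameters of p_theta(tau^{i-1} | tau^i)

    # Gradient of the predicted return, evaluated at the posterior mean (Eq. 5).
    mu = mu.detach().requires_grad_(True)
    g = torch.autograd.grad(return_model(mu).sum(), mu)[0]

    # Shift the mean by Sigma * g and sample from the guided Gaussian (Eq. 4).
    noise = torch.randn_like(mu)
    return (mu + alpha * var * g + var ** 0.5 * noise).detach()
```

Repeating this step from \(i=N\) down to \(1\) yields a sample biased towards high-return trajectories, as in Equation 4.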
As part of the training procedure, Diffuser trains an \(\mathbf{\epsilon}\)-model to predict the source noise instead of training \(\mathbf{\mu}_{\theta}\) as it turns out that learning \(\mathbf{\epsilon}_{\theta}\) enables the use of a simplified objective, where \(\mathbf{\mu}_{\theta}\) is easily recovered in a closed form (Ho et al., 2020): \[\mathcal{L}(\theta):=\mathbb{E}_{i,\mathbf{\epsilon},\mathbf{\tau}^{0}}[||\mathbf{\epsilon }-\mathbf{\epsilon}_{\theta}(\mathbf{\tau}^{i},i)||^{2}], \tag{6}\] where \(i\in\{0,1,...,N\}\) is the diffusion timestep, \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is the target noise, and \(\mathbf{\tau}^{i}\) is the trajectory corrupted by the noise \(\mathbf{\epsilon}\) from the noiseless trajectory \(\mathbf{\tau}^{0}\). ### Generalizing Diffusion Probabilistic Models as a Stochastic Differential Equation (SDE) The forward process in diffusion probabilistic models perturbs data structure by gradually adding Gaussian noises. Under an infinite number of noise scales, this forward process over continuous time can be represented as a stochastic differential equation (SDE) (Song et al., 2021): \[\mathrm{d}\mathbf{\tau}=\mathbf{f}(\mathbf{\tau},t)\,\mathrm{d}t+g(t)\,\mathrm{d}\mathbf{ w}, \tag{7}\] where \(t\in(0,1]\) is a continuous time variable for indexing diffusion timestep, \(\mathbf{f}(\mathbf{\tau},t)\) is the drift coefficient, \(g(t)\) is the diffusion coefficient, and \(\mathbf{w}\) is the standard Wiener process. Similarly, the denoising process can be defined by the following reverse-time SDE: \[\mathrm{d}\mathbf{\tau}=[\mathbf{f}(\mathbf{\tau},t)-g(t)^{2}\mathbf{s}_{\theta}(\mathbf{ \tau},t)]\,\mathrm{d}t+g(t)\,\mathrm{d}\bar{\mathbf{w}}, \tag{8}\] where \(\mathbf{\bar{w}}\) is the infinitesimal noise in the reverse time direction and \(\mathbf{s}_{\theta}(\mathbf{\tau},t)\) is the learned score network which estimates the data score \(\nabla_{\mathbf{\tau}}\log p_{t}(\mathbf{\tau})\). This score network can be replaced by the \(\mathbf{\epsilon}\)-model: \[\mathbf{s}_{\theta}(\mathbf{\tau},t)\approx\nabla_{\mathbf{\tau}}\log q(\mathbf{\tau})= \mathbb{E}_{\mathbf{\tau}^{0}}[\nabla_{\mathbf{\tau}}\log q(\mathbf{\tau}|\mathbf{\tau^{0}})] =\mathbb{E}_{\mathbf{\tau}^{0}}\left[-\frac{\mathbf{\epsilon}_{\theta}(\mathbf{\tau},t)}{C _{t}}\right]=-\frac{\mathbf{\epsilon}_{\theta}(\mathbf{\tau},t)}{C_{t}}, \tag{9}\] where \(C_{t}\) is a constant determined by the chosen perturbation strategies. The solution of a forward SDE is a time-varying random variable \(\mathbf{\tau}^{t}\). Using the reparameterization trick (Kingma and Welling, 2014), it is achieved by sampling a random noise \(\mathbf{\epsilon}\) from a standard Gaussian distribution which is scaled by the target standard deviation \(\sigma_{t}\) and shifted by the target mean: \[\mathbf{\tau}^{t}=\alpha_{t}\mathbf{\tau}^{0}+\sigma_{t}\mathbf{\epsilon},\quad\mathbf{ \epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{10}\] where \(\alpha_{t}:[0,1]\to[0,1]\) denotes a scalar function indicating the magnitude of the noiseless data \(\mathbf{\tau}^{0}\), and \(\sigma_{t}:[0,1]\to[0,\infty)\) denotes a scalar function that determines the size of the noise \(\mathbf{\epsilon}\). Depending on perturbation strategies for \(\alpha_{t}\) and \(\sigma_{t}\), two types of SDEs are commonly considered: the Variance Exploding SDE (VE-SDE) has \(\alpha_{t}=1\) for all \(t\); whereas the Variance Preserving (VP) SDE satisfies \(\alpha_{t}^{2}+\sigma_{t}^{2}=1\) for all \(t\). 
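As a small illustration of the simplified objective in Equation 6 together with the perturbation of Equation 10, the sketch below draws diffusion times, perturbs clean trajectories, and regresses the \(\mathbf{\epsilon}\)-model onto the injected noise, written in the continuous-time notation of this section. The helper names (`eps_model`, `alpha_fn`, `sigma_fn`) and the uniform sampling of \(t\) are assumptions for illustration, not the paper's exact training code.

```python
import torch

def diffusion_training_loss(eps_model, tau0, alpha_fn, sigma_fn):
    """Simplified epsilon-prediction objective (Eq. 6), using the
    continuous-time perturbation tau^t = alpha_t * tau^0 + sigma_t * eps (Eq. 10).

    eps_model          : assumed interface, epsilon-network taking (tau^t, t)
    tau0               : (batch, transition_dim, horizon) clean trajectories
    alpha_fn, sigma_fn : noise schedules; a VE-SDE would use alpha_fn(t) == 1
    """
    b = tau0.shape[0]
    t = torch.rand(b, device=tau0.device)             # diffusion times sampled uniformly
    eps = torch.randn_like(tau0)                      # target noise
    a = alpha_fn(t).view(b, 1, 1)
    s = sigma_fn(t).view(b, 1, 1)
    tau_t = a * tau0 + s * eps                        # perturbed trajectories (Eq. 10)
    return ((eps - eps_model(tau_t, t)) ** 2).mean()  # L(theta), Eq. 6
```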
Both VE and VP SDE change the data distribution to random Gaussian noise as \(t\) moves from \(0\) to \(1\). In this work, we describe diffusion probabilistic models within the continuous-time framework using the VE-SDE to simplify notation, as VE/VP SDEs are mathematically equivalent under scale translations (Song et al., 2021). For the VE SDE, the forward process and denoising process are defined by the following SDEs:
\[\text{(Forward SDE)}\,\,\mathrm{d}\mathbf{\tau}=\sqrt{\frac{\mathrm{d}[\sigma_{t}^{2}]}{\mathrm{d}t}}\,\mathrm{d}\mathbf{w} \tag{11}\]
\[\text{(Reverse SDE)}\,\,\mathrm{d}\mathbf{\tau}=\left[-\frac{\mathrm{d}[\sigma_{t}^{2}]}{\mathrm{d}t}\mathbf{s}_{\theta}(\mathbf{\tau},t)\right]\mathrm{d}t+\sqrt{\frac{\mathrm{d}[\sigma_{t}^{2}]}{\mathrm{d}t}}\,\mathrm{d}\mathbf{\bar{w}}. \tag{12}\]

## 3 Restoration Gap

To assess the quality of plans generated by diffusion probabilistic models, we propose a novel metric named _restoration gap_. It aims to automatically detect infeasible plans that violate system constraints. We hypothesize that for feasible plans, even if a certain amount of noise perturbs them, they can be closely restored to their initial plans by diffusion models. This is attributed to the property of temporal compositionality in diffusion planners (Janner et al., 2022), which encourages them to compose feasible trajectories by stitching together any feasible plan subsequences. However, for infeasible plans that obviously fall outside the training distribution because they violate physical constraints, as shown in Figure 5, restoring them to a state near their original conditions is challenging. Based on this intuition, we define the restoration gap of the generated plan \(\mathbf{\tau}\) as follows:
\[\text{perturb}_{\hat{t}}(\mathbf{\tau})=\mathbf{\tau}+\sigma_{\hat{t}}\mathbf{\epsilon}_{\hat{t}},\quad\mathbf{\epsilon}_{\hat{t}}\sim\mathcal{N}(\mathbf{0},\mathbf{I}) \tag{13}\]
\[\text{restore}_{\hat{t},\theta}(\mathbf{\tau})=\mathbf{\tau}+\int_{\hat{t}}^{0}\left[-\frac{\mathrm{d}[\sigma_{t}^{2}]}{\mathrm{d}t}\mathbf{s}_{\theta}(\mathbf{\tau},t)\right]\mathrm{d}t+\int_{\hat{t}}^{0}\sqrt{\frac{\mathrm{d}[\sigma_{t}^{2}]}{\mathrm{d}t}}\,\mathrm{d}\mathbf{\bar{w}} \tag{14}\]
\[\text{restoration gap}_{\hat{t},\theta}(\mathbf{\tau})=\mathbb{E}_{\mathbf{\epsilon}_{\hat{t}}}\left[\|\mathbf{\tau}-\text{restore}_{\hat{t},\theta}(\text{perturb}_{\hat{t}}(\mathbf{\tau}))\|_{2}\right], \tag{15}\]
where \(\hat{t}\in(0,1]\) indicates the magnitude of the applied perturbation, i.e., restore simulates the reverse SDE from time \(\hat{t}\) back to \(0\). The restoration gap measures the expected \(L_{2}\) distance between the generated plan and the plan restored from noise corruption, which is estimated by Monte Carlo approximation. Figure 2 provides empirical evidence supporting our hypothesis. To analyze the effectiveness of the restoration gap, we define artifact plans generated by Diffuser (Janner et al., 2022) as those that involve transitions passing through walls, which are impossible for the agent to follow. We compare the distribution of the restoration gap for both groups, normal plans and artifact plans. The histogram of the restoration gap for normal and artifact plans demonstrates that infeasible artifact plans have larger restoration gap values than normal plans. Therefore, the detection of infeasible artifact plans can be automated by incorporating a statistical test that utilizes the restoration gap and thresholding with a threshold value of \(b>0\):
\[\text{restoration gap}_{\hat{t},\theta}(\mathbf{\tau})>b.
\tag{16}\] To bound the probability of making errors by choosing the specific threshold \(b\), we provide Proposition 1. Let \(\mathbb{H}_{0}\) represent the null hypothesis which assumes that the trajectory \(\mathbf{\tau}\) belongs to the normal set \(\mathcal{T}_{\mathrm{normal}}\), and let \(\mathbb{H}_{1}\) represent the alternative hypothesis which assumes that the trajectory \(\mathbf{\tau}\) belongs to the artifact set \(\mathcal{T}_{\mathrm{artifacts}}\). The following proposition suggests how to choose the threshold \(b\) in order to bound the error probabilities. **Proposition 1**.: _Given \(t\in[0,1]\) and a positive constant \(C,\Delta\), assume that \(\|\mathbf{s}_{\theta}(\mathbf{\tau},t)\|_{2}^{2}\leq C^{2}\) for all \(\mathbf{\tau}\in\mathcal{T}_{\mathrm{normal}}\subset\mathbb{R}^{d}\), and \(\|\mathbf{s}_{\theta}(\mathbf{\tau},t)\|_{2}^{2}\geq(C+\Delta)^{2}\) for all \(\mathbf{\tau}\in\mathcal{T}_{\mathrm{artifacts}}\subset\mathbb{R}^{d}\). If_ \[\Delta\geq\frac{2\sqrt{d}+2\sqrt{d+2\sqrt{-d\cdot\log\delta}-2\log\delta}}{ \sigma_{\hat{t}}}, \tag{17}\] _then setting_ \[b\geq\sigma_{\hat{t}}\left(C\sigma_{\hat{t}}+\sqrt{d}+\sqrt{d+2\sqrt{-d\cdot \log\delta}-2\log\delta}\right) \tag{18}\] _guarantees both type I and type II errors at most \(2\delta\)._ Proof Sketch.: We begin by deriving thresholds \(b_{I}\) and \(b_{II}\) to control type I and type II errors at most \(\delta\), respectively. This is done by decomposing the restoration gap into the outcomes of the score and Gaussian noise. To ensure the control of both type I and type II errors, we examine the condition \(b_{I}\leq b_{II}\) and obtain the conclusion. For the complete proof, see Appendix A. According to Proposition 1, to achieve low error probabilities for both type I (false positives, where normal trajectories are incorrectly classified as artifacts) and type II (false negatives, where artifact trajectories are wrongly identified as normal) errors, it is essential to have a large enough \(\sigma_{\hat{t}}\) to properly satisfy the condition, which implies having a large enough \(\hat{t}\). In practice, we find that setting \(\hat{t}=0.9\) works well. Figure 2: The first and second rows show examples of artifact and normal plans, respectively, generated by Diffuser (Janner et al., 2022) in the Maze2D-Large environment, including a predetermined start and goal. The third row presents the density of realism score (Kynkaänniemi et al., 2019), rarity score (Han et al., 2023), and restoration gap to illustrate the differences in distribution between artifacts and normal plans. Detailed explanation of other metrics is described in Appendix C.2. ## 4 Refining Diffusion Planner ### Restoration Gap Guidance Although Diffuser (Janner et al., 2022) has demonstrated competitive performance against previous non-diffusion-based planning methods by utilizing gradients of return \(\mathcal{J}_{\phi}\) to guide trajectories during the denoising process: \[\mathrm{d}\mathbf{\tau}=[\mathbf{f}(\mathbf{\tau},t)-g(t)^{2}\big{(}\mathbf{s}_{\theta}( \mathbf{\tau},t)+\alpha\nabla\mathcal{J}_{\phi}(\mathbf{\tau})\big{)}]\,\mathrm{d}t+g( t)\,\mathrm{d}\bar{\mathbf{w}}, \tag{19}\] it entirely relies on the ability of a generative model and assumes a perfect data score estimation. For plans with inaccurately estimated scores, the diffusion models could generate unreliable plans that are infeasible to execute and lead to limited performance. 
To address this, it is essential to construct an adjusted score to refine the generative process of the diffusion planner. Therefore, we estimate the restoration gap by training a gap predictor \(\mathcal{G}_{\psi}\) on synthetic diffused data generated through the diffusion process, taking full advantage of its superior generation ability with conditional guidance from gradients of return. Parameters of the gap predictor \(\psi\) are optimized by minimizing the following objective: \[\mathcal{L}(\psi):=\mathbb{E}_{t,\mathbf{\tau}^{0}}[\|\text{restoration gap}_{t, \theta}(\mathbf{\tau}^{t})-\mathcal{G}_{\psi}(\mathbf{\tau}^{t},t)\|^{2}], \tag{20}\] where \(t\in(0,1]\) denotes a continuous time variable for indexing the diffusion timestep, and \(\mathbf{\tau}^{t}\) is the diffused trajectory resulting from \(\mathbf{\tau}^{0}\) at diffusion timestep \(t\). With this gap predictor, we define the Restoration Gap Guidance (RGG) as follows: \[\mathrm{d}\mathbf{\tau}=[\mathbf{f}(\mathbf{\tau},t)-g(t)^{2}\Big{(}\mathbf{s}_{ \theta}(\mathbf{\tau},t)+\alpha\big{(}\nabla\mathcal{J}_{\phi}(\mathbf{\tau})-\beta \nabla\mathcal{G}_{\psi}(\mathbf{\tau},t)\big{)}\Big{)}]\,\mathrm{d}t+g(t)\, \mathrm{d}\bar{\mathbf{w}}, \tag{21}\] where \(\alpha\) is a positive coefficient that scales the overall guidance and \(\beta\) is a positive coefficient that can be adjusted to enforce a small restoration gap for the generated trajectory. ### Attribution Map Regularization Although guiding the diffusion planner to minimize the restoration gap effectively refines low-quality plans (more details in Section 5), this refining guidance could push the plan in an undesirable direction due to the estimation error of the gap predictor during the denoising process. As a result of this estimation error, guiding plans with the sub-optimal gap predictor may result in _model exploitation_(Kurutach et al., 2018; Janner et al., 2019; Rajeswaran et al., 2020), yielding sub-optimal results. To mitigate the issue of adversarial guidance, we present a regularization method that prevents the gap predictor from directing plans in the wrong direction. Inspired by the prior studies which improve the model performance by utilizing attribution maps (Nagisetty et al., 2020; Bertoin et al., 2022), we measure a total variation of the attribution map \(M\) obtained from any input attribution methods \(M=E(\mathcal{G}_{\psi}(\mathbf{\tau},t))\). Each element of the attribution map indicates the extent to which the final prediction is influenced by the corresponding input feature. The rationale of employing the total variation of \(M\) lies in the hypothesis that transitions with excessively high attribution scores are more likely to be outliers. This is because a sequence of transitions within a planned trajectory, rather than a single one, causes a plan to have a high restoration gap. By adding this attribution map regularization, Equation 21 becomes: \[\mathrm{d}\mathbf{\tau}=[\mathbf{f}(\mathbf{\tau},t)-g(t)^{2}\Big{(}\mathbf{s}_{ \theta}(\mathbf{\tau},t)+\alpha\big{(}\nabla\mathcal{J}_{\phi}(\mathbf{\tau})-\beta \nabla\mathcal{G}_{\psi}(\mathbf{\tau},t)-\lambda\nabla\|\nabla M\|\big{)}\Big{)}] \,\mathrm{d}t+g(t)\,\mathrm{d}\bar{\mathbf{w}}, \tag{22}\] where \(\lambda\) is a control parameter given by a positive constant, encouraging the attribution map to have a simple, organized structure while preventing the occurrence of adversarial artifacts. We refer to this modification as RGG+. 
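A minimal sketch of how one discretized reverse-SDE step with the RGG+ guidance of Equation 22 could look, assuming an Euler–Maruyama discretization, plain input gradients of the gap predictor as the attribution method \(E\), and illustrative interfaces for the score network, return model, and gap predictor; it is not the exact implementation.

```python
import torch

def rgg_plus_step(tau, t, t_next, score_net, J_phi, G_psi, sigma2_fn,
                  alpha=1.0, beta=1.0, lam=1.0):
    """One discretized (Euler-Maruyama) reverse-SDE step with RGG+ guidance (Eq. 22).

    score_net : s_theta(tau, t), the learned score network
    J_phi     : differentiable return model
    G_psi     : gap predictor estimating the restoration gap
    sigma2_fn : t -> sigma_t^2 of the VE-SDE; t_next < t
    """
    tau = tau.detach().requires_grad_(True)

    # Return guidance, as in Diffuser (Eq. 19).
    grad_J = torch.autograd.grad(J_phi(tau).sum(), tau)[0]

    # Restoration gap guidance; create_graph=True keeps the attribution map
    # (here simply the input gradients of the gap predictor) differentiable.
    grad_G = torch.autograd.grad(G_psi(tau, t).sum(), tau, create_graph=True)[0]
    M = grad_G.abs()                             # attribution map E(G_psi)
    tv = (M[..., 1:] - M[..., :-1]).abs().sum()  # total variation along the horizon
    grad_tv = torch.autograd.grad(tv, tau)[0]    # gradient of the regularizer

    adjusted_score = score_net(tau, t) + alpha * (
        grad_J - beta * grad_G.detach() - lam * grad_tv)

    d_sig2 = sigma2_fn(t) - sigma2_fn(t_next)    # decrease of sigma_t^2 over the step
    noise = torch.randn_like(tau)
    return (tau + d_sig2 * adjusted_score + d_sig2 ** 0.5 * noise).detach()
```

Setting `lam = 0` recovers RGG (Equation 21), and additionally setting `beta = 0` falls back to Diffuser's return-guided sampling.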
Figure 3: Planning performance of RGG+ on Maze2D-Large single-task with varying \(\lambda\) values.

## 5 Experiments

We present the analytical results of approaches to improve planning performance by leveraging guidance from our proposed metric, the restoration gap, for a wide range of decision-making tasks in offline control settings. Specifically, we demonstrate **(1)** the relationship between a high restoration gap and poor planning performance, **(2)** the enhancement of planning performance in the diffusion planner by leveraging restoration gap guidance, and **(3)** explainability by presenting the attribution maps of the learned gap predictor, highlighting infeasible transitions. More information about our experimental setup and implementation details can be found in Appendix C and Appendix D, respectively.

### Relationship between Restoration Gap and Planning Performance

We evaluate how effectively the restoration gap can identify infeasible plans by comparing our metric with the realism score (Kynkäänniemi et al., 2019) and the rarity score (Han et al., 2023). Both prior metrics are designed to assess the quality of generated samples by examining the discrepancy between the generated sample and the real data manifold in the feature space. Figure 4 illustrates the performance of plans chosen up to the top-K% according to each metric. As illustrated in Figure 4, the higher the restoration gap of a plan, the poorer its performance, which implies that the restoration gap captures the quality of the plan better than the other metrics.

**Maze2D Experiments.** Refining infeasible plans matters most in the Large environments, where the complexity of the obstacle maps is higher than in U-Maze or Medium layouts, leading to a higher occurrence of infeasible plans. RGG+ performs on par with or better than RGG. In contrast, model-free algorithms fail to reliably achieve the goal, as Maze2D environments require hundreds of steps to arrive at the goal location.

**Locomotion Experiments.** Gym-MuJoCo locomotion tasks (Fu et al., 2020) are standard benchmarks for evaluating algorithms on heterogeneous data of varying quality. We compare our methods with the model-free algorithms CQL (Kumar et al., 2020) and IQL (Kostrikov et al., 2022); the model-based algorithms MOPO (Yu et al., 2020), MOReL (Kidambi et al., 2020), and MBOP (Argenson and Dulac-Arnold, 2021); the sequence modeling approaches Decision Transformer (DT) (Chen et al., 2021), Trajectory Transformer (TT) (Janner et al., 2021) and Diffuser (Janner et al., 2022); and the pure imitation-based approach behavior cloning (BC). As indicated in Table 1, our approach of refining Diffuser with RGG either matches or surpasses most of the offline RL baselines when considering the average score across tasks. Additionally, it significantly enhances the performance of Diffuser, particularly on the "Medium" dataset. We attribute this improvement to the sub-optimal and exploratory nature of the policy that was used to generate the "Medium" dataset, which results in a challenging data distribution for learning the diffusion planner. Consequently, RGG clearly contributes to the enhancement of planning performance. However, RGG+ only brings about a marginal improvement over RGG. This might be because we adopt the strategy of Janner et al. (2022), using a closed-loop controller and a shorter planning horizon in locomotion environments compared to Maze2D environments, thereby simplifying the learning process of the gap predictor.
Block Stacking ExperimentsThe block stacking task suite with a Kuka iiwa robotic arm is a benchmark to evaluate the model performance for a large state space (Janner et al., 2022) where the offline demonstration data is achieved by PDDLStream (Garrett et al., 2020). It involves two tasks: an unconditional stacking task whose goal is to maximize the height of a block tower, and a conditional stacking task whose goal is to stack towers of blocks subject to a specified order of blocks. We compare our methods with model-free offline reinforcement learning algorithms BCQ (Fujimoto et al., 2019) and CQL (Kumar et al., 2020), and Diffuser (Janner et al., 2022). We present quantitative results in Table 3, where a score of 100 corresponds to the successful completion of the task. The results demonstrate the superior performance of RGG over all baselines, with RGG+ further enhancing this planning performance. ### Injecting Explainability to Diffusion Planners The explainability of decision-making models is particularly important in control domains as they could potentially harm physical objects including humans (Kim and Choi, 2021; Lee et al., 2023; Beechey et al., 2023; Kim et al., 2023; Kenny et al., 2023). Training the gap predictor enables the diffusion planner to have explainability. Diffusion planners often generate trajectories with unreliable transitions resulting in execution failures. Attribution maps from the gap predictor highlight such unreliable transitions by identifying the extent to which each transition contributes to the decision of the gap predictor. Specifically, in Maze2D, the attribution maps emphasize the transitions involving wall-crossing or abrupt directional changes, as illustrated in Figure 5. In the unconditional block stacking task where the robot destroys the tower while stacking the last block, the tower-breaking transitions are highlighted. On the other hand, for successful trajectories on the second and third attribution maps, the attribution maps do not emphasize picking or stacking behaviors. Similarly, in the conditional block stacking task where the robot fails to stack the block, they spotlight the transitions of stacking behaviors. ### Additional Experiments To study the benefit of regularization on harder tasks, characterized by a larger trajectory space and a smaller fraction of the space observed in training, we explore \(\lambda\) values \([0.0,0.5,1.0,3.0,5.0]\) while increasing the planning budget as illustrated in Figure 3. As the planning budget increases, \(\lambda=0\) generates adversarial plans, resulting in decreased performance. In contrast, RGG+ demonstrates \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Environment** & **BCQ** & **CQL** & **Diffuser** & **RGG** & **RGG+** \\ \hline Unconditional Stacking & 0.0 & 24.4 & 53.3 \(\pm\) 2.4 & 63.3 \(\pm\) 2.7 & 65.3 \(\pm\) 2.0 \\ Conditional Stacking & 0.0 & 0.4 & 44.3 \(\pm\) 3.2 & 53.0 \(\pm\) 3.3 & 56.7 \(\pm\) 3.1 \\ \hline **Average** & 0.0 & 8.1 & 48.8 & **58.2** & **61.0** \\ \hline \hline \end{tabular} \end{table} Table 3: The performance of RGG, RGG+, and various prior methods evaluated over 100 planning seeds. A score of 100 is desired, while a random approach would receive a score of 0. effectiveness across a wide range of \(\lambda\) values, with \(\lambda>0\) consistently outperforming \(\lambda=0\) (i.e., better than no regularization). 
Further investigation into the attribution method, perturbation magnitude, and comparison with guidance approaches, including metrics such as the rarity score, the negative realism score, and the discriminator, as well as the visualization of low and high restoration gap plans, can be found in Appendix B. ## 6 Related Work Metrics for Evaluating Generative ModelInception score (IS) (Salimans et al., 2016) and Frechet inception distance (FID) (Heusel et al., 2017) are commonly used as standard evaluation metrics for generative models, assessing the quality of generated samples by comparing the discrepancy between real and generated samples in the feature space. However, these metrics do not distinguish between fidelity and diversity aspects of generated samples. To address this issue, precision and recall variants (Sajjadi et al., 2018; Kynkaanniemi et al., 2019) are introduced to separately evaluate these properties. Subsequently, density and coverage (Naeem et al., 2020) are proposed to overcome some of the drawbacks of precision and recall, such as vulnerability to outliers and computational inefficiency. While these metrics are helpful for evaluating the quality of a set of generated samples, they are not suitable for ranking individual samples. In contrast, realism score (Kynkaanniemi et al., 2019) and rarity score (Han et al., 2023) offer a continuous extension of improved precision and recall, enabling the evaluation of individually generated sample quality. Despite their usefulness, Figure 5: Attribution maps for trajectories generated by diffusion planner highlight transitions that have a substantial contribution to the estimation of a high restoration gap by the gap predictor, indicated in red. these methods come with limitations as they rely on real samples for precise real manifold estimation, whereas our restoration gap does not have such a constraint. Diffusion Model in Reinforcement LearningDiffusion models have gained prominence as a notable class of generative models, characterizing the data generation process through iterative denoising procedure (Sohl-Dickstein et al., 2015; Ho et al., 2020). This denoising procedure can be viewed as a way to parameterize the gradients of the data distribution (Song and Ermon, 2019), linking diffusion models to score matching (Hyvarinen and Dayan, 2005) and energy-based models (EBMs) (LeCun et al., 2006; Du and Mordatch, 2019; Nijkamp et al., 2019; Grathwohl et al., 2020). Recently, diffusion models have been successfully applied to various control tasks (Janner et al., 2022; Urain et al., 2023; Ajay et al., 2023; Chi et al., 2023; Liang et al., 2023). In particular, Diffuser (Janner et al., 2022) employs an unconditional diffusion model to generate trajectories consisting of state-action pairs. The approach includes training a separate model that predicts the cumulative rewards of noisy trajectory samples, which then guides the reverse diffusion process towards high-return trajectory samples in the inference phase, analogous to classifier-guided sampling (Dharwal and Nichol, 2021). Building upon this, Decision Diffuser (Ajay et al., 2023) extends the capabilities of Diffuser by adopting a conditional diffusion model with reward or constraint guidance to effectively satisfy constraints, compose skills, and maximize return. 
Meanwhile, AdapDiffuser (Liang et al., 2023) enhances generalization ability of the diffusion model to unseen tasks by selectively fine-tuning it with high-quality data, derived through the use of hand-designed reward functions and an inverse dynamics model. In contrast, in this work, we focus on evaluating the quality of individually generated samples and explore ways to enhance planning performance by utilizing guidance derived from these evaluations. Restoring Artifacts in Generative ModelsRecently, several studies have concentrated on investigating the artifacts in Generative Adversarial Networks (GAN) model architectures for image generation tasks. GAN Dissection (Bau et al., 2019) explores the internal mechanisms of GANs, focusing on the identification and removal of units that contribute to artifact production, leading to more realistic outputs. In a subsequent study, an external classifier is trained to identify regions of low visual fidelity in individual generations and to detect internal units associated with those regions (Tousi et al., 2021). Alternatively, artifact correction through latent code manipulation based on a binary linear classifier is proposed (Shen et al., 2020). Although these methods can assess the fidelity of individual samples, they still necessitate additional supervision, such as human annotation. To address this limitation, subsequent works explore unsupervised approaches for detecting and correcting artifact generations by examining local activation (Jeong et al., 2022) and activation frequency (Choi et al., 2022). In contrast, our work primarily focuses on refining the generative process of diffusion probabilistic models to restore low-quality plans. ## 7 Conclusion We have presented a novel refining method that fixes infeasible transitions within the trajectory generated by the diffusion planner. This refining process is guided by a proposed metric, restoration gap, which quantifies the restorability of a given plan. Under specific regularity conditions, we prove that the restoration gap effectively identifies unreliable plans while ensuring a low error probability for both type I and type II errors. The experimental results, which include enhancement in quantitative planning performance and visualization of qualitative attribution maps, highlight the importance of the refinement method of the diffusion planner. LimitationsWhile the restoration gap guidance effectively enhances the feasibility of plans and consistently improves the planning performance of diffusion models, our method is limited in situations where an offline dataset is provided. Training the diffusion model often requires transition data that uniformly covers the state-action space, the collection of which is a nontrivial and time-consuming task. Future WorkOur analysis of the effectiveness of the restoration gap is currently confined to a relatively simple task, Maze2D (see Figure 2), where we explicitly define normal and artifact plans. The choice of Maze2D is motivated by its suitability for identifying violations of prior knowledge, such as feasible plans not passing through walls. However, as future work, it would be worthwhile to explore the efficacy of restoration gap in more complex tasks, such as the block stacking task. 
## Acknowledgements This work was supported by the Industry Core Technology Development Project, 20005062, Development of Artificial Intelligence Robot Autonomous Navigation Technology for Agile Movement in Crowded Space, funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea) and by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation, No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).
2306.03399
Scalable telomere-to-telomere assembly for diploid and polyploid genomes with double graph
Despite recent advances in the length and the accuracy of long-read data, building haplotype-resolved genome assemblies from telomere to telomere still requires considerable computational resources. In this study, we present an efficient de novo assembly algorithm that combines multiple sequencing technologies to scale up population-wide telomere-to-telomere assemblies. By utilizing twenty-two human and two plant genomes, we demonstrate that our algorithm is around an order of magnitude cheaper than existing methods, while producing better diploid and haploid assemblies. Notably, our algorithm is the only feasible solution to the haplotype-resolved assembly of polyploid genomes.
Haoyu Cheng, Mobin Asri, Julian Lucas, Sergey Koren, Heng Li
2023-06-06T04:29:12Z
http://arxiv.org/abs/2306.03399v1
# Scalable telomere-to-telomere assembly for diploid and polyploid genomes with double graph ###### Abstract Despite recent advances in the length and the accuracy of long-read data, building haplotype-resolved genome assemblies from telomere to telomere still requires considerable computational resources. In this study, we present an efficient _de novo_ assembly algorithm that combines multiple sequencing technologies to scale up population-wide telomere-to-telomere assemblies. By utilizing twenty-two human and two plant genomes, we demonstrate that our algorithm is around an order of magnitude cheaper than existing methods, while producing better diploid and haploid assemblies. Notably, our algorithm is the only feasible solution to the haplotype-resolved assembly of polyploid genomes. The emergence of accurate PacBio High-Fidelity (HiFi) long reads has revolutionized the assembly of large genomes, making high-quality haplotype-resolved assembly a routine procedure [1, 2, 3]. However, HiFi reads are often not long enough to resolve long exact repeats, resulting in fragmented components around repeat-rich regions such as centromeres [4]. Recent advances by Oxford Nanopore Technologies (ONT) have enabled the generation of ultra-long reads, which are approximately 5-10 times longer than HiFi reads though at relatively lower accuracy [5]. The Telomere-to-Telomere (T2T) consortium has demonstrated that with careful manual curation, combining HiFi and ultra-long reads could perfectly reconstruct the haploid CHM13 human genome [6]. Learning from the complete human genome assembly of CHM13, Verkko is a first effort towards automated telomere-to-telomere assembly of diploid samples [7]. It can produce high-quality assembly when parental sequence data are available. However, as we will show later, Verkko does not fully phase a single diploid sample without parental data and thus results in incomplete assembly. It may produce relatively fragmented assembly at lower read coverage and is unable to produce haplotype-resolved assemblies of polyploid samples. Verkko is also compute intensive, making it costly to deploy Verkko to a large number of samples. For the efficient near telomere-to-telomere assembly of diploid and polyploid samples, we developed hifiasm (UL) that tightly integrates PacBio HiFi, ONT ultra-long, Hi-C reads and trio data and produces high-quality assembly in one go. Unlike Verkko that is based on the multiplex de Bruijn graph [8, 9], hifiasm (UL) represents sequences with two string graphs [10] (Fig. 1a). The first string graph is built from HiFi reads (Fig. 1b), the same as the original hifiasm graph [1]. The second string graph is built from ultra-long reads in reduced representation (Fig. 1b-d). Hifiasm (UL) then merges the two graphs to produce the final assembly graph (Fig. 1e). The use of two assembly graphs at different scales separates hifiasm (UL) from other assemblers. To compare hifiasm (UL) with Verkko at a population scale, we evaluated both approaches using 22 human samples selected from the Human Pangenome Reference Consortium (HPRC) [14]. Eleven of these samples were chosen from the Year-1 dataset of the HPRC, while the remaining eleven samples were selected from the Year-2 dataset (Supplementary Table 3). We carried out trio assembly for all 22 samples but only did Hi-C-based single-sample assembly for the 11 Year-1 samples. Verkko natively supports trio binning assembly. 
As it does not support internal Hi-C phasing, we utilized the Hi-C phasing approach, gfase [11], in combination with Verkko for the single-sample phased assembly. In total, we collected a total of 132 assembled haplotypes for comprehensive evaluation of hifiasm (UL) and Verkko. For each sample, both hifiasm (UL) and Verkko yielded assemblies of similar sizes (Fig. 2a) and exhibited comparable phasing accuracy (Supplementary Table 1). However, when assembling HPRC Year-1 samples at lower HiFi and ultra-long coverage (Supplementary Table 3), hifiasm (UL) tended to produce more contiguous assemblies (Fig. 2b). It generated contiguous contigs spanning from telomere to telomere for multiple chromosomes, whereas Verkko did not produce telomere -to-telomere contigs for Year-1 samples (Supplementary Fig. 1a). The consistent improvement to assembly contigibility highlights the advantages of our approach. Although Verkko could produce scaffolds that bridge entire chromosomes (Supplementary Table 4), the assembly gaps in the scaffolds will complicate downstream analysis. In addition, Verkko could not assemble chromosome-long scaffolds for all chromosomes. We anyway need a Hi-C-based scaffolder for reliable scaffolding. For HPRC Year-2 datasets at higher coverage (Supplementary Table 1), Verkko assemblies were broadly comparable to hifiasm (UL) assemblies in terms of assembly contiguity (Fig. 2b), the number of telomere-to-telomere contigs (Supplementary Fig. 1a), and phasing accuracy (Supplementary Table 1). A noticeable difference between the two assemblers is that Verkko did not assign all contigs to specific haplotypes given Hi-C data. We observed that the majority of unassigned sequences come from unpaired sex chromosomes of male samples, but there are also relatively larger numbers of unassigned sequences from paired sex chromosomes and autosomes. Due to these unassigned contigs, Verkko assemblies missed more autosomal genes in comparison to hifiasm (UL) and were thus less complete (Fig. 2c). Meanwhile, for samples HG01099 and HG03710, Verkko produced noticeably more duplicated genes. Close inspection of these errors revealed that Verkko duplicated a few Figure 1: **Hybrid assembly with PacBio HiFi and ONT ultra-long reads.****(a)** Overall workflow. Hifiasm (UL) corrects HiFi reads, constructs a string graph with HiFi reads alone and aligns ultra-long reads to the HiFi graph. Based on the graph alignment, hifiasm (UL) encodes an ultra-long read as a sequence of integers with each integer uniquely corresponding to a node (also known as a unitig) in the HiFi graph. It then constructs a string graph of integer-encoded ultra-long reads, and merges the HiFi graph and the ultra-long graph to generate the final assembly. **(b)** HiFi assembly graph and ultra-long alignment. Circles in orange and blue represent heterozygous nodes constructed by HiFi reads from haplotype 1 and haplotype 2, respectively. Green circles represent homozygous nodes within the HiFi string graph. The alignment paths of ultra-long reads from haplotype 1 and haplotype 2 are represented by orange and blue lines, respectively. **(c)** Ultra-long reads encoded as sequences of integer unitig identifiers in the HiFi graph. Nucleotide sequences are ignored at this step. **(d)** Ultra-long assembly graph and the resulting contigs in the integer encoding. **(e)** Final assembly graph by incorporating the ultra-long contigs into the HiFi graph. 
From the initial HiFi graph, hifiasm (UL) removes unitigs that are present on the ultra-long contigs and adds the ultra-long contigs back together with edges between remaining unitigs and unitigs on the ultra-long contigs. Some unitigs (green circles in the example) may appear multiple times in the final graph. Figure 2: **Statistics of different assemblies.** Hfiasm(UL)_trio and verkko_trio assemblies were generated using HiFi and ultra-long reads, along with parental short reads. Hfiasm(UL)_nic and verkko(UL)_gfase assemblies were constructed using HiFi, ultra-long, and Hi-C reads obtained from the same sample. Verkko(UL)_gfase applied the standalone Hi-C phasing algorithm, gfase[11], to the Verkko assembly graph. **(a)** Assembly length of 11 human samples. **(b)** Contig N50 representing the assembly contiguity of human samples. **(c)** Problematical autosomal genes reported by the asmgene method[12]. The number of each assembly is the sum of the asmgene results for haplotype 1 and 2. **(d)** Cloud computing cost for assembling human data. Only three samples were assembled by Verkko using cloud computing. **(e)** Assembly length of the haploid _Arabidopsis thaliana_ sample and the autotetrapoloid potato sample. Hfiasm(HiFi) represents hifiasm assemblies without the ultra-long integration. **(f)** Contig N50 of Arabidopsis and potato assemblies by filtering out contigs shorter than 500kb. **(g)** BUSCO[13] scores of Arabidopsis and potato assemblies by filtering out contigs shorter than 500kb. regions on one haplotype but left these regions blank on the other haplotype. Hiiasm (UL) was less affected by this issue. We assembled all Year-2 samples with hifiasm (UL) and three samples with Verkko using cloud computing and recorded the cost. Hifiasm (UL) is 8-15 times more cost-effective. The low computational cost of hifiasm (UL) is particularly important for population-scale telomere-to-telomere assembly projects. We used all HiFi reads and ultra-long reads with a minimum length of 50kb from the _Arabidopsis thaliana_ (Col-0) dataset [15] to evaluate the assembly results for non-human genomes (Fig. 2e-g). As an inbred plant strain, _A. thaliana_ Col-0 has five long chromosomes with a large number of ribosome DNAs (rDNAs) on the short arms of chromosomes 2 and 4. Hiiasm (UL) produced exactly five contigs that are 500 kb or longer. Three of them were telomere-to-telomere contigs corresponding to chromosomes 1, 3 and 5 (Supplementary Fig. 1b). The other two contigs represented the majority of chromosomes 2 and 4 except the rDNA arrays on their short arms. Hifiasm (UL) assembled tens of Mb of small contigs \(<\)500 kb (Fig. 2e). Almost all of them could be aligned to rDNA or the chloroplast DNA. Also interestingly, the contig corresponding to chromosome 2 integrated 294 kb of mitochondrial DNA towards the telomere end of the short arm. This integration is also present in the assembly done by the authors who produced the dataset [15] but is absent from the _A. thaliana_ reference genome or the assembly done by Naish et al [16]. For the _A. thaliana_ dataset, Verkko only generated one telomere-to-telomere contig corresponding to chromosome 5 (Supplementary Fig. 1b), partly due to homozygous regions that are longer than ultra-long reads but do not span entire chromosome arms. The Verkko assembly at present was less contiguous (Fig. 2f) and less complete based on the BUSCO evaluation [13] (Fig. 2g). 
The Verkko contig corresponding to chromosome 2 was fragmented on the short arm and did not reveal the mitochondrion integration. Both Verkko and hifiasm (UL) assemblies were more contiguous hifiasm HiFi-only assembly, indicating the additional power of ultra-long reads. To evaluate polyploid assembly, we further assembled an autotetraploid potato genome [17]. As Verkko does not support polyploid phasing, only hifiasm (UL) and hifiasm (HiFi) were applied with all HiFi reads and ultra-long reads with a minimum length of 50kb. By leveraging the additional genetic map information from progeny, both hifiasm (UL) and hifiasm (HiFi) could assemble four haplotypes based on the polyploidy graph-binning approach (Methods). The integration of ultra-long reads not only significantly increased assembly contiguity (Fig. 2f and Supplementary Fig. 1b) but also improved the completeness for all haplotypes (Fig. 2g). For the polyploid genome assembly, the main limitation of our current algorithm is that it requires genetic map information from progeny. In order to address this issue, we implemented an experimental single-sample approach using Hi-C phasing, and applied it to the autotetraploid potato dataset. This resulted in four haplotype assemblies, which have slightly worse phasing accuracy and contiguity in comparison to the genetic-map-based assemblies. However, the four Hi-C phased haplotype assemblies are imbalanced, with one assembly being 20% larger than the others. In the future, we plan to address this issue by proposing Hi-C phasing approaches specifically designed for polyploid genomes. The availability of ultra-long or accurate long reads has significantly advanced the development of _de novo_ genome assemblies. Recently, the Human Pangenome Reference Consortium (HPRC) has successfully applied our original hifiasm algorithm to achieve high-quality haplotype-resolved assemblies in a population-scale utilizing accurate HiFi reads, while the Telomere-to-Telomere (T2T) consortium has demonstrated the feasibility of reconstructing a human genome from telomere to telomere by co-assembling HiFi and ultra-long reads. In this study, we present a new hybrid assembly algorithm, hifiasm (UL), which provides an ultra-fast and robust solution for telomere-to-telomere genome assemblies in a population-scale. We anticipate that hifiasm (UL) will be a highly competitive _de novo_ assembler for numerous large-scale telomere-to-telomere assembly projects in the coming years. In the long term, hifiasm (UL) will facilitate a more comprehensive understanding of complex genomic regions such as centromeres and highly repetitive segmental duplications. ## Methods **Overview of hifiasm (UL).** The main objective of hifiasm (UL) is to leverage the benefits of HiFi and ultra-long reads, simplifying the assembly graph as much as possible (Fig. 1). A complete and clean assembly graph will substantially simplify the following steps like Hi-C phasing and phased contig generation. Our previous phasing algorithms [18, 1] are then applied to the graph to produce haplotype-resolved telomere-to-telomere assemblies. In building the high-quality assembly graph, hifiasm (UL) generally follows the traditional hybrid assembly paradigm, which uses the accurate HiFi graph as the backbone and extends the graph by aligning ultra-long reads to it. However, unlike existing methods, hifiasm (UL) performs an additional round of ultra-long-to-HiFi alignments in advance. 
This provides extra information for accurately constructing the HiFi graph and alleviates the contained read problem specifically for the string graph [19]. We then create an integer graph for the ultra-long reads and subsequently merge it with the initial HiFi graph to produce the final assembly graph. The advantages of hifiasm (UL) stem mainly from the novel double graph framework for co-assembly (Fig. 1). Indeed, there are several existing hybrid assemblers designed to combine shorter accurate reads as well as longer noisy reads, but they all rely on the straightforward accurate-read-first assembly strategy: that is, they first build the assembly graph with accurate reads, and then further resolve the graph by aligning long noisy reads onto the graph. Although this approach takes the information among accurate reads as well as the information between accurate reads and noisy reads, it disregards the critical information among noisy reads. To fully exploit all reads, the double graph framework in hifiasm (UL) employs a two-stage approach. First, it builds two string graphs individually, one for HiFi reads (Fig. 1b) and another for ultra-long reads (Fig. 1d). Second, it merges the two graphs to produce a final graph that combines both HiFi and ultra-long reads (Fig. 1e). This approach ensures that the information contained in both types of reads is fully leveraged, resulting in a more accurate and complete genome assembly. **Building an accurate string graph as the backbone.** A string graph is an assembly graph that preserves the information of complete reads, where each node represents a read, and edges connecting the nodes correspond to overlaps between reads. Hifiasm (UL) builds the initial backbone graph with HiFi reads, as they are much more accurate than ultra-long reads. To further eliminate sequencing errors, all HiFi reads are self-corrected with the haplotype-resolved error correction algorithm described in the original hifiasm [1]. Once the graph is constructed, it is necessary to perform multiple rounds of graph cleaning to simplify the graph by removing edges that are less likely to be real. Although the string graph has been widely utilized in many long-read assemblers, the issue of the contained read remains unclear and could potentially impact the completeness of the graph [19]. Given two reads \(X\) and \(Y\), if there is an overlap between \(X\) and \(Y\) that covers a part of \(X\) and the whole \(Y\), \(Y\) is a contained read that is totally contained in \(X\). Extended Data Fig. 1a gives an example. Read h11 and h12 are two contained reads covered by read h3. Practical implementations of the string graph remove all contained reads when building graphs, since the edges in the string graph correspond to the prefix-to-suffix or suffix-to-prefix overlaps between reads [10]. However, simply ignoring contained reads could introduce breakpoints in the string graph, especially in highly repetitive regions and homologous regions between two haplotypes. For instance, read h12 is a critical read for one haplotype (reads in blue) but is an unnecessary contained read for another haplotype (reads in orange), as shown in Extended Data Fig. 1a. Removing read h12 does not affect the haplotype in orange but leads to a breakpoint for the haplotype in blue, resulting in a fragmented assembly graph. Identifying critical contained reads and retaining them in the string graph is the primary challenge posed by the contained read problem. 
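To make the contained-read notion concrete, the following sketch (purely illustrative, not hifiasm's code) classifies a pairwise overlap from its alignment coordinates: an overlap that spans read Y end-to-end while covering only part of X marks Y as contained in X, which is exactly the kind of read a naive string-graph construction would drop.

```python
def classify_overlap(x_len, x_start, x_end, y_len, y_start, y_end):
    """Classify one pairwise overlap between reads X and Y from alignment
    coordinates (0-based, end-exclusive, same strand). Illustrative only.

    Returns "duplicate", "Y_contained_in_X", "X_contained_in_Y",
    "suffix_prefix" (a proper dovetail usable as a string-graph edge),
    or "internal".
    """
    x_full = x_start == 0 and x_end == x_len
    y_full = y_start == 0 and y_end == y_len
    if x_full and y_full:
        return "duplicate"
    if y_full:
        return "Y_contained_in_X"   # Y would be dropped by a naive string graph
    if x_full:
        return "X_contained_in_Y"
    if (x_end == x_len and y_start == 0) or (y_end == y_len and x_start == 0):
        return "suffix_prefix"
    return "internal"
```

Which of the contained reads must nevertheless be kept is decided with the ultra-long alignments, as described next.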
Several approaches have been proposed to tackle it based on the simplified assumptions of read coverage or length [19], which are not always reliable, especially in highly repetitive regions. Hifiasm (UL) alleviates the contained read problem within the HiFi string graph by utilizing ultra-long-to-HiFi read alignments. A HiFi read is considered a critical contained read only if it lacks sufficient informative variants to distinguish it from reads originating from other repeat copies (read h12 in Extended Data Fig. 1a). Given that ultra-long reads are frequently ten times longer than HiFi reads (with a median length exceeding 100kb), it is less probable that an ultra-long read is a critical contained read without any informative variant. As a result, when ultra-long reads are aligned to HiFi reads, the HiFi contained reads that must be covered by ultra-long read alignments are expected to be the critical reads. To this end, hifiasm (UL) theoretically constructs a HiFi string graph that includes both contained and uncontained reads (Extended Data Fig. 1b). It then employs the graph alignment to align all ultra-long reads to this graph. As shown in Extended Data Fig. 1b, the contained read h12 must be covered by the alignment paths of ultra-long reads u6 and u7, while another contained read h11 could be skipped by read h3. Consequently, hifiasm (UL) retains the critical read h12 for constructing a complete string graph of HiFi reads, while safely removing read h11 to simplify the graph. The ultra-long-to-HiFi read alignment could also be used to avoid the incorrect graph cleaning. In an ideal scenario where all HiFi reads are longer than any homozygous or repetitive regions, each node in the string graph should have a maximum of one edge extending towards the left and right sides. However, due to the limited length of HiFi reads, some nodes may have multiple edges, making it difficult for assemblers to determine the real number of edges to be retained. For instance, hifiasm and HiCanu [3] utilize a length-based strategy that prioritizes the edge with the longest overlap length and often removes other shorter edges. These heuristics graph cleaning solutions may result in the overcutting of real edges or retaining unrelated edges. If the initial backbone HiFi graph is either oversimplified or too complex, the downstream steps of hifiasm (UL) may not accurately resolve difficult-to-assemble regions. By utilizing the ultra-long-to-HiFi read alignment, hifiasm (UL) is able to ascertain the number of ultra-long reads supported for each edge, providing additional information to prevent incorrect graph cleaning (Extended Data Fig. 1b). **Integer graph with ultra-long reads.** To fully capture the length information of ultra-long reads, hifiasm (UL) constructs another string graph using only those reads. However, generating the string graph requires the computationally intensive all-versus-all pairwise read comparison, which constitutes the primary bottleneck in the long-read assembly workflow. Moreover, identifying correct overlaps among ultra-long reads is particularly challenging due to their significantly higher error rate compared to HiFi reads. Furthermore, the high frequency of recurrent sequence errors in ONT ultra-long reads makes it nearly impossible to accurately identify overlaps in difficult regions. 
Hifiasm (UL) constructs a lightweight integer graph to entirely avoid the expensive all-versus-all base-level read comparison and ensure the accuracy of the ultra-long graph is comparable to that of the HiFi graph. In short, all ultra-long reads are converted from the base pair space to a low-dimensional integer space using the graph alignments of ultra-long reads. By working in the integer space, the graph construction procedure is both efficient and straightforward. The detailed steps of the integer graph construction are listed as follows. 1. _Mapping ultra-long reads into the integer space._ All ultra-long reads are aligned to the HiFi graph to obtain the alignment paths (Fig. 1b). Given an ultra-long read, hifiasm (UL) first collects its linear alignments to the nodes of the HiFi graph using pairwise base-level alignments. Linear alignments are then chained in the graph space using the approach described in minigraph [20]. A graph alignment path is a sequence of the aligned node identifiers. For each ultra-long read, hifiasm (UL) only keeps the node identifiers and disregards all alignment details and base pairs. Node identifiers can be represented as integers, meaning that ultra-long reads of over tens of kilobases are transformed into ultra-long sequences consisting of tens of integers (Fig. 1c). 2. _Calculating overlaps among ultra-long integer sequences._ To construct a string graph in the integer space, obtaining overlaps between integer sequences is essential. As the base-level sequencing errors within ultra-long reads have already been corrected through the graph alignment to the accurate HiFi graph, hifiasm (UL) only allows exact overlaps in the integer space. Notably, this step is considerably faster than the conventional all-versus-all inexact pairwise alignment. 3. _Constructing an integer graph._ An integer graph is a type of string graph where each node is an integer sequence. Hifiasm (UL) constructs an integer graph by utilizing ultra-long integer sequences and their overlaps (Fig. 1d). Specifically, each node in this graph represents an ultra-long integer sequence, and the edges connecting the nodes correspond to exact overlaps between these sequences. However, even after this initial construction, multiple rounds of standard graph cleaning are still necessary to further simplify the integer graph. As ultra-long reads are typically long enough to assemble through repetitive or homozygous regions, hifiasm (UL) employs highly aggressive graph cleaning strategies to eliminate ambiguous edges associated with each node. 4. _Producing integer contigs._ A contig corresponds to a non-branching path in the string graph. Given a contig in the integer graph, hifiasm (UL) produces its sequence by concatenating the subsequences of nodes within the corresponding path (Fig. 1d). After the contig generation process, each resulting contig is an integer sequence that is significantly longer than any individual ultra-long read. In fact, these integer contigs represent the paths that can untangle intricate structures within the initial HiFi graph. **Building final assembly graph by graph incorporation.** The integer graph produces ultra-long integer contigs that correspond to assembly paths within the initial HiFi graph. These integer contigs represent another HiFi string graph that resolves the majority of tangles and homozygous regions within the initial HiFi graph into linear sequences. 
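Before turning to the incorporation step, the exact-overlap computation of step 2 above can be illustrated with a short sketch; the function names and the quadratic all-pairs loop are illustrative assumptions rather than hifiasm's data structures. Each ultra-long read is already a list of unitig identifiers, and an edge is created whenever a suffix of one list exactly matches a prefix of another.

```python
def exact_suffix_prefix_overlap(a, b, min_len=2):
    """Length of the longest exact overlap where a suffix of integer sequence
    `a` equals a prefix of integer sequence `b`; 0 if none of length >= min_len."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def build_integer_graph_edges(ul_paths, min_len=2):
    """Collect string-graph edges over ultra-long reads in the integer space.

    ul_paths : dict mapping read name -> list of unitig identifiers, i.e. the
               graph-alignment path of that ultra-long read (step 1 above).
    Returns (read_a, read_b, overlap_length) tuples. Orientation handling,
    containment, and indexing tricks are omitted; the quadratic loop is only
    for illustration.
    """
    edges = []
    names = list(ul_paths)
    for na in names:
        for nb in names:
            if na == nb:
                continue
            k = exact_suffix_prefix_overlap(ul_paths[na], ul_paths[nb], min_len)
            if k:
                edges.append((na, nb, k))
    return edges
```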
By incorporating ultra-long integer contigs into the initial HiFi graph, hifiasm (UL) can produce the final assembly. Specifically, hifiasm (UL) first removes all nodes within the initial HiFi graph that also appear in ultra-long integer contigs, and then merges the remaining nodes and overlaps with ultra-long integer contigs. Fig. 1e provides an example. In the final assembly graph, all nodes except h7 come from ultra-long integer contigs. This is because all nodes except node h7 are present in both the initial HiFi graph (Fig. 1b) and ultra-long integer contigs (Fig. 1d). **Constructing haplotype-resolved assemblies.** The high-quality assembly graphs combining HiFi and ultra-long reads significantly simplifies the generation of haplotype-resolved assemblies. With the addition of Hi-C or parental short reads, hifiasm (UL) can reuse previous Hi-C [18] or trio-binning [1] algorithms to assign haplotype-specific markers to the nodes of the assembly graph. The final haplotype-resolved assemblies are then produced using the graph-binning strategy [1]. For polyploid genomes, we implemented a polyploidy graph-binning approach that extends our previous diploid graph-binning method. In the polyploidy graph-binning approach, when emitting the assembly of one haplotype, all nodes with other haplotype-specific markers are discarded from the assembly graph. This is the main difference between the polyploidy graph-binning and the diploid graph-binning approaches. **Optimizing for cloud computing.** To evaluate the computational cost of both hifiasm (UL) and Verkko, we assembled all human samples with hifiasm (UL) and three human samples with Verkko using the Terra platform on top of Google Cloud Platform. We further reduced the computational costs by executing assemblers with preemptible instances. A preemptible instance takes much lower cost but its running times often cannot exceed 24 hours. As a result, both hifiasm (UL) and Verkko were divided into multiple short tasks, which were executed individually using preemptible instances (Supplementary Section 1.4). ## Acknowledgements This study was supported by US National Institutes of Health (grant R01HG010040, U01HG010971 and U41HG010972 to H.L., grant 1K99HG012798 to H.C.). We thank the Human Pangenome Reference Consortium for making Year-1 and Year-2 datasets publicly available. ## Author contributions H.C. and H.L. designed the algorithm, implemented hifiasm (UL) and drafted the manuscript. H.C. benchmarked hifiasm (UL) and other assemblers. M.A., J.L. and S.K. designed the evaluation of human genome assemblies. ## Competing interests The authors declare no competing interests. 
## Data availability

Human reference genome: GRCh38.
- HiFi reads of HPRC Year-2 samples: https://s3-us-west-2.amazonaws.com/human-pangenomics/index.html?prefix=submissions/1E2DD570-3B26-418B-B50F-5417F64C5679-HIFI_DEEPCONSENSUS/
- ONT ultra-long reads of HPRC Year-2 samples: https://s3-us-west-2.amazonaws.com/human-pangenomics/index.html?prefix=submissions/90A1F283-2752-438B-917F-53AE76GpC43E-UCSC_HPRC_nanopore_Year2/
- Hi-C reads of HPRC Year-2 samples: https://s3-us-west-2.amazonaws.com/human-pangenomics/index.html?prefix=submissions/4C696EB9-9AD2-47A2-8011-2F43977CC4E0-Y2-HIC/
- Parental short reads of HPRC Year-2 samples: https://s3-us-west-2.amazonaws.com/human-pangenomics/index.html?prefix=submissions/AD30A684-C7A8-4D24-89B2-040DFF021B0C-Y2_1000G_DATA/
- All reads of HPRC Year-1 samples: https://github.com/human-pangenomics/HPP_Year1_Data_Freeze_v1.0
- All reads of Arabidopsis: https://ngdc.cncb.ac.cn/search/?dbId=gsa&q=CRA004538
- All reads of potato: https://ngdc.cncb.ac.cn/gsa/browse/CRA006012
- Hifiasm (UL) assemblies of HPRC Year-2 samples: "hifiasm_v0.19.5" from https://s3-us-west-2.amazonaws.com/human-pangenomics/index.html?prefix=submissions/53FEE631-4264-4627-8FB6-90D7364F4D3B-ASM-COMP/
- Verkko assemblies of HPRC Year-2 samples: "verkko_1.3.1" from https://s3-us-west-2.amazonaws.com/human-pangenomics/index.html?prefix=submissions/53FEE631-4264-4627-8FB6-09D7364F4D3B-ASM-COMP/
- All evaluated HPRC Year-1 and plant assemblies are available at https://zenodo.org/record/7996422 and https://zenodo.org/record/7962930, respectively.

## Code availability

Hifiasm (UL) is available at https://github.com/chhylp123/hifiasm.

## Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
2302.09425
A Neurodiversity-Inspired Solver for the Abstraction \& Reasoning Corpus (ARC) Using Visual Imagery and Program Synthesis
Core knowledge about physical objects -- e.g., their permanency, spatial transformations, and interactions -- is one of the most fundamental building blocks of biological intelligence across humans and non-human animals. While AI techniques in certain domains (e.g. vision, NLP) have advanced dramatically in recent years, no current AI systems can yet match human abilities in flexibly applying core knowledge to solve novel tasks. We propose a new AI approach to core knowledge that combines 1) visual representations of core knowledge inspired by human mental imagery abilities, especially as observed in studies of neurodivergent individuals; with 2) tree-search-based program synthesis for flexibly combining core knowledge to form new reasoning strategies on the fly. We demonstrate our system's performance on the very difficult Abstraction \& Reasoning Corpus (ARC) challenge, and we share experimental results from publicly available ARC items as well as from our 4th-place finish on the private test set during the 2022 global ARCathon challenge.
James Ainooson, Deepayan Sanyal, Joel P. Michelson, Yuan Yang, Maithilee Kunda
2023-02-18T21:30:44Z
http://arxiv.org/abs/2302.09425v3
# An Approach for Solving Tasks on the Abstract Reasoning Corpus

James Ainooson ([email protected]), Deepayan Sanyal ([email protected]), Joel P. Michelson ([email protected]), Yuan Yang ([email protected]), Maithilee Kunda ([email protected])

Department of Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235

###### Abstract

The Abstract Reasoning Corpus (ARC) is an intelligence test for measuring fluid intelligence in artificial intelligence systems and humans alike. In this paper we present a system for reasoning about and solving ARC tasks. Our system relies on a program synthesis approach that searches a space of potential programs for ones that can solve tasks from the ARC. Programs are in a domain specific language, and in some instances our search algorithm is guided by insights from a corpus of ground truth programs. In particular: We describe an imperative style domain specific language, called Visual Imagery Reasoning Language (VIMRL), for reasoning about tasks in the ARC. We also demonstrate an innovative approach for how large search spaces can be decomposed using special high level functions that determine their own arguments through local searches on a given task item. Finally, we share our results obtained on the publicly available ARC items as well as our system's strong performance on a private test, recently tying for 4th place on the global ARCathon 2022 challenge.

## Introduction

Chollet (2019) recently introduced the Abstract Reasoning Corpus (ARC) as a general intelligence test for humans and artificial systems. ARC requires solvers to exhibit fluid intelligence while reasoning about a wide range of visual and spatial concepts. The ARC was designed to be solvable without explicit task training, provided the solver possesses a collection of sufficient core knowledge priors.

Figure 1 shows two sample tasks from the ARC. Each of these tasks has a training section (left of the grey bar), with a number of input-output grid pairs, and a test section (right of the grey bar) with an input grid for which a solver must provide an output. All tasks from the ARC are presented on coloured, input-output grid pairs. These grids can be anywhere from a single cell to 30x30 cells in size, and each cell can contain one of ten symbols (0 through 9 or appropriately selected unique colours). A property that makes the ARC particularly difficult, especially for artificial systems, is how solvers are required to predict the output grids from scratch: solvers must correctly determine the grid's size, and must also specify the symbols to place in each cell correctly. All tasks from the ARC are claimed to rely on a set of clearly defined core knowledge priors. According to Chollet (2019) these core knowledge priors cover concepts around objectness, goal-directedness, numbers and counting, and topology and geometry.

In this paper we present a system for reasoning about and solving ARC tasks. Our system relies on a program synthesis approach that searches a space of potential programs for ones that can solve tasks from the ARC. Programs are in a domain specific language, and in some instances our search algorithm is guided by insights from a corpus of ground truth programs. In particular:
1. We describe an imperative style domain specific language, called Visual Imagery Reasoning Language (VIMRL), for reasoning about tasks in the ARC.
2. We demonstrate an innovative approach for how large search spaces can be decomposed using special high level functions that determine their own arguments through local searches on a given task item.
3. We share our results obtained on the publicly available ARC items as well as our system's strong performance on a private test, recently tying for 4th place on the global ARCathon challenge that ended on Dec. 31, 2022.

## The Abstract Reasoning Corpus

The Abstract Reasoning Corpus contains a total of 1,000 different tasks. Of these tasks, 800 are publicly available to researchers, while 200 are kept private for evaluation (through the Kaggle Machine Learning Platform and the occasional ARCathon competitions). To help researchers develop and test their solvers, the complete solutions (containing both the input and output pairs) for the test section are supplied for all 800 publicly available task items. Publicly available tasks are supplied in two equal sets of 400 tasks: one set considered as a training set and the other as an evaluation set. According to Chollet (2019), the concepts observed in one set do not transfer to the other. This means that agents that specialize on items in one set may not necessarily perform well on items in the other. Interestingly, this fact is supposed to remain true across all the three groups of tasks on the ARC.

Figure 1: Two sample tasks from the Abstract Reasoning Corpus. The first sample (a) requires the solver to isolate the object in the grid, while the second sample (b) requires the solver to find the repeated grid segment.

Formally an ARC task, \(T\), can be defined as:

\[T=\left\{\langle I_{1}^{t},O_{1}^{t}\rangle,\ldots,\langle I_{n}^{t},O_{n}^{t}\rangle;I_{0}^{e},\ldots,I_{m}^{e}\right\}\]

Here, \(I_{i}^{t}\) and \(O_{i}^{t}\) are input and output training grids, and \(I_{i}^{e}\) represents a test input grid for which the agent must provide an output. In the case of the publicly available ARC tasks, each test input, \(I_{i}^{e}\), has a corresponding output, \(O_{i}^{e}\), with which any solvers can learn about the task. Scoring on the ARC is considered an all or nothing affair: solvers must get all the cells right to correctly solve a task. Any missed cell results in a failure. Because solvers may face some ambiguities in learning the concepts of a task, solvers are given the opportunity to make up to three predictions, and the task will be considered solved if any of those predictions are correct.

### Prior Knowledge on the ARC

Chollet (2019) provides brief descriptions of some _core knowledge priors_ that solvers must possess to be successful on ARC tasks. Note that while the ARC is labeled as an "abstract" reasoning test, it presents problems in a visual format, and thus requires some degree of visual processing. In addition, core knowledge priors are about object properties expressed visually/spatially. Thus, visual reasoning abilities representing functions of both perception and inference are key for solving ARC items. The **objectness** prior requires solvers to deal with the segmentation, permanence and interaction of objects. **Goal-directedness** requires solvers to deal with processes.
Tasks that require _goal-directedness_ may exhibit input-output grids that can be considered as the start and end states of some abstract process, such as an object moving from one point in the grid to another. **Numbers and counting** priors are required in situations where quantities, like frequencies of object occurrences and sizes of objects, are considered as numbers for operations like comparison and sorting. The **geometry and topology** prior requires an agent to have knowledge about shapes, lines, symmetry, relative positioning of objects, and the ability to replicate objects in different ways.

## Reasoning about the ARC

Our approach to reasoning about the ARC involves the use of a program synthesis solver. This solver generates programs in a domain specific language we named Visual Imagery Reasoning Language (VIMRL). When given an ARC task, \(T=\left\{\langle I_{1}^{t},O_{1}^{t}\rangle,\ldots,\langle I_{n}^{t},O_{n}^{t}\rangle;I_{0}^{e},\ldots,I_{m}^{e}\right\}\), the solver searches the space of programs in VIMRL for a candidate program, \(\varphi(x)\), which takes a grid, \(x\), as input and produces a solution output grid. For a program to be considered as a candidate solution, it must satisfy

\[\frac{1}{n}\sum_{i=1}^{n}\lambda(\varphi(I_{i}^{t}),O_{i}^{t})>\alpha,\qquad\text{where }\lambda(x,y)=\begin{cases}1&\text{if }x=y\\ 0&\text{otherwise,}\end{cases}\]

and \(\alpha\) is a threshold within which the agent should be accurate. This means a candidate program will only be selected if it solves enough items in a task's training problems with an accuracy higher than the value of \(\alpha\).

### Visual Imagery Reasoning Language

The Visual Imagery Reasoning Language (VIMRL) is an imperative style language, designed around imagery operations, and built specifically for reasoning about ARC tasks. Instead of relying on control instructions, VIMRL places emphasis on the sequence of instructions to control the state of a program during execution. See Table 1 for VIMRL's full grammar.

Every instruction in VIMRL involves a call to an operation. Operations can take arguments, which are either literal values or references to variables, and operations always return values. All values (literal or variable) in VIMRL have fixed types, and arguments for operations are also expected to have specific types. Currently, values can assume one of 5 given types: image, object (an image fragment with a location, much like a sprite in a video game), color, number (integers only), or list (of objects).

### Executing VIMRL Programs

A VIMRL program under execution can be considered to have a state containing the following:
1. The set of all variables that have been defined throughout the program's lifetime.
2. All the values associated with the defined variables.
3. The current line of instruction being executed.

Table 1: Grammar for the Visual Imagery Reasoning Language (VIMRL). These productions also double as rules for generating code during program synthesis.

- ⟨instruction⟩ ::= ⟨assignment⟩ | ⟨operation⟩
- ⟨assignment⟩ ::= ⟨identifier⟩ '=' ⟨operation⟩
- ⟨operation⟩ ::= ⟨identifier⟩ '(' ⟨arguments⟩ ')'
- ⟨arguments⟩ ::= ⟨argument⟩ | ⟨arguments⟩ ',' ⟨argument⟩
- ⟨argument⟩ ::= ⟨identifier⟩ | ⟨number⟩ | ⟨operation⟩
- ⟨number⟩ ::= ('-')? [0-9]+
- ⟨identifier⟩ ::= [a-zA-Z][a-zA-Z0-9]*

Every program starts execution with two pre-defined variables: input, an image containing the input grid of the problem; and background, which represents the background colour of the input grid. The background value helps in isolating objects, and by default it is set to a value of 0. This value can later be changed through specific operations.

During execution, two main types of operations can be performed. First, there are low level operations, which are simple functions that require arguments to be explicitly passed. These operations will typically manipulate their inputs and return an output. The second type of operations are high level operations, which take a single argument and further analyse the grids from the tasks to be solved, and the current state of execution, to implicitly select extra arguments.

To further explain how the two types of operations work, consider the programs listed in Table 2. The program in cell (a) is a possible VIMRL solution for the task in Figure 1 (a). This program takes the input image, applies the trim operation to remove extra surrounding pixels, and goes on to assign the results to the output variable. Here, trim is a low level operation that removes extra pixels to create a bounding box. In contrast to the trim program (in cell (a)), consider the program listed in cell (b). This single program can solve both of the tasks displayed in Figures 2 (a) and 2 (b). The attract operation used in the program is a high level operation that uses simple naive physics simulations to solve the problem of objects being attracted to each other. It takes an input image of the initial state of the objects, and returns an output image with the final state. Because attract is a high level function, all of the task's training items, \(\left\{\langle I_{1}^{t},O_{1}^{t}\rangle,\ldots,\langle I_{n}^{t},O_{n}^{t}\rangle\right\}\), are available to it at runtime. From these items, the attract function is able to form a rule about which objects are being attracted to what, and it can use this rule to attempt the test item. All high level functions in VIMRL operate this way, albeit each with their own internal rules and search techniques.

So far, all sample programs we discussed contain a single instruction that operates on the input. But programs in VIMRL are typically longer.
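To make this execution model concrete, here is a small illustrative interpreter in Python. It is not the authors' implementation: `trim` is a toy stand-in for the real low-level operation, and high-level operations are modeled as callables that also receive the (possibly modified) training pairs, mirroring items 1-3 of the state above with a plain dictionary.

```python
def trim(grid, background=0):
    """Toy low-level operation: crop `grid` to the bounding box of
    non-background cells."""
    rows = [i for i, r in enumerate(grid) if any(c != background for c in r)]
    cols = [j for j in range(len(grid[0]))
            if any(r[j] != background for r in grid)]
    return [r[cols[0]:cols[-1] + 1] for r in grid[rows[0]:rows[-1] + 1]]

LOW_LEVEL = {"trim": trim}

def run(program, test_input, train_pairs, high_level=None):
    """Execute a list of (target, op, args) instructions, VIMRL-style.
    Every program starts with `input` and `background` pre-defined;
    high-level ops also see the task's training pairs as context."""
    high_level = high_level or {}
    state = {"input": test_input, "background": 0}
    for target, op, args in program:
        values = [state[a] if isinstance(a, str) else a for a in args]
        if op in LOW_LEVEL:
            result = LOW_LEVEL[op](*values)
        else:                      # high-level op: gets the task as context
            result = high_level[op](values[0], train_pairs)
        state[target] = result     # assignment updates the program state
    return state.get("output")

if __name__ == "__main__":
    grid = [[0, 0, 0], [0, 5, 0], [0, 5, 0]]
    prog = [("output", "trim", ["input"])]
    print(run(prog, grid, train_pairs=[]))   # -> [[5], [5]]
```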
Having multiple instructions, however, complicates execution when high level operations are performed after low level ones. Because every execution of a high level function requires an instance of the task's training items to be analysed, in cases where other instructions have already modified the task's input image, the input-output pairs from the training items, \(\left\{\langle I_{1}^{t},O_{1}^{t}\rangle,\ldots,\langle I_{n}^{t},O_{n}^{t}\rangle\right\}\), may no longer be representative of the current state of the input image. As a solution to this problem, before any high level operation is executed, all instructions that have already been executed are applied to all the input-output training pairs to create a modified version of the task. To do this we consider the sequence of all instructions performed before the high-level operation as a partial program, \(\varphi^{\prime}(x)\), and we then generate a modified task, \(T^{\prime}\), with training items, \(\left\{\langle I_{1}^{\prime\prime},O_{1}^{\prime\prime}\rangle,\ldots,\langle I_{n}^{\prime\prime},O_{n}^{\prime\prime}\rangle\right\}\), such that \(\langle I_{i}^{\prime\prime},O_{i}^{\prime\prime}\rangle=\langle\varphi^{\prime}(I_{i}^{t}),\varphi^{\prime}(O_{i}^{t})\rangle\) for all \(i\in\{1,\ldots,n\}\). This modified task, \(T^{\prime}\), is then passed to the high level operation.

Table 2: Listings for four different VIMRL programs that provide solutions for tasks in the ARC. Cell (a) is a possible solution for the task in Figure 1 (a), cell (b) provides possible solutions for the tasks in Figures 2 (a) and 2 (b), and cells (c) and (d) provide possible solutions for the tasks in Figures 2 (c) and 2 (d) respectively.

| Cell | Program |
| --- | --- |
| (a) | output = trim(input) |
| (b) | output = attract(input) |
| (c) | output = recolor(input) |
| (d) | enclosed = find_enclosed_patches(input); recolored = recolor_objects(enclosed); output = draw(input, recolored) |

Figure 2: A couple of sample tasks from the ARC for demonstrating VIMRL solutions. See Table 2 for possible VIMRL solutions.

#### A walk-through of high level function execution

To further illustrate how partial programs modify the tasks before they are analysed by high level functions, consider the programs from cells (c) and (d) of Table 2. The program in cell (c) is similar to all those we have seen earlier. It uses a single call to a high level function, recolor, which learns the rules by which colours in an image are transformed to solve the task in Figure 2 (c). When a similar recolor operation (recolor_objects, which works on a list of images instead of a single image) is encountered in the program from cell (d), however, another operation, find_enclosed_patches, has already been executed. Figure 3 provides a walk-through with a visualization of the execution state as the program in cell (d) runs.

From the walk-through we observe that the initial state has the input variable assigned to the input grid of the test item. The first instruction, enclosed = find_enclosed_patches(input), analyzes the input image and extracts all patches of the grid's image that are enclosed. In the case of this particular example, the call to find_enclosed_patches returns a list with 8 grids and their locations. The next instruction, recolor_objects, which is a high-level function, takes this list of objects as an argument.
In addition to this list, the recolor_objects function will also receive a copy of the task, and this copy must reflect any changes the earlier call to find_enclosed_patches may have made to the input image. To build this modified task, the partial program, which contains only a single call to find_enclosed_patches, is executed on all the input and output images of the task. Whenever this partial program is executed on an image from the training set, the corresponding image in the task is replaced with the value associated with the enclosed variable after the partial execution. The enclosed variable is chosen as the replacement because it is the variable whose value is being passed to the recolor operation. It is worth noting that in this case the value of enclosed will be of a list type, leading to a situation which forces all the training images to be replaced with lists. Because the recolor_objects operation operates on lists, it is now able to compare the lists of patches found in the inputs and outputs of the modified task to detect that everything that is coloured black (0) is switched to yellow (4). After the execution of the recolor operation, the draw operation is used to paint all the recolored objects back to the input grid, and the results are assigned to the output variable as a solution to the task.

### Operations for Reasoning about the ARC

Currently, there are a total of 11 high level and 41 low-level operations, giving a total of 52 operations. The implementation of these operations is inspired by the core knowledge priors suggested for the ARC and similarities observed in tasks from the public ARC datasets. A full list of operations and the core priors upon which they are based are displayed in Figure 4. It is worth noting that all operations designed for VIMRL were only made after observing just the 400 tasks in the train section of the public ARC dataset. We decided not to consider any tasks from the evaluation section in order to further evaluate how well the concepts between the tasks are separated.

### Searching

At the core of our reasoning agent is a search algorithm that attempts to find VIMRL programs for ARC tasks. As described at the beginning of this section, the goal of the search is to find programs that solve a given number of items from the train section of an ARC task. The major issues we had to deal with in building our search algorithm stemmed from tree traversal, successor generation, and node pruning.

Our current search algorithm can be described as follows: given an instance of the ARC task, \(T=\left\{\langle I_{1}^{t},O_{1}^{t}\rangle,\ldots,\langle I_{n}^{t},O_{n}^{t}\rangle;I_{0}^{e},\ldots,I_{m}^{e}\right\}\), generate and collect candidate programs, \(\phi_{i}(x)\), which satisfy \(\frac{1}{n}\sum_{j=1}^{n}\lambda(\phi_{i}(I_{j}^{t}),O_{j}^{t})>\alpha\), where \(\lambda(x,y)=\begin{cases}1&\text{if }x=y\\ 0&\text{otherwise,}\end{cases}\) for some threshold \(\alpha\). The search executes in a _generate-execute-test_ cycle until a given number of programs (50 in the case of our experiments) are found, or a time-out (700 seconds in the case of our experiments) is reached. For all the experiments discussed in this paper, we accepted any program with \(\alpha>0\) (i.e., any program that solves at least one training item) as a potential candidate.
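A compact sketch of this generate-execute-test loop and the acceptance test is given below. It is illustrative only: `enumerate_programs` and `execute` are assumed callables standing in for the successor generator and the VIMRL interpreter, and program size is approximated by the number of instructions.

```python
import time

def train_accuracy(program, train_pairs, execute):
    """Fraction of training pairs the candidate reproduces exactly
    (lambda(x, y) = 1 iff x == y, averaged over the n pairs)."""
    hits = sum(1 for inp, out in train_pairs if execute(program, inp) == out)
    return hits / len(train_pairs)

def search(enumerate_programs, train_pairs, execute,
           alpha=0.0, max_candidates=50, timeout_s=700):
    """Generate-execute-test loop: keep any program whose training accuracy
    exceeds alpha, until enough candidates are found or time runs out."""
    deadline = time.monotonic() + timeout_s
    candidates = []
    for program in enumerate_programs():
        if time.monotonic() > deadline or len(candidates) >= max_candidates:
            break
        score = train_accuracy(program, train_pairs, execute)
        if score > alpha:
            candidates.append((score, len(program), program))
    # final selection: highest accuracy first, shortest program breaks ties
    candidates.sort(key=lambda c: (-c[0], c[1]))
    return [p for _, _, p in candidates[:3]]
```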
After the search cycle terminates, the best performing programs are selected from the set of candidate programs such that the top three smallest programs with the highest scores (after sorting all candidates by \(\alpha\) score in descending order and program size in ascending order) are selected as final candidates to generate responses for the task.

#### Tree traversal and successor generation

In all our experiments, we search the space of VIMRL programs with traditional breadth-first and depth-first search algorithms. Starting with an empty program, we rely on successor generators to add on instructions as we build a series of potential programs. We have implemented two main ways of generating new successor programs from existing ones.

The first approach involves executing a production system over the VIMRL grammar, with the grammar as productions, to produce all possible successors when a program is given. This yields a brute-force search through the entire VIMRL program space. With the potential to add over 50 operations at each step, the branching factor of the search tree is high, leading to a quick exponential explosion in search space. Nonetheless, a full brute force search serves as a good starting point to help us understand the dynamics of program generation in VIMRL.

Figure 3: A walk-through of the execution of the program listed in cell (d) of Table 2 on the task from Figure 2 (d). White boxes show the state of the program's variables while the grey boxes show the instructions executed.

Our second approach generates successors stochastically. The probabilities of possible successor nodes are computed from hand-coded ground truth programs for a subset of the 400 training tasks of the ARC. The corpus of hand-coded programs is used to build models with potential intrinsic knowledge on how operations in VIMRL interact with each other. During search, new instructions to be added to a program are sampled from a set of all possible programs according to probability estimates computed from the ground truth programs. These probabilities are represented as a Markov chain where the probability of adding an instruction to a program is conditioned on the instruction that precedes it. We currently have a corpus of 150 ground truth programs from which we are building our models. Interestingly, several of these programs were obtained through the brute-force search described above. During stochastic search, whenever a successor is required for a node, we sample a fixed number of operations according to a pre-computed probability distribution. From these samples we generate all possible calls of these operations considering the values we have available. This means that although we may have a fixed number of operations sampled, the actual number of successors generated could be higher. For the probabilities of the operations we have experimented with a uniform distribution as the baseline, and we have also tested a maximum likelihood estimation computed from the ground truth corpus.

#### Search space pruning and optimization

Even when potential successors are stochastically sampled, the search space is still expected to explode. Search space explosion cannot be entirely prevented, but steps can be taken to ensure only meaningful programs are explored and expanded. One approach we used was to limit the depth of search to a fixed number of instructions per program. This step places a hard limit on the entire search space, making it finite.
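The Markov-chain successor model described above can be sketched as follows; the corpus below is a made-up stand-in for the hand-coded ground-truth programs, and the operation names are only illustrative.

```python
import random
from collections import Counter, defaultdict

def fit_markov_chain(ground_truth_programs):
    """Maximum-likelihood estimate of P(next op | previous op) from a corpus
    of ground-truth programs, each given as a list of operation names."""
    counts = defaultdict(Counter)
    for prog in ground_truth_programs:
        for prev, nxt in zip(["<start>"] + prog, prog):
            counts[prev][nxt] += 1
    return {prev: {op: c / sum(cnt.values()) for op, c in cnt.items()}
            for prev, cnt in counts.items()}

def sample_next_ops(chain, prev_op, all_ops, k=5, rng=random):
    """Sample k candidate operations to extend a partial program, falling
    back to a uniform draw when the previous op was never observed."""
    dist = chain.get(prev_op)
    if not dist:
        return rng.sample(all_ops, min(k, len(all_ops)))
    ops, weights = zip(*dist.items())
    return rng.choices(ops, weights=weights, k=k)

if __name__ == "__main__":
    corpus = [["find_enclosed_patches", "recolor_objects", "draw"],
              ["trim"], ["recolor"]]
    chain = fit_markov_chain(corpus)
    print(sample_next_ops(chain, "<start>", ["trim", "recolor", "attract"]))
```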
There are also cases where two programs have different sequences of instructions, yet are logically equivalent (because they produce the same final program state). It is not efficient to execute logically-equivalent programs. In order to prevent logically equivalent programs from being executed, we sort all programs to ensure that logically equivalent programs have the same sequence of instructions. To sort these programs we build a dependency tree in which the locations of instructions that consume variables are children to the locations of instructions that create variables. When we topologically sort this dependency tree, the sequence of instructions is re-ordered such that instructions are used almost as soon as they are defined. A side effect of this re-ordering is that all programs that are logically equivalent but are sequenced differently yield the same output.

## Results

For our experiments, we ran and optimized three different configurations for our search algorithms. The first was a full breadth first search with our brute-force successor generator. Given how large the brute-force space was, we set this search to work to a maximum depth of 3. Our second search configuration had a stochastic successor generator which uniformly sampled operations. This configuration was our baseline for stochastic search. And our third configuration samples from a maximum likelihood estimation of operations computed from our ground truth dataset. Both stochastic methods searched to a maximum depth of 5. See Table 3 for a summary of results. Our experiments were executed on an 18-node computer cluster where the search on each task was allowed to run up to about 700 seconds before timing out. We also participated in the global ARCathon 2022 challenge, where we tied for 4th place.

Figure 4: A list of all the instructions available for solving tasks in VIMRL. Functions listed in grey cells are high level functions, and the colored letters provide information about the core knowledge prior from which the function draws.

Table 3: Best results observed for the different forms of search traversal.

| Search | Max. Depth | Train Score | Eval. Score |
| --- | --- | --- | --- |
| Brute Force | 3 | 104/400 | 26/400 |
| BFS Uniform | 5 | 34/400 | 9/400 |
| BFS MLE | 5 | 70/400 | 17/400 |

## Related Work

ARC is a relatively new test, and with its development still in progress, not much work has gone into its verification. Currently, the only known human tests on the ARC are trials performed by the ARC's authors on human subjects during development [1], and work by Johnson et al. (2021) to measure how well humans were able to infer the underlying concepts of 40 randomly selected items on the task. Although the scope of tasks for the study by Johnson et al. (2021) was quite limited, it showed that humans had the ability to quickly generalize the concepts behind tasks on the ARC to effectively solve them. To show how well this generalization occurred, each participant was made to provide a natural language description of their strategy, which was later compared to the sequence of actions they performed while actually solving the task. In a recent work, Acquaviva et al. (2021) formulated a two-player game, where one person would solve an ARC task and give a natural language description of the solution to the other player. The second player has to solve the task using the provided description.
They found that at least 361 out of 400 tasks in the training set can be solved using the natural language description.

When it comes to artificial solvers, Kolev et al. (2020) present the Neural Abstract Reasoner (NAR), a solver that relies on neural networks, specifically Differentiable Neural Computers, to reason through items on the ARC. Although, according to the published results, the NAR scores an accuracy of 78.8% on items of size \(10\times 10\) or lower, it is not clear which section of the ARC was evaluated. Fischer et al. (2020) developed a custom domain-specific language for the task and explored the space of programs with an evolutionary algorithm. Their approach however achieved only 3% accuracy on the hidden test set. Ferre (2021) presented an approach following the minimum description length principle. The model consists of two parts: the input grid model, which converts the input grid into a parsing of the grid, and the output grid model, which converts the parsing into the output grid. They tried different versions of their model, using the minimum description length principle to guide the search. Their best performing model solved 29 tasks from the training set, with each task taking up to 30 seconds. The DreamCoder method [1, 2] presented a way of learning new abstract functions from currently solved tasks and using those functions to solve more tasks in future iterations. The authors provided a set of 5 grid-manipulation operations to the program and selected a subset of 36 problems which could be solved using these operations. In addition, in every iteration, the method gets a fixed amount of time to solve the 36 problems. Over the iterations, the DreamCoder method is able to increase its performance from 16 tasks to 22 tasks. This is enabled because the method learns new abstract functions from the solutions in the previous iterations, thereby making the search faster in the next iteration. Though the number of problems solved by the method is low, their compression technique presents a possible way to improve search speed for solving ARC problems.

## Discussion and Next Steps

Our preliminary results on the ARC task show significant promise for a system that could reason generally about tasks from the ARC. The results also seem to provide some confirmation for the fact that concepts may be different across the training and evaluation sets. The program synthesis approach with which our solver works is not significantly different from what the current best ARC solvers use. There is always a program synthesizer searching some space for possible programs. Where ours differs, however, is in the use of an imperative language that is reliant on high level functions that perform local searches.

In the next phase of our work, we intend to expand the number and breadth of programs in our ground truth dataset; streamline operations to focus on those that are neither so specific that they only work on a few tasks, nor so general that they explode the search space; and improve our search algorithms. Our current efforts have already yielded about 130 ground truth programs. Some of these programs were hand coded, while interestingly, some were discovered through our brute force searches. By expanding the ground truth dataset we have the potential to induce models that "know" the right way to sequence VIMRL programs. We also intend to spend some research effort on understanding the relationship between the visual appearance of the task grids and the functions we have successfully solved them with.
We hope this effort will allow us to prune larger search spaces by selecting only the functions that are most likely to solve a given task.
2301.10424
Enhanced tripartite interactions in spin-magnon-mechanical hybrid systems
Coherent tripartite interactions among degrees of freedom of completely different nature are instrumental for quantum information and simulation technologies, but they are generally difficult to realize and remain largely unexplored. Here, we predict a tripartite coupling mechanism in a hybrid setup comprising a single NV center and a micromagnet. We propose to realize direct and strong tripartite interactions among single NV spins, magnons and phonons via modulating the relative motion between the NV center and the micromagnet. Specifically, by introducing a parametric drive (two-phonon drive) to modulate the mechanical motion (such as the center-of-mass motion of a NV spin in diamond trapped in an electrical trap or a levitated micromagnet in a magnetic trap), we can obtain a tunable and strong spin-magnon-phonon coupling at the single quantum level, with up to two orders of magnitude enhancement for the tripartite coupling strength. This enables, for example, tripartite entanglement among solid-state spins, magnons, and mechanical motions in quantum spin-magnonics-mechanics with realistic experimental parameters. This protocol can be readily implemented with the well-developed techniques in ion traps or magnetic traps, and could pave the way for general applications in quantum simulations and information processing based on directly and strongly coupled tripartite systems.
Xin-Lei Hei, Peng-Bo Li, Xue-Feng Pan, Franco Nori
2023-01-25T06:31:27Z
http://arxiv.org/abs/2301.10424v1
# Enhanced tripartite interactions in spin-magnon-mechanical hybrid systems ###### Abstract Coherent tripartite interactions among degrees of freedom of completely different nature are instrumental for quantum information and simulation technologies, but they are generally difficult to realize and remain largely unexplored. Here, we predict a tripartite coupling mechanism in a hybrid setup comprising a single NV center and a micromagnet. We propose to realize _direct and strong tripartite interactions_ among single NV spins, magnons and phonons via modulating the relative motion between the NV center and the micromagnet. Specifically, by introducing a parametric drive (two-phonon drive) to modulate the mechanical motion (such as the center-of-mass motion of a NV spin in diamond trapped in an electrical trap or a levitated micromagnet in a magnetic trap), we can obtain a tunable and strong spin-magnon-phonon coupling at the single quantum level, with up to two orders of magnitude enhancement for the tripartite coupling strength. This enables, for example, _tripartite entanglement_ among solid-state spins, magnons, and mechanical motions in quantum spin-magnonics-mechanics with realistic experimental parameters. This protocol can be readily implemented with the well-developed techniques in ion traps or magnetic traps, and could pave the way for general applications in quantum simulations and information processing based on directly and strongly coupled tripartite systems. _Introduction.--_Coherent interactions between different quantum systems are a fundamental issue in the field of quantum physics and quantum technologies [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. The Jaynes-Cummings (JC) model [22; 23], which describes the pairwise coherent interactions between a two-level quantum system and a quantized field, is a textbook example of light-matter interactions in the quantum regime, and lays the foundations of quantum optics [24; 25; 26; 27]. With the fast development of quantum technologies, like quantum information processing [28; 29; 30; 31] and simulations [32; 33], the exploration of interactions beyond the pairwise interactions of the JC model in quantum optics is increasingly appealing, which could enable performing more complex tasks, like generating multipartite entanglement. However, compared to the bipartite interactions of the JC model, the realization of tripartite interactions among completely different degrees of freedom is an outstanding challenge and remains largely unexplored. Recently, much attention has been paid to studying hybrid quantum systems based on nitrogen-vacancy (NV) centers in diamond [34; 35; 36; 37; 38; 39; 40; 41; 42; 43], magnons in microscopic magnets [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60], and mechanical motions [60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73]. Recent theoretical and experimental advances have revealed the coupling of NV spins to phonons in nanomechanical oscillators [69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94], in addition to the interactions between magnons and phonons [57; 58; 59; 60] or photons [95; 96; 97; 98; 99; 100; 101; 102; 103; 104]. 
However, previous studies mostly focus on pairwise interactions between completely different physical systems to construct hybrid quantum setups; it seems to us that the tripartite coupling among spins, magnons as well as mechanical motions, which is fundamentally different from spin-magnon, spin-phonon, and magnon-phonon couplings, is still lacking. In this work, we theoretically show how it is possible to achieve the tripartite interaction among single spins, magnons, and phonons in a hybrid setup comprising a single NV center in diamond and a micromagnet. We show that when the relative motion between the spin and the micromagnet is modulated, it will change the magnetic field of the magnons felt by the nearby spin, which thus leads to direct coherent couplings among these three degrees of freedom at the single quantum level. To control and enhance this tripartite coupling, we propose to make use of a parametric drive to amplify the mechanical zero-point fluctuations of the vibration mode [105; 106; 107; 108; 109; 110; 111; 112], which can exponentially enhance the spin-magnon-phonon coupling. Specifically, here the mechanical motion could be either the center of mass motion of a NV spin in diamond or that of a levitated micromagnet. For the former, it can be implemented in a setup with a nano-diamond sphere containing a single NV spin in a Paul trap [113; 114; 115; 116; 117; 118; 119] or a diamond cantilever with embedded NV centers, while for the latter it can be realized with a levitated micromagnet (such as a Yttrium iron garnet (YIG) sphere) in a magnetic trap [120; 121; 122; 74]. For both cases, we only need a time-dependent electrical driving to manipulate the effective spring constant of the harmonic motion, thus remarkably simplifying the experimental implementation with only minor modifications of existing experimental setups. But our proposal differs fundamentally from these experimental works with a markedly different kind of spin-magnon-phonon tripartite interaction. As intriguing applications, we also show the appearance and enlargement of the tripartite entanglement via the enhanced interaction of the spin-magnon-phonon coupling system, which could find useful applications in modern quantum technologies. _The model.--_As illustrated in Fig. 1, we consider a hybrid system comprising a single NV center in diamond and a micromagnet of radius \(R\) (such as a YIG sphere), but with three degrees of freedom, including the single NV spin (\(\hat{\sigma}_{i}\)), the magnon mode (\(\hat{a}\)), and the mechanical mode (\(\hat{b}\)). Here, the mechanical mode is the relative motion between the spin and the micromagnet, which is subject to a two-phonon (parametric) drive \(\Omega(t)=\Omega_{\rm p}\cos(2\omega_{\rm p}t)\). The spin operators are defined as the Pauli operators \(\hat{\sigma}_{i}\) (with \(i=x,y,z\)) in the two-level-energy basis \(\{|g\rangle,|e\rangle\}\). The interaction among the NV spin, the mechanical mode, and the magnon can be described by the Hamiltonian (let \(\hbar=1\)) \[\hat{H}_{\rm Trip}=\lambda(\hat{b}+\hat{b}^{\dagger})(\hat{a}^{\dagger}\hat{ \sigma}^{-}+\hat{a}\hat{\sigma}^{+}), \tag{1}\] with tripartite coupling strength \(\lambda\). Here, the spin operators satisfy \(\hat{\sigma}^{\pm}=(\hat{\sigma}_{x}\pm i\hat{\sigma}_{y})/2\). We then present more details regarding the above interactions. The tripartite spin-magnon-phonon coupling results from the magnetic coupling between the spin and the magnon mode of the YIG sphere. 
First, we focus on the Kittel mode that is supported by the magnetic microsphere [123]. For this mode, all spins in the micromagnet precess in phase and with the same amplitude [47]. The free Hamiltonian of the magnon can be \(\hat{H}_{\rm K}=\omega_{\rm K}\hat{a}^{\dagger}\hat{a}\). Here, \(\omega_{\rm K}=|\gamma|B_{z,{\rm K}}\), with a large external magnetic field \(B_{z,{\rm K}}\) resulting in saturation magnetization of the spherical magnet and the gyromagnetic ratio \(\gamma\). Then, a quantized magnetic field \(\vec{\hat{B}}\) is generated by the Kittel mode. The nearby NV center as a magnetic dipole, with the free Hamiltonian \(\hat{H}_{NV}=\omega_{NV}\hat{\sigma}_{z}/2\), experiences the magnetic field of the magnons. The interaction can be naturally described as the Hamiltonian \(\hat{H}_{\rm int}=-(g_{\rm e}\mu_{\rm B}/\hbar)\vec{\hat{B}}\cdot\vec{\hat{S}}\), with the Lande factor \(g_{\rm e}\), Bohr magneton \(\mu_{\rm B}\) and spin operators \(\hat{\vec{S}}=(\hbar/2)(\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z})\). To be much clearer, the interaction Hamiltonian can be written as \(\hat{H}_{\rm int}=g(r)(\hat{a}^{\dagger}\hat{\sigma}^{-}+\hat{a}\hat{\sigma}^ {+})\), with the coupling strength \(g(r)\) dependent on the distance between the NV spin and the micromagnet \(r=r_{0}+z\). Here, \(z\) (\(r_{0}\)) denotes the modulated (static) part of the distance relative to the equilibrium, respectively. Then, by quantizing the modulated motion \(z\), it is possible to introduce a mechanical mode with the vibration frequency \(\omega_{m}\). Up to first order on the quantized coordinate \(\hat{z}=z_{\rm zpf}(\hat{b}+\hat{b}^{\dagger})\), with the zero-point fluctuation \(z_{\rm zpf}=\sqrt{\hbar/(2M\omega_{\rm m})}\), the tripartite interaction appears with the coupling rate [123] \[\lambda=\frac{3g_{\rm e}\mu_{0}\mu_{B}}{8\pi\tau_{0}^{4}}\sqrt{\frac{|\gamma| M_{s}V}{M\omega_{\rm m}}}, \tag{2}\] where \(\mu_{0}\) is the permeability of vacuum, \(M_{\rm s}\) is the saturation magnetization, and \(V\) is the volume of YIG sphere. For the mechanical mode, we propose three probable schemes that can generate the relative motion between the NV center and the YIG sphere: an NV center in a trapped diamond nanoparticle or embedded in a cantilever [66; 68] coupled to the magnon mode of a YIG sphere, and a single NV spin interacting with the magnon mode of a levitated micromagnet [122; 137], as shown in Fig. 1(b). Here, we focus on the setup where the diamond nanoparticle containing a single NV center is trapped in a Paul trap [top part of Fig. 1(b)]. An additional oscillating electrical potential [111; 138] supplies the approach to modulate and drive the center-of-mass motion of the trapped diamond particle, which gives rise to an added potential \(\hat{V}_{\rm dr}=-2qU_{\rm T}(\hat{z}/d_{\rm T})^{2}\cos(2\omega_{\rm p}t)\), with the diamond particle charge \(q\), the voltage amplitude \(U_{\rm T}\), and the characteristic trap dimension \(d_{\rm T}\). Hence, the center-of-mass motion of the diamond particle can be described by the Hamiltonian \(\hat{H}_{\rm m}=\frac{\hat{p}_{z}^{2}}{2M}+\frac{1}{2}M\omega_{\rm m}^{2}\hat{z }^{2}+\frac{1}{2}k_{\rm e}(t)\hat{z}^{2}\), with momentum operator \(\hat{p}_{z}\). Here, the effective mass \(M\) of the mechanical mode is the mass of the diamond particle, while the frequency \(\omega_{\rm m}\) is relevant to the electrical trap and the charge to mass ratio of the diamond particle. 
The rotation mode of the trapped diamond particle can be safely neglected, since its frequency is vanished with a spherical diamond [139]. The last term is the parametric drive with the time-dependent tunable stiffness coefficient [123] \[k_{\rm e}(t)=-\frac{4qU_{\rm T}}{d_{\rm T}^{2}}\cos(2\omega_{\rm p}t). \tag{3}\] Employing the transformation \(\hat{b}=\hat{z}/(2z_{\rm zpf})+iz_{\rm zpf}\hat{p}_{z}/\hbar\), the Hamiltonian of the mechanical mode can be written as \[\hat{H}_{\rm m}=\omega_{\rm m}\hat{b}^{\dagger}\hat{b}-\Omega_{\rm p}\cos(2 \omega_{\rm p}t)(\hat{b}+\hat{b}^{\dagger})^{2}, \tag{4}\] Figure 1: (color online). (a) Schematic of the physical model. The spin qubit (red circle), the phonon mode (cyan ellipse), and the Kittel magnon mode (green circle) are simultaneously coupled, with the enhanced coupling rate \(\lambda e^{r}\) (blue trichotomous arrow) via a two-phonon driving (blue wavy arrow). (b) Schematic illustration of this proposal: a diamond particle with single NV spins in an electrical trap (top); an NV center embedded in a cantilever (middle); a YIG microsphere levitated in a magnetic trap (bottom). with the parametric-drive amplitude \(\Omega_{\rm p}=2qU_{\rm T}z_{\rm zpf}^{2}/\hbar\theta_{\rm T}^{2}\). As alternatives in the middle and bottom parts of Fig. 1(b), similar results can be obtained for both cases [123]. In a suitable rotation framework, dropping the high-frequency oscillation and the constant terms as well, the total Hamiltonian of the system can be obtained as \[\hat{H}_{\rm Tot}= \delta_{\rm K}\hat{a}^{\dagger}\hat{a}+\delta_{\rm m}\hat{b}^{ \dagger}\hat{b}+\frac{\delta_{\rm NV}}{2}\hat{\sigma}_{z}-\frac{\Omega_{\rm p }}{2}(\hat{b}^{\dagger 2}+\hat{b}^{2})\] \[+\lambda(\hat{b}+\hat{b}^{\dagger})(\hat{a}^{\dagger}\hat{\sigma }^{-}+\hat{a}\hat{\sigma}^{+})+\hat{H}_{\rm JC}, \tag{5}\] with the detunings \(\delta_{\rm K}=\omega_{\rm K}-\omega_{\rm p}\), \(\delta_{\rm m}=\omega_{\rm m}-\omega_{\rm p}\), and \(\delta_{\rm NV}=\omega_{\rm NV}-\omega_{\rm p}\). Here, we have included the spin-magnon coupling term \(\hat{H}_{\rm JC}=g_{0}(\hat{a}^{\dagger}\hat{\sigma}^{-}+\hat{a}\hat{\sigma} ^{+})\), with the coupling rate \(g_{0}=r_{0}\lambda/(3z_{\rm zpf})\).. _Enhanced tripartite interactions.--_For the Hamiltonian (5), we can apply the unitary transformation \(\hat{U}_{\rm S}(r)=\exp[r(\hat{b}^{2}-\hat{b}^{\dagger 2})/2]\) to diagonalize the center-of-mass mechanical mode. Here, the squeezing parameter \(r\) is defined as \(\tanh 2r=\Omega_{\rm p}/\delta_{\rm m}\). In this squeezed frame, the total Hamiltonian can be written as \[\hat{H}_{\rm Tot}^{\rm S}= \delta_{\rm K}\hat{a}^{\dagger}\hat{a}+\Delta_{\rm m}\hat{b}^{ \dagger}\hat{b}+\frac{\delta_{\rm NV}}{2}\hat{\sigma}_{z}\] \[+\lambda_{\rm eff}(\hat{b}+\hat{b}^{\dagger})(\hat{a}^{\dagger} \hat{\sigma}^{-}+\hat{a}\hat{\sigma}^{+})+\hat{H}_{\rm JC}, \tag{6}\] where \(\Delta_{\rm m}=\delta_{\rm m}/\cosh 2r\) and \(\lambda_{\rm eff}=\lambda e^{r}\). The eigenstates of the free Hmiltionian \(\{|g,m,k\rangle,|e,m\pm 1,k-1\rangle\}\) can be applied to clarify the process of the tripartite interaction. Here, \(g\) (\(e\)) denotes the \(|0\rangle\) (\(|+1\rangle\)) state of the NV spin. The particle numbers of the phonons and magnons are denoted by \(\{m,m\pm 1\}\) and \(\{k,k-1\}\). 
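For readers who want to explore the squeezed-frame Hamiltonian of Eq. (6) numerically, the following QuTiP sketch builds it and evolves the system from the state with an excited spin and empty bosonic modes. It is only a rough illustration under stated assumptions: all detunings are set to zero (a crude resonant-frame simplification rather than the red/blue-detuning conditions discussed below), the rates are given in units of the bare coupling \(\lambda\), and the values are illustrative rather than the exact parameters behind Fig. 3.

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor, sigmaz, sigmam, mesolve

# Rates in units of the bare tripartite coupling lam = 1 (illustrative only).
lam, r = 1.0, 3.0
lam_eff = lam * np.exp(r)                      # enhanced coupling lambda*e^r
g0 = 30 * lam                                  # spin-magnon (JC) coupling
gamma_K, Gamma_m, gamma_s = 5 * lam, 1.1 * lam, 0.05 * lam
N = 4                                          # Fock cutoff (magnon, phonon)

a = tensor(destroy(N), qeye(N), qeye(2))       # Kittel magnon mode
b = tensor(qeye(N), destroy(N), qeye(2))       # phonon mode (squeezed frame)
sm = tensor(qeye(N), qeye(N), sigmam())        # NV lowering operator sigma^-
sz = tensor(qeye(N), qeye(N), sigmaz())

# Detunings dropped; jc is the Jaynes-Cummings exchange term.
jc = a.dag() * sm + a * sm.dag()
H = lam_eff * (b + b.dag()) * jc + g0 * jc

# Initial state with excited spin: QuTiP's basis(2, 0) is the sigma_z = +1 state.
psi0 = tensor(basis(N, 0), basis(N, 0), basis(2, 0))
c_ops = [np.sqrt(gamma_K) * a,                 # magnon decay
         np.sqrt(Gamma_m) * b,                 # amplified phonon decay
         np.sqrt(gamma_s / 2) * sz]            # spin pure dephasing
tlist = np.linspace(0, 5 / lam_eff, 201)
res = mesolve(H, psi0, tlist, c_ops,
              e_ops=[a.dag() * a, b.dag() * b, sm.dag() * sm])
print(res.expect[2][-1])                       # residual spin excitation
```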
The condition for red (blue) detuning, \(\delta_{\rm K}\sim\delta_{\rm NV}-\Delta_{\rm m}\) (\(\delta_{\rm K}\sim\delta_{\rm NV}+\Delta_{\rm m}\)), allows for the interaction \(\hat{a}\hat{b}\hat{\sigma}^{+}+H.c.\) (\(\hat{a}^{\dagger}\hat{b}\hat{\sigma}^{-}+H.c.\)) in Eq. (1) with the transition between \(\{|g,m,k\rangle\) and \(|e,m-1,k-1\rangle\}\) (\(|e,m+1,k-1\rangle\)), which describes the spin and phonon annihilation upon magnon excitation (the spin annihilation with magnon and phonon excitation) and the inverse process. Remarkably, we find that the tripartite coupling strength \(\lambda_{\rm eff}\) can be exponentially enhanced due to the amplification of the mechanical fluctuation caused by the phonon squeezing [Fig. 2(a)]. For the scheme of the trapped diamond nanoparticle, the tripartite interaction can have the same magnitude as the bipartite interaction. To make this clear, we define the ratio \(\lambda_{\rm eff}/g_{0}=3e^{r}z_{\rm zpf}/(d+R+R_{s})\) with the diamond particle radius \(R_{\rm s}\) and the surface spacing \(d\). With a proper choice of \(r\) and \(R\), this ratio exceeds 1, indicating the coexistence of the two different interactions [see Fig. 2(b) and 2(c)]. Naturally, as shown in Fig. 2(d), the effective tripartite coupling strength \(\lambda_{\rm eff}\) exponentially increases with the squeezing parameter \(r\), and is inversely proportional to \(R^{5/2}\). We now consider this tripartite coupling system in a realistic situation. Here, we take into account the dephasing of the NV center spin (\(\gamma_{\rm s}\)), the decay of the Kittel mode (\(\gamma_{\rm K}\)), and the effective mechanical phonon (\(\Gamma_{\rm m}\)). Though the effective mechanical decay rate is exponentially enlarged as well, one can define a generalized cooperativity \(\mathcal{C}=\lambda_{\rm eff}^{3}/(\Gamma_{\rm m}\gamma_{\rm K}\gamma_{\rm s})\) to quantify the coupling regime. As shown in Fig. 2(e), the system can reach the strong coupling regime (\(\mathcal{C}>1\)) with a large range of \(R\) and \(r\). The result shows that increasing \(r\) and decreasing \(R\) enable a large enhancement of the cooperativity. Note that the results displayed in Fig. 2 are obtained with the surface spacing \(d=5\) nm and the diamond radius \(R_{s}=10\) nm. To give more insight into this proposal, we numerically simulate the time-dependent occupation evolution of the spin qubit, the Kittel magnon, and the mechanical motion, as shown in the Fig. 3. Fig. 3 (a) shows, without mechanical amplification, the population of the mechanical mode can be neglected, and the dominant term is the spin-magnon coupling. As the center-of-mass motion is modulated, the tripartite spin-magnon-phonon interaction needs to be considered. In the intermediate squeezed regime, e.g. \(r=3\), tripartite and dual interactions coexist, as shown in Fig. 3(b) and (c), with different decays. The direct tripartite coupling dominates the pairwise interaction when the squeezing parameter is large enough, despite the large decay of the magnon, as illustrated in Fig. 3(d). Therefore, by properly choosing the experimental parameters, strong spin-magnon-phonon coupling at the single quantum level can be obtained. _Applications.--_We now consider generating tripartite Figure 2: (color online). (a) Tripartite coupling enhancement \(\lambda_{\rm eff}/\lambda\) versus the squeezing parameter \(r\). (b) The ratio \(\lambda_{\rm eff}/g_{0}\) versus \(r\) with different radius of the YIG sphere. 
(c) The ratio \(\lambda_{\rm eff}/g_{0}\) versus \(R\) for different \(r\). (d) and (e) Contour maps of \(\lambda_{\rm eff}\) and the tripartite cooperativity \(\mathcal{C}\) versus \(R\) and \(r\). The dashed line in (d) indicates the value of 1 MHz. The dashed line in (e) indicates the value of 1. entanglement among the spin qubit, the Kittel magnon and the mechanical phonon via the enhanced tripartite coupling. Here, we employ the measure of genuine tripartite entanglement, _minimum residual contangle_ ranging from 0 to 1, defined as \(E_{l}^{A|B|C}=\min_{(A,B,C)}[E_{l}^{A|(BC)}-E_{l}^{A|B}-E_{l}^{A|C}]\), where (A,B,C) denotes all the permutations of the tripartite system [140]. The contangles \(\{E_{l}^{A|(BC)},E_{l}^{A|B},E_{l}^{A|C}\}\) are defined as the quadratic logarithm of \(\{||\rho^{T_{A}}||,||\rho_{AB}^{T_{A}}||,||\rho_{AC}^{T_{A}}||\}\) with the trace norm (\(||\cdot||\)), partial transpose (superscript), and partial trace (subscript). We consider that the whole system starts from the state \(|e,0,0\rangle\) under the Hamiltonian Eq. (6), with different squeezing parameters \(r=\{0,1.5,3,4.5\}\), as shown in Fig. 4(a). The minimum residual contangle vanishes without mechanical amplification, implying that the tripartite interaction is insignificant. When the center-of-mass motion is modulated by an applied electrical potential, the quality of the produced tripartite entangled state and the speed with which it is generated can be greatly improved. Using another entanglement measure three-tangle extended from the concurrence [141], we can obtain the same result as the one of minimum residual contangle[Fig. 4(b)]. The true tripartite entanglement between the degrees of freedom of the system can be widely used to execute tasks in the field of quantum information, such as quantum teleportation [142; 143], dense coding [144], quantum computation [145], and quantum secure sharing [146; 147]. We proceed to discuss how to detect the tripartite entanglement. The above measure of genuine tripartite entanglement is calculated from the density matrix of the whole system, which indicates that a possible approach can be the measurement of the density matrix using quantum state tomography [148; 149; 150] or direct measurement [151; 152; 153]. The readout of magnons can be realized by single-shot detection with a superconducting qubit [154; 50]. For the NV center, the state can be detected by cycling optical transition [155], or photoelectrical detection of magnetic resonance [156]. The motion of nanodiamond particles can be detected by optical detection [119]. _Experimental feasibility._--To examine the feasibility of this proposal for experiments, the center-of-mass vibration of a diamond particle can be obtained by levitating it in a quadratic potential. The paul trap [113; 114; 115; 116; 117; 118; 119; 157] is a proper electric potential to realize this scheme. At the equilibrium location, the electric field can operate as a force to oppose gravity. The levitated regime has been accomplished experimentally with a large mechanical factor \(Q\sim 10^{8}\)[158; 58]. In this setup, for the spin qubit, we select the transition between the state \(|0\rangle\) and \(|+1\rangle\) in the ground states of NV center with frequency \(\omega_{\rm NV}=D_{0}+|\gamma|B_{z,s}\). Here, \(D_{0}/2\pi=2.87\) GHz is the electronic zero-field splitting. 
Applying variable external static magnetic field \(B_{z,s}\) and \(B_{z,{\rm K}}\), hence, the detunings of the spin (\(\delta_{\rm NV}\)) and the Kittel mode (\(\delta_{\rm K}\)) can be tunable at the order of magnitudes of 10 GHz. To enlarge the direct tripartite interaction, we assume the driving amplitude \(\Omega_{\rm p}/2\pi\sim\omega_{\rm p}/2\pi\sim 200\) MHz with the voltage amplitude \(U_{\rm T}=12.6\) V and the characteristic trap dimension \(d_{\rm T}=100\)\(\mu\)m [123]. Here, we estimate the charge to mass ratios on the order of mC/kg [116]. At the same time, we estimate the mechanical frequency \(\omega_{\rm m}/2\pi\sim 1\) kHz [58]. Then the squeezing parameter satisfies \(r\in[0,5]\) to allow for the effective tripartite coupling \(\lambda_{\rm eff}\sim 100\lambda\). Given that \(r=4.5\), the enhanced coupling strength is \(\lambda_{\rm eff}/2\pi\sim 1.7\) MHz. Note that the frequencies \(\omega_{\rm K},\omega_{\rm NV}\) are on the order of 10 GHz, far larger than the mechanical frequency \(\omega_{\rm m}\). At low temperature \(T\sim 10\) mK, the thermal magnon number can be ignored with \(\bar{n}_{\rm K}\ll 1\) for the case of \(\omega_{\rm K}/2\pi\sim 10\) GHz. For practical Figure 3: (color online). Quantum dynamics of the NV spin, the Kittel magnon, and the center-of-mass motion (a) without mechanical amplification (\(r=0\)), (b) and (c) with \(r=3\), and (d) with \(r=4.5\). The magnon decay rate is \(\gamma_{\rm K}\sim 5\lambda\) in (a) and (b), while it is \(\gamma_{\rm K}\sim 50\lambda\) in (c) and (d). The other parameters are \(g_{0}\sim 30\lambda\), \(\gamma_{\rm s}\sim 0.05\lambda\), \(\Gamma_{\rm m}\sim 1.1\lambda\), \(R_{s}=10\) nm, \(R=50\) nm, and \(d=5\) nm. The results are obtained with the red detuning \(\delta_{\rm K}\sim\delta_{\rm NV}-\Delta_{\rm m}\) and initial state \(|e,0,0\rangle\). considerations with saturation magnetization, we assume the decay of Kittel mode as \(\gamma_{\text{K}}/2\pi\sim 1\) MHz [58; 99]. For the mechanical mode, the thermal decay rate is \(\gamma_{\text{th}}/2\pi=k_{B}T/(2\pi\hbar Q)\sim 2\) Hz, which comes from the heating due to collisions with gas molecules [159]. Here, the gas damping satisfies \(\gamma_{\text{gas}}=\omega_{\text{m}}/Q\) with a ultra-low pressure \(P_{\text{gas}}\sim 10^{-9}\) mBar [123; 157]. The mechanical amplification also leads to a magnification of the phonon decay by \(e^{2r}\). The effective decay of mechanical mode can be obtained as \(\Gamma_{\text{m}}/2\pi=e^{2r}\gamma_{\text{th}}/2\pi\sim 21\) kHz. For a single NV center spin in diamond, the dephasing rate is about \(\gamma_{\text{s}}/2\pi\sim 1\) kHz [38]. Therefore, we can naturally estimate the tripartite cooperativity \(\mathcal{C}\sim 10^{5}\gg 1\), which definitely indicates the strong coupling regime. _Conclusion_.--In this work, we propose an experimentally feasible method for realizing direct and strong tripartite interactions among single NV spins, the Kittel magnon mode, and the phonon by introducing the relative motion between a single NV center and a nearby micromagnet. We show that the direct tripartite coupling strength can be exponentially enhanced by up to two orders of magnitude via modulating the mechanical motion via parametric amplification. We have shown the presence of tripartite entanglement via the enhanced spin-magnon-phonon coupling, and the possibility to actively control the tripartite coupling for realistic experimental parameters. 
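A quick arithmetic check of the quoted cooperativity, using only the rounded rates stated above, confirms the order of magnitude.

```python
# C = lambda_eff^3 / (Gamma_m * gamma_K * gamma_s), with all rates given as
# omega/2pi; the 2*pi factors cancel (three in numerator, three in denominator).
lam_eff = 1.7e6   # Hz, enhanced tripartite coupling at r = 4.5
Gamma_m = 21e3    # Hz, amplified mechanical decay
gamma_K = 1e6     # Hz, Kittel-mode decay
gamma_s = 1e3     # Hz, NV dephasing

C = lam_eff**3 / (Gamma_m * gamma_K * gamma_s)
print(f"C ~ {C:.1e}")   # ~2.3e5, consistent with the quoted C ~ 1e5 >> 1
```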
_Conclusion_.--In this work, we propose an experimentally feasible method for realizing direct and strong tripartite interactions among a single NV spin, the Kittel magnon mode, and the mechanical phonon by introducing relative motion between a single NV center and a nearby micromagnet. We show that the direct tripartite coupling strength can be exponentially enhanced, by up to two orders of magnitude, by modulating the mechanical motion via parametric amplification. We have demonstrated the presence of tripartite entanglement via the enhanced spin-magnon-phonon coupling, and the possibility of actively controlling the tripartite coupling for realistic experimental parameters. This provides a promising platform for quantum science and technology based on spin-magnon-phonon tripartite strongly coupled systems.

P.B.L. is supported by the National Natural Science Foundation of China under Grant No. 92065105 and the Natural Science Basic Research Program of Shaanxi (Program No. 2020JC-02). F.N. is supported in part by Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP) and Moonshot R&D Grant No. JPMJMS2061], the Japan Society for the Promotion of Science (JSPS) [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134], the Army Research Office (ARO) (Grant No. W911NF-18-1-0358), the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute (FQXi) (via Grant No. FQXi-IAF19-06). The simulations are obtained using QuTiP [160; 161].
2306.05930
Positivity certificates for linear recurrences
We consider linear recurrences with polynomial coefficients of Poincaré type and with a unique simple dominant eigenvalue. We give an algorithm that proves or disproves positivity of solutions provided the initial conditions satisfy a precisely defined genericity condition. For positive sequences, the algorithm produces a certificate of positivity that is a data-structure for a proof by induction. This induction works by showing that an explicitly computed cone is contracted by the iteration of the recurrence.
Alaa Ibrahim, Bruno Salvy
2023-06-09T14:44:38Z
http://arxiv.org/abs/2306.05930v2
# Positivity certificates for linear recurrences ###### Abstract. We show that for solutions of linear recurrences with polynomial coefficients of Poincare type and with a unique simple dominant eigenvalue, positivity reduces to deciding the genericity of initial conditions in a precisely defined way. We give an algorithm that produces a certificate of positivity that is a data-structure for a proof by induction. This induction works by showing that an explicitly computed cone is contracted by the iteration of the recurrence. ## 1. Introduction A sequence \((u_{n})_{n\in\mathbb{N}}\) of real numbers is called _P-finite_ if it satisfies a linear recurrence \[p_{d}(n)u_{n+d}=p_{d-1}(n)u_{n+d-1}+\cdots+p_{0}(n)u_{n},\qquad n\in\mathbb{N}, \tag{1}\] with coefficients \(p_{i}\in\mathbb{R}[n]\)1. When the coefficients \(p_{i}\) are constants in \(\mathbb{R}\), the sequence is called _C-finite_. If \(p_{d}\neq 0\), the _order_ of the relation (1) is \(d\). If \(0\not\in p_{d}(\mathbb{N})\), then the sequence is completely determined by the recurrence and initial conditions \((u_{0},\ldots,u_{d-1})\). We make this assumption in the rest of this article. (2) Footnote 1: Other names for such sequences are _P-recursive_[10] and _holonomic_. The name P-finite was introduced by Zeilberger [27]. It is more consistent with the use of ‘C-finite’ for constant coefficients and ‘D-finite’ for linear differential equations. It is also the choice made in recent works by Kauers and Pillwein [12, 13]. Footnote 2: When it does not hold, the sequence can be defined with extra initial conditions \(u_{i+d}\) for \(i\) s.t. \(p_{d}(i)=0\). For positivity questions, dealing with \(k:=\max(i\in\mathbb{N}\mid p_{d}(i)=0)\) initial values of the sequence separately and considering the recurrence satisfied by \((u_{n-k})_{n\in\mathbb{N}}\) reduces to the situation when \(0\not\in p_{d}(\mathbb{N})\). Given the polynomials \(p_{i}\) and initial conditions, the _positivity problem_ is to decide whether \(u_{n}\geq 0\) for all \(n\in\mathbb{N}\) (3). For instance, the rational sequence \[s_{n}=\sum_{k=0}^{n}{(-27)^{n-k}2^{2k-n}\frac{(3k)!}{k!^{3}}\binom{k}{n-k}} \tag{2}\] is not obviously positive. One way of proving its positivity is to use an algorithm for recurrences of order \(2\), due to Kauers and Pillwein [13], directly on the recurrence \[2(n+2)^{2}s_{n+2}=(81n^{2}+243n+186)s_{n+1}-81(3n+2)(3n+4)s_{n},\quad s_{0}=1, s_{1}=12\] that can be computed by Zeilberger's algorithm [14]. (Another proof was given by Straub and Zudilin using hypergeometric identities [11].) In this work, we give an algorithm proving positivity of a large class of sequences of arbitrary order, including those dealt with by the algorithm of Kauers and Pillwein. P-finite and C-finite sequences are closed under addition, product and Cauchy product \(((u_{n})_{n\in\mathbb{N}},(v_{n})_{n\in\mathbb{N}})\mapsto(\sum_{k=0}^{n}u_{k}v_{ n-k})_{n\in\mathbb{N}}\). Also, for any \(\ell\in\mathbb{N}_{>0}\) and \(q\in\{0,\ldots,\ell-1\}\), the subsequence \((u_{\ell n+q})_{n\in\mathbb{N}}\) satisfies a linear recurrence (of order at most \(d\)). These operations are all effective, so that recurrences can be computed for these sequences given recurrences for the input [10]. These closure properties allow to reduce other problems to that of positivity. 
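As a quick illustration (ours, not the paper's), the closed form of eq. (2) and the order-2 recurrence given above for \(s_{n}\) can be cross-checked numerically, confirming the initial conditions \(s_{0}=1\), \(s_{1}=12\) and the positivity of the first terms; a minimal Python sketch:

```python
# Sanity check (illustrative): compare the closed form of s_n in eq. (2)
# with the order-2 recurrence computed by Zeilberger's algorithm,
# and verify positivity of the first terms.
from math import comb, factorial

def s_closed(n):
    # Terms with n - k > k vanish (comb(k, n-k) = 0), so 2k - n >= 0 below
    # and every term is an integer.
    return sum((-27)**(n - k) * 2**(2*k - n) * factorial(3*k) // factorial(k)**3
               * comb(k, n - k)
               for k in range((n + 1)//2, n + 1))

def s_rec(N):
    s = [1, 12]
    for n in range(N - 1):
        # 2(n+2)^2 s_{n+2} = (81n^2+243n+186) s_{n+1} - 81(3n+2)(3n+4) s_n
        s.append(((81*n**2 + 243*n + 186)*s[n + 1]
                  - 81*(3*n + 2)*(3*n + 4)*s[n]) // (2*(n + 2)**2))
    return s

seq = s_rec(20)
assert all(s_closed(n) == seq[n] for n in range(21))
assert all(t > 0 for t in seq)
print(seq[:5])   # [1, 12, 198, 3720, 75690]
```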
_Example 1_.: If \((u_{n})_{n\in\mathbb{N}}\) is a C-finite sequence of rational numbers and \(m\) is the lcm of the denominators of the initial conditions \(u_{0},\ldots,u_{d-1}\) and of the coefficients \(p_{0},\ldots,p_{d-1}\), then the sequence defined by \(v_{n}=m^{n}u_{n}\) is a C-finite sequence of integers, \((w_{n})_{n\in\mathbb{N}}=(v_{n}^{2}-1)_{n\in\mathbb{N}}\) is another C-finite sequence of integers, which is positive if and only if \(u_{n}\neq 0\) for all \(n\). This reduction of Skolem's problem, which is notoriously difficult, to positivity, shows that positivity is also likely to be hard [1, 11]. _Example 2_.: Deciding whether \(u_{n}\geq v_{n}\) for all \(n\in\mathbb{N}\) reduces to the positivity of \((u_{n}-v_{n})_{n\in\mathbb{N}}\). Similarly, deciding that \((u_{n})_{n\in\mathbb{N}}\) is increasing (\(u_{n+1}\geq u_{n}\) for all \(n\)), or convex (\(u_{n+1}+u_{n-1}\geq 2u_{n}\)) or log-convex (\(u_{n+1}u_{n-1}\geq u_{n}^{2}\)) reduce to the positivity problem, by constructing recurrences for these new sequences. For applications of the positivity problem of C-finite sequences, we refer to the numerous references in the work of Ouaknine and Worrell [11]. Motivations for studying positivity in the more general context of P-finite sequences also come from various areas of mathematics and its applications, including number theory [13], combinatorics [14], special function theory [21], or biology [15]. In computer science, the verification of loops allowing multiplication by the loop counter leads to P-finite sequences [1, 16]. Such recurrences also occur in the floating-point error analysis of simple loops obtained by discretization of linear differential equations [1]. The positivity of sequences also plays a role for the numerical stability of the computation of the sum of convergent power series [12]. **Previous works.** For C-finite sequences of rational numbers, Ouaknine and Worrell have shown decidability of positivity for recurrences of order up to \(5\), and a relation between the decidability in higher order and the computability of the homogeneous Diophantine approximation of a specific set of transcendental numbers, a problem related to difficult questions in analytic number theory [11]. We refer to their work for earlier references. When the characteristic polynomial of the sequence does not have multiple roots, this extends to order up to \(9\). For reversible recurrences of integers (reversible means that unrolling the recurrence backwards produces only integers for negative indices), decidability of positivity is known for order up to \(11\)[13, 14] and this goes up to \(17\) if the recurrence is both reversible and with square-free characteristic polynomial [14]. Closer to our work, for recurrences having one dominant eigenvalue, decidability is proven for arbitrary order [11]. This is the property we use for P-finite sequences. For P-finite sequences of order \(1\), positivity is easy. For order \(2\), it is reducible to the problem of _minimality_[14], itself a special case of _genericity of initial conditions_ that appears in our work. Another approach to the positivity of P-finite sequences starts with the work of Gerhold and Kauers [14], who suggest to check for increasingly large \(k\) whether \[u_{n}\geq 0\wedge u_{n+1}\geq 0\wedge\cdots\wedge u_{n+k}\geq 0\Rightarrow u_{n+k+1 }\geq 0.\] Using the recurrence, this can be rewritten as a decision problem in the existential theory of the reals. 
Gerhold and Kauers use cylindrical algebraic decomposition [10] for this; other approaches are possible [1, ch. 13]. They obtained several successes with this method, notably an automatic proof of Turán's inequality \[P_{n}(x)^{2}-P_{n-1}(x)P_{n+1}(x)\geq 0,\qquad x\in[-1,1]\] for the Legendre polynomials, which involves a parameter [14]. But termination is not guaranteed in general and sufficient conditions for the success of this method are unclear [13]. Kauers and Pillwein focused on the application of this method for P-finite sequences [13]. They added the idea of looking for a proof of the stronger inequality \(u_{n+1}\geq\mu u_{n}\) for a well-chosen \(\mu>0\). They showed that this works for order \(2\) with generic initial conditions. They isolated a class of recurrences of order \(3\) for which this approach also works. Pillwein [11] explored variants of this method and extended the class of recurrences that can be handled with this type of method. Recently, Pei, Wang, and Wang [21] revisited the case of order \(2\) and gave a simple way to compute \(\mu\) as above, and \(N\) such that \(u_{n+1}\geq\mu u_{n}>0\) for \(n\geq N\). ### Contributions Our starting point is a result of Friedland on the convergence of products of the successive elements of a convergent sequence of matrices [12]. We make explicit the effective aspects of some of his proofs and apply them to questions of positivity. We deal with P-finite sequences of Poincare type, which means that after dividing by the leading coefficient and taking the limit \(n\to\infty\), each of the coefficients has a finite limit. (We show in section 2.2 how to reduce to this case.) Moreover, we demand that the characteristic polynomial of this new recurrence has only one root of maximum modulus and that it is a simple root. Then, we show that, except for a hyperplane of initial conditions, positivity can be proved by an induction that proves \(d(d-1)\) linear inequalities simultaneously. Note that for order \(d=2\), \(d(d-1)=2\) is also the number of inequalities used by Kauers and Pillwein. These inequalities have a geometric nature: they describe a convex cone containing the vector \((u_{n},u_{n+1},\ldots,u_{n+d-1})\), bounded by \(d(d-1)\) hyperplanes and contained in \(\mathbb{R}_{>0}^{d}\). The proof by induction consists in proving that successive vectors do not leave that cone. Our algorithm thus produces that cone and an integer \(N\) such that at index \(N\), the vector has entered the cone and no \(u_{n}\) of smaller index is negative. Capturing the geometry of the iteration by means of over-approximations by cones or related geometric surfaces is natural in this context. For the less general C-finite case and more general questions than positivity, related (but distinct) surfaces have been used recently [1]. Like Friedland's result, our approach applies to the more general situation of a linear recurrence \(U_{n+1}=A(n)U_{n}\), where \(A(n)\) is a square matrix that is invertible for all \(n\geq 0\) and whose limit as \(n\to\infty\) is finite. For positivity, we further require that the limit has a unique eigenvalue of maximal modulus and a corresponding eigenvector with positive coordinates. This work is structured as follows. First, background on eigenvalues and asymptotics of linear recurrences is recalled in section 2. Section 3 presents our result, the positivity certificates and how they are verified.
The ideas leading to the algorithm are presented in section 4, where we describe the relevant tools from Friedland's work. The algorithm is then given with its proof in section 5. ## 2. Background ### Algebraic Coefficients P-recursivity can be defined over arbitrary fields, but as we are interested in positivity issues, it is natural to restrict our attention to subfields of \(\mathbb{R}\). More precisely, we denote by \(\mathbf{Q}\) a field that is either the field \(\mathbb{Q}\) of rational numbers, or a real number field \(\mathbb{Q}(\alpha)\), where \(\alpha\) is given, for instance, by a square-free polynomial and an isolating interval [1, 2]. In particular, with this data structure, it is possible to determine the sign of an element of \(\mathbf{Q}\), where'sign' means any of \(<0\) or \(>0\) or \(=0\). From there, using Sturm sequences, one can compute the number of roots of a polynomial in \(\mathbf{Q}[x]\) in an interval with endpoints that are either infinite or in \(\mathbf{Q}\). A direct consequence used repeatedly in this work is that one can determine an integer beyond which a polynomial in \(\mathbf{Q}[x]\) has fixed sign. (For this problem, one can also use simple Cauchy-type bounds [13, Thm. 4.2].) In some cases, we also use the fact that these algorithms extend to \(\mathbf{Q}(\lambda)\) with \(\lambda\in\mathbb{R}\) algebraic over \(\mathbf{Q}\). ### Dominant Eigenvalues If \(U_{n}\) denotes the vector \((u_{n},\ldots,u_{n+d-1})^{\mathsf{T}}\), the linear recurrence (1) of order \(d\) is a special case of a first-order linear recurrence \[U_{n+1}=A(n)U_{n}, \tag{3}\] where \(A(n)\) is the companion matrix \[A(n)=\begin{pmatrix}0&1&0&\ldots&0\\ 0&0&1&\ldots&0\\ \ldots&\ldots&\ldots&\ldots&\ldots\\ 0&0&0&\ldots&1\\ \frac{p_{0}(n)}{p_{d}(n)}&\frac{p_{1}(n)}{p_{d}(n)}&\frac{p_{2}(n)}{p_{d}(n)}& \ldots&\frac{p_{d-1}(n)}{p_{d}(n)}\end{pmatrix},\qquad n\in\mathbb{N}.\] The sequence \(U_{n}\) is then recovered from the vector of initial conditions by the matrix factorial \(U_{n}=A(n-1)A(n-2)\cdots A(0)U_{0}\). **Definition 1**.: The linear recurrence (3) (and also (1) as a special case) is said to be of _Poincare type_ if the matrix \(A:=\lim_{n\to\infty}A(n)\) is finite. The motivation for considering this notion is that the finite case corresponds to the situation of a linear recurrence with constant coefficients. Then the P-finite case can be viewed as a perturbation of the C-finite case. For linear recurrences of the type of eq. (1), being of Poincare type is not a strong restriction for positivity questions. If the recurrence is not of Poincare type, then one of the \(p_{i}\) has degree higher than that of \(p_{d}\). This means that a solution behaves asymptotically like a rational power \(p/q\) of \(n!\). The maximal such power can be found by a Newton polygon [10] and then one can consider the P-finite sequence obtained by multiplying \(u_{n}\) by the solution of \(n^{p}u_{n+q}=u_{n}\), with initial conditions \((1,\ldots,1)\). The same operation can also be used if the matrix \(A\) is nilpotent, using a recurrence of the form \(u_{n+q}=n^{p}u_{n}\) instead, so that we can always assume that the recurrence is of Poincare type, with \(A\) having a nonzero eigenvalue. 
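To make the companion form of eq. (3) and Definition 1 concrete, here is a small illustrative sketch (not from the paper) that builds \(A(n)\) for the order-2 recurrence satisfied by \((s_{n})\) from the introduction, computes its entrywise limit, and extracts the eigenvalues of the limit matrix; sympy is assumed to be available.

```python
# Illustrative sketch: companion matrix A(n) of the recurrence for s_n (eq. (2)),
# its entrywise limit A as n -> oo, and the eigenvalues of A.
import sympy as sp

n = sp.symbols('n')
# 2(n+2)^2 s_{n+2} = (81n^2+243n+186) s_{n+1} - 81(3n+2)(3n+4) s_n
p0 = -81*(3*n + 2)*(3*n + 4)
p1 = 81*n**2 + 243*n + 186
p2 = 2*(n + 2)**2

A_n = sp.Matrix([[0, 1],
                 [p0/p2, p1/p2]])                     # companion matrix of eq. (3)

A = A_n.applyfunc(lambda f: sp.limit(f, n, sp.oo))    # Poincare-type limit
print(A)                # Matrix([[0, 1], [-729/2, 81/2]])
print(A.eigenvals())    # {27: 1, 27/2: 1} -> unique simple dominant eigenvalue 27
```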
**Definition 2** (Dominant eigenvalues).: Let \(\lambda_{1},\ldots,\lambda_{m}\) be the complex eigenvalues of the limit matrix \(A\), numbered by decreasing modulus so that \[|\lambda_{1}|=|\lambda_{2}|=\cdots=|\lambda_{k}|>|\lambda_{k+1}|\geq|\lambda_{k+ 2}|\cdots\geq|\lambda_{m}|.\] Then \(\lambda_{1},\ldots,\lambda_{k}\) are called the _dominant eigenvalues of \(A\)_. We say that an eigenvalue is _simple_ when it is a simple root of the characteristic polynomial. Given a characteristic polynomial in \(\mathbf{Q}[x]\), one can isolate the dominant eigenvalues in polynomial complexity [10], see also [14]. ### Asymptotics For C-finite sequences, a starting point is the closed-form \[u_{n}=\sum_{i=1}^{k}C_{i}(n)\lambda_{i}^{n}+\sum_{i>k}C_{i}(n)\lambda_{i}^{n},\] split into one sum over dominant eigenvalues and one sum over smaller ones. Since the basis of solutions is known explicitly, the polynomials \(C_{i}(n)\) can be computed easily from the initial conditions. They belong to \(\mathbf{Q}(\lambda_{1},\ldots,\lambda_{m})[n]\). The difficulty when using this formula to prove positivity is that for \(k>1\), the first sum contains oscillating sequences that can come very close to \(0\). This is where tools from analytic number theory, such as Baker's theorem on linear forms in logarithms, come into play for deciding positivity [13, 14, 15]. For P-finite sequences the situation is made harder by the fact that there is no'simple' basis of solutions. Also, the constants that appear, even in the leading coefficient, are difficult to relate to the initial conditions. This is illustrated by the following. _Example 3_.: The number of 'fragmented permutations' of size \(n\) is \(c_{n}/n!\) where \((c_{n})_{n\in\mathbb{N}}\) is defined by \[(n+2)c_{n+2}=(2n+3)c_{n+1}-nc_{n},\quad c_{0}=c_{1}=1.\] It satisfies [10, Prop. VIII.4] \[c_{n}\sim\frac{n^{-3/4}e^{2\sqrt{n}}}{2\sqrt{e\pi}},\quad n\to\infty.\] Here, the leading coefficient \(1/2\sqrt{e\pi}\) is computed (with the rest of the asymptotic behaviour) by exploiting a closed-form expression of the generating function of \((c_{n}/n!)_{n\in\mathbb{N}}\). In general, this is not available. While we know how to compute a basis of formal asymptotic expansions that are solutions of linear recurrences (by the results of Birkhoff-Tryitzinsky improved by Immink [11, 12]), we do not know how to compute the leading coefficient exactly in general. Currently, the closest we have is a certified numerical approximation in the form of an interval that can be made arbitrarily small, but \(0\) cannot be excluded. This is known as the _connexion problem_ for linear differential equations. Still, this is a good basis for an analytic proof of positivity, as was recently shown by Melczer and Mezzarobba on a recurrence of order \(7\) with polynomial coefficients, themselves of degree \(7\)[13, 14]. In the case of Poincare-type recurrences, Poincare related the asymptotic behaviour to the C-finite case, showing that when all the eigenvalues are simple and of distinct moduli, any solution \((u_{n})_{n\in\mathbb{N}}\) of the recurrence is either ultimately \(0\) (all its terms are \(0\) from a certain index on) or satisfies \(\lim u_{n+1}/u_{n}=\lambda_{i}\) for some \(i\)[10]. Solutions that are ultimately \(0\) exist if and only if the trailing coefficient \(p_{0}(n)\) of eq. (1) vanishes at a positive integer, or equivalently, when the matrix \(A(n)\) is not invertible for some \(n\in\mathbb{N}\). 
For positivity testing, one can proceed as for the leading coefficient: treat the initial terms of the sequence separately up to the largest integer where this happens, and then shift the index. Perron, Kreuser and later Kooman gave results of a converse type: sufficient conditions for a solution to exist with limit \(u_{n+1}/u_{n}=\lambda_{i}\)[14]. We rely on the following more recent analytic result, which we will use with sequences of invertible matrices with entries in \(\mathbf{Q}\). **Theorem 1** (Friedland [11]).: _Let \(A(n)\) be in \(\mathrm{GL}_{d}(\mathbb{C})\) for \(n\in\mathbb{N}\) and tend to a finite limit \(A\) as \(n\to\infty\), such that \(A\) has exactly one dominant eigenvalue \(\lambda\). Then there exist two nonzero vectors \(v,w\) s.t. \(Av=\lambda v\) and_ \[\lim_{n\to\infty}\frac{A(n)\cdots A(1)A(0)}{\|A(n)\cdots A(1)A(0)\|}=vw^{ \mathsf{T}}.\] _A vector of initial conditions \(U_{0}\) is called generic when \(w^{\mathsf{T}}U_{0}\neq 0\)._ Thus, for a generic vector of initial conditions, the sequence \(U_{n}\) has a direction that tends to that of \(v\), in a sense made more precise in section 4. In the constant case, when \(A(n)=A\) for all \(n\), this theorem gives a proof of the convergence of the classical power method [13]. In that situation, the vector \(w\) is a left eigenvector of \(A\) for \(\lambda\), i.e., \(w^{\mathsf{T}}A=\lambda w^{\mathsf{T}}\). In the case of polynomial coefficients, this vector \(w\) is much more elusive. _Example 4_.: The recurrence used by Apery in his proof of the irrationality of \(\zeta(3)\)[13] is \[(n+2)^{3}u_{n+2}=(2n+3)(17n^{2}+51n+39)u_{n+1}-(n+1)^{3}u_{n}.\] It has eigenvalues \(\lambda_{\pm}=(3\pm 2\sqrt{2})^{2}\), and corresponding eigenvectors \((1,\lambda_{\pm})^{\mathsf{T}}\). Up to a nonzero scalar, the vector \(w^{\mathsf{T}}\) is \((1,6/\zeta(3)-5).\) Since \(\zeta(3)\) is irrational, any nonzero vector of initial conditions in \(\mathbb{Q}\) is generic. In order \(2\), the non-generic situation is called _minimal_ as it corresponds to a vector space of dimension \(1\) of solutions that do not have the dominant order of growth. For recurrences of order \(2\), deciding positivity reduces to deciding minimality [12]. Theorem 2 below generalizes this situation to arbitrary order. ## 3. Positivity certificates **Definition 3**.: We say that a vector or a matrix \(V\) is _positive_ (resp. non-negative), and write \(V>0\) (resp. \(V\geq 0\)), when all the coordinates are positive (resp. non-negative). Our main result is the following. **Theorem 2**.: _Let \(A(n)\) be in \(\mathrm{GL}_{d}(\mathbf{Q})\) for \(n\in\mathbb{N}\) and tend to a finite limit \(A\) as \(n\to\infty\), that has a unique simple dominant eigenvalue and a corresponding positive eigenvector. Then proving positivity of the solution of \(U_{n+1}=A(n)U_{n}\) given \(A(n)\) and \(U_{0}\) reduces to deciding genericity of the initial condition \(U_{0}\)._ _Algorithm_PositivityProof _in section 5 either disproves positivity or computes a positivity certificate, in the generic situation. If the initial condition is not generic and the sequence is positive, then, and only then, our algorithm does not terminate. Constructing examples of minimal-order recurrences with coefficients in \(\mathbb{Q}[n]\) and initial conditions in \(\mathbb{Q}\) where this occurs does not seem to be easy. Note that it is a consequence of Pringsheim's theorem [14] (see also [13, 15]) that if \((u_{n})_{n\in\mathbb{N}}\) is a positive solution of eq. 
(1), and \(\lambda\) is a dominant eigenvalue of the limit matrix \(A\), then \(|\lambda|\) itself is an eigenvalue of it. Thus if there is a unique dominant eigenvalue \(\lambda\), it is real and positive. Moreover, if \(\lambda\) is an eigenvalue of a companion matrix, then \((1,\lambda,\ldots,\lambda^{d-1})\) is a corresponding eigenvector. Thus the condition that the dominant eigenvalue has a positive eigenvector is automatically satisfied for positive P-finite sequences. In the matrix case, it has to be added as a hypothesis in our approach. ### Certificates In theorem 2, a positivity certificate is a data-structure for a proof by induction: it consists of a quadruple \((T,r,N,m)\) formed of an invertible matrix \(T\in\operatorname{GL}_{d}(\mathbb{Q})\), a rational number \(r>1\in\mathbb{Q}\) (or \(r=\infty\)), an integer \(N\in\mathbb{N}\) and a positive integer \(m\in\mathbb{N}_{>0}\). Verification is reduced to checking positivity of polynomials in \(\mathbf{Q}(\lambda)[n]\) for \(n\geq N\), where \(\lambda\) is the dominant eigenvalue of \(A\). Let \(e\) be a positive eigenvector of \(A\) for \(\lambda\), \(v=Te\) and consider two convex cones pointed at \(0\). The first one is \[B_{r}(v)=\{x\in\mathbb{R}_{>0}^{d}\mid x_{i}v_{j}\leq rx_{j}v_{i}\text{ for all }i,j\}. \tag{4}\] If \(r=\infty\), this cone is \(\mathbb{R}_{>0}^{d}\). Otherwise, it is generated by \(2^{d}-2\) vectors obtained by choosing the \(i\)th coordinate in \(\{v_{i},rv_{i}\}\) so that the result is neither \(v\) nor \(rv\). The second cone is its image \[C_{r}(v)=T^{-1}B_{r}(v).\] Verification proceeds in three steps. We first present it when \(m=1\): _Sanity checks:_ check \(\lambda>0\), \(v>0\), \(C_{r}(v)\in\mathbb{R}_{>0}^{d}\). _Initialization:_ check that \(U_{n}\geq 0\) for \(n\leq N\); check that \(U_{N}\in C_{r}(v)\). _Induction step:_ check that \(A(n)(C_{r}(v))\subset C_{r}(v)\) for \(n\geq N\). When these steps are completed, it follows that for all \(n\geq N\), one has \(U_{n}\in C_{r}(v)\subset\mathbb{R}_{>0}^{d}\): positivity is proved. This induction effectively proves \(d(d-1)\) linear inequalities on \((u_{n})_{n\in\mathbb{N}}\) simultaneously, originating in the inequalities that define \(B_{r}(v)\) (and only \(d\) inequalities when \(r=\infty\)). If \(m>1\), the initialization also checks that \(U_{N+1},\ldots,U_{N+m-1}\) belong to \(C_{r}(v)\) and the induction step checks that \(A(n+m-1)\cdots A(n)(C_{r}(v))\subset C_{r}(v)\) instead of \(A(n)(C_{r}(v))\subset C_{r}(v)\). The same argument shows that this proves positivity by induction. In terms of algorithmic complexity, there are two expensive steps: one related to the recurrence and another one related to the initial conditions. The induction step can be performed by checking that each of the \(2^{d}-2\) vectors generating the cone \(C_{r}(v)\) (resp. \(d\) vectors when \(r=\infty\)) has for image by \(A(n)\) a vector of rational functions that satisfies the \(d(d-1)\) inequalities (resp. \(d\) inequalities) defining the cone. This amounts to \(d(d-1)(2^{d}-2)\) (resp. \(d^{2}\)) polynomials in \(\mathbb{Q}[n]\) that have to be proved positive for \(n\geq N\) (e.g., by Sturm sequences, or simply by certified numerical evaluation of the roots). The complexity of that step is thus singly exponential in the order of the recurrence. 
Concerning the initial conditions, checking \(U_{n}\geq 0\) for \(n\leq N\) has complexity that is clearly polynomial in \(N\) (one can also use multipoint polynomial evaluation to further reduce the cost by evaluating the coefficients \(p_{i}(n)\) for \(n\leq N\) efficiently); this has complexity singly exponential in the _bit size_ of \(N\). Still, at the moment we do not have an upper bound on \(N\) in terms of the input, in particular in relation to a distance of the vector of initial conditions to the hyperplane of non-genericity. ### Examples of certificate verification _Example 5_.: We start with an example where \(r=+\infty\), where verification is easier. The sequence defined by \[u_{n}:=\sum_{k=0}^{n}(-1)^{k}\frac{(4n-3k)!(4!)^{k}}{(n-k)!^{4}k!}\] is the first of a family related to a former conjecture of Gillis, Reznick and Zeilberger [10]. Its positivity was proved automatically by Pillwein [14], using the linear recurrence of order \(4\) that can be computed by Zeilberger's algorithm: \[(2n+5)(4n+11)(4n+7)(n+4)^{3}u_{n+4}\\ -8(4n+7)(4n+13)(n+3)(40n^{3}+380n^{2}+1193n+1240)u_{n+3}\\ +576(192n^{6}+3072n^{5}+20108n^{4}+68918n^{3}+130513n^{2}+129613n+52815)u_{n+2}\\ +13824(4n+15)(32n^{5}+344n^{4}+1424n^{3}+2855n^{2}+2801n+1085)u_{n+1}\\ +331776(4n+15)(4n+11)(2n+7)(n+1)^{3}u_{n}=0\] This sequence admits a relatively small certificate of positivity: \[T=\begin{pmatrix}1&0&0&0\\ -1&1&0&0\\ 0&-2&1&0\\ -3000&-1000&-40&1\end{pmatrix},\quad r=+\infty,\quad N=3,\quad m=1.\] The verification of this certificate thus consists in a proof by induction that the following inequalities are all satisfied for \(n\geq 3\): \[u_{n}>0,\quad u_{n+1}>u_{n},\quad u_{n+2}>2u_{n+1},\quad u_{n+3}>40u_{n+2}+1000u_{n+1}+3000u_{n}.\] We now turn to the verification. The characteristic polynomial has one dominant root \(\lambda\approx 130\), of much larger modulus than the other ones. The corresponding eigenvector \(v=(1,\lambda,\lambda^{2},\lambda^{3})^{\mathsf{T}}\) is also positive. As \(T^{-1}\) is a triangular matrix with positive elements under the diagonal, we get that \(C_{r}=T^{-1}\mathbb{R}_{>0}^{d}>0\), which concludes the 'Sanity checks'. Checking that the first \(4\) vectors \(U_{0},U_{1},U_{2},U_{3}\) are positive is done by checking \(u_{i}>0\) for \(i=0,\ldots,6\). With \(U_{3}=(18816,1785816,177396480,18271143360)^{\mathsf{T}}\), it is easy to check that \(TU_{3}>0\), i.e., \(TU_{3}\in B_{r}(v)\) or equivalently \(U_{3}\in C_{r}(v)\), concluding the initialization step. Finally, as the cone \(B_{r}\) is \(\mathbb{R}_{>0}^{4}\), the induction step, which consists in checking that \(TA_{n}T^{-1}(\mathbb{R}_{>0}^{4})\subset\mathbb{R}_{>0}^{4}\) for \(n\geq 3\), is readily achieved by a direct computation of \(TA_{n}T^{-1}\), which has the form \[TA_{n}T^{-1}=\begin{pmatrix}1&1&0&0\\ 1&1&1&0\\ 4076&1076&38&1\\ a_{1}(n)&a_{2}(n)&a_{3}(n)&a_{4}(n)\end{pmatrix},\] with \(a_{i}(n)\) rational functions. For instance, \(a_{1}(n)\) is \[\frac{8(362464n^{6}+12010912n^{5}+121406462n^{4}+567578151n^{3}+1363921108n^{2}+1636882352n+779476880)}{(32n^{6}+608n^{5}+4738n^{4}+19353n^{3}+43628n^{2}+51376n+24640)}\] making its positivity apparent. The same is true for the other ones. Therefore the image of any vector with positive coordinates also has positive coordinates.
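The induction step used in such verifications is mechanical enough to sketch in a few lines. The following illustrative Python/sympy sketch (ours, not the authors' implementation; the function names are invented) enumerates the \(2^{d}-2\) generators of \(B_{r}(v)\) for a finite \(r\) and, given a matrix \(M(n)=TA(n)T^{-1}\) with rational-function entries, collects the \(d(d-1)(2^{d}-2)\) polynomial numerators whose positivity for \(n\geq N\) has to be established.

```python
# Illustrative sketch of the induction step of Section 3.1: for each generator G
# of B_r(v), the image M(n) G must again satisfy the d(d-1) inequalities
# y_i v_j <= r y_j v_i (i != j).
import itertools
import sympy as sp

def cone_generators(v, r):
    d = len(v)
    gens = []
    for choice in itertools.product((sp.Integer(1), r), repeat=d):
        if len(set(choice)) == 2:              # exclude v and r*v themselves
            gens.append(sp.Matrix([c*vi for c, vi in zip(choice, v)]))
    return gens                                 # the 2^d - 2 edge vectors

def induction_step_polynomials(M, v, r):
    d = len(v)
    polys = []
    for G in cone_generators(v, r):
        y = (M * G).applyfunc(sp.cancel)        # image of the generator, rational in n
        for i, j in itertools.permutations(range(d), 2):
            num, den = sp.fraction(sp.together(r*y[j]*v[i] - y[i]*v[j]))
            # The sign of den for large n has to be taken into account separately.
            polys.append(sp.expand(num))
    return polys                                # d(d-1)(2^d - 2) polynomials in n

# Example of intended use (values taken from Example 6 below):
#   v = [1, 1, 1], r = sp.Rational(5, 3), M = T * A(n) * T.inv()
```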
_Example 6_.: As an example with a finite \(r\), we consider the sequence \((u_{n})_{n\in\mathbb{N}}\) defined by the third-order recurrence \[(n+1)u_{n+3}=\left(\frac{77}{30}n+2\right)u_{n+2}-\left(\frac{13}{6}n-3\right) u_{n+1}+\left(\frac{3}{5}n+2\right)u_{n},\qquad n\in\mathbb{N},\] with initial conditions \(u_{0}=1\), \(u_{1}=15/14\), \(u_{2}=8/7\). This is a recurrence that falls outside of the domain reachable by the methods of Kauers and Pillwein [10, 11]. Here is a certificate for its positivity: \[T=\begin{pmatrix}-36/7&76/7&-33/7\\ 162/7&-405/7&250/7\\ 303/14&-4783/84&3049/84\end{pmatrix},\quad r=5/3,\quad N=3040,\quad m=1.\] The dominant eigenvalue is \(\lambda=1\) and the vector \(v\) is \((1,1,1)^{\mathsf{T}}\). So the first part of the 'Sanity checks' is easy. The \(2^{d}-2=6\) edge vectors of the cone \(B_{r}(v)\) are \[\begin{pmatrix}r\\ 1\\ 1\end{pmatrix},\quad\begin{pmatrix}1\\ r\\ 1\end{pmatrix},\quad\begin{pmatrix}1\\ 1\\ r\end{pmatrix},\quad\begin{pmatrix}r\\ r\\ 1\end{pmatrix},\quad\begin{pmatrix}1\\ r\\ r\end{pmatrix},\quad\begin{pmatrix}r\\ 1\\ r\end{pmatrix},\quad\text{with }r=5/3.\] In order to check that \(C_{r}(v)\subset\mathbb{R}^{d}_{>0}\), it is sufficient to test that \(T^{-1}V>0\) for each of these vectors, concluding the 'Sanity checks'. For the initialization, one checks the positivity of the first \(N\) terms of the sequence and that \(TU_{N}\) satisfies the \(d(d-1)=6\) inequalities that define \(B_{r}(v)\). Finally, for the induction step, for each generator \(G\) of the cone \(B_{r}(v)\), one checks that the vector of polynomials \(TA(n)T^{-1}G\) satisfies the linear inequalities that define \(B_{r}(v)\). For instance, for the generator \(G_{1}=\begin{pmatrix}r&1&1\end{pmatrix}^{\mathsf{T}}\) one gets the polynomials \[360612n+392450160,1939140n-264007440,1247967n+399271660,\\ 1406727n-268100340,1839915n+153100060,420147n+142185660,\] that have to be proved positive for \(n\geq N\). As they are all linear in \(n\) in this example, this is straightforward. In terms of the sequence \((u_{n})_{n\in\mathbb{N}}\), this proof shows by induction that for \(n\geq n_{0}\), the following six inequalities are satisfied \[0<-666u_{n}+1595u_{n+1}-915u_{n+2}, 0<918u_{n}-2253u_{n+1}+1349u_{n+2},\] \[0<-2538u_{n}+6303u_{n+1}-3709u_{n+2}, 0<3258u_{n}-9335u_{n+1}+6245u_{n+2},\] \[0<1422u_{n}-3317u_{n+1}+1951u_{n+2}, 0<10386u_{n}-26651u_{n+1}+16433u_{n+2}.\] ## 4. Convergent Contractions The geometric insight on the convergence in Friedland's theorem makes use of Hilbert's pseudo-metric. **Definition 4**.: Hilbert's pseudo-metric on \(\mathbb{R}^{d}_{>0}\) is defined by \[d_{H}(x,y)=\log\frac{\max_{i}(x_{i}/y_{i})}{\min_{i}(x_{i}/y_{i})}.\] Being a pseudo-metric means that: \(d_{H}(x,x)=0\); \(d_{H}(x,y)=d_{H}(y,x)\) and \(d_{H}(x,y)\leq d_{H}(x,z)+d_{H}(y,z).\) All are easy to check. Moreover, \(d_{H}(x,y)=0\) if and only if there exists \(\mu>0\) such that \(y=\mu x\). For this pseudo-metric, the open ball centered at \(v\) and of radius \(\log r\) is the cone \(B_{r}(v)\) from eq. (4). **Theorem 3** (Birkhoff [1]).: _For a positive matrix \(A\in\mathbb{R}^{d\times d}_{>0}\), let \(L(A)=\sup_{x\neq\alpha y}d_{H}(Ax,Ay)/d_{H}(x,y)\). 
Then_ \[L(A)=\frac{1-\sqrt{\psi(A)}}{1+\sqrt{\psi(A)}}\quad\text{with}\quad\psi(A)=\min_{i,j,k,\ell}\frac{a_{ik}a_{j\ell}}{a_{i\ell}a_{jk}},\] _showing that \(L\) is continuous and that \(A\) is a contraction._ This was used by Birkhoff to give a new proof of Perron's result that any positive matrix admits a unique positive eigenvector (and a generalization in arbitrary dimension). The key result for our method is the following theorem at the heart of Friedland's proof, that we make explicit for later use. **Theorem 4**.: _If \(A(n)>0\) tends to \(A>0\), let \(v>0\) be such that \(Av=\lambda v\) (which exists by Perron's theorem), then for \(n\) sufficiently large, \(A(n)(B_{r}(v))\subset B_{r}(v)\)._ Proof.: Let \(x\in B_{r}(v)\), then \[d_{H}(A(n)x,v)\leq d_{H}(A(n)x,A(n)v)+d_{H}(A(n)v,v)\leq L(A(n))\log r+d_{H}(A(n)v,Av).\] The first summand tends to \(L(A)\log r<\log r\), the second one to \(0\), so the sum is smaller than \(\log r\) for \(n\) sufficiently large, i.e., \(A(n)x\in B_{r}(v)\). Reduction to the positive case is achieved by the following. **Lemma 1** (Friedland).: _For a matrix \(A\in\mathbf{Q}^{d\times d}\) with a simple dominant eigenvalue \(\lambda>0\), there exists \(T\in\operatorname{GL}_{d}(\mathbb{Q})\) such that \(TAT^{-1}\) has positive right and left eigenvectors \(a,b\) for \(\lambda\)._ Then \(TA^{m}T^{-1}/\lambda^{m}\) tends to \(ab^{\mathsf{T}}\) (by theorem 1) and thus has to be positive for some finite \(m\). Proof.: Friedland's proof of the lemma is constructive (and leaves a lot of freedom in the construction of \(T\)). We reproduce it here to make the algorithmic part of this work self-contained. Let \(\lambda\) be the dominant eigenvalue of \(A\). There exists \(Q\in\operatorname{GL}_{d}(\mathbf{Q}(\lambda))\) such that \(B=QAQ^{-1}=(\lambda)\oplus B^{\prime}\) for some \(B^{\prime}\in\mathbf{Q}(\lambda)^{(d-1)\times(d-1)}\). If \(e_{1}\) denotes the vector \((1,0,\ldots,0)^{\mathsf{T}}\), then \(Be_{1}=B^{\mathsf{T}}e_{1}=\lambda e_{1}\). Choose \(a=(1,\ldots,1)^{\mathsf{T}}\) and \(b\) a positive vector with coordinates in \(\mathbb{Q}\) such that \(a^{\mathsf{T}}b=1\). Let \((s_{2},\ldots,s_{d})\) be a basis of vectors, all in \(\mathbb{Q}^{d}\) and orthogonal to \(b\), and form \(S\in\mathbb{Q}^{d\times d}\), the matrix with columns \((a,s_{2},\ldots,s_{d})\), so that \(Se_{1}=a\) and \(S^{\mathsf{T}}b=e_{1}\). Let next \(T=SQ\) and \(M=TAT^{-1}\). Then, \[Ma=SBS^{-1}a=SBe_{1}=\lambda Se_{1}=\lambda a,\qquad M^{\mathsf{T}}b=(S^{-1})^{\mathsf{T}}B^{\mathsf{T}}S^{\mathsf{T}}b=(S^{-1})^{\mathsf{T}}B^{\mathsf{T}}e_{1}=\lambda(S^{-1})^{\mathsf{T}}e_{1}=\lambda b.\] By continuity and density of \(\mathbb{Q}\), one can further restrict to \(T\in\operatorname{GL}_{d}(\mathbb{Q})\).

## 5. Algorithm and Proof

```
Input  : A recurrence of Poincare type, in the form of a matrix A(n) in Q(n)^{d x d};
         a vector U_0 in Q^d of initial conditions. It is assumed that
         A = lim_{n -> oo} A(n) has a unique simple dominant eigenvalue λ with
         eigenvector e > 0. (This can be checked.)
Output : One of (Positive, T, r, N, m), (Non-positive)

1   if λ < 0 then return (Non-positive);
2   Find T in GL_d(ℚ) and m > 0 in ℕ such that T A^m T^{-1} > 0;
3   v ← T e; if v < 0 then T ← -T, v ← -v;
4   Find r > 1 such that T^{-1}(B_r(v)) > 0;            // B_r(v) from eq. (4)
5   Find K ≥ 0 in ℕ such that n ≥ K implies T A(n+m-1)···A(n) T^{-1}(B_r(v)) ⊂ B_r(v);
6   for i = 0, ..., K do
7       if U_i ≱ 0 then return (Non-positive);
8   for i = K, K+1, ..., ∞ do
9       if U_i ≱ 0 then return (Non-positive);
10      if T U_j ∈ B_r(v) for j = i, ..., i+m-1 then
11          return (Positive, T, m, r, i)
```
**Algorithm 1** PositivityProof

Algorithm PositivityProof is a direct consequence of the results of the previous section. We now prove its correctness, thereby proving theorem 2. By Friedland's theorem, the direction of \(U(n)\) tends to that of the eigenvector corresponding to the unique dominant eigenvalue \(\lambda\) of \(A\), which, being unique, is real. If \(\lambda\) is negative, then for sufficiently large \(n\), one of \(U(n)\) and \(A(n)U(n)\) has a negative coordinate (by Pringsheim's theorem). This is checked by step 1. The next step is to compute a \(T\in\operatorname{GL}_{d}(\mathbb{Q})\) and an integer \(m>0\) such that \(TA^{m}T^{-1}>0\). This is possible by lemma 1. Next, by definition of \(e\) and \(v\), \(TA^{m}T^{-1}v=\lambda^{m}v\). So \(v\) is an eigenvector for the positive eigenvalue of a positive matrix. By Perron's theorem, it is a real multiple of a positive vector. Thus either \(v>0\) or \(v<0\), and then changing \(T\) into \(-T\) and \(v\) into \(-v\) turns \(v\) into a positive eigenvector of \(TA^{m}T^{-1}\). This is what is done in step 3. Since \(e=T^{-1}v>0\), by continuity of the linear map \(T^{-1}\), there exists a small enough ball \(B_{r}(v)\) around \(v\) such that \(T^{-1}B_{r}(v)>0\). This can be computed for instance by starting from \(r=2\) and using dichotomy to divide the distance between \(r\) and \(1\) until an appropriate \(r\) is found. This proves that step 4 succeeds. By theorem 4 applied to \(TA(n+m-1)\cdots A(n)T^{-1}\), there exists a \(K\) as required by step 5. In order to compute it, one can compute \(TA(n+m-1)\cdots A(n)T^{-1}G\) for each generator \(G\) of the cone \(B_{r}(v)\), which gives a vector of polynomials in \(\mathbf{Q}(\lambda)[n]\) that has to be ultimately positive by the existence of \(K\). For instance, for each polynomial, one can start from \(i=1\) and check whether the polynomial is positive on \([i,\infty)\) using Sturm sequences, and if not, double \(i\). In the end, \(K\) can be taken as the maximum of the values obtained for each coordinate for each generator \(G\). This proves that step 5 succeeds. Step 6 consists simply in checking that the initial values up to \(K\) are nonnegative. Finally, steps 8-11 rely on Friedland's theorem 1 which shows that for any generic vector of initial conditions, the direction of \(U_{n}\) tends to that of \(e\) and therefore \(d_{H}(TU_{n},v)\to 0\) as \(n\to\infty\). Thus for large enough \(j\), all \(TU_{j}\) belong to \(B_{r}(v)\), showing that the seemingly infinite loop always terminates for generic initial conditions and concluding the proof. **Example**.: For the sequence \((s_{n})_{n\in\mathbb{N}}\) from eq. (2) in the introduction, denoting by \(\mathrm{S}(n)\) the vector \((s_{n},s_{n+1})^{\mathsf{T}}\), the recurrence is \(\mathrm{S}_{n+1}=A(n)\mathrm{S}_{n}\) with \[A(n)=\begin{pmatrix}0&1\\ \frac{-81(3n+2)(3n+4)}{2(n+2)^{2}}&\frac{(81n^{2}+243n+186)}{2(n+2)^{2}}\end{pmatrix}\qquad\text{and}\qquad\mathrm{S}_{0}=\begin{pmatrix}1\\ 12\end{pmatrix}.\] The limit matrix \(A\) of \(A(n)\) has one simple dominant eigenvalue \(\lambda=27>0\) with \(e=(1,\lambda)\) its associated eigenvector.
We follow the steps of the algorithm. First, following the steps in the proof of lemma 1, a possible choice of matrix is \[T=\frac{1}{13}\begin{pmatrix}-14&1\\ 1&0\end{pmatrix}.\] Since \(TAT^{-1}>0\), we have \(m=1\) and note that the vector \(v=Te=(1,1)>0\). Next, as the inverse of \(T\) is triangular with positive elements under the anti-diagonal, for all real \(r>0\), \(T^{-1}B_{r}(v)>0\), showing that we can take \(r=+\infty\). In Step 5, \(K\) can be chosen as the index for which the matrix \(TA(n)T^{-1}\) becomes positive. The value of this matrix is \[\begin{pmatrix}\frac{53n^{2}+131n+74}{2(n+1)^{2}}&\frac{13n^{2}+376n+388}{26(n+1)^{2}}\\ 13&14\end{pmatrix}\] showing that \(K=0\) works. After checking that \(U_{0}\), \(U_{1}\) and \(TU_{1}\) are positive, the positivity is concluded and the final step of the algorithm finds \(N=1\). Note that this choice of matrix \(T\) means that the algorithm proves the positivity of \((s_{n})_{n\in\mathbb{N}}\) by synthesizing and proving the inequalities \[s_{n+1}>14s_{n}>0\] for \(n\geq 1\). In this example, this recovers the strategy of Kauers and Pillwein [10] of looking for an inequality \(u_{n+1}\geq\mu u_{n}\). One way of seeing the improvement brought by our algorithm is that it will always succeed in producing a matrix \(T\) when the conditions of theorem 2 are met, while the inequalities \(u_{n+1}\geq\mu u_{n}\) correspond to a restricted set of matrices. ## 6. Conclusion Informally speaking, this work consists in providing certificates for a large class of P-finite sequences, whose positivity follows 'in an easy way' from their asymptotic behaviour. These certificates can be viewed as a finite set of linear inequalities satisfied by the shifted sequences \((u_{n+k})_{n\in\mathbb{N}}\), whose simultaneous proof by induction implies the positivity of \((u_{n})_{n\in\mathbb{N}}\) and reduces to checking the positivity of a finite number of univariate polynomials. The constraint that the matrix \(A\) in theorem 2 has a unique dominant eigenvalue that is simple is a limitation of our approach. For instance, the sequence proved positive by Melczer and Mezzarobba [14] has a dominant eigenvalue that is double and thus inaccessible by the approach presented here. Work is in progress to extend our approach to situations of this type and to improve the choice of change of basis \(T\), which has a strong impact on the value of the number \(N\) of terms that has to be tested positive. **Acknowledgements.** Alin Bostan and Mohab Safey El Din made many very useful suggestions on previous versions of this article. The presentation also benefited from the feedback of the participants of the workshop _Algorithmic Aspects of Dynamical Systems_ at McGill University's Bellairs Research Institute. This work has been supported in part by the ANR project NuSCAP ANR-20-CE48-0014.
2307.04907
SimpleMTOD: A Simple Language Model for Multimodal Task-Oriented Dialogue with Symbolic Scene Representation
SimpleMTOD is a simple language model which recasts several sub-tasks in multimodal task-oriented dialogues as sequence prediction tasks. SimpleMTOD is built on a large-scale transformer-based auto-regressive architecture, which has already proven to be successful in uni-modal task-oriented dialogues, and effectively leverages transfer learning from pre-trained GPT-2. In-order to capture the semantics of visual scenes, we introduce both local and de-localized tokens for objects within a scene. De-localized tokens represent the type of an object rather than the specific object itself and so possess a consistent meaning across the dataset. SimpleMTOD achieves a state-of-the-art BLEU score (0.327) in the Response Generation sub-task of the SIMMC 2.0 test-std dataset while performing on par in other multimodal sub-tasks: Disambiguation, Coreference Resolution, and Dialog State Tracking. This is despite taking a minimalist approach for extracting visual (and non-visual) information. In addition the model does not rely on task-specific architectural changes such as classification heads.
Bhathiya Hemanthage, Christian Dondrup, Phil Bartie, Oliver Lemon
2023-07-10T21:16:46Z
http://arxiv.org/abs/2307.04907v1
SimpleMTOD: A Simple Language Model for Multimodal Task-Oriented Dialogue with Symbolic Scene Representation ###### Abstract SimpleMTOD is a simple language model which recasts several sub-tasks in multimodal task-oriented dialogues as sequence prediction tasks. SimpleMTOD is built on a large-scale transformer-based auto-regressive architecture, which has already proven to be successful in uni-modal task-oriented dialogues, and effectively leverages transfer learning from pre-trained GPT-2. In-order to capture the semantics of visual scenes, we introduce both local and _de-localized_ tokens for objects within a scene. De-localized tokens represent the type of an object rather than the specific object itself and so possess a consistent meaning across the dataset. SimpleMTOD achieves a state-of-the-art BLEU score (0.327) in the Response Generation sub-task of the SIMMC 2.0 test-std dataset while performing on par in other multimodal sub-tasks: Disambiguation, Coreference Resolution, and Dialog State Tracking. This is despite taking a minimalist approach for extracting visual (and non-visual) information. In addition the model does not rely on task-specific architectural changes such as classification heads. ## 1 Introduction Multimodal conversational agents have witnessed a rapidly growing level of interest among the conversational AI community as well as within the computer vision community. Most multimodal conversational datasets to-date are an extension of visual question answering (VQA) (Das et al., 2016; Hudson and Manning, 2019). Consequently building upon the success of other vision-linguistic tasks such as VQA, state-of-the-art multimodal conversational agents commonly depend on non-autoregressive models (Wang et al., 2020; Murahari et al., 2019) most of which are based on BERT (Devlin et al., 2018). However, dialogues with such systems significantly differ from what the conversational AI community has typically viewed as a multi-turn dialogue. First, most of the current multimodal dialogue datasets are focused on querying the visual content whereas _external knowledge bases_ have been an integral part of traditional unimodal dialogue datasets (Budzianowski et al., 2018; Galley et al., 2019). Second, in traditional unimodal dialogues, co-reference resolution (explicitly or implicitly) plays a major role within the dialogues. Additionally, state-of-the-art unimodal conversational agents predominantly rely on GPT-based auto-regressive models (Radford et al., 2018) due to their proven language generation capabilities (Peng et al., 2020; Hosseini-Asl et al., 2020; Ham et al., 2020). The SIMMC 2.0 (Kottur et al., 2021) task-oriented dialogue dataset bridges this gap between multimodality and the more traditional view of a multi-turn dialogue. Due to the simultaneous presence of signals from multiple modalities, which a user can refer to at any point in the conversation, the multimodal task-oriented dialogues proposed in the SIMMC 2.0 are challenging compared to both text-only counterparts and _image querying_ dialogue datasets. In spite of the inherent complexity of multimodal dialogues, we propose SimpleMTOD, recasting all sub-tasks into a simple language model. SimpleMTOD combines the idea of _'de-localized visual object representations'_ with a GPT-like auto-regressive architecture. The idea of de-localized representations stems from the analogous process of _de-lexicalization_ that has been extensively used in task-oriented dialogues. In de-lexicalization Mrksic et al. 
(2017), slot-values such as _vegan_ are replaced by a more general abstracted token such as _food-type_. Likewise, when de-localized, objects are represented by the catalogue type of the object instance rather than the instance itself. These de-localized tokens then possess a consistent meaning throughout the dataset. Along with the dataset, Moon et al. (2020) propose four benchmark tasks decomposing multimodal task-oriented dialogue into sub-tasks: Multimodal Disambiguation, Multimodal Co-reference Resolution, Multimodal Dialog State Tracking, and Response Generation. The first three tasks deal with dialogue context understanding, analogous to NLU and DST in unimodal agents. The last task is similar to unimodal NLG, but expects the generated responses to be sensible within a multimodal context with visual signals and an associated knowledge base. The main objective of this work is to evaluate the effectiveness of de-localized object representations within SimpleMTOD. Despite the simplicity, SimpleMTOD achieves the state-of-the-art BLEU score of 0.327 for assistant response generation in the SIMMC2.0 test-std 1 dataset. Furthermore, the model achieves an accuracy of 93.6% in Multimodal Disambiguation (MM-Disambiguation), Object-F1 of 68.1% in Multimodal Co-reference Resolution (MM-Coref), and 87.7% (Slot-F1) and 95.8% (Intent-F1) in Multimodal Dialogue State Tracking (MM-DST). Other than the proposed benchmark settings, we also evaluate SimpleMTOD in an end-to-end setting. Major contributions of our work are as follows: Footnote 1: The testing dataset (test-std) is not publicly available and was part of the SIMMC 2.0 challenge used for scoring the submitted systems. * We formalise the notion of _multimodal task oriented dialogues_ as an end-to-end task. * We propose a GPT-based simple language model combined with visual object de-localization and token-based spatial information representation, which addresses four subtasks in multimodal dialogue state tracking with a _single architecture_. * We analyse the behaviour of our model using salience scores from the Ecco (Alammar, 2021) framework, which provide an intuition about which previous tokens most influence the prediction of the next token. ## 2 Background Traditional task-oriented dialogue datasets consist of a dialogue corpus, a dialogue ontology with a pre-defined set of slot-value pairs, and annotations required for related sub-tasks in a set of domains (Budzianowski et al., 2018). The SIMMC 2.0 dataset follows a similar structure and contains dialogues in both the fashion and the furniture domains. However, in the SIMMC 2.0 multimodal dialogue corpus, each dialogue is also associated with an image representing the scene where the dialogue takes place. A _scene_ is made by re-arranging a known set of items (objects) in different configurations. Along with the raw image, the dataset provides a file (scene JSON) containing details of the images such as objects and relationships between objects. Furthermore, a meta-data file contains visual and non-visual attributes of objects that recur within a scene. ### Benchmark Tasks Multimodal Disambiguation: In real-world conversations, references made by humans to objects or entities can be ambiguous. For example, consider _A: Blue trousers are priced at $149.99. U: What about the red ones?_, in a setting where there are multiple red trousers. In these situations, there is insufficient information available for co-reference resolution.
This task is aimed at identifying such ambiguous scenarios, given the dialogue history. Multimodal Co-reference Resolution: The goal of this task is to resolve any reference in a user utterance to the canonical object ids defined for each scene (see the image in Figure 1(b)). Users may refer to 1) the dialogue context, 2) the visual context, or 3) both. Multimodal Dialogue State Tracking: Similar to unimodal DST, this tracks the belief states of users across multiple turns. The belief state consists of an intent, slot-value pairs, and user requested slots. Assistant Response Generation: Given the user utterance, ground-truth APIs, and ground-truth canonical object ids (with meta-data), the model needs to generate a natural language response describing objects as _observed and understood_ by the user. ## 3 Methods In the first part of this section, we model multimodal task-oriented dialogues as a sequence generation task. We define the problem in a more general setup and discuss some empirical limitations applied to the model.

Figure 1: Sample dialogue instance in SIMMC 2.0: a) First four turns of a sample dialogue with user and system transcript annotations. U: and A: tokens are used to differentiate user and system utterances respectively. The first row of annotations is in INTENT \(|\) SLOT-VALUE \(|\) REQUEST-SLOTS format. The second row identifies the referred canonical object id tags in the utterance (e.g. [29]). It should be noted that these object ids are specific to a given scene. In the case of user utterances, this identifier is the target of the MM-Coref task. b) Sample image with canonical object id tags over items. This image is mapped to the dialogue by scene id. c) Single entry of the fashion object meta-data file.

### Multimodal Task-Oriented Dialogues Similar to the unimodal setting, we view dialogue state (belief-state) tracking, action prediction, and response generation to be the core components of multimodal task-oriented dialogues. However, the outputs of each of the sub-tasks should be conditioned not only on the dialogue history, but also on the associated scene. Multimodal dialogues consist of multiple turns. In a turn \(t\), there exists an associated visual scene \(V_{t}\), the user-provided input \(U_{t}\) and the system-generated response \(S_{t}\). Theoretically, the dialogue context can be denoted as \(C_{t}=[V_{0},U_{0},S_{0}|V_{0},...S_{t-1}|M_{t-1},V_{t},U_{t}]\). Here \(S_{t-1}|M_{t-1}\) denotes that the statement \(S_{t-1}\) is associated with the representation of multimodal information such as objects viewed and mentioned to the user during that turn. Given the context \(C_{t}\), SimpleMTOD generates the belief-state \(B_{t}\): \[B_{t}=SimpleMTOD(C_{t}) \tag{1}\] \(B_{t}\) is a concatenation of intent, slot-values, requested slots, and resolved object references \(MRef_{t}\). However, it should be noted that SimpleMTOD models the context as \(C_{t}=[V_{t},U_{t-n},S_{t-n}|M_{t-n},...S_{t-1}|M_{t-1},U_{t}]\) where \(n\) is the context window. Major deviations from the theoretical representation of \(C_{t}\) are: 1) we ignore the history of visual signals and only consider the current visual scene; 2) we consider only \(n\) previous turns in contrast to the entire dialogue. Then, in a more generalized setting where the system has access to an external database, which can be queried, \(B_{t}\) would be used to retrieve database results \(D_{t}\). These \(D_{t}\), along with the context and belief states, can be used to generate the system action \(A_{t}\).
\[A_{t}=SimpleMTOD(C_{t},B_{t},D_{t}) \tag{2}\] Action \(A_{t}\) is a triplet containing the system intent, slot-value pairs, and details on requested slots. However, in our setup, no such database exists. Hence we model action \(A_{t}\) from \(B_{t}\) and \(C_{t}\), keeping \(D_{t}=\emptyset\). Finally, the concatenation of the context, belief state, (database results), and action is used to generate system responses \(S_{t}\). \[S_{t}=SimpleMTOD(C_{t},B_{t},D_{t},A_{t}) \tag{3}\] ### De-localized Visual Representation Here we discuss how the visual information of a scene is represented within SimpleMTOD as de-localized tokens and how \(V_{t}\) is derived from those tokens. In the SIMMC 2.0 dataset a scene is a spatial configuration of a set of object instances. From here on we will refer to these instances simply as objects. Probable types of these objects are predefined in two meta-data files, with one for each domain.

Figure 2: SimpleMTOD architecture with training and inference time setting.

Figure 3: A scene is divided into 9 regions. Each region is identified by a combination of 2 tokens.

We will refer to these files as catalogues and an entry of these catalogues as a catalogue-item. See Figure 1(c) for an example catalogue-item with visual and non-visual attributes defined. For benchmark tasks, non-visual attributes can be used during inference while visual attributes are not allowed. However, we use neither of these attributes in the SimpleMTOD visual representation explained below. In our setup, we assign a unique token (e.g., _INV_278_) to each catalogue-item. These catalogue-items are used as a de-localized version of objects within a scene. While these catalogue-item tokens are consistent across the entire dataset, spatial relationships associated with the objects will be lost. Therefore we encode spatial details of objects as follows: each scene is divided into 9 regions as shown in Figure 3. Every object is assigned to a region based on the center-point of the object bounding box. Then the concatenation of the catalogue-item token and the assigned region token (e.g., _INV_278@TOP:LEFT_) is used as the object representation. A scene-description is obtained by concatenating all such tokens representing every object within a scene. This is our \(V_{t}\) in SimpleMTOD. ### SimpleMTOD Training and Inference For training, we follow routine causal language modeling with teacher forcing. A training sequence \(X_{t}\) in SimpleMTOD is obtained by concatenating all the components: context, user belief state, database results (which is null in our case), system actions, and system utterance. \[X_{t}=[C_{t},B_{t},D_{t},A_{t},S_{t}] \tag{4}\] In terms of tokens, \(X_{t}\) can be denoted as \(X_{t}=(x_{t}^{0},x_{t}^{1},...x_{t}^{n(t)})\) where \(n(t)\) represents the number of tokens in turn \(t\). In general, the goal of the model is to learn \(\rho(X)\) given \(X=(x^{0},x^{1},..x^{i}..x^{n}):\) \[\rho(X)=\Pi_{i=1}^{n}\rho(x^{i}|x^{<i}) \tag{5}\] For this, we train the neural network with parameterization \(\theta\), minimizing the negative log-likelihood over the multimodal dialogue corpus \(MD\) where \(MD=\{X_{1},X_{2}....X_{|MD|}\}\). However, in our setup the tokens related to the scene-description \(V\) are ignored during the loss calculation. With \(n(V)\) the number of tokens related to the scene description: \[L(MD)=-\sum_{t=1}^{|MD|}\sum_{i=n(V)}^{n(t)}\log\rho_{\theta}(x_{t}^{i}|x_{t}^{<i}) \tag{6}\] During inference, the learnt parameters \(\theta\) are used to predict one token at a time. Unlike training time, where ground-truth tokens are used throughout, generated tokens become part of the left-context. For inference, we stick to a simple greedy prediction approach with top-k=1; that is, we always generate the token with the highest probability as the next token.
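To make the representation of Section 3.2 concrete, the following sketch (illustrative only, not the authors' code; the helper names, the token separator, and the example values are invented for the illustration) builds de-localized object tokens and the resulting scene description from catalogue-item ids and bounding boxes.

```python
# Illustrative sketch of the de-localized scene representation of Section 3.2:
# each object becomes <catalogue-item token>@<region token>, where the region
# is one of 9 cells determined by the centre of its bounding box.
ROWS = ("TOP", "MIDDLE", "BOTTOM")
COLS = ("LEFT", "CENTER", "RIGHT")

def region_token(bbox, scene_w, scene_h):
    # bbox = (x, y, width, height) in pixels
    x, y, w, h = bbox
    cx, cy = x + w/2.0, y + h/2.0
    col = COLS[min(int(3*cx/scene_w), 2)]
    row = ROWS[min(int(3*cy/scene_h), 2)]
    return f"{row}:{col}"

def scene_description(objects, scene_w, scene_h):
    # objects: list of (catalogue_item_token, bbox); ordering is arbitrary here.
    return " ".join(f"{item}@{region_token(bbox, scene_w, scene_h)}"
                    for item, bbox in objects)

# Hypothetical example with two localized object instances.
objs = [("INV_278", (40, 30, 120, 200)), ("INV_401", (900, 600, 150, 220))]
print(scene_description(objs, scene_w=1280, scene_h=720))
# -> "INV_278@TOP:LEFT INV_401@BOTTOM:RIGHT"
```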
Unlike at training time, where ground-truth tokens are always used, generated tokens become part of the left-context during inference. For inference, we stick to a simple greedy prediction approach with top-k=1. That is, we always generate the token with the highest probability as the next token. ## 4 Experiments In Section 3.1 we defined an end-to-end setting for SimpleMTOD. However, some of the benchmark tasks allow more ground-truth information to be utilized during training and inference time. For the MM-Disambiguation task, we consider two setups. In the task-specific scenario, we train the model to predict YES or NO tokens directly from the context \(C_{t}\). In the end-to-end setup, we consider the label to be YES only if the predicted system intent is to Disambiguate. Two similar setups are considered for MM-Coref as well. It should be noted that the end-to-end version of SimpleMTOD predicts de-localized tokens with spatial information, and we obtain the canonical object id by reversing the de-localization process explained in Section 3.2. If multiple objects are found in the same region with the same catalogue-item token, the area of the object bounding box is used as a tie-breaker. In the case of assistant response generation, the benchmark task defined in SIMMC 2.0 allows the ground-truth system belief state to be used as an input. Therefore, we evaluate response generation both from ground-truth actions and in the end-to-end setting. ### Baselines We consider two baselines which were provided as part of the SIMMC2.0 challenge. GPT-2: This extends Ham et al. (2020) to multimodal task-oriented dialogues, encoding objects in a scene using canonical object ids concatenated with the token OBJECT_ID. For the MM-Disambiguation task, a classification head is used, while other tasks are modeled in a generative manner. Multimodal Transformer Networks (MTN): Adapts Le et al. (2019) (only) for the MM-DST and Response Generation sub-tasks 2. In contrast to the auto-regressive modeling of SimpleMTOD, MTN uses an encoder-decoder architecture. Footnote 2: MTN-SIMMC2 implementation [https://github.com/henryhungle/MTN/tree/simmc2](https://github.com/henryhungle/MTN/tree/simmc2) ### Training and Evaluation We follow the experimental setup of the SIMMC 2.0 challenge with the same dataset splits, inference-time limitations, and performance metrics. See Appendix B for details. It should be noted that the test-std split of the SIMMC2.0 dataset is not publicly available and is a held-out set for evaluating submissions to the SIMMC2.0 challenge. Therefore, the final version of our model could only be evaluated on the devtest split. However, the prior version of the model, SimpleMTOD\({}_{Sub}\), which did not encode region or scene information, was submitted to the SIMMC2.0 challenge. ## 5 Results MM-Disambiguation: As shown in Table 2 and Column 2 of Table 4, SimpleMTOD\({}_{Sub}\) achieves accuracy scores of 92.17% and 93.6% on devtest and test-std respectively when trained to predict YES/NO tokens. This is a 27% relative improvement over the GPT-2 based baseline with a classification head. Furthermore, we evaluate the model on the MM-Disambiguation task as part of the end-to-end model, based on the system intent predicted by the model. Here, we consider any _INFORM:DISAMBIGUATE_ prediction as a YES. This approach demonstrates a very similar accuracy score of 92.12%. The best-performing model on test-std (94.5%, Team-6) ensembles two models trained on RoBERTa and BART 3. 
Footnote 3: This is based on the description provided at: [https://github.com/NLPlab-skku/DSTC10_SIMMC2.0](https://github.com/NLPlab-skku/DSTC10_SIMMC2.0) MM-CorefTable 2 and the Third column of the Table 4 show the MM-Coref Object-F1 scores of on devtest and test-std respectively. SimpleMTOD achieved 68.2 (54% relative gain over baseline) in test-std dataset and 67.6 (84% gain) on the devtest split. While there is no information available on Team-2's leading solution, the BART-based model of Team-4 which is trained end-to-end with task-specific heads achieves 75.8% on this task. MM-DSTDespite being a simple language model, both our Intent-F1 (95.8%) and Slot-F1 (87.7%) scores on test-std split are comparable with complex visual-language models. Furthermore, as in Table 1, there is significant improvement in the Joint Accuracy scores from 57.3% to 63.1% when positional information is used. Response GenerationA prior version of the model, SimpleMTOD\({}_{Sub}\) achieves a state-of-the-art BLEU score of 0.327 on the test-std split of the SIMMC2.0 dataset. This is in comparison with models which rely on sophisticated feature extraction processes. In our view, the simplified representation of visual information preserves and complements the generative capabilities of pre-trained models. Furthermore, as shown in Table 3, SimpleMTOD achieves a BLEU score of 0.49 on devtest when the ground-truth actions are used. The end-to-end version of SimpleMTOD also achieves \begin{table} \begin{tabular}{l c c c c} \hline Model & Intent-F1 & Slot-F1 & Request Slot-F1 & Joint Accuracy \\ \hline GPT-2 Baseline & 94.5 & 81.7 & 89.6 & 44.6 \\ MTN-SIMMC & 94.3 & 74.8 & 85.4 & 28.3 \\ SimpleMTOD\({}_{Sub}\) & **95.8** & 83.3 & 89.7 & 57.3 \\ SimpleMTOD & 94.0 & **85.8** & **91.7** & **63.1** \\ \hline \end{tabular} \end{table} Table 1: Evaluation results for MM-DST task on Devtest split \begin{table} \begin{tabular}{l c} \hline Model & BLEU \\ \hline GPT-2 Baseline & 0.192 \\ MTN-SIMMC & 0.217 \\ SimpleMTOD\({}_{Sub}\) & 0.43 \\ SimpleMTOD(ground truth actions) & **0.49** \\ SimpleMTOD & 0.45 \\ \hline \end{tabular} \end{table} Table 3: BLEU scores for Assistant Response Generation task on Devtest split. a BLEU score of 0.45. It should be noted that this is an improvement over the \(SimpleMTOD_{Sub}\) model score of 0.43. This indicates the importance of associating region related information. ## 6 Discussion In order to understand the behaviour of SimpleMToD, we use gradient-based salience Atanasova et al. (2020) provided with the Ecco framework Alammar (2021). Using Ecco, we inspect salience scores for all the tokens in the left side of the token of interest. In the heat-maps presented in this section, darker colors mean a higher salience score. It should also be noted that the model assigns high salience scores on separator tokens (such as \(<USB>\), _[, ]_ ) that define the structure of the generation. While proper attention to the structure is of paramount importance, our **discussion focuses on salience scores assigned to the rest of the tokens, which represent the semantics** of the multimodal conversations. **Effect of De-localization and Scene Descriptions:** The introduction of de-localized tokens significantly improves the Object-F1 of MM-coref and joint accuracy of MM-DST. Accordingly, we first analyse the behaviour of the model when predicting co-references. Figures 5 and 4 show example utterances with and without scene descriptions respectively. 
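Salience maps of the kind shown in these figures can be approximated with a plain gradient-times-input attribution over a causal language model. The sketch below uses a generic GPT-2 checkpoint from the transformers library rather than our trained SimpleMTOD weights or the exact Ecco pipeline, so it only illustrates the procedure.

```python
# Sketch of gradient-based input salience for the next-token prediction of a
# causal LM. Uses a generic GPT-2 checkpoint, not the trained SimpleMTOD model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "I need a yellow shirt"
ids = tokenizer(text, return_tensors="pt").input_ids

# Embed the input and make the embeddings a leaf tensor so gradients reach them.
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds).logits[0, -1]   # next-token distribution

pred_id = int(logits.argmax())                        # greedy (top-k=1) prediction
logits[pred_id].backward()                            # gradient of its logit

# Gradient x input, reduced to one normalized salience score per input token.
salience = (embeds.grad[0] * embeds[0]).norm(dim=-1)
salience = salience / salience.sum()

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0]), salience.tolist()):
    print(f"{tok:>10s}  {s:.3f}")
```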
In the case where scene description is not provided, the model puts a high salience on tokens 'yellow' and'shirt', and predicts the token INV_146 which represents a yellow color shirt as shown in Table 5. (It should be noted that none of the metadata shown in the diagram are provided to the model explicitly and the model figures this out from globally consistent use of tokens). However, in this case, a particular catalogue item INV_146 is not present in the scene. When we observe the confidence values of the prediction from the last layer (shown in Table 6), it can be seen that the model is not quite certain about the prediction with 13.75 for INV_146 and 13.04 for INV_247, both of which represent yellow shirts. This is to indicate that even though the model has learnt to associate object attributes necessary for co-reference resolution, it lacks information to be certain about the prediction. To this end, we provide the model with a scene description as described in 3.2. When the scene descriptions are provided, SimpleMTOD correctly predicts the token INV_247 with 92.63% confidence and high salience score over the same token from the scene description, as well as tokens'shirt' and 'yellow'. Additionally from Figure 5 it can be noted that INV_199 also shows a high salience score. From the metadata, we can see it is a pink color shirt. However, there is a significant salience score over the token 'yellow' that results in generating the correct token INV_247 over INV_199 (which is the second ranked token with only had 7.17 confidence). Extending the analysis, we modified the original utterance to _"I need a pink shirt"_ and generated the next token, and SimpleMToD accordingly predicted the token INV_199 (with high confidence of 99.79%) as observed in Figure 6. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **MM-Disam’n** & **MM-Coref** & \multicolumn{2}{c}{**DST**} & \multicolumn{2}{c}{**Response Generation**} \\ \cline{2-6} & **Accuracy** & **Object-F1** & **Intent-F1** & **Slot-F1** & **BLEU** \\ \hline **GPT-2 Baseline** & 73.5 & 44.1 & 94.1 & 83.8 & 0.202 \\ **MTN - Baseline** & NA & NA & 92.8 & 76.7 & 0.211 \\ **Team-2** & NA & **78.3** & 96.3 & 88.4 & NA \\ **Team-5** & 93.8 & 56.4 & **96.4** & 89.3 & 0.295 \\ **Team-6** & **94.7** & 59.5 & 96.0 & **91.5** & 0.322 \\ **SimpleMTOD\({}_{Sub}\)** & 93.6 & 68.2 & 95.8 & 87.7 & **0.327** \\ \hline \hline \end{tabular} \end{table} Table 4: Test-std results for SIMMC2.0 Challenge. NA denotes model is not applicable to the particular sub-task. Test-std split of SIMMC2.0 dataset is held-out set, which is not publicly available and used to evaluate submissions in SIMMC2.0 challenge. An earlier version of the system, SimpleMTOD\({}_{Sub}\), without scene information, was submitted for the evaluation. \begin{table} \begin{tabular}{|l|c c c|} \hline \hline \multicolumn{1}{|c|}{ \begin{tabular}{c} Token \\ Feature \\ \end{tabular} } & INV\_146 & INV\_199 & INV\_247 \\ \hline Color & yellow & pink & yellow \\ Type & shirt & shirt & shirt \\ \hline \hline \end{tabular} \end{table} Table 5: Relevant catalogue items represented by tokens INV_146, INV_199, INV_247. None of these metadata were explicitly presented to the model. Effect on Intent prediction:Even though scene descriptions play a key role in overall belief tracking as described earlier, the Intent-F1 score drops from 95.8% to 94.0% when the scene descriptions are encoded. 
In order to understand the effect, we inspect salience scores when predicting the user intent. It can be observed that when the scene descriptions are omitted, higher salience scores are assigned to the user utterance suggesting more focus on that. However, when the scene information is included, salience scores assigned to the utterance decreased to an extent, resulting in wrong predictions in certain cases. This is to indicate that scene descriptions are either redundant or act as a distractor when we consider intent-detection, which explains reduction in score. Furthermore, this behaviour aligns with our intuition that the intent parts of the user utterances are predominantly language-driven. Figure 7 shows an example where omitting the scene information produces the correct intent of _REQUEST:COMPARE_, whereas our final version of SimpleMTOD wrongly predicted the intent as _ASK:GET_ ## 7 Related Work Peng et al. (2020); Hosseini-Asl et al. (2020); Ham et al. (2020) are closely related to our work as they all model task-oriented dialogues in an end-to-end manner with GPT-2-like large-scale transformer-based architectures. However, all those models focus on _text-only_ task-oriented dialogues. The GPT-2 adaptation Kottur et al. (2021), which is provided as a baseline along with the SIMMC2.0 dataset, is also closely related to our work. However, this baseline represents visual objects by canonical ids and demonstrates subpar results to our model in all four tasks. Generative encoder-decoder models Liang et al. (2020); Zhao et al. (2017) are a promising alternative to decoder-only (GPT-2 based) dialogue models \begin{table} \begin{tabular}{l l l} \hline \hline Original(color=yellow) & INV\_247 (92.63) & INV\_199 (7.17) & INV\_155(0.08) \\ Original w/o desc. & INV\_146(13.75) & INV\_247 (13.04) & INV\_249 (12.60) \\ Modified(color=pink) & INV\_199(99.79) & INV\_247 (0.19) & INV\_235(\textless{}0.01) \\ \hline \hline \end{tabular} \end{table} Table 6: For the example utterances discussed, we inspected top-3 tokens and their confidence scores. Figure 4: Salience score heat-map when predicting the token _INV_146_ for utterance _I need a yellow shirt_ without scene information. Darker colors represents higher salience score. See Figure:8 in appendix for actual values Figure 5: Salience scores heat-map _when scene information_ when predicting the token _INV_247_ in utterance _I need a yellow shirt_. See Figure:9 in appendix for actual values Figure 6: Salience score heat-map when predicting the token _INV_199_ for modified utterance _I need a pink shirt_ See Figure:10 in appendix for actual values Figure 7: Salience score heat-map when predicting the correct intent token _REQUEST:COMPARE_ for the dialogue turn with final utterance _“Can you tell me the brands for the purple and maroon ones on the left and how much they are?”_ without providing scene information that have been extensively investigated in unimodal task-oriented dialogues. The MTN-baseline Le et al. (2019), which we compare to, is based on the encoder-decoder architecture. While being inferior with respect to performance in both the tasks considered, this model involves sophisticated feature extraction process. Mrksic et al. (2017) coined the term 'de-lexicalization' for abstraction in neural dialogue state tracking tasks. This idea has been extensively used in goal oriented dialogues. Our notion of delocalized object representation is influenced by this work. 
## 8 Conclusion We explore a simple, single generative architecture (SimpleMTOD) for several sub-tasks in multimodal task-oriented dialogues. We build on large-scale auto-regressive transformer-based language modeling, which has been effectively utilized in task-oriented dialogues, and formalize the multimodal task-oriented dialogue as a sequence prediction task. Our model employs a 'de-localization' mechanism for visual object representation that ensures the consistency of those tokens throughout the dataset. Furthermore, we encoded spatial information of object instances with a very small number of special (globally consistent) tokens. Despite the simplicity in representing visual information, our model demonstrates comparable or better performance with models that heavily rely on visual feature extraction, on four multimodal sub-tasks in the SIMMC2.0 challenge. ## 9 Future Directions Most current vision-language research relies on fusing pixel-level vision information with token-level language representations. However, their applicability for dialogues where the language is sophisticated remain sparsely studied. In contrast, we explore a symbolic approach for representing visual information and combining it with auto-regressive language models. While we rely on smaller scale models (with 17 million parameters), our work is readily extendable for large language models (LLMs). Unlike pixel level visual representations, special tokens representing visual information being more similar to the word tokens which the LLMs area trained on, symbolic visual representation would facilitate effective transfer learning. SimpleMTOD represents visual information using carefully designed input tokens. Capturing these information through semantic scene-graphs, which would provide richer representation, and fusing them with LLMs would be an interesting future direction of research for multimodal dialogues. Development in knowledge-graph based language grounding would complement this line of work. ## Acknowledgements This work is partially supported by the European Commission under the Horizon 2020 framework programme for Research and Innovation (H2020-ICT-2019-2, GA no. 871245), SPRING project, [https://spring-h2020.eu](https://spring-h2020.eu)
2303.08412
On the convergence analysis of the decentralized projected gradient descent method
In this work, we are concerned with the decentralized optimization problem: \begin{equation*} \min_{x \in \Omega}~f(x) = \frac{1}{n} \sum_{i=1}^n f_i (x), \end{equation*} where $\Omega \subset \mathbb{R}^d$ is a convex domain and each $f_i : \Omega \rightarrow \mathbb{R}$ is a local cost function only known to agent $i$. A fundamental algorithm is the decentralized projected gradient method (DPG) given by \begin{equation*} x_i(t+1)=\mathcal{P}_\Omega\Big[\sum^n_{j=1}w_{ij} x_j(t) -\alpha(t)\nabla f_i(x_i(t))\Big] \end{equation*} where $\mathcal{P}_{\Omega}$ is the projection operator onto $\Omega$ and $\{w_{ij}\}_{1\leq i,j \leq n}$ are communication weights among the agents. While this method has been widely used in the literature, its convergence properties have not been established so far, except for the special case $\Omega = \mathbb{R}^d$. This work establishes new convergence estimates of DPG when the aggregate cost $f$ is strongly convex and each function $f_i$ is smooth. If the stepsize is given by a constant $\alpha (t) \equiv \alpha > 0$ and suitably small, we prove that each $x_i (t)$ converges to an $O(\sqrt{\alpha})$-neighborhood of the optimal point. In addition, we further improve the convergence result by showing that the point $x_i (t)$ converges to an $O(\alpha)$-neighborhood of the optimal point if the domain is given by the half-space $\mathbb{R}^{d-1}\times \mathbb{R}_{+}$ for any dimension $d\in \mathbb{N}$. Also, we obtain new convergence results for decreasing stepsizes. Numerical experiments are provided to support the convergence results.
Woocheol Choi, Jimyeong Kim
2023-03-15T07:27:27Z
http://arxiv.org/abs/2303.08412v2
# On the convergence analysis of the decentralized projected gradient descent ###### Abstract. In this work, we are concerned with the decentralized optimization problem: \[\min_{x\in\Omega}\ f(x)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(x),\] where \(\Omega\subset\mathbb{R}^{d}\) is a convex domain and each \(f_{i}:\Omega\to\mathbb{R}\) is a local cost only known to agent \(i\). A fundamental algorithm is the decentralized projected gradient method (DPG) given by \[x_{i}(t+1)=\mathcal{P}_{\Omega}\Big{[}\sum_{j=1}^{n}w_{ij}x_{j}(t)-\alpha(t)\nabla f_{i}(x_{i}(t))\Big{]},\] where \(\mathcal{P}_{\Omega}\) is the projection operator onto \(\Omega\) and \(\{w_{ij}\}_{1\leq i,j\leq n}\) are communication weights among the agents. While this method has been widely used in the literature, its sharp convergence properties have not been well established so far, except for the special case \(\Omega=\mathbb{R}^{d}\). This work establishes new convergence estimates of DPG when the aggregate cost \(f\) is strongly convex and each function \(f_{i}\) is smooth. If the stepsize is given by a constant \(\alpha(t)\equiv\alpha>0\) and suitably small, we prove that each \(x_{i}(t)\) converges to an \(O(\sqrt{\alpha})\)-neighborhood of the optimal point. In addition, we consider a one-dimensional example \(f_{i}\) and prove that the point \(x_{i}(t)\) converges to an \(O(\alpha)\)-neighborhood of the optimal point. Also, we obtain convergence estimates for decreasing stepsizes. Numerical experiments are also provided to support the convergence results. ## 1. Introduction Let us consider a multi-agent system with \(n\) agents that form a connected network and cooperatively solve the following constrained optimization problem: \[\min_{x\in\Omega}f(x)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(x), \tag{1.1}\] where \(f_{i}:\Omega\to\mathbb{R}\) is a local cost function only known to agent \(i\in\mathcal{V}=\{1,2,\cdots,n\}\), and \(\Omega\subset\mathbb{R}^{d}\) denotes a common closed convex set. This problem arises in many applications such as engineering problems [6, 8], signal processing [3, 24] and machine learning problems [4, 12, 25]. We consider the decentralized projected gradient (DPG) algorithm [20, 26] given by \[x_{i}(t+1)=\mathcal{P}_{\Omega}\bigg{[}\sum_{j=1}^{n}w_{ij}x_{j}(t)-\alpha(t)\nabla f_{i}(x_{i}(t))\bigg{]}, \tag{1.2}\] where \(\mathcal{P}_{\Omega}\) denotes the projection operator onto \(\Omega\), \(\{w_{ij}\}_{1\leq i,j\leq n}\) are the communication weights among the agents, and \(\alpha(t)>0\) is the stepsize at iteration \(t\). 
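To make the iteration concrete, the following sketch implements (1.2) for an illustrative setup: quadratic local costs, a ring communication graph with uniform weights, and \(\Omega\) taken to be the nonnegative orthant so that the projection is a coordinatewise clipping. All of these choices are assumptions made only for the example, not the setting of the paper's experiments.

```python
# Sketch of the DPG iteration (1.2) with quadratic local costs
# f_i(x) = 0.5 * ||A_i x - b_i||^2 over Omega = R^d_+ (projection = clipping at 0).
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                     # number of agents, dimension
A = rng.normal(size=(n, d, d))
b = rng.normal(size=(n, d))
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])  # gradient of the local cost f_i

# Symmetric doubly stochastic mixing matrix for a ring graph (uniform 1/3 weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

project = lambda x: np.maximum(x, 0.0)          # projection onto the nonnegative orthant

alpha = 0.05                                    # constant stepsize
x = rng.normal(size=(n, d))                     # x_i(0), one row per agent
for t in range(500):
    mixed = W @ x                               # local averaging: sum_j w_ij x_j(t)
    x = np.array([project(mixed[i] - alpha * grad(i, x[i])) for i in range(n)])

print("average iterate:", x.mean(axis=0))
print("consensus spread:", np.linalg.norm(x - x.mean(axis=0)))
```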
When \(\Omega=\mathbb{R}^{d}\), the projection in (1.2) disappears and the method reduces to the decentralized gradient descent (DGD) algorithm \[x_{i}(t+1)=\sum_{j=1}^{n}w_{ij}x_{j}(t)-\alpha(t)\nabla f_{i}(x_{i}(t)), \tag{1.3}\] whose convergence properties have been studied thoroughly [27, 9]. For the projected method (1.2), convergence results have been obtained under various assumptions on the cost functions and on the stepsize [26, 11, 18, 17]. In particular, for a constant stepsize \(\alpha(t)\equiv\alpha>0\), the work [17] established an estimate, referred to below as (1.4), which bounds \(\|x_{i}(t)-x_{*}\|\) by an exponentially decaying term, an \(O(\sqrt{\alpha})\) error and a term of order \(O(D/\sqrt{n})\) with \(D:=\max_{1\leq i\leq n}\|\nabla f_{i}(x_{*})\|\); the bound involves the quantities \(\gamma_{i}=\beta^{i}\|x_{0}\|+\frac{2\alpha D}{1-\beta}\), where \(x_{*}\) denotes an optimizer of (1.1). The above mentioned results are summarized in Table 1. We remark that the right hand side of (1.4) involves the following term \[\sum_{i=0}^{t}(1-c\alpha)^{t-i}\frac{\alpha D}{\sqrt{n}}=\frac{D}{c\sqrt{n}} \Big{[}1-(1-c\alpha)^{t+1}\Big{]},\] which converges to \(\frac{D}{c\sqrt{n}}\) as the number of iterations \(t\) goes to infinity. This limit is independent of the stepsize \(\alpha>0\). Therefore, the right hand side of the above estimate (1.4) involves the term \(\frac{D}{c\sqrt{n}}\) in the limit \(t\to\infty\). However, this convergence estimate is not as strong as the estimate for (1.3) obtained in [27], which showed that the sequence of (1.3) with constant stepsize \(\alpha(t)\equiv\alpha>0\) (below a certain threshold) converges exponentially fast to an \(O(\alpha)\)-neighborhood of the optimal point. Having these results, it is natural to pose the following question: _Question: Does the algorithm (1.2) with constant stepsize \(\alpha>0\) converge to the optimizer \(x_{*}\) up to an \(O(\alpha^{c})\) error for some \(c>0\)?_ It is worth mentioning that this fundamental question is still open and has not been resolved as of now. In this work, we show that this convergence property holds with \(c=1/2\) if the total cost function \(f\) is \(\mu\)-strongly convex and each local cost function \(f_{i}\) is \(L_{i}\)-smooth. In addition, we exhibit a concrete example where the property holds with \(c=1\). To explain the difficulty in the convergence analysis of (1.2) compared to the case \(\Omega=\mathbb{R}^{d}\), we note that averaging (1.3) gives \[\bar{x}(t+1)=\bar{x}(t)-\frac{\alpha(t)}{n}\sum_{i=1}^{n}\nabla f_{i}(x_{i}(t)), \tag{1.5}\] where \(\bar{x}(t)=\frac{1}{n}\sum_{i=1}^{n}x_{i}(t)\). Then, if the stepsize is set to \(\alpha\leq\frac{2}{\mu+L}\), one can obtain the following inequality: \[\|\bar{x}(t+1)-x_{*}\|\leq\Big{(}1-\frac{\mu L}{\mu+L}\alpha\Big{)}\|\bar{x}(t )-x_{*}\|+\frac{L\alpha}{n}\sum_{i=1}^{n}\|x_{i}(t)-\bar{x}(t)\|,\] when \(f\) is \(\mu\)-strongly convex and each \(f_{i}\) is \(L\)-smooth. This inequality is a major ingredient in the convergence estimate of (1.3) in the works [27, 9], but the identity (1.5) no longer holds for (1.2) due to the projection. 
Instead, we proceed to obtain a sequential estimate of the quantity \[\sum_{i=1}^{N}\|x_{i}(t)-x_{*}\|^{2},\] which enables us to offset the projection operator efficiently using the contraction property of the projection operator (see Section 4 for the detail). As a result, we obtain a convergence result up to an error \(O(\sqrt{\alpha})\). We point out that our result is obtained for (1.2) with the projection operator to an arbitrary convex set \(\Omega\subset\mathbb{R}^{d}\) which is possibly unbounded. The rest of this paper is organized as follows. In Section 2 we introduce some assumptions and state our main results. In Section 3, we recall some preliminary results and give some useful estimates that we will use throughout the paper. Section 4 is devoted to obtaining sequential estimates for the algorithm. Based on these sequential estimates, we establish the uniform boundedness of the sequence in Section 5. Then we obtain consensus estimates in Section 6 and prove the main convergence results in Section 7. In Section 8, we derive an optimal convergence result for a specific example in dimension one. Finally, we perform numerical experiments to support the main theorems in Section 9. ## 2. Assumptions and main results In this section, we state the assumptions on the total and local cost functions in (1.1) and communication patterns among agents. Then we give the main results of this paper. We are interested in (1.1) when the local cost functions and the total cost functions satisfy the following strong convexity and smoothness assumption. **Assumption 1**.: _For each \(i\in\{1,\cdots n\}\), the local cost function \(f_{i}\) is \(L_{i}\)-smooth for some \(L_{i}>0\), i.e., for any \(x,y\in\Omega\) we have_ \[\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq L_{i}\|x-y\|\quad\forall\ x,y\in\Omega.\] _We set \(L=\max_{1\leq i\leq n}L_{i}\). Then the total cost function \(f(\cdot)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\cdot)\) is \(L\)-smooth._ \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & Cost & Smooth & Learning rate & Regret & Rate \\ \hline [26] & C & \(\|\nabla f_{i}\|_{\infty}<\infty\) & \(\begin{array}{c}\sum_{t=1}^{\infty}\alpha(t)=\infty\\ \sum_{t=1}^{\infty}\alpha(t)^{2}<\infty\end{array}\) & \(\|x_{i}(t)-x_{*}\|\) & \(o(1)\) \\ \hline [11] & C & \(\|\nabla f_{i}\|_{\infty}<\infty\) & \(\alpha(t)=\frac{c}{t^{\alpha}}\) & \(\begin{array}{c}\min_{1\leq k\leq n}\\ f(x_{k}(t))-f_{*}\end{array}\) & \(\begin{array}{c}O(\frac{1}{n^{\alpha}})\text{ if }0<\alpha<\frac{1}{2}\\ O(\frac{\log n}{\sqrt{n}})\text{ if }\alpha=\frac{1}{2}\end{array}\) \\ \hline [18] & SC & \(\|\nabla f_{i}\|_{\infty}<\infty\) & \(\alpha(t)=c/t\) & \(\|x_{i}(t)-x_{*}\|\) & \(O(1/\sqrt{t})\) \\ \hline [17] & SC & L-smooth & \(\alpha(t)\equiv\alpha\) & \(\|x_{i}(t)-x_{*}\|\) & \(O(e^{-ct})+O(\sqrt{\alpha})+O(\frac{D}{\sqrt{n}})\) \\ \hline This work & SC & L-smooth & \(\alpha(t)\equiv\alpha\) & \(\|\bar{x}(t)-x_{*}\|\) & \(O(e^{-ct})+O(\sqrt{\alpha})\) \\ \hline This work & Specific example & L-smooth & \(\alpha(t)\equiv\alpha\) & \(\|\bar{x}(t)-x_{*}\|\) & \(O(e^{-ct})+O(\alpha)\) \\ \hline This work & SC & L-smooth & \(\alpha(t)=\frac{c}{t^{\alpha}}\) & \(\|\bar{x}(t)-x_{*}\|\) & \(O(\frac{1}{t^{\alpha/2}})\text{ if }0<\alpha<1\) \\ \hline \end{tabular} \end{table} Table 1. Convergence results for DPG. Here C and SC mean that convex and \(\mu\)-strongly convex, respectively. Throughout the paper, we use \(\|\cdot\|\) to denote the euclidean norm. 
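As a simple illustration of Assumption 1 (and of the strong convexity assumed in Assumption 2 below), one may keep in mind quadratic local costs; the matrices \(A_{i}\) and vectors \(b_{i}\) here are placeholders for the example: \[f_{i}(x)=\frac{1}{2}\|A_{i}x-b_{i}\|^{2},\qquad L_{i}=\lambda_{\max}(A_{i}^{\top}A_{i}),\qquad\mu=\lambda_{\min}\Big{(}\frac{1}{n}\sum_{i=1}^{n}A_{i}^{\top}A_{i}\Big{)},\] where the strong convexity constant \(\mu\) of the total cost is positive whenever the averaged matrix is positive definite, even if some individual \(f_{i}\) are only convex.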
**Assumption 2**.: _The total cost function \(f(\cdot)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\cdot)\) is \(\mu\)-strongly convex for some \(\mu>0\), i.e., for any \(x,y\in\Omega\), we have_ \[f(y)\geq f(x)+(y-x)\nabla f(x)+\frac{\mu}{2}\|y-x\|^{2}.\] Under this assumption, the function \(f\) has a unique optimizer \(x_{*}\in\Omega\). In decentralized optimization, a local agent informs its own information to other agents relying on shared communication networks which are characterized by an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where each node in \(\mathcal{V}\) represents an agent, and each edge \(\{i,j\}\in\mathcal{E}\) means that \(i\) can send messages to \(j\). The graph \(\mathcal{G}\) is assumed to satisfy the following assumption. **Assumption 3**.: _The communication graph \(\mathcal{G}\) is fixed and connected._ We define the mixing matrix \(W=[w_{ij}]_{1\leq i,j\leq n}\) as follows. The nonnegative weight \(w_{ij}\) is given for each communication link \(\{i,j\}\in\mathcal{E}\), where \(w_{ij}\neq 0\) if \(\{i,j\}\in\mathcal{E}\) and \(w_{ij}=0\) if \(\{i,j\}\notin\mathcal{E}\). In this paper, we make the following assumption on the mixing matrix \(W\). **Assumption 4**.: _The mixing matrix \(W=\{w_{ij}\}_{1\leq i,j\leq n}\) is symmetric and doubly stochastic. The network is strongly connected and the weight matrix \(W\) satisfies null\((I-W)=\text{span}\{1_{n}\}\)._ Without loss of generality, we arrange the eigenvalues of \(W\) to satisfy \[1=|\lambda_{1}(W)|>|\lambda_{2}(W)|\geq\cdots\geq|\lambda_{n}(W)|\geq 0.\] It is well-known that we have \(\beta:=|\lambda_{2}(W)|<1\) under Assumption 4. Furthermore, we have the following lemma. **Lemma 2.1**.: _Suppose that the mixing matrix \(W\) satisfies the Assumption 4. Then, for any \(x=(x_{1},\cdots,x_{n})\in\mathbb{R}^{d\cdot n}\) we have_ \[\sum_{i=1}^{n}\Big{\|}\sum_{j=1}^{n}w_{ij}(x_{j}-\bar{x})\Big{\|}^{2}\leq \beta^{2}\sum_{i=1}^{n}\|x_{i}-\bar{x}\|^{2},\] _where \(\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_{i}\)._ Proof.: We refer to Lemma 1 in [23]. ### Main Results In centralized optimization, it is enough to show that the sequence generated by (1.5) converges to the optimal solution of (1.1) since the central coordinate control all agents simultaneously. On the other hand, in decentralized optimization, each agent makes its own sequence and only informs its own information to its neighbor agents. Therefore, we also need to show that each sequence generated by (1.2) converges to the same point, in which case we say the consensus is achieved. Then we reveal this point converges to the optimal point. Before stating the results, we introduce some constants used to state and prove the results. Let \(D:=\max_{1\leq i\leq n}\|\nabla f_{i}(x_{*})\|\). We fix a variable \(\delta>0\) such that \((1+\delta)\beta^{2}<1\) and let \(\tilde{\beta}:=(1+\delta)\beta^{2}\). Also, we set the following constants \[c_{1}:=3L^{2}\bigg{(}1+\frac{1}{\delta}\bigg{)},\ c_{2}:=3nD^{2}\bigg{(}1+\frac {1}{\delta}\bigg{)},\ c_{3}:=c_{1}+L^{2},\ c_{4}:=\frac{4L^{2}}{\mu} \tag{2.1}\] and denote \(\mathbf{x}(t)\), \(\bar{\mathbf{x}}(t)\) and \(\mathbf{x}_{*}\in\mathbb{R}^{d\cdot n}\) by \[\mathbf{x}(t)=[x_{1}(t),x_{2}(t)\cdots,x_{n}(t)]^{T},\ \bar{\mathbf{x}}(t)=[ \bar{x}(t),\cdots\bar{x}(t)]^{T},\ \mathbf{x}_{*}=[x_{*},\cdots,x_{*}]^{T}.\] where \(\bar{x}(t)=\frac{1}{n}\sum_{i=1}^{n}x_{i}(t)\). 
We note that \[\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}=\sum_{i=1}^{n}\|x_{i}(t)-\bar{x}(t) \|^{2}\ \text{and}\ \|\mathbf{x}(t)-\mathbf{x}_{*}\|^{2}=n\|\bar{x}(t)-x_{*}\|^{2}.\] In addition, since \(\frac{1}{n}\sum_{i=1}^{n}x_{i}(t)-\bar{x}(t)=0\), it follows directly that for all \(t\geq 0\), \[\|\mathbf{x}(t)-\mathbf{x}_{*}\|^{2}=\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2 }+\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}. \tag{2.2}\] Now we introduce our results on the projected decentralized gradient descent (1.2). The first result provides the conditions for the uniform boundedness of the sequence \(\{x_{i}(t)\}_{t\geq 0}\) in the sense that \(\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\) is uniformly bounded for all \(t\geq 0\). **Theorem 2.1**.: _Suppose that Assumptions 1,3,4 hold. Also, assume that one of the following statements holds true:_ 1. \(\Omega\) _is bounded._ 2. _Each cost_ \(f_{i}\) _is convex and the stepsize is constant, i.e.,_ \(\alpha(t)\equiv\alpha\)_, satisfying_ \(\alpha\leq\frac{1+\lambda_{n}(W)}{L}\)_._ 3. _Assumption_ 2 _holds. Also the stepsize_ \(\{\alpha(t)\}_{t\geq 0}\) _is non-increasing and_ \(\alpha(0)\) _satisfies_ \[\alpha(0)<\min\bigg{\{}Z,\frac{\mu}{4c_{1}},\frac{2}{L+\mu}\bigg{\}}.\] _Here we have set the positive constant_ \(Z\) _by_ \[Z:=\frac{1}{2c_{3}}\Big{[}-\Big{(}c_{4}+\frac{\mu}{4}\Big{)}+\sqrt{\Big{(}c_{ 4}+\frac{\mu}{4}\Big{)}^{2}+4c_{3}(1-\tilde{\beta})}\Big{]}.\] _Then there exists a constant \(R>0\) such that_ \[\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\leq R\] _holds for all \(t\geq 0\)._ The following assumption formulates the above uniform boundedness property: **Assumption 5**.: _There exists a constant \(R>0\) such that_ \[\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\leq R\] _holds for all \(t\geq 0\)._ Although the result of this assumption is proved in Theorem 2.1, we proposed this assumption because the result may hold for larger ranges of \(\alpha(t)\) than that guaranteed by Theorem 2.1. Proving a sharper range of \(\alpha(t)\) for the uniform boundedness property would be an interesting future work. Now we state the consensus and convergence results for (1.2) both for the constant stepsize and decreasing stepsize. We first introduce the following consensus results based on the estimates for the consensus error \(\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\). **Theorem 2.2**.: _Suppose that Assumptions 1-5 hold. If \(\{\alpha(t)\}_{t\geq 0}\) satisfies \(\alpha(0)\leq\frac{2}{L+\mu}\), then we have_ \[\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\leq\beta^{t}\|\mathbf{x}(0)-\bar{ \mathbf{x}}(0)\|^{2}+\frac{J\alpha(t)^{2}}{(1-\beta)^{2}}.\] _Here for constant stepsize, i.e. \(\alpha(t)\equiv\alpha\), we set_ \[J:=3(L^{2}R^{2}+nD^{2}).\] _For a decreasing stepsize, we set_ \[J:=3(L^{2}R^{2}+nD^{2})\cdot\sup_{s\geq 0}\frac{\alpha(0)^{2}\beta^{s}+\alpha ([s/2])^{2}}{\alpha(s)^{2}}.\] Theorem 2.2 demonstrates that the consensus is reached exponentially up to an \(O(\alpha(t))\) error. Next we state the convergence results for the sequence \(\{\bar{x}(t)\}_{t\geq 0}\) towards the optimal point. 
Before stating the results, we introduce the following constants \(G_{1}\) and \(G_{2}\): \[G_{1} =2R\Big{(}\tilde{c}_{3}\alpha(0)^{2}+\tilde{c}_{4}\alpha(0)+\beta \Big{)}\] \[G_{2} =\Big{[}(\tilde{c}_{3}\alpha(0)^{2}+\tilde{c}_{4}\alpha(0)+\beta )\frac{2J^{2}}{(1-\beta)^{2}}+\tilde{c}_{1}R+\tilde{c}_{2}\Big{]},\] where \[\tilde{c}_{1}:=\frac{3L^{2}}{1-\beta},\;\tilde{c}_{2}:=\frac{3nD^{2}}{1-\beta },\;\tilde{c}_{3}:=\tilde{c}_{1}+L^{2},\;\tilde{c}_{4}:=\frac{4L^{2}}{\mu}.\] These constants are obtained by setting \(\delta=(1-\beta)/\beta\) in the constants of (2.1). The following convergence result holds when the stepsize is given by a constant. **Theorem 2.3**.: _Suppose that Assumptions 1-5 hold. If the stepsize is given by a constant \(\alpha>0\) such that \(\alpha\leq\frac{2}{L+\mu}\), then we have_ \[\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}\leq\Big{(}1-\frac{\mu\alpha}{2} \Big{)}^{t}\|\bar{\mathbf{x}}(0)-\mathbf{x}_{*}\|^{2}+\frac{2G_{2}}{\mu}\alpha +\frac{2}{\mu\alpha}\left(\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t-1}+\beta^{ \frac{t-1}{2}}\right).\] Theorem 2.3 implies the sequence generated by (1.2) converges to an \(O(\sqrt{\alpha})\)-neighborhood of the optimal point exponentially fast. We recall from [27] that the sequence of the algorithm (1.2) on the whole space \(\mathbb{R}^{d}\) converges to an \(O(\alpha)\)-neighborhood of \(x_{*}\). This naturally leads us to pose the following question. _Question_: _Is the convergence error \(O(\sqrt{\alpha})\) in Theorem 2.3 optimal? or can we improve the convergence error to \(O(\alpha)\)?_ We give a partial answer to this question in Section 8. Precisely, we find a one-dimensional example such that the algorithm (1.2) converges to an \(O(\alpha)\) neighborhood of the optimal point. In the below, we provide the convergence results when the stepsize is given by the decreasing stepsize \(\alpha(t)=v/(t+w)^{p}\) for \(0<p\leq 1\). **Theorem 2.4**.: _Suppose that Assumptions 1-5 hold. Let \(p\in(0,1)\) and assume that \(\alpha(t)=\frac{v}{(t+w)^{p}}\) with \(v,w>0\) satisfying_ \[\alpha(0)=\frac{v}{w^{p}}\leq\frac{2}{L+\mu}.\] _Then we have_ \[\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}\leq\frac{2eQ\Big{(}\rho G_{1}+G_{2} \Big{)}v}{\mu}([t/2]+w-1)^{-p}+\mathcal{R}_{1}(t)+\mathcal{R}_{2}(t),\] _where \(Q=\Big{(}\frac{w+1}{w}\Big{)}^{2p}\), \(\rho=\sup_{t\geq 0}\beta^{t}/\alpha(t)^{2}\) and_ \[\mathcal{R}_{1}(t) =e^{-\sum_{s=0}^{t-1}\frac{\mu v}{2(s+w)^{p}}}\|\bar{\mathbf{x}} (0)-\mathbf{x}_{*}\|^{2},\] \[\mathcal{R}_{2}(t) =Q\Big{(}\rho G_{1}+G_{2}\Big{)}v^{2}e^{-\frac{\mu vt}{4(t+w)^{p }}}\sum_{s=1}^{[t/2]-1}\frac{1}{(s+w)^{2p}}.\] In the above result, we easily see that for any fixed \(N>0\), there exists a constant \(C_{N}>0\) independent of \(t\geq 0\) such that \[\mathcal{R}_{1}(t)+\mathcal{R}_{2}(t)\leq C_{N}t^{-N}.\] **Theorem 2.5**.: _Suppose that Assumptions 1-5 hold. Let \(\alpha(t)=\frac{v}{(t+w)}\) with \(v,w>0\) satisfying_ \[\alpha(0)=\frac{v}{w}\leq\frac{2}{L+\mu}.\] _Also, choose \(v>0\) such that \(\mu v/2>1\). Then we have_ \[\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}\leq\bigg{(}\frac{w}{t+w}\bigg{)}^{ \mu v/2}\|\bar{\mathbf{x}}(0)-\mathbf{x}_{*}\|^{2}+\mathcal{R}_{3}(t),\] _where \(Q=\Big{(}\frac{w+1}{w}\Big{)}^{2}\), \(\rho=\sup_{t\geq 0}\beta^{t}/\alpha(t)^{2}\) and_ \[\mathcal{R}_{3}(t)=\frac{Q}{(\mu v/2)-1}\Big{(}\frac{w+1}{w}\Big{)}^{\mu v/2} \frac{\Big{(}\rho G_{1}+G_{2}\Big{)}v^{2}}{(t+w-1)}.\] ## 3. 
Preliminary results In this section, we prepare several estimates which will be used to derive two sequential estimates in the next section. We first study the projection operator \(\mathcal{P}_{\Omega}\) in (1.2) defined by \[\mathcal{P}_{\Omega}[x]=\arg\min_{y\in\Omega}\|y-x\|.\] This projection operator has the following property: **Lemma 3.1**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be convex and closed._ 1. _For any_ \(x,y\in\mathbb{R}^{d}\)_, we have_ \[\|\mathcal{P}_{\Omega}[x]-\mathcal{P}_{\Omega}[y]\|\leq\|x-y\|.\] (3.1) 2. _For any_ \(x\in\mathbb{R}^{d}\) _and_ \(y\in\Omega\)_, we have_ \[\|\mathcal{P}_{\Omega}[x]-y\|\leq\|x-y\|.\] (3.2) Proof.: We refer to Theorem 1.5.5 in [13] for the proof of estimate (3.1). Taking \(y\in\Omega\) in (3.1) leads to the estimate (3.2). The following lemma will be used to obtain the sequential estimates in Section 4. **Lemma 3.2**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be convex and closed. Then, for any \(x_{1},\cdots,x_{n}\in\mathbb{R}^{d}\), we have_ \[\sum_{i=1}^{n}\left\|\mathcal{P}_{\Omega}[x_{i}]-\frac{1}{n}\sum_{j=1}^{n} \mathcal{P}_{\Omega}[x_{j}]\right\|^{2}\leq\sum_{i=1}^{n}\left\|x_{i}-\frac{1 }{n}\sum_{j=1}^{n}x_{j}\right\|^{2}.\] Proof.: For given \(\{a_{i}\}_{i=1}^{n}\subset\mathbb{R}^{d}\), consider the function \(F:\mathbb{R}^{d}\to\mathbb{R}\) defined by \[F(x)=\sum_{i=1}^{n}\|a_{i}-x\|^{2},\] This function is minimized when \(x=\frac{1}{n}\sum_{i=1}^{n}a_{i}\), and using this we find \[\sum_{i=1}^{n}\left\|\mathcal{P}_{\Omega}[x_{i}]-\frac{1}{n}\sum_{j=1}^{n} \mathcal{P}_{\Omega}[x_{j}]\right\|^{2}\leq\sum_{i=1}^{n}\left\|\mathcal{P}_{ \Omega}[x_{i}]-\mathcal{P}_{\Omega}\bigg{[}\frac{1}{n}\sum_{j=1}^{n}x_{j} \bigg{]}\right\|^{2}.\] Combining this with (3.1), we get the desired inequality. **Lemma 3.3**.: _Suppose that Assumptions 1 and 2 hold. If \(\alpha(t)\leq\frac{2}{L+\mu}\) for all \(t\geq 0\), then we have_ \[\left\|\bar{x}(t)-x_{*}-\frac{\alpha(t)}{n}\sum_{i=1}^{n}(\nabla f_{i}(\bar{x }(t))-\nabla f_{i}(x_{*}))\right\|^{2}\leq\bigg{(}1-\frac{\mu\alpha(t)}{2} \bigg{)}^{2}\|\bar{x}(t)-x_{*}\|^{2}. \tag{3.3}\] Proof.: We expand the left hand side in (3.3) as \[\|\bar{x}(t)-x_{*}\|^{2}-2\alpha(t)\Big{\langle}\bar{x}(t)-x_{*},\nabla f( \bar{x}(t))-\nabla f(x_{*})\Big{\rangle}+\alpha(t)^{2}\left\|\nabla f(\bar{x }(t))-\nabla f(x_{*})\right\|^{2}, \tag{3.4}\] where \(f(x)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(x).\) Note that \(f\) is \(L\)-smooth and \(\mu\)-strongly convex by Assumptions 1 and 2, and so we have the following inequality (see e.g., [5, Lemma 3.11]): \[\Big{\langle}\bar{x}(t)-x_{*},\ \nabla f(\bar{x}(t))-\nabla f(x_{*})\Big{\rangle} \geq\frac{L\mu}{L+\mu}\|\bar{x}(t)-x_{*}\|^{2}+\frac{1}{L+\mu}\|\nabla f(\bar {x}(t))-\nabla f(x_{*})\|^{2}.\] Putting the above inequality in (3.4), we get \[\left\|\bar{x}(t)-x_{*}-\frac{\alpha(t)}{n}\sum_{i=1}^{n}(\nabla f _{i}(\bar{x}(t))-\nabla f_{i}(x_{*}))\right\|^{2}\] \[\leq\bigg{(}1-\frac{2L\mu\alpha(t)}{L+\mu}\bigg{)}\|\bar{x}(t)-x_ {*}\|^{2}+\alpha(t)\bigg{(}\alpha(t)-\frac{2}{L+\mu}\bigg{)}\|\nabla f(\bar{x} (t))-\nabla f(x_{*})\|^{2}.\] Using the assumption \(\alpha(t)\leq\frac{2}{L+\mu}\) and \(2L\geq L+\mu\), we then have \[\left\|\bar{x}(t)-x_{*}-\frac{\alpha(t)}{n}\sum_{i=1}^{n}(\nabla f _{i}(\bar{x}(t))-\nabla f_{i}(x_{*}))\right\|^{2} \leq\left(1-\frac{2L\mu\alpha(t)}{L+\mu}\right)\|\bar{x}(t)-x_{*} \|^{2}\] \[\leq\left(1-\frac{\mu\alpha(t)}{2}\right)^{2}\|\bar{x}(t)-x_{*}\| ^{2}.\] The proof is done. **Lemma 3.4**.: _Suppose that Assumption 1 holds. 
For \((x_{1},\cdots,x_{n})\in\mathbb{R}^{dn}\) and \(\bar{x}=\frac{1}{n}\sum_{k=1}^{n}x_{k}\) we have_ \[\sum_{i=1}^{n}\|\nabla f_{i}(x_{i})-\nabla f_{i}(\bar{x})\|^{2}\leq L^{2}\| \mathbf{x}-\bar{\mathbf{x}}\|^{2} \tag{3.5}\] _and_ \[\sum_{i=1}^{n}\left\|\nabla f_{i}(x_{i})-\frac{1}{n}\sum_{l=1}^{n}\nabla f_{l} (x_{l})\right\|^{2}\leq 3L^{2}\|\mathbf{x}-\bar{\mathbf{x}}\|^{2}+3L^{2}\| \bar{\mathbf{x}}-\mathbf{x}_{*}\|^{2}+3nD^{2}. \tag{3.6}\] Proof.: By Assumption 1, we have \[\sum_{i=1}^{n}\|\nabla f_{i}(x_{i})-\nabla f_{i}(\bar{x})\|^{2}\leq L^{2}\sum _{i=1}^{n}\|x_{i}-\bar{x}\|^{2},\] which directly implies (3.5). Next, we prove (3.6). Note that for any \(a=(a_{1},\cdots,a_{n})\in\mathbb{R}^{n}\), we have \[\sum_{i=1}^{n}\left\|a_{i}-\frac{1}{n}\sum_{l=1}^{n}a_{l}\right\| ^{2} =\sum_{i=1}^{n}\left(\|a_{i}\|^{2}-2\left\langle a_{i},\frac{1}{n} \sum_{l=1}^{n}a_{l}\right\rangle+\frac{1}{n^{2}}\left\|\sum_{l=1}^{n}a_{l} \right\|^{2}\right)\] \[=\sum_{i=1}^{n}\|a_{i}\|^{2}+\left(\frac{1}{n^{2}}-\frac{2}{n} \right)\left\|\sum_{l=1}^{n}a_{l}\right\|^{2}\] \[\leq\sum_{i=1}^{n}\|a_{i}\|^{2}.\] Using this, it follows that \[\sum_{i=1}^{n}\left\|\nabla f_{i}(x_{i})-\frac{1}{n}\sum_{l=1}^{n}\nabla f_{l }(x_{l})\right\|^{2}\leq\sum_{i=1}^{n}\|\nabla f_{i}(x_{i}(t))\|^{2}.\] By the triangle inequality, one has \[\|\nabla f_{i}(x_{i})\|^{2}\leq 3\|\nabla f_{i}(x_{i})-\nabla f_{i}(\bar{x})\|^ {2}+3\|\nabla f_{i}(\bar{x})-\nabla f_{i}(x_{*})\|^{2}+3\|\nabla f_{i}(x_{*}) \|^{2}.\] This, together with (3.5) and Assumption 1, gives \[\sum_{i=1}^{n}\|\nabla f_{i}(x_{i})\|^{2}\leq 3L^{2}\|\mathbf{x}-\bar{\mathbf{x }}\|^{2}+3L^{2}\|\bar{\mathbf{x}}-\mathbf{x}_{*}\|^{2}+3nD^{2}.\] Combining this with (3.7) gives the desired estimate. **Lemma 3.5**.: _Suppose that Assumptions 1 and 2 hold. If the diminishing sequence \(\{\alpha(t)\}_{t\geq 0}\) satisfy \(\alpha(0)\leq\frac{2}{L+\mu}\), then the sequence \(\{x_{i}(t)\}_{t\geq 0}\) generating by (1.2) for all \(1\leq i\leq n\) satisfies the following inequality_ \[n\left\|\bar{x}(t)-x_{*}-\alpha(t)\left(\frac{1}{n}\sum_{i=1}^{n }\nabla f_{i}(x_{i}(t))-\nabla f(x_{*})\right)\right\|^{2} \tag{3.7}\] \[\qquad\leq\left(1-\frac{\mu\alpha(t)}{2}\right)\|\bar{\mathbf{x }}(t)-\mathbf{x}_{*}\|^{2}+\left(L^{2}\alpha(t)^{2}+\frac{4L^{2}\alpha(t)}{\mu }\right)\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}.\] Proof.: By Young's inequality, for any \(x\) and \(y\) in \(\mathbb{R}^{n}\) we have \[\|x+y\|^{2}\leq(1+\eta)\|x\|^{2}+\Big{(}1+\frac{1}{\eta}\Big{)}\|y\|^{2} \tag{3.8}\] for all \(\eta>0\). For notational convenience, we let \(H(t)=\frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}(x_{i}(t))\). Using (3.8), we obtain the following inequality: \[\|\bar{x}(t)-x_{*}-\alpha(t)\left(H(t)-\nabla f(x_{*})\right)\|^ {2}\] \[=\|\bar{x}(t)-x_{*}-\alpha(t)\left(\nabla f(\bar{x}(t))-\nabla f( x_{*})\right)+\alpha(t)\left(\nabla f(\bar{x}(t))-H(t)\right)\|^{2}\] \[\leq(1+\eta)\left\|\bar{x}(t)-x_{*}-\alpha(t)\left(\nabla f(\bar {x}(t))-\nabla f(x_{*})\right)\right\|^{2}\] \[\qquad\qquad\qquad\qquad+\left(1+\frac{1}{\eta}\right)\alpha(t )^{2}\left\|\nabla f(\bar{x}(t))-H(t)\right\|^{2}.\] Now we estimate the right hand side of the last inequality. By Lemma 3.3, it follows that \[(1+\eta)\left\|\bar{x}(t)-x_{*}-\alpha(t)\left(\nabla f(\bar{x}(t))-\nabla f (x_{*})\right)\right\|^{2}\leq(1+\eta)\bigg{(}1-\frac{\mu\alpha(t)}{2}\bigg{)} ^{2}\|\bar{x}(t)-x_{*}\|^{2}. 
\tag{3.9}\] Note that by the Cauchy-Schwarz inequality, we get \[\|\nabla f(\bar{x}(t))-H(t)\|^{2} =\frac{1}{n^{2}}\left\|\sum_{i=1}^{n}\left(\nabla f_{i}(\bar{x}( t))-\nabla f_{i}(x_{i}(t))\right)\right\|^{2}\] \[\leq\frac{1}{n}\sum_{i=1}^{n}\left\|\nabla f_{i}(\bar{x}(t))- \nabla f_{i}(x_{i}(t))\right\|^{2}.\] Using this inequality and Lemma 3.4, we obtain \[\left(1+\frac{1}{\eta}\right)\alpha(t)^{2}\left\|\nabla f(\bar{x }(t))-H(t)\right\|^{2} \leq\left(1+\frac{1}{\eta}\right)\frac{\alpha(t)^{2}}{n}\sum_{i=1} ^{n}\|\nabla f_{i}(\bar{x}(t))-\nabla f_{i}(x_{i}(t))\|^{2}\] \[\leq\left(1+\frac{1}{\eta}\right)\frac{L^{2}\alpha(t)^{2}}{n} \sum_{i=1}^{n}\|\bar{x}(t)-x_{i}(t)\|^{2}.\] Now setting \(\eta=\frac{\mu\alpha(t)}{4}\) and combining (3.9) and (3.10), we have \[n\left\|\bar{x}(t)-x_{*}-\frac{\alpha(t)}{n}\sum_{l=1}^{n}\left( \nabla f_{l}(x_{l}(t))-\nabla f_{l}(x_{*})\right)\right\|^{2}\] \[\leq n\bigg{(}\bigg{(}1+\frac{\mu\alpha(t)}{4}\bigg{)}\bigg{(}1- \frac{\mu\alpha(t)}{2}\bigg{)}^{2}\|\bar{x}(t)-x_{*}\|^{2}+\bigg{(}1+\frac{4}{ \mu\alpha(t)}\bigg{)}\frac{L^{2}\alpha(t)^{2}}{n}\sum_{i=1}^{n}\|\bar{x}(t)-x_ {i}(t)\|^{2}\bigg{)}\] \[=\bigg{(}1+\frac{\mu\alpha(t)}{4}\bigg{)}\bigg{(}1-\frac{\mu \alpha(t)}{2}\bigg{)}^{2}\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}+\bigg{(}1+ \frac{4}{\mu\alpha(t)}\bigg{)}L^{2}\alpha(t)^{2}\|\mathbf{x}(t)-\bar{\mathbf{ x}}(t)\|^{2}.\] Here we used \[n\|\bar{x}(t)-x_{*}\|^{2}=\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}\text{ and }\sum_{i=1}^{n}\|\bar{x}(t)-x_{i}(t)\|^{2}=\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\] for the last equality. Using \(\alpha(t)\leq\frac{2}{\mu}\) we have \[\Big{(}1+\frac{\mu\alpha(t)}{4}\Big{)}\Big{(}1-\mu\alpha(t)+\frac {\mu^{2}\alpha(t)^{2}}{4}\Big{)} =1-\frac{3}{4}\mu\alpha(t)+\frac{\mu^{3}}{16}\alpha(t)^{3}\] \[=1-\frac{\mu\alpha(t)}{2}-\Big{(}\frac{\mu\alpha(t)}{4}-\frac{ \mu^{3}\alpha(t)^{3}}{16}\Big{)}\] \[\leq 1-\frac{\mu\alpha(t)}{2}.\] Using this, we obtain the desired estimate. The proof is done. ## 4. Sequential estimates In this section, we establish sequential estimates which will be used importantly to derive the convergence results of the algorithm (1.2). As discussed in the end of Section 1, the projection operator makes it difficult to average the equation (1.2) to obtain (1.5). To get around this difficulty, we estimate the quantity \(\|\mathbf{x}(t+1)-\mathbf{x}_{*}\|^{2}\) instead of \(\|\bar{\mathbf{x}}(t+1)-\mathbf{x}_{*}\|\) to analyze the sequence of (1.2). Precisely, we aim to establish an estimate of \(\|\mathbf{x}(t+1)-\mathbf{x}_{*}\|^{2}\) in terms of \(\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\) and \(\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}\) by applying the contraction property of the projection operator. We start this section by deriving an estimate of \(\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\). For the reader's convenience, we recall the constant \(c_{1}\) and \(c_{2}\) as \[c_{1}:=3L^{2}\bigg{(}1+\frac{1}{\delta}\bigg{)},\ c_{2}:=3nD^{2}\bigg{(}1+ \frac{1}{\delta}\bigg{)}.\] **Proposition 4.1**.: _Suppose that Assumptions 1-4 hold. 
If \(\{\alpha(t)\}_{t\geq 0}\) satisfies \(\alpha(0)\leq\frac{2}{L+\mu}\), then the sequence \(\{x_{i}(t)\}_{t\geq 0}\) generated by (1.2) for all \(1\leq i\leq n\) satisfies the following inequality._ \[\|\mathbf{x}(t+1)-\bar{\mathbf{x}}(t+1)\|^{2}\leq(c_{1}\alpha(t)^{2}+\tilde{ \beta})\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}+c_{1}\alpha(t)^{2}\|\bar{ \mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}+c_{2}\alpha(t)^{2}.\] Proof.: For the reader's convenience, we recall the algorithm (1.2) as \[x_{i}(t+1)=\mathcal{P}_{\Omega}\bigg{[}\sum_{j=1}^{n}w_{ij}x_{j}(t)-\alpha(t) \nabla f_{i}(x_{i}(t))\bigg{]},\] and note that \(\|\mathbf{x}(t+1)-\bar{\mathbf{x}}(t+1)\|^{2}=\sum_{i=1}^{n}\|x_{i}(t+1)-\bar{x} (t+1)\|^{2}\) by definition. Using this equality, we derive the following estimate. \[\sum_{i=1}^{n}\|x_{i}(t+1)-\bar{x}(t+1)\|^{2}\] \[=\sum_{i=1}^{n}\left\|\mathcal{P}_{\Omega}\bigg{[}\sum_{j=1}^{n}w _{ij}x_{j}(t)-\alpha(t)\nabla f_{i}(x_{i}(t))\bigg{]}-\frac{1}{n}\sum_{k=1}^{n }\mathcal{P}_{\Omega}\bigg{[}\sum_{j=1}^{n}w_{kj}x_{j}(t)-\alpha(t)\nabla f_{i }(x_{k}(t))\bigg{]}\right\|^{2}\] \[\leq\sum_{i=1}^{n}\left\|\sum_{j=1}^{n}w_{ij}\left(x_{j}(t)-\bar{ x}(t)\right)-\alpha(t)\nabla f_{i}(x_{i}(t))+\frac{\alpha(t)}{n}\sum_{l=1}^{n} \nabla f_{l}(x_{l}(t))\right\|^{2}\] \[\leq\sum_{i=1}^{n}(1+\delta)\left\|\sum_{j=1}^{n}w_{ij}\left(x_{j }(t)-\bar{x}(t)\right)\right\|^{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{i=1}^{n}\left(1+\frac {1}{\delta}\right)\alpha(t)^{2}\left\|\nabla f_{i}(x_{i}(t))+\frac{1}{n}\sum_ {l=1}^{n}\nabla f_{l}(x_{l}(t))\right\|^{2}.\] Here we used Lemma 3.2 for the first inequality and Young's inequality for the last inequality for \(\delta>0\). We can further write the right hand side of the last inequality as \[(1+\delta)\left\|W(\mathbf{x}(t)-\bar{\mathbf{x}}(t))\right\|^{2 }+\sum_{i=1}^{n}\left(1+\frac{1}{\delta}\right)\alpha(t)^{2}\left\|\nabla f_{ i}(x_{i}(t))+\frac{1}{n}\sum_{l=1}^{n}\nabla f_{l}(x_{l}(t))\right\|^{2} \tag{4.1}\] \[\leq(1+\delta)\beta^{2}\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2 }+\sum_{i=1}^{n}\left(1+\frac{1}{\delta}\right)\alpha(t)^{2}\left\|\nabla f_{ i}(x_{i}(t))+\frac{1}{n}\sum_{l=1}^{n}\nabla f_{l}(x_{l}(t))\right\|^{2}.\] Here we applied Lemma 2.1 to the last inequality. Combining Lemma 3.4 with (4.1), we obtain the desired estimate. The proof is done. Next, we give an estimate of the quantity \(\|\mathbf{x}(t)-\mathbf{x}_{*}\|^{2}\). Before starting the statement, we recall the constants \(c_{3}\) and \(c_{4}\): \[c_{3}:=c_{1}+L^{2},\ c_{4}:=\frac{4L^{2}}{\mu}.\] Using the relation (2.2), we state the following propostion. **Proposition 4.2**.: _Suppose that Assumptions 1-4 hold. If the diminishing sequence \(\{\alpha(t)\}_{t\geq 0}\) satisfy \(\alpha(0)\leq\frac{2}{L+\mu}\), then the sequence \(\{x_{i}(t)\}_{t\geq 0}\) generated by (1.2) for all \(1\leq i\leq n\) satisfies the following inequality._ \[\|\mathbf{x}(t+1)-\bar{\mathbf{x}}(t+1)\|^{2}+\|\bar{\mathbf{x}}(t+1)- \mathbf{x}_{*}\|^{2}\] \[\leq\left(c_{3}\alpha(t)^{2}+c_{4}\alpha(t)+\tilde{\beta}\right) \|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}+\left(1-\frac{\mu}{2}\alpha(t)+c_{1 }\alpha(t)^{2}\right)\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}+c_{2}\alpha(t) ^{2}.\] Proof.: Note that since \(x_{*}=\arg\min_{x\in\Omega}f(x)\), it follows that \(x_{*}=\mathcal{P}_{\Omega}\left[x_{*}-\alpha(t)\nabla f(x_{*})\right]\). 
By the algorithm (1.2) and using (3.1) we deduce \[\|x_{i}(t+1)-x_{*}\|^{2} =\left\|\mathcal{P}_{\Omega}\bigg{[}\sum_{j=1}^{n}w_{ij}x_{j}(t)- \alpha(t)\nabla f_{i}(x_{i}(t))\bigg{]}-\mathcal{P}_{\Omega}\left[x_{*}-\alpha (t)\nabla f(x_{*})\right]\right\|^{2}\] \[\leq\bigg{\|}\sum_{j=1}^{n}w_{ij}x_{j}(t)-x_{*}-\alpha(t)\left( \nabla f_{i}(x_{i}(t))-\nabla f(x_{*})\right)\bigg{\|}^{2}.\] Summing up the above inequality from \(i=1\) to \(n\), we have \[\sum_{i=1}^{n}\|x_{i}(t+1)-x_{*}\|^{2}\leq\sum_{i=1}^{n}\bigg{\|}\sum_{j=1}^{n }w_{ij}x_{j}(t)-x_{*}-\alpha(t)\left(\nabla f_{i}(x_{i}(t))-\nabla f(x_{*}) \right)\bigg{\|}^{2}. \tag{4.2}\] We find the following identity of the right hand side of (4.2): \[\sum_{i=1}^{n}\bigg{\|}\sum_{j=1}^{n}w_{ij}x_{j}(t)-x_{*}-\alpha( t)\left(\nabla f_{i}(x_{i}(t))-\nabla f(x_{*})\right)\bigg{\|}^{2}\] \[=\sum_{i=1}^{n}\bigg{\|}\bar{x}(t)-x_{*}-\frac{\alpha(t)}{n}\sum_ {l=1}^{n}\left(\nabla f_{l}(x_{l}(t))-\nabla f(x_{*})\right) \tag{4.3}\] \[\qquad+\sum_{j=1}^{n}w_{ij}\Big{(}x_{j}(t)-\bar{x}(t)\Big{)}- \alpha(t)\Big{(}\nabla f_{i}(x_{i}(t))-\frac{1}{n}\sum_{l=1}^{n}\nabla f_{l}( x_{l}(t))\Big{)}\bigg{\|}^{2}\] \[=\sum_{i=1}^{n}\bigg{\|}\bar{x}(t)-x_{*}-\frac{\alpha(t)}{n}\sum_ {l=1}^{n}\left(\nabla f_{l}(x_{l}(t))-\nabla f(x_{*})\right)\bigg{\|}^{2}\] \[\quad+\sum_{i=1}^{n}\bigg{\|}\sum_{j=1}^{n}w_{ij}\Big{(}x_{j}(t)- \bar{x}(t)\Big{)}-\alpha(t)\Big{(}\nabla f_{i}(x_{i}(t))-\frac{1}{n}\sum_{l=1} ^{n}\nabla f_{l}(x_{l}(t))\Big{)}\bigg{)}\bigg{\|}^{2}.\] Here we used \[\sum_{i=1}^{n}\left(\sum_{j=1}^{n}w_{ij}x_{j}(t)-\bar{x}(t)-\alpha (t)\Big{(}\nabla f_{i}(x_{i}(t))-\frac{1}{n}\sum_{l=1}^{n}\nabla f_{l}(x_{l}( t))\Big{)}\right)=0\] for the second equality. Now we estimate the second term of the right hand side of the last equality in (4.3). By the proof of Proposition 4.1, we have \[\sum_{i=1}^{n}\bigg{\|}\sum_{j=1}^{n}w_{ij}\Big{(}x_{j}(t)-\bar{ x}(t)\Big{)}-\alpha(t)\Big{(}\nabla f_{i}(x_{i}(t))-\frac{1}{n}\sum_{l=1}^{n} \nabla f_{l}(x_{l}(t))\Big{)}\bigg{\|}^{2}\] \[\leq(c_{1}\alpha(t)^{2}+\tilde{\beta})\|\mathbf{x}(t)-\bar{ \mathbf{x}}(t)\|^{2}+c_{1}\alpha(t)^{2}\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\| ^{2}+c_{2}\alpha(t)^{2}.\] By Lemma 3.5, the first term of the right hand side of the last equality in (4.3) is bounded by \[\bigg{(}1-\frac{\mu\alpha(t)}{2}\bigg{)}\|\bar{\mathbf{x}}(t)- \mathbf{x}_{*}\|^{2}+\bigg{(}L^{2}\alpha(t)^{2}+\frac{4L^{2}\alpha(t)}{\mu} \bigg{)}\|\bar{\mathbf{x}}(t)-\mathbf{x}(t)\|^{2}.\] Putting the above two estimates in the last term of (4.3), we get \[\sum_{i=1}^{n}\|x_{i}(t+1)-x_{*}\|^{2} =\|{\bf x}(t+1)-\bar{\bf x}(t+1)\|^{2}+\|\bar{\bf x}(t+1)-{\bf x}_{* }\|^{2}.\] \[\leq\bigg{(}\Big{(}c_{1}+L^{2}\Big{)}\alpha(t)^{2}+\frac{4L^{2} \alpha(t)}{\mu}+\tilde{\beta}\bigg{)}\|{\bf x}(t)-\bar{\bf x}(t)\|^{2}\] \[\qquad\qquad+\bigg{(}1-\frac{\mu\alpha(t)}{2}+c_{1}\alpha(t)^{2} \bigg{)}\|\bar{\bf x}(t)-{\bf x}_{*}\|^{2}+c_{2}\alpha(t)^{2}.\] The proof is done. ## 5. Uniform boundedness of the sequence In this section, we prove the uniform boundedness of the sequence \(\{x_{i}(t)\}_{t\geq 0}\) stated in Theorem 2.1. We note that the uniform boundedness of the sequence is trivial for the first case where \(\Omega\) is assumed to be bounded. Next we prove the theorem for the second case. 
Proof of Theorem 2.1 for case 2.: Consider the following functional \(E_{\alpha}:(\mathbb{R}^{d})^{n}\to\mathbb{R}\) defined as \[E_{\alpha}(x)=\frac{1}{2}\Big{(}\sum_{k=1}^{n}\|x_{k}\|^{2}-\sum_{k=1}^{n} \sum_{j=1}^{n}w_{kj}\langle x_{k},x_{j}\rangle\Big{)}+\alpha\sum_{k=1}^{n}f_{ k}(x_{k}).\] Then \[x(t+1)=P_{\Omega^{n}}\Big{(}x(t)-\nabla E_{\alpha}(x(t))\Big{)}.\] The function \(E_{\alpha}\) is convex and smooth with constant \(1-\lambda_{n}(W)+\alpha L\), where \(\lambda_{n}(W)\) is the smallest eigenvalue of \(W\) (refer to [27]). Then, we may use the general result for the projected gradient descent (see e.g., [5]) to conclude that the sequence \(\{x(t)\}_{t\geq 0}\) is uniformly bounded if \[1\leq\frac{2}{1-\lambda_{n}(W)+\alpha L},\] which is equivalent to \(\alpha\leq\frac{1+\lambda_{n}(W)}{L}\). To hanlde the third case of Theorem 2.1, we set \[A(t)=\|{\bf x}(t)-\bar{\bf x}(t)\|^{2},\quad B(t)=\|\bar{\bf x}(t)-{\bf x}_{*} \|^{2},\quad C(t)=\|{\bf x}(t)-{\bf x}_{*}\|^{2}. \tag{5.1}\] Using (2.2), it follows that \(C(t)=A(t)+B(t)\). In the following lemma, we find a sequential inequality fo the sequence \(\{C(t)\}_{t\in\mathbb{N}_{0}}\) and find the uniform boundedness of \(\{C(t)\}_{t\geq 0}\), which also implies the uniform boundedness of \(\{A(t)\}_{t\geq 0}\) and \(\{B(t)\}_{t\geq 0}\). It contains the proof of Theorem 2.1 for case 3. **Lemma 5.1**.: _Suppose that Assumptions 1-4 hold and the stepsize \(\{\alpha(t)\}_{t\geq 0}\) is nonincreasing and satisfies_ \[\alpha(0)\leq\min\left\{Z,\frac{\mu}{4c_{1}},\frac{2}{L+\mu}\right\} \tag{5.2}\] _where_ \[Z:=\frac{1}{2c_{3}}\left[-\Big{(}c_{4}+\frac{\mu}{4}\Big{)}+\sqrt{\Big{(}c_{4 }+\frac{\mu}{4}\Big{)}^{2}+4c_{3}(1-\tilde{\beta})}\right].\] _Suppose also that \(\tilde{\beta}:=(1+\delta)\beta^{2}<1\). Then the sequence \(\{x_{i}(t)\}_{t\in\mathbb{N}_{0}}\) generated by (1.2) satisfies the following statements._ 1. _We have_ \[C(t+1)\leq\Big{(}1-\frac{\mu}{4}\alpha(t)\Big{)}C(t)+c_{2}\alpha(t)^{2}.\] (5.3) 2. _There exists_ \(R>0\) _such that_ \[C(t)\leq R,\text{ for all }t\in\mathbb{N}.\] _In fact, we may set_ \(R=\max\Big{\{}\frac{4c_{2}\alpha(0)}{\mu},C(0)\Big{\}}\)__ Proof.: We first prove (5.3). By Proposition 4.2, we have the following estimate \[C(t)\leq(c_{3}\alpha(t)^{2}+c_{4}\alpha(t)+\tilde{\beta})A(t)+\bigg{(}1-\frac {\mu}{2}\alpha(t)+c_{1}\alpha(t)^{2}\bigg{)}B(t)+c_{2}\alpha(t)^{2}.\] Note that since \(\alpha(0)\leq\frac{\mu}{4c_{1}}\) by (5.2), it follows that \(\mu/2-c_{1}\alpha(0)\geq\frac{\mu}{4}\). Suppose that the following inequality hold. \[1-\frac{\mu}{2}\alpha(t)+c_{1}\alpha(t)^{2}<1-\frac{\mu}{4}\alpha(t), \tag{5.4}\] \[c_{3}\alpha(t)^{2}+c_{4}\alpha(t)+\tilde{\beta}<1-\frac{\mu}{4}\alpha(t). \tag{5.5}\] Then we obtain (5.3) as follows. \[C(t+1) \leq(c_{3}\alpha(t)^{2}+c_{4}\alpha(t)+\tilde{\beta})A(t)+\bigg{(} 1-\frac{\mu}{2}\alpha(t)+c_{1}\alpha(t)^{2}\bigg{)}B(t)+c_{2}\alpha(t)^{2}\] \[\leq\bigg{(}1-\frac{\mu}{4}\alpha(t)\bigg{)}C(t)+c_{2}\alpha(t)^{ 2}.\] Now we show that (5.4) and (5.5) hold under the assumption (5.2). Since \(\{\alpha(t)\}_{t\geq 0}\) is diminishing sequence and \(\alpha(0)\leq\frac{\mu}{4c_{1}}\), we obtain (5.4) as follows: \[1-\frac{\mu}{2}\alpha(t)+c_{1}\alpha(t)^{2}=1-\Big{(}\frac{\mu}{2}-c_{1} \alpha(t)\Big{)}\,\alpha(t)\leq 1-\frac{\mu}{4}\alpha(t).\] To obtain (5.5), we note that (5.5) is equivalent to \[c_{3}\alpha(t)^{2}+\Big{(}c_{4}+\frac{\mu}{4}\Big{)}\alpha(t)+\tilde{\beta}-1 \leq 0. 
\tag{5.6}\] Therefore, the following inequality is a sufficient condition for (5.6): \[\alpha(t)\leq\alpha(0)\leq Z:=\frac{1}{2c_{3}}\left[-\Big{(}c_{4}+\frac{\mu}{4}\Big{)}+\sqrt{\Big{(}c_{4}+\frac{\mu}{4}\Big{)}^{2}+4c_{3}(1-\tilde{\beta})}\right].\] Since \(\tilde{\beta}<1\), we have \(Z>0\). This proves the first estimate of the lemma. In order to show the second estimate, we argue by induction. Fix a value \(R>0\) and assume that \(C(t)\leq R\) for some \(t\in\mathbb{N}_{0}\). Then, it follows from (5.3) that \[C(t+1)\leq(1-\frac{\mu}{4}\alpha(t))R+c_{2}\alpha(t)^{2}=R-(\frac{\mu}{4}R-c_{2}\alpha(t))\alpha(t).\] If we set \[R=\max\Big{\{}\frac{4c_{2}\alpha(0)}{\mu},C(0)\Big{\}},\] then \(C(0)\leq R\) and, since \(\{\alpha(t)\}_{t\geq 0}\) is nonincreasing, \[\frac{\mu}{4}R-c_{2}\alpha(t)\geq\frac{\mu}{4}R-c_{2}\alpha(0)\geq 0.\] This implies \(C(t+1)\leq R\). Therefore we have \(C(t)\leq R\) for any \(t\geq 0\). The proof is done. ## 6. Consensus estimates In this section, we establish the consensus result of Theorem 2.2 by using Propositions 4.1 and 4.2 in Section 4. Proof of Theorem 2.2.: Let \(\delta=\frac{1}{\beta}-1\) so that \(\tilde{\beta}=\beta<1\). Then, the estimate of Proposition 4.1 reads as \[\|\mathbf{x}(t+1)-\bar{\mathbf{x}}(t+1)\|^{2}\leq(\tilde{c}_{1}\alpha(t)^{2}+\beta)\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}+\tilde{c}_{1}\alpha(t)^{2}\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}+\tilde{c}_{2}\alpha(t)^{2}.\] Since \(\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}\leq R\), it follows that \[\|\mathbf{x}(t+1)-\bar{\mathbf{x}}(t+1)\|^{2} \leq\beta\|\mathbf{x}(t)-\bar{\mathbf{x}}(t)\|^{2}+\frac{3\alpha(t)^{2}}{1-\beta}(L^{2}R^{2}+nD^{2}) \tag{6.1}\] \[\leq\beta^{t+1}\|\mathbf{x}(0)-\bar{\mathbf{x}}(0)\|^{2}+\frac{3}{1-\beta}(L^{2}R^{2}+nD^{2})\sum_{s=0}^{t}\alpha(s)^{2}\beta^{t-s}.\] For a decreasing stepsize, we have \[\sum_{s=0}^{t}\alpha(s)^{2}\beta^{t-s} =\sum_{s=0}^{[t/2]-1}\alpha(s)^{2}\beta^{t-s}+\sum_{s=[t/2]}^{t}\alpha(s)^{2}\beta^{t-s} \tag{6.2}\] \[\leq\alpha(0)^{2}\frac{\beta^{t}}{1-\beta}+\alpha([t/2])^{2}\frac{1}{1-\beta}.\] Inserting this inequality into (6.1), we have \[\|\mathbf{x}(t+1)-\bar{\mathbf{x}}(t+1)\|^{2}\leq\beta^{t+1}\|\mathbf{x}(0)-\bar{\mathbf{x}}(0)\|^{2}+\frac{J\alpha(t)^{2}}{(1-\beta)^{2}},\] where \[J=3(L^{2}R^{2}+nD^{2})\cdot\sup_{s\geq 0}\frac{\alpha(0)^{2}\beta^{s}+\alpha([s/2])^{2}}{\alpha(s)^{2}}.\] For a constant stepsize \(\alpha(t)\equiv\alpha\), we can estimate (6.1) as \[\|\mathbf{x}(t+1)-\bar{\mathbf{x}}(t+1)\|^{2} \leq\beta^{t+1}\|\mathbf{x}(0)-\bar{\mathbf{x}}(0)\|^{2}+\frac{3\alpha^{2}}{1-\beta}(L^{2}R^{2}+nD^{2})\sum_{s=0}^{t}\beta^{t-s}\] \[\leq\beta^{t+1}\|\mathbf{x}(0)-\bar{\mathbf{x}}(0)\|^{2}+\frac{3\alpha^{2}}{(1-\beta)^{2}}(L^{2}R^{2}+nD^{2})\] \[=\beta^{t+1}\|\mathbf{x}(0)-\bar{\mathbf{x}}(0)\|^{2}+\frac{J\alpha^{2}}{(1-\beta)^{2}},\] where \(J=3(L^{2}R^{2}+nD^{2})\). The proof is done. ## 7. Convergence results In this section, we prove the convergence results, Theorems 2.3, 2.4 and 2.5, using the estimates in Section 4 and the following lemma. **Lemma 7.1**.: _Let \(p\in(0,1]\) and \(q>0\). Take \(C_{1}>0\) and \(w\geq 1\) such that \(C_{1}/w^{p}<1\). Suppose that the sequence \(\{H(t)\}_{t\geq 0}\) satisfies_ \[H(t)\leq\bigg{(}1-\frac{C_{1}}{(t+w-1)^{p}}\bigg{)}H(t-1)+\frac{C_{2}}{(t+w-1)^{p+q}}\quad\text{for all }t\geq 1.\] _Set \(Q=\left(\frac{w+1}{w}\right)^{p+q}\). Then \(H(t)\) satisfies the following bound._ _Case 1.
If \(p<1\), then we have_ \[H(t)\leq\delta\cdot([t/2]+w-1)^{-q}+\mathcal{R}(t),\] _where \(\delta=\frac{QC_{2}}{C_{1}}e^{\frac{C_{1}}{w^{p}}}\) and_ \[\mathcal{R}(t)=e^{-\sum_{s=0}^{t-1}\frac{C_{1}}{(s+w)^{p}}}H(0)+QC_{2}e^{- \frac{C_{1}t}{2(t+w)^{p}}}\sum_{s=1}^{[t/2]-1}\frac{1}{(s+w)^{p+q}}.\] _Here the second term on the right hand side is assumed to be zero for \(1\leq t\leq 3\)._ _Case 2. If \(p=1\), then we have_ \[H(t)\leq\Big{(}\frac{w}{t+w}\Big{)}^{C_{1}}H(0)+\mathcal{R}(t),\] _where_ \[\mathcal{R}(t)=\left\{\begin{array}{ll}\frac{w^{C_{1}-q}}{q-C_{1}}\cdot \frac{QC_{2}}{(t+w)^{C_{1}}}&\text{if }q>C_{1}\\ \log\big{(}\frac{t+w}{w}\big{)}\cdot\frac{QC_{2}}{(t+w)^{C_{1}}}&\text{if }q=C_{1} \\ \frac{1}{C_{1}-q}\cdot\big{(}\frac{w+1}{w}\big{)}^{C_{1}}\cdot\frac{QC_{2}}{(t +w+1)^{q}}&\text{if }q<C_{1}.\end{array}\right.\] Proof of Theorems 2.3, 2.4 and 2.5.: Using the notation (5.1), we can rewrite Theorem 2.2 and Proposition 4.2 as \[A(t)\leq\beta^{t}A(0)+\frac{J\alpha(t)^{2}}{(1-\beta)^{2}} \tag{7.1}\] and \[\begin{split}& A(t+1)+B(t+1)\\ &\leq(c_{3}\alpha(t)^{2}+c_{4}\alpha(t)+\tilde{\beta})A(t)+\bigg{(}1 -\frac{\mu}{2}\alpha(t)+c_{1}\alpha(t)^{2}\bigg{)}B(t)+c_{2}\alpha(t)^{2}. \end{split} \tag{7.2}\] Combining (7.1) and (7.2), we get \[\begin{split} B(t+1)&\leq A(t+1)+B(t+1)\\ &\leq\left(1-\frac{\mu}{2}\alpha(t)\right)B(t)+(c_{3}\alpha(t)^{2 }+c_{4}\alpha(t)+\tilde{\beta})A(0)\beta^{t}\\ &\quad+\left((c_{3}\alpha(t)^{2}+c_{4}\alpha(t)+\tilde{\beta}) \frac{J}{(1-\beta)^{2}}+c_{1}B(t)+c_{2}\right)\alpha(t)^{2}.\end{split}\] Since \(A(t)+B(t)<R\) and \(\alpha(t)\leq\alpha(0)\), it follows that \[B(t+1)\leq\Big{(}1-\frac{\mu}{2}\alpha(t)\Big{)}\,B(t)+G_{1}\beta^{t}+G_{2}\alpha (t)^{2}, \tag{7.3}\] where \[G_{1} =(c_{3}\alpha(0)^{2}+c_{4}\alpha(0)+\beta)R,\] \[G_{2} =(c_{3}\alpha(t)^{2}+c_{4}\alpha(t)+\tilde{\beta})\frac{J}{(1- \beta)^{2}}+c_{1}R+c_{2}.\] **Case \(\alpha(t)\equiv\alpha\).** To estimate the sequence \(B(t)\) we consider the following two sequences \(\{q_{1}(t)\}_{t\geq 0}\) and \(\{q_{2}(t)\}_{t\geq 0}\) satisfying \[q_{1}(t+1) =\Big{(}1-\frac{\mu}{2}\alpha\Big{)}q_{1}(t)+G_{1}\beta^{t},\quad q _{1}(0)=0, \tag{7.4}\] \[q_{2}(t+1) =\Big{(}1-\frac{\mu}{2}\alpha\Big{)}q_{2}(t)+G_{2}\alpha^{2}, \quad q_{2}(0)=B(0).\] It then easily follows that \(B(t)\leq q_{1}(t)+q_{2}(t)\) for all \(t\geq 0\). Similarly as in (6.2), we have \[q_{1}(t)=\sum_{s=0}^{t-1}G_{1}\beta^{s}\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t- 1-s}\leq\frac{2}{\mu\alpha}\left(\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t-1}+ \beta^{\frac{t-1}{2}}\right)\] Note that \[\sum_{s=0}^{t-1}\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t-1-s}\leq\frac{2}{\mu \alpha}.\] Using this, we estimate \(q_{2}(t)\) as \[q_{2}(t)\leq\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t}q_{2}(0)+\frac{2G_{2}}{\mu }\alpha.\] Combining the above estimates, we obtain \[B(t)\leq\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t}B(0)+\frac{2G_{2}}{\mu}\alpha+ \frac{2}{\mu\alpha}\left(\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t-1}+\beta^{ \frac{t-1}{2}}\right)\] We next consider decreasing stepsize \(\alpha(t)=\frac{v}{(t+w)^{p}}\), where \(0<p\leq 1\). For decreasing stepsize, we set \[\rho:=\sup_{t\geq 0}\frac{\beta^{t}}{\alpha(t)^{2}}.\] Then it follows that \[B(t+1)\leq\Big{(}1-\frac{\mu}{2}\alpha(t)\Big{)}B(t)+\Big{(}\rho G_{1}+G_{2} \Big{)}\alpha(t)^{2},\] which verifies the result of Theorem 2.3. 
**Case \(\alpha(t)=\frac{v}{(t+w)^{p}}\).** The estimate (7.3) reads as \[B(t+1)\leq\Big{(}1-\frac{\mu v}{2(t+w)^{p}}\Big{)}B(t)+\Big{(}\rho G_{1}+G_{2}\Big{)}\frac{v^{2}}{(t+w)^{2p}}.\] By applying Lemma 7.1, we have \[B(t)\leq\frac{4Q\Big{(}\rho G_{1}+G_{2}\Big{)}v}{\mu}([t/2]+w-1)^{-p}+\mathcal{R}_{1}(t)+\mathcal{R}_{2}(t),\] where \[\mathcal{R}_{1}(t) =e^{-\sum_{s=0}^{t-1}\frac{\mu v}{2(s+w)^{p}}}B(0)\] \[\mathcal{R}_{2}(t) =Q\Big{(}\rho G_{1}+G_{2}\Big{)}v^{2}e^{-\frac{\mu v t}{4(t+w)^{p}}}\sum_{s=1}^{[t/2]-1}\frac{1}{(s+w)^{2p}}\] with constant \(Q=\Big{(}\frac{w+1}{w}\Big{)}^{2p}\). The proof of Theorem 2.4 is done. **Case \(\alpha(t)=\frac{v}{t+w}\).** The estimate (7.3) gives \[B(t+1)\leq\Big{(}1-\frac{\mu v}{2(t+w)}\Big{)}B(t)+\Big{(}\rho G_{1}+G_{2}\Big{)}\frac{v^{2}}{(t+w)^{2}}.\] Choose \(v>0\) so that \(C_{1}=\mu v/2>1\). Then we use Lemma 7.1 to derive the following estimate \[B(t)\leq\Big{(}\frac{w}{t+w}\Big{)}^{C_{1}}B(0)+\frac{1}{C_{1}-1}\Big{(}\frac{w+1}{w}\Big{)}^{C_{1}}\frac{Q\Big{(}\rho G_{1}+G_{2}\Big{)}v^{2}}{(t+w-1)},\] where \(Q=\Big{(}\frac{w+1}{w}\Big{)}^{2}\). It proves Theorem 2.5. ## 8. Improved convergence estimate for a one-dimensional example In this section, we show that the convergence result of Theorem 2.3 can be improved for a one-dimensional example. Recall the following estimate in Theorem 2.3: \[\|\bar{\mathbf{x}}(t)-\mathbf{x}_{*}\|^{2}\leq\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t}\|\bar{\mathbf{x}}(0)-\mathbf{x}_{*}\|^{2}+\frac{2G_{2}}{\mu}\alpha+\frac{2}{\mu\alpha}\left(\Big{(}1-\frac{\mu\alpha}{2}\Big{)}^{t-1}+\beta^{\frac{t-1}{2}}\right).\] This implies that the sequence \(\{\bar{x}(t)\}_{t\geq 0}\) converges to an \(O(\sqrt{\alpha})\)-neighborhood of the optimal point. However, if \(\Omega=\mathbb{R}^{n}\), it is known by the work [27] that the sequence converges to an \(O(\alpha)\)-neighborhood of the optimal point. Hence, it is natural to ask the following question: while our approach for Theorem 2.3 has led to convergence to an \(O(\sqrt{\alpha})\)-neighborhood, can this estimate be improved to convergence to an \(O(\alpha)\)-neighborhood? We answer this question by constructing a specific example. Let us consider the functions \(g_{1},g_{2}:[0,\infty)\to\mathbb{R}\) defined by \[g_{1}(x)=5x^{2}\quad\text{and}\quad g_{2}(x)=-3x^{2},\quad x\in[1,\infty) \tag{8.1}\] and the mixing matrix \(\tilde{W}\) defined by \[\tilde{W}=\begin{pmatrix}2/3&1/3\\ 1/3&2/3\end{pmatrix} \tag{8.2}\] satisfying Assumption 4. We note that the total cost function \(g=(g_{1}+g_{2})/2\) has the optimal point at \(x=1\) on the constraint set \([1,\infty)\). Then we can represent the projected decentralized gradient descent algorithm with a constant stepsize \(\alpha\) explicitly as follows: \[\begin{split}& x_{1}(t+1)=\max\Big{[}\frac{2}{3}x_{1}(t)+\frac{1}{3}x_{2}(t)-10\alpha x_{1}(t),\ 1\Big{]},\\ & x_{2}(t+1)=\max\Big{[}\frac{1}{3}x_{1}(t)+\frac{2}{3}x_{2}(t)+6\alpha x_{2}(t),\ 1\Big{]}.\end{split} \tag{8.3}\] We first establish that the state \((x_{1}(t),x_{2}(t))\) generated by the algorithm (8.3) will be confined to a certain region after a finite number of iterations. The proof for the following lemma will be provided at the end of this section. **Lemma 8.1**.: _Let \(g_{1}(x)\), \(g_{2}(x)\) and the mixing matrix \(\tilde{W}\) be defined by (8.1) and (8.2) and let \(\mathbf{x}(t)=(x_{1}(t),x_{2}(t))\) be the state at \(t\geq 0\) generated by (8.3).
Then for any initial state \(\mathbf{x}(0)=(x_{1}(0),x_{2}(0))\) and \(\alpha\in(0,1/45)\), there exists \(t_{0}\leq\log\|\mathbf{x}(0)\|_{2}/\log(1/\lambda_{+})-1\) such that \(x_{1}(t_{0}+1)=1\) and \(x_{2}(t_{0}+1)\leq 1+30\alpha\), where \(\lambda_{+}\in(0,1)\) is defined as_ \[\lambda_{+}=\frac{2}{3}-2\alpha+\sqrt{\frac{1}{9}+64\alpha^{2}}.\] Now, we demonstrate that the state \((x_{1}(t),x_{2}(t))\) generated by the algorithm (8.3) converges to an \(O(\alpha)\) neighborhood of the optimal point \((1,1)\). **Theorem 8.1**.: _Let \(g_{1}(x)\), \(g_{2}(x)\) and the mixing matrix \(\tilde{W}\) be defined by (8.1) and (8.2) and let \(\mathbf{x}(t)=(x_{1}(t),x_{2}(t))\) be the state at \(t\geq 0\) generated by (8.3). Then for any initial state \(\mathbf{x}(0)=(x_{1}(0),x_{2}(0))\) and \(\alpha\in(0,1/45)\), the state \(\mathbf{x}(t)\) converges exponentially fast to the point \((1,1/(1-18\alpha))\) which belongs to an \(O(\alpha)\) neighborhood of the optimal point \((1,1)\)._ Proof.: By Lemma 8.1, we can choose \(t_{0}\) satisfying \(x_{1}(t_{0}+1)=1\) and \(x_{2}(t_{0}+1)\leq 1+30\alpha\). Note that if \(\alpha<1/45\) then \(1/(1-18\alpha)<1+30\alpha\). Since \(x_{1}(t_{0}+1)=1\) and \(x_{2}(t_{0}+1)\leq 1+30\alpha\), it follows that \[\frac{2}{3}+\frac{1}{3}x_{2}(t_{0}+1)-10\alpha\leq 1,\] which implies \(x_{1}(t_{0}+2)=1\). We have \[\begin{split} x_{2}(t_{0}+2)&=\frac{1}{3}x_{1}(t_{0}+1)+\frac{2}{3}x_{2}(t_{0}+1)+6\alpha x_{2}(t_{0}+1)\\ &=\frac{1}{3}x_{1}(t_{0}+1)+\frac{2+18\alpha}{3}x_{2}(t_{0}+1).\end{split} \tag{8.4}\] We can further write (8.4) as \[x_{2}(t_{0}+2)-\frac{1}{1-18\alpha}=\frac{2+18\alpha}{3}\left[x_{2}(t_{0}+1)-\frac{1}{1-18\alpha}\right]. \tag{8.5}\] Since \((2+18\alpha)/3<1\), it follows that \(x_{2}(t_{0}+2)<x_{2}(t_{0}+1)\leq 1+30\alpha\) for \(x_{2}(t_{0}+1)>\frac{1}{1-18\alpha}\), and \(1\leq x_{2}(t_{0}+2)\leq\frac{1}{1-18\alpha}\) for \(1\leq x_{2}(t_{0}+1)\leq\frac{1}{1-18\alpha}\). We can conclude that \(x_{1}(t)=1\) and \(x_{2}(t)\leq 1+30\alpha\) for all \(t\geq t_{0}+1\). In addition, (8.5) implies that \(x_{2}(t)\) converges to \(1/(1-18\alpha)\). The proof is done. The above result will also be verified by a numerical test in the next section. This result suggests that the sequence \(\{\bar{x}(t)\}\) converges to an \(O(\alpha)\)-neighborhood of the optimal point, which is a stronger result than the convergence result to an \(O(\sqrt{\alpha})\)-neighborhood of Theorem 2.3. We conjecture that the result of Theorem 8.1 could be extended to more general examples. Before ending this section, we give a proof of Lemma 8.1. Proof of Lemma 8.1.: Notice that if \(x_{1}(t+1)>1\) and \(x_{2}(t+1)>1\), it should hold that \[\begin{pmatrix}x_{1}(t+1)\\ x_{2}(t+1)\end{pmatrix}=\begin{pmatrix}\frac{2}{3}-10\alpha&\frac{1}{3}\\ \frac{1}{3}&\frac{2}{3}+6\alpha\end{pmatrix}\begin{pmatrix}x_{1}(t)\\ x_{2}(t)\end{pmatrix}. \tag{8.6}\] The eigenvalues of the matrix on the right hand side of (8.6) are \[\lambda_{\pm}=\frac{2}{3}-2\alpha\pm\sqrt{\frac{1}{9}+64\alpha^{2}}\] which are positive and less than \(1\) for \(\alpha\in(0,1/45)\). Therefore \[\|(x_{1}(t+1),x_{2}(t+1))\|_{2}\leq\lambda_{+}\|(x_{1}(t),x_{2}(t))\|_{2},\] and so \[\|(x_{1}(t),x_{2}(t))\|_{2}\leq\lambda_{+}^{t}\|(x_{1}(0),x_{2}(0))\|_{2}.\] Since \(x_{2}(t)\geq 1\) for all \(t\), we have \(\|(x_{1}(t),x_{2}(t))\|_{2}\geq 1\), so this decay cannot continue indefinitely; moreover, the projection in (8.3) can only become active in the first component, because \(\frac{1}{3}x_{1}(t)+\big{(}\frac{2}{3}+6\alpha\big{)}x_{2}(t)>1\) whenever \(x_{1}(t),x_{2}(t)\geq 1\). Thus we can find a smallest integer \(t_{0}\leq\log\|\mathbf{x}(0)\|_{2}/\log(1/\lambda_{+})-1\) such that \(x_{1}(t_{0}+1)=1\).
By (8.3), it follows that \[\frac{2}{3}x_{1}(t_{0})+\frac{1}{3}x_{2}(t_{0})-10\alpha x_{1}(t_{0})\leq 1.\] This leads to \[\frac{1}{3}x_{2}(t_{0}) \leq 1-\Big{(}\frac{2}{3}-10\alpha\Big{)}x_{1}(t_{0})\] \[\leq 1-\Big{(}\frac{2}{3}-10\alpha\Big{)}=\frac{1}{3}+10\alpha,\] which implies that \(x_{2}(t_{0})\leq 1+30\alpha\). Now we want to show that \(x_{2}(t_{0}+1)\leq 1+30\alpha\). Note that \[\frac{1}{3}x_{1}(t_{0})+\Big{(}\frac{2}{3}+6\alpha\Big{)}x_{2}(t_{0}) \leq\frac{1}{3}\Big{(}\frac{3-x_{2}(t_{0})}{2-30\alpha}\Big{)}+\Big{(}\frac{2}{3}+6\alpha\Big{)}x_{2}(t_{0})\] \[=\frac{1}{2-30\alpha}+\Big{(}\frac{2}{3}+6\alpha-\frac{1}{6-90\alpha}\Big{)}x_{2}(t_{0}).\] Here we used \(x_{1}(t_{0})\leq(3-x_{2}(t_{0}))/(2-30\alpha)\), which follows from the first displayed inequality of this proof, for the first inequality. Inserting this into (8.3) we find \[x_{2}(t_{0}+1) =\max\left[\frac{1}{3}x_{1}(t_{0})+\Big{(}\frac{2}{3}+6\alpha\Big{)}x_{2}(t_{0}),1\right]\] \[=\max\left[\frac{1}{2-30\alpha}+\Big{(}\frac{2}{3}+6\alpha-\frac{1}{6-90\alpha}\Big{)}x_{2}(t_{0}),1\right].\] Combining this with \(x_{2}(t_{0})\leq 1+30\alpha\), we get \[x_{2}(t_{0}+1) \leq\max\left[\frac{1}{2-30\alpha}+\Big{(}\frac{2}{3}+6\alpha-\frac{1}{6-90\alpha}\Big{)}(1+30\alpha),1\right]\] \[\leq 1+30\alpha.\] Here the second inequality follows by observing that the first component in the max operator is at most \(1+30\alpha\), which is equivalent to \[1\leq(1-22\alpha+180\alpha^{2})(1+30\alpha).\] This can be rewritten as \[0\leq\alpha(1-45\alpha)(1-15\alpha),\] which holds true for \(\alpha\in(0,1/45)\). ## 9. Simulations In this section, we conduct numerical experiments for the projected distributed gradient descent algorithm (1.2) supporting the convergence results of this paper, both for the constant and decreasing stepsize cases. ### Regression problem We consider the following constrained decentralized least squares problem, with \(n\) agents: \[\min_{x\in\Omega}\sum_{i=1}^{n}\|q_{i}-p_{i}^{T}x\|^{2}\] where \(\Omega=\{x\in\mathbb{R}^{d}|\|x\|\leq 3\}\). Here the variable \(p_{i}\in\mathbb{R}^{d\times p}\) is randomly chosen from the uniform distribution on \([0,1]\) and the variable \(q_{i}\in\mathbb{R}^{p}\) is generated according to the linear regression model \(q_{i}=p_{i}^{T}\tilde{x}+\varepsilon_{i}\) where \(\tilde{x}\in\mathbb{R}^{d}\) is the true weight vector and \(\varepsilon_{i}\)'s are jointly Gaussian and zero mean. In this case, the projection operator \(\mathcal{P}_{\Omega}\) is defined by \[\mathcal{P}_{\Omega}[x]=\left\{\begin{array}{ll}\frac{3}{\|x\|}x\text{ if }\|x\|>3,\\ x,\text{ otherwise.}\end{array}\right.\] We initialize the points \(x_{i}(0)\) as independent random variables generated from a standard Gaussian distribution and then apply the projection operator. In this simulation, we set the problem dimensions and the number of agents as \(d=5\), \(p=2\), and \(n=50\). We use a connected graph constructed based on the Watts and Strogatz model where each node has four out-neighbors.
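To make this setup concrete, a minimal NumPy sketch of the experiment is given below. It is an illustration under stated simplifications rather than the code used for the reported figures: a plain ring lattice in which each node is linked to its four nearest neighbours stands in for the Watts-Strogatz graph, Metropolis weights are used for the mixing matrix \(W\), the noise level and iteration count are arbitrary choices, and the unconstrained least-squares solution serves as a stand-in for \(x_{*}\) (valid whenever it lies inside \(\Omega\)).

```python
import numpy as np

# Sketch of the projected decentralized gradient descent (1.2) on the
# constrained least-squares problem of this section.
rng = np.random.default_rng(0)
n, d, p = 50, 5, 2          # number of agents and problem dimensions from the text
radius = 3.0                # Omega = {x : ||x|| <= 3}

P = rng.uniform(0.0, 1.0, size=(n, d, p))                       # p_i in R^{d x p}
x_true = rng.standard_normal(d)
Q = np.einsum('idp,d->ip', P, x_true) + 0.1 * rng.standard_normal((n, p))

def project(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

def grad(i, x):                                                 # gradient of ||q_i - p_i^T x||^2
    return 2.0 * P[i] @ (P[i].T @ x - Q[i])

# Ring lattice with four neighbours per node + Metropolis weights (a simplification).
A = np.zeros((n, n))
for i in range(n):
    for k in (1, 2):
        A[i, (i + k) % n] = A[i, (i - k) % n] = 1.0
deg = A.sum(axis=1)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if A[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()

# Stand-in optimum and constants of the average cost (a normalization choice).
H = sum(P[i] @ P[i].T for i in range(n))
x_star = np.linalg.solve(H, sum(P[i] @ Q[i] for i in range(n)))
eigs = np.linalg.eigvalsh(2.0 * H / n)
mu, L = eigs[0], eigs[-1]

alpha = 1.0 / (mu + L)                                          # one of the constant stepsizes below
X = np.array([project(rng.standard_normal(d)) for _ in range(n)])
for t in range(2000):
    X = np.array([project(W[i] @ X - alpha * grad(i, X[i])) for i in range(n)])

x_bar = X.mean(axis=0)
print("consensus error :", np.linalg.norm(X - x_bar))
print("distance to x_* :", np.linalg.norm(x_bar - x_star))
```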
Then we consider the relative convergence error \(R_{1}(t)\) and the relative consensus error \(R_{2}(t)\) which are defined as follows: \[R_{1}(t)=\frac{\|\bar{x}(t)-x_{*}\|}{\|\bar{x}(0)-x_{*}\|},\quad R_{2}(t)=\frac{\sum_{i=1}^{n}\|x_{i}(t)-\bar{x}(t)\|}{\sum_{i=1}^{n}\|x_{i}(0)-\bar{x}(0)\|}.\] We first consider the following constant stepsizes: \[\alpha_{1}(t)\equiv\frac{4}{\mu+L},\ \alpha_{2}(t)\equiv\frac{3}{\mu+L},\ \alpha_{3}(t)\equiv\frac{1}{\mu+L},\ \alpha_{4}(t)\equiv\frac{1}{2(\mu+L)}.\] We measure the relative convergence error \(R_{1}(t)\) and the relative consensus error \(R_{2}(t)\) for each constant stepsize and present them in Figure 1. In Figure 1, the black and blue lines correspond to cases where the conditions in Theorems 2.2 and 2.3 are violated, while the other lines satisfy the conditions. Although both the black and blue lines do not converge to zero, they still appear to be bounded and oscillate within a certain range. On the other hand, for the other lines, both \(R_{1}(t)\) and \(R_{2}(t)\) properly converge to a small neighborhood of zero. Next, we consider the decreasing stepsize \(\alpha(t)=0.5/(t+w)^{p}\) with \(w\) satisfying \(w^{p}=L+\mu\) for \(p\in\{0.25,0.5,0.75,1\}\). The graphs of \(R_{1}(t)\) and \(R_{2}(t)\) are presented in Figure 2. The numerical result shows that the values of \(R_{1}(t)\) and \(R_{2}(t)\) converge to zero as expected from Theorems 2.4 and 2.5. Figure 1. The left figure corresponds to the value \(R_{1}(t)\) and the right figure corresponds to the value \(R_{2}(t)\). Figure 2. The values for \(R_{1}(t)\) are listed in the first column and those for \(R_{2}(t)\) are listed in the second column. Each figure in the second row displays the first 100 iterations for each of the corresponding figures in the first row. ### The example in Section 8 Here we provide a numerical test for the example considered in Section 8. Namely, we test the convergence property of the algorithm (8.3). To verify the result of Theorem 8.1, we consider the sequence \((x_{1}(t),x_{2}(t))\) of (8.3) and the following measure \[R(t)=(x_{1}(t)-1)^{2}+\Big{(}x_{2}(t)-\frac{1}{1-18\alpha}\Big{)}^{2}\quad\text{for }t\geq 0. \tag{9.1}\] We test with the stepsizes \(\{1/200,1/100,1/46,1/43\}\) and the initial values given as \[(x_{1}(0),x_{2}(0))\in\{(5,10),(100,5)\}. \tag{9.2}\] The graph of the normalized measure \(R(t)/R(0)\) is provided in Figure 3. The result shows that the algorithm (8.3) converges to the value \((1,1/(1-18\alpha))\) for the stepsizes \(\{1/200,1/100,1/46\}\) as expected by Theorem 8.1. Meanwhile, the algorithm (8.3) diverges for the stepsize \(\alpha=1/43\), which lies outside the interval \((0,1/45)\) required by Theorem 8.1. These results verify the sharpness of the result of Theorem 8.1. **Conclusion.** In this paper, we established new convergence estimates for the decentralized projected gradient method. The result guarantees that the algorithm with stepsize \(\alpha(t)\equiv\alpha>0\) converges to an \(O(\sqrt{\alpha})\)-neighborhood of the optimal point, provided that \(\alpha\) is less than a threshold. We also proved that this result can be improved to the convergence to an \(O(\alpha)\)-neighborhood for a specific example in dimension one. It remains an open question to extend the \(O(\alpha)\) convergence result to general cases.
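As a companion to the test of Section 9.2, the following minimal sketch reproduces the two-agent iteration (8.3) and the normalized measure \(R(t)/R(0)\) of (9.1). The stepsizes and initial states follow (9.2) and the surrounding text; the iteration count is an arbitrary choice and the script is an illustration, not the authors' code.

```python
# Two-agent projected iteration (8.3) and the measure R(t) of (9.1).
def run(alpha, x1, x2, steps=500):
    target2 = 1.0 / (1.0 - 18.0 * alpha)   # limit point (1, 1/(1-18*alpha)) from Theorem 8.1
    R = lambda a, b: (a - 1.0) ** 2 + (b - target2) ** 2
    history = [R(x1, x2)]
    for _ in range(steps):
        # Synchronous update; both right-hand sides use the previous state.
        x1, x2 = (max(2/3 * x1 + 1/3 * x2 - 10 * alpha * x1, 1.0),
                  max(1/3 * x1 + 2/3 * x2 + 6 * alpha * x2, 1.0))
        history.append(R(x1, x2))
    return history

for alpha in (1/200, 1/100, 1/46, 1/43):          # 1/43 lies outside (0, 1/45)
    for x0 in ((5.0, 10.0), (100.0, 5.0)):        # initial states from (9.2)
        hist = run(alpha, *x0)
        print(f"alpha={alpha:.5f}, x(0)={x0}: R(final)/R(0)={hist[-1] / hist[0]:.2e}")
```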
2302.07408
Pose-Oriented Transformer with Uncertainty-Guided Refinement for 2D-to-3D Human Pose Estimation
There has been a recent surge of interest in introducing transformers to 3D human pose estimation (HPE) due to their powerful capabilities in modeling long-term dependencies. However, existing transformer-based methods treat body joints as equally important inputs and ignore the prior knowledge of human skeleton topology in the self-attention mechanism. To tackle this issue, in this paper, we propose a Pose-Oriented Transformer (POT) with uncertainty guided refinement for 3D HPE. Specifically, we first develop novel pose-oriented self-attention mechanism and distance-related position embedding for POT to explicitly exploit the human skeleton topology. The pose-oriented self-attention mechanism explicitly models the topological interactions between body joints, whereas the distance-related position embedding encodes the distance of joints to the root joint to distinguish groups of joints with different difficulties in regression. Furthermore, we present an Uncertainty-Guided Refinement Network (UGRN) to refine pose predictions from POT, especially for the difficult joints, by considering the estimated uncertainty of each joint with uncertainty-guided sampling strategy and self-attention mechanism. Extensive experiments demonstrate that our method significantly outperforms the state-of-the-art methods with reduced model parameters on 3D HPE benchmarks such as Human3.6M and MPI-INF-3DHP
Han Li, Bowen Shi, Wenrui Dai, Hongwei Zheng, Botao Wang, Yu Sun, Min Guo, Chenlin Li, Junni Zou, Hongkai Xiong
2023-02-15T00:22:02Z
http://arxiv.org/abs/2302.07408v1
# Pose-Oriented Transformer with Uncertainty-Guided Refinement ###### Abstract There has been a recent surge of interest in introducing transformers to 3D human pose estimation (HPE) due to their powerful capabilities in modeling long-term dependencies. However, existing transformer-based methods treat body joints as equally important inputs and ignore the prior knowledge of human skeleton topology in the self-attention mechanism. To tackle this issue, in this paper, we propose a Pose-Oriented Transformer (POT) with uncertainty guided refinement for 3D HPE. Specifically, we first develop novel pose-oriented self-attention mechanism and distance-related position embedding for POT to explicitly exploit the human skeleton topology. The pose-oriented self-attention mechanism explicitly models the topological interactions between body joints, whereas the distance-related position embedding encodes the distance of joints to the root joint to distinguish groups of joints with different difficulties in regression. Furthermore, we present an Uncertainty-Guided Refinement Network (UGRN) to refine pose predictions from POT, especially for the difficult joints, by considering the estimated uncertainty of each joint with uncertainty-guided sampling strategy and self-attention mechanism. Extensive experiments demonstrate that our method significantly outperforms the state-of-the-art methods with reduced model parameters on 3D HPE benchmarks such as Human3.6M and MPI-INF-3DHP. 1Shanghai Jiao Tong University, Shanghai, China 2Qualcomm AI Research\({}^{\dagger}\), Shanghai, China {qingshi9974, sjtu_shibowen, daiwenrui, 1424977324}@sjtu.edu.cn, {botaow, sunyu, mguo}@qti.qualcomm.com, {lcl1985, zoujunni, xionghongkai}@sjtu.edu.cn ## Introduction 3D human pose estimation (HPE) aims to obtain the 3D spatial coordinates of body joints from monocular images or videos. It has attracted extensive attention in a wide range of applications such as autonomous driving, augmented/virtual reality (AR/VR) and virtual avatar. The 2D-to-3D pipeline is prevailing in recent works Martinez et al. (2017); Zhao et al. (2019); Cai et al. (2019); Li et al. (2021), where 2D joint coordinates are taken as the inputs to directly regress the 3D pose target. Despite its promising performance, the 2D-to-3D pipeline is restricted by depth ambiguity caused by the many-to-one mapping from multiple 3D poses to one same 2D projection. Considering that the human body can be modeled as a highly structured graph, the problem of depth ambiguity can be alleviated by exploiting the interactions between body joints. Graph convolution networks (GCNs) have been naturally adopted to exploit these interactions Zhao et al. (2019); Cai et al. (2019); Li et al. (2021). However, GCNs are usually limited in receptive fields and impede the relationship modeling. Inspired by the success of Transformer Vaswani et al. (2017), the self-attention mechanism is leveraged in recent works Zheng et al. (2021); Zhu et al. (2021); Zhao et al. (2022) to facilitate global interactions for 3D HPE and yield state-of-the-art performance. **However, these methods treat body joints as input tokens of equal importance but ignore the human body priors (e.g., human skeleton topology) in designing the self-attention mechanism.** In this paper, we argue that introducing pose-oriented designs to the transformer is important for 3D HPE and thereby propose a Pose-Oriented Transformer (POT) for reliable pose prediction. 
We design a novel pose-oriented self-attention (PO-SA) mechanism for POT that is the first to ex Figure 1: **Left: Human skeleton topology**. We consider the distance for each joint towards the root joint (pelvis) based on the human skeleton topology. **Right: Impact of distance towards the root joint on the joint-wise estimation error.** Based on a baseline model, we empirically find that joints far from the root joint tend to have large prediction errors. This inspires us to introduce targeted designs for these joints. plicitly exploit human skeleton topology without implicitly injecting graph convolutions. The relative distance is computed for each joints pair and is encoded as attention bias into the self-attention mechanism to enhance the ability of modeling the human skeleton dependence. Furthermore, as shown in Figure 1, we empirically find that joints far from the root joint (pelvis) tend to have large prediction errors. To better model these difficult joints, we split body joints into several groups according to their distance toward the root joint and assign additional distance-related position embeddings to different groups. In addition to POT, a second stage of pose refinement is developed to further improve the prediction of difficult joints. Specifically, we propose a transformer-based Uncertainty-Guided Refinement Network (UGRN) for pose refinement by explicitly considering the prediction uncertainty. The proposed UGRN comprises an uncertainty-guided sampling strategy and an uncertainty-guided self-attention (UG-SA) mechanism. The uncertainty-guided sampling strategy incorporates the estimated uncertainty for each joint (that implies the difficulty of prediction) into the learning procedure. The joint coordinates are sampled around the prediction from POT following a Gaussian distribution with the estimated uncertainty as variance. Then, we use the sampled coordinates as the input of UGRN to make the model more robust to errors. Subsequently, the UG-SA is developed in UGRN to reduce the contribution of the joints with high uncertainty during learning. This paper makes the following contributions: * We propose a novel pose-oriented transformer for 3D HPE with the self-attention and position embedding mechanisms explicitly designed to exploit human skeleton topology. * We present an uncertainty-guided refinement network to further improve pose predictions for difficult joints with uncertainty-guided sampling strategy and self-attention mechanism. * We demonstrate our method achieves SOTA performance on the Human3.6M and MPI-INF-3DHP benchmarks and shed light on the task-oriented transformer design for single-frame input human pose estimation. ## Related Work ### 3D Human Pose Estimation The methods of 3D human pose estimation can be divided into two categorizes: one-stage methods and two-stage methods. The one-stage methods take RGB image as input and directly predict the 3D pose. Thanks to the development of deep learning, recent works [23, 24, 25, 26, 27] can leverage the advantages of Convolutional Neural Networks (CNNs) to obtain promising results for image-to-3D human pose estimation. In which [26] built a weakly-supervised transfer learning framework to make full use of mixed 2D and 3D labels, and augmented the 2D pose estimation sub-network with a 3D depth regression sub-network to estimate the depth. [26] represented the space around the human body discretely as voxel and used 3D heatmaps to regress 3D human pose. 
Taking the feature extracted by CNNs as input, [10] further proposed a graph-convolution-reinforced transformer to predict 3D pose. [25] proposed a normalizing flow method that can generate a diverse set of feasible 3D poses. The second category of methods first estimate the 2D position of human joints from the input image, and then regress the 3D pose in the camera coordinate system. Pioneering work [12] revealed that only using 2D joints as input can also gets highly accurate results, and proposed a simple yet effective baseline for 3D HPE. Since the human body can be regarded as a highly structured graph, [27] proposed Semantic Graph Convolution (SemGConv) for 3D HPE, it added a parameter matrix to learn the semantic relations among body joints. [26] further extended SemGConv to a high-order GCN To learn long-range dependencies among body joints. Nevertheless, GCN-based methods still suffer from limited receptive field. In this work, we leverage the powerful long-term modeling capability of transformer to construct our model. ### Transformer and Self-Attention Mechanism Transformer was firstly introduced in [28] for the natural language processing (NLP) tasks such as machine translation, whose core component is the self-attention mechanism that can model the long-term dependence of the input sequential data. Recently, with the appearance of VIT [25], transformer also attracted much attention in various visual tasks. In addition, [26] also generalized transformer to graph-structured data for graph-level predictions tasks including link prediction and knowledge graphs. For the 3D HPE, PoseFormer [28] first built a transformer-based model to sequentially capture the temporal and spatial dependency of the input 2D pose sequence. PoseGTAC [26] and Graformer [25] both injected graph convolution into transformer in different ways to exploit the structure information of human skeleton topology. However, we argue that simply stacking self-attention and graph convolution can not fully utilize the human skeleton topology and propose our pose-oriented transformer to take the topology information into account in the self-attention mechanism. ### Uncertainty Estimation Uncertainty in the deep learning models can be categorized into two types: aleatoric uncertainty and epistemic uncertainty. It can be estimated by sampling-based method [27] and dropout method [28]. [1] further revealed that the heteroscedastic uncertainty dependent on the input data is vitally important for computer vision application. For example, [29] considered the uncertainty of the noisy input data and proposed the uncertain graph neural networks for facial action unit detection. [29] 2021) utilized the data-uncertainty as guidance to propose a multi-phase learning method for semi-supervised object detection. Yang et al. (2021) combined the benefits of Bayesian learning and transformer-based reasoning, and built an uncertainty-guided transformer for camouflaged object detection. However, previous 2D-to-3D HPE methods did not take uncertainty information of human pose into account in the training and inference procedure. For our work, we estimate the uncertainty for each joint of first-stage 3D pose and propose our UG-sampling and UG-SA to obtain the refined 3D pose. ## Method The overview of the proposed method is depicted in Figure 2. Our method is a two-stage framework which consists of two major module: pose-oriented transformer (POT) and uncertainty-guided refinement network (UGRN). 
Given the 2D pose \(X\in R^{J\times 2}\) estimated by an off-the-shelf 2D pose detector from an image, POT is designed by utilizing human skeleton topology for better pose-related feature extracting and first-stage 3D pose predicting, while UGRN leverages uncertainty information \(\sigma\in R^{J\times 3}\) to further refine the predicting pose. Details are included in the following. ### Preliminaries In this work, we leverage transformer to model the long-distance relationship between body joints. We first briefly introduce the basic components in the transformer, including multi-head self-attention (MH-SA), position-wise feed-forward network (FFN) and position embeddings. #### 2.0.1 Mh-Sa The basic self-attention mechanism transfers the inputs \(Z\in R^{N\times C}\) into corresponding \(query\ Q\), \(key\ K\) and \(value\ V\) with the same dimensions \(N\times C\) by projection matrices \(P^{Q},P^{K},P^{V}\in R^{C\times C}\) respectively, where \(N\) denotes the sequence length, and \(C\) is the number of hidden dimension. \[Q=ZP^{Q},\ \ K=ZP^{K},\ \ V=ZP^{V}, \tag{1}\] Then we can calculate self-attention by: \[A=QK^{T}/\sqrt{d},\ \text{ {MH-SA}}(X)=softmax(A)V, \tag{2}\] where \(A\in R^{N\times N}\) denotes the attention weight matrix. Based on the basic self-attention, MH-SA further splits the \(Q,K,V\) for \(h\) times to perform attention in parallel and then the outputs of all the heads are concatenated. #### 2.0.2 Ffn position-wise FFN is used for non-linear feature transformation and it contains two Multilayer Perceptron (MLP) and an GELU activation layer. This procedure can be formulated as follows: \[FFN(X)=MLP(GELU(MLP(X)))+X. \tag{3}\] #### 2.0.3 Position Embeddings As MH-SA and FFN in transformer are permutation equivariant operation, additional mechanisms are required to encode the structure of input data into model. In particular, we can utilize sine and cosine functions or learnable vectors as the position embeddings, which can be formulated as \[P_{t}=PE(t)\in R^{C}, \tag{4}\] where \(t\) denotes the position index. #### 2.0.4 Pose-oriented Transformer POT aims at better utilizing the human skeleton information for feature extracting. It includes target position embedding and self-attention design for 3D HPE. Specifically, given the input 2D joints \(X\in R^{J\times 2}\), we first project it into high-dimensional feature embeddings \(Z\in R^{J\times C}\), where \(J\) denotes the number of human body joints and \(C\) denotes the embedding dimension. Then we add keypoint position embeddings \(K\) and our proposed group position embeddings \(G\) to \(Z\) as the input of POT encoder. In POT encoder, we also design pose-oriented self-attention (PO-SA) which takes the topological connections of body joints into consideration. Figure 2: The overview of proposed method, which contains two major module: pose-oriented transformer (POT) and uncertainty-guided refinement network (UGRN). Given the 2D pose \(X\in R^{J\times 2}\) estimated by an off-the-shelf 2D pose detector, POT with pose-oriented attention and position embedding designs are first used for pose-related feature extracting and first-stage 3D pose predicting. Then, UGRN leverage uncertainty information \(\sigma\in R^{J\times 3}\) to generate refined pose \(\hat{Y}\in R^{J\times 3}\). Keypoint and Group Position EmbeddingsFollowing previous design [11, 12], we first introduce a learnable keypoint position embeddings \(K\in R^{J\times C}\) to represent the absolute position of each body joint. 
In addition, as shown in Figure 3, according to the distance between each joint and the root joint (Pelvis), we split body joints into five groups and design another learnable embeddings called group position embeddings, _i.e._, \(G\in R^{5\times C}\). Therefore, additional distance-related knowledge can be encoded into model, helping transformer better model the difficult body joints that are far from the root. In this way, the input of pose-oriented transformer encoder, \(Z^{(0)}\), can be obtained by: \[Z^{(0)}_{i}=Z_{i}+K_{i}+G_{\varphi(i)},\;\;for\;\;i\in[1,\cdots,J], \tag{5}\] where \(i\) is the joint index and \(\varphi(i)=\mathcal{D}(i,1)\) represents the shortest path distance between \(i\)-th joint and the root joint. Pose-Oriented Self-Attention (PO-SA)We also propose our pose-oriented self-attention (PO-SA) that explicitly modeling the topological connections of body joints. Specifically, we compute the relative distance for each joints pair \((i,j)\), and encode it as the attention bias for the self-attention mechanism. In this way, we rewrite the self-attention in Eq (2), in which the \((i,j)\)-th element of attention matrix \(A\) can be computed by: \[A_{i,j}=(Z_{i}P^{Q})(Z_{j}P^{K})^{T}/\sqrt{d}+\Phi(\mathcal{D}(i,j)), \tag{6}\] where \(\Phi\) is a MLP network which projects the relative distance (1-dimension) to an H-dimension vector where H is the number of heads in the SA mechanism, it makes each PO-SA have the ability to adjust the desired distance-related receptive field and the additional parameters can be ignored. POT EncoderBased on the PO-SA, we can obtain output features by sending \(Z^{(0)}\) to a cascaded transformer with \(L_{1}\) layers. These procedure can be formulated as : \[Z^{\prime l} =\textit{PO-SA}(\textit{LN}(Z^{l-1}))+Z^{l-1}, \tag{7}\] \[Z^{l} =\textit{FFN}(\textit{LN}(\textit{LN}(Z^{\prime l}))+Z^{\prime l}, \tag{8}\] where \(LN(\cdot)\) represents the layer normalization and \(l\in[1,2,\cdots,L_{1}]\) is the index of POT encoder layers. Regression HeadIn the regression head, we apply a MLP on the output feature \(Z^{L_{1}}\) to perform pose regression, generating the first-stage 3D pose \(\widetilde{Y}\in R^{J\times 3}\). ### Uncertainty-guided Refinement Taking the first-stage 3D pose \(\widetilde{Y}\) from POT, we further send it together with the input 2D pose \(X\) to another Uncertainty-guided Refinement Network (UGRN) for pose refinement. The proposed UGRN contains the following components. Uncertainty EstimationWe first model the uncertainty for each joint. Specifically, features of POT encoder \(Z^{L_{1}}\) are sent to another uncertainty estimation head, producing the uncertainty \(\sigma\in R^{J\times 3}\) of the first-stage 3D poses by using an uncertainty estimation loss \(\mathcal{L}_{\sigma}\)[13]. Uncertainty-Guided SamplingInstead of directly utilizing the first-stage 3D predictions \(\widetilde{Y}\), we randomly sample 3D coordinates \(\widetilde{Y}\) around \(\widetilde{Y}\) according to a Gaussian distribution \(\mathcal{N}(\widetilde{Y},\sigma)\) with the predicted uncertainty \(\sigma\) as variance, and send the sampled coordinates to UGRN. This uncertainty-guided sampling strategy ensures that the sampled coordinates have large variance on difficult joints, which requires the model to focus more on making use of context from other joints to compensate for the difficult joint predictions, thus further enhancing the model robustness. 
To enable correct back-propagation, we employ a re-parameterization trick to draw a sample \(\epsilon\) from the standard Gaussian distribution \(\mathcal{N}(\textbf{0},\textbf{1})\) randomly, \(i.e.\), \(\epsilon\sim\mathcal{N}(\textbf{0},\textbf{1})\). In this way, we can obtain the sampled 3D coordinates by: \[\widetilde{Y}=\widetilde{Y}+\sigma\cdot\epsilon. \tag{9}\] Note that this sample strategy is only implemented in the training stage. In the inference stage, we set \(\widetilde{Y}=\widetilde{Y}\) directly. Uncertainty-guided Refinement NetworkAfter obtaining the sampled 3D pose \(\widetilde{Y}\), we first concatenate it with the input 2D pose \(X\) and obtain \(\widetilde{X}\), _i.e._, \(\widetilde{X}=\textit{Concat}(\widetilde{Y},X)\). Then we project \(\widetilde{X}\) to feature embeddings \(\widetilde{Z}\) and equip them with keypoint position embeddings \(K\) and group position embedding \(G\): \[\widetilde{Z}^{(0)}_{i}=\widetilde{Z}_{i}+K_{i}+G_{\varphi(i)},\;\;for\;\;i\in[1,J]. \tag{10}\] Next, \(\widetilde{Z}^{(0)}_{i}\) is sent to the following \(L_{2}\) transformer layers of UGRN to perform uncertainty-guided refinement. The transform layers of UGRN is similar to those of POT, but we replace the distance-related term of Eq. 6 with uncertainty guidance to dynamically adjust the attention weights: \[A_{i,j}=(Z_{i}P^{Q})(Z_{j}P^{K})^{T}/\left(\sqrt{d}\cdot\textit{Sum}(\sigma_{j} )\right), \tag{11}\] where \(\sigma_{j}\in R^{3}\) is the predicted uncertainty of \(j\)-th joint. The above uncertainty-guided self-attention (UG-SA) ensures that the body joints with high uncertainty will contribute less in the self-attention mechanism, which can not only alleviate the error propagation, but also enhance the context understanding ability of the model. Finally, we apply another regression head to \(\tilde{Z}^{L2}\) and generate our second-stage refined 3D pose \(\tilde{Y}\in R^{J\times 3}\). Figure 3: The depiction of distance-related group for human body joints. ### Loss Function **Stage I** We first train our POT for the first-stage 3D pose regressing. The objective function can be formulated as : \[\mathcal{L}_{\mathrm{stagel}}=\frac{1}{J}\sum_{i=1}^{J}\bigg{(}\bigg{\|}\widetilde {Y}_{i}-Y_{i}\bigg{\|}^{2}\bigg{)}, \tag{12}\] where \(\widetilde{Y}_{i}\) and \(Y_{i}\) are the estimated first-stage 3D positions and the ground truth of \(i\)-th joint respectively. **Stage II** We aim to predict the uncertainty correctly as well as estimate an accurate refined 3D pose in Stage II. During this stage, we freeze the model parameters of POT and only train the UGRN for stable results. Following [12], we set our uncertainty estimation loss as: \[\mathcal{L}_{\sigma}=\frac{1}{J}\sum_{i=1}^{J}\Bigg{(}\Bigg{\|}\frac{\widetilde {Y}_{i}-Y_{i}}{\sigma_{i}}\Bigg{\|}^{2}+\log(\|\sigma_{i}\|^{2})\Bigg{)}. \tag{13}\] In addition, we also apply L2 loss to minimize the errors between the refined 3D poses and ground truths: \[\mathcal{L}_{\mathrm{refine}}=\frac{1}{J}\sum_{i=1}^{J}\bigg{(}\bigg{\|} \hat{Y}_{i}-Y_{i}\bigg{\|}^{2}\bigg{)}, \tag{14}\] The final loss function of Stage II is computed by \(\mathcal{L}_{\mathrm{stagelII}}=\mathcal{L}_{\mathrm{refine}}+\lambda\mathcal{ L}_{\sigma}\), where \(\lambda\) is the trade-off factor. We set \(\lambda\) to 0.001 such that the two loss terms are of the same order of magnitude. 
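Before turning to the experiments, a compact PyTorch-style sketch of how the embeddings of Eq. (5) and the pose-oriented attention bias of Eq. (6) fit together is given below. It is a reading aid under stated assumptions, not the released implementation: the module and variable names, the two-layer MLP standing in for \(\Phi\), and the randomly generated distance matrix are illustrative placeholders, whereas in the actual model \(\mathcal{D}(i,j)\) is the shortest-path distance on the skeleton and \(\varphi(i)\) the distance to the pelvis. Replacing the additive bias with the uncertainty-based rescaling of Eq. (11) would give the corresponding UG-SA variant.

```python
import torch
import torch.nn as nn

class PoseOrientedSelfAttention(nn.Module):
    """Multi-head self-attention with a per-head bias derived from joint distances (Eq. 6)."""
    def __init__(self, dim=96, heads=6):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # phi maps the scalar hop distance D(i, j) to one bias value per head.
        self.phi = nn.Sequential(nn.Linear(1, heads), nn.GELU(), nn.Linear(heads, heads))

    def forward(self, z, dist):                     # z: (B, J, C), dist: (J, J) hop counts
        B, J, C = z.shape
        q, k, v = self.qkv(z).chunk(3, dim=-1)
        q, k, v = (t.view(B, J, self.heads, -1).transpose(1, 2) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * self.scale            # (B, H, J, J)
        bias = self.phi(dist.unsqueeze(-1)).permute(2, 0, 1)     # (H, J, J), added as in Eq. (6)
        attn = (attn + bias).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, J, C)
        return self.proj(out)

# Eq. (5): token = feature + keypoint embedding + group embedding indexed by phi(i).
J, C = 17, 96
keypoint_emb = nn.Parameter(torch.zeros(J, C))
group_emb = nn.Parameter(torch.zeros(5, C))
root_dist = torch.randint(0, 5, (J,))                  # placeholder phi(i); real values come from the skeleton
dist = (root_dist[:, None] - root_dist[None, :]).abs().float()   # placeholder D(i, j)

z = torch.randn(2, J, C)                                # projected 2D-pose features for a batch of 2
tokens = z + keypoint_emb + group_emb[root_dist]
out = PoseOrientedSelfAttention(C, 6)(tokens, dist)
print(out.shape)                                        # torch.Size([2, 17, 96])
```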
## Experiments ### Experimental Setups **Dataset** Human3.6M dataset [13] is widely used in the 3D HPE task which provides 3.6 million indoor RGB images, including 11 subjects actors performing 15 different actions. For fairness, we follow previous works [16, 17, 18] and take 5 subjects (S1, S5, S6, S7, S8) for training and the other 2 subjects (S9, S11) for testing. In our work, We evaluate our proposed method and conduct ablation study on the Human3.6M dataset. Besides, the MPI-INF-3DHP [12] test set provides images in three different scenarios: studio with a green screen (GS), studio without green screen (noGS) and outdoor scene (Outdoor). We also apply our method to it to demonstrate the generalization capabilities of our proposed method. Evaluation metricsFor Human3.6M, we follow previous works [16, 17] to use the \begin{table} \begin{tabular}{l|c c c c c c c c c c c c c} \hline Methods & \multicolumn{2}{c}{Dire. Disc.} & \multicolumn{2}{c}{Eat. Greet Phone} & \multicolumn{2}{c}{Photo Pose} & \multicolumn{2}{c}{Puch. Sit} & \multicolumn{2}{c}{Sit. StD. Smoke} & Wait & \multicolumn{2}{c}{WalkD Walk} & \multicolumn{2}{c}{Walk} & \multicolumn{2}{c}{Walk} & \multicolumn{2}{c}{Walk} & \multicolumn{2}{c}{Walk} & \multicolumn{2}{c}{K} & \multicolumn{2}{c}{**Avg.**} \\ \hline \hline (Martinez et al., 2017) (SH) & 51.8 & 56.2 & 58.1 & 59.0 & 69.5 & 78.4 & 55.2 & 58.1 & 74.0 & 94.6 & 62.3 & 59.1 & 65.1 & 49.5 & 52.4 & 62.9 \\ (Zhao et al., 2019) (SH) & 48.2 & 60.8 & 51.8 & 64.0 & 64.6 & **53.6** & 51.1 & 67.4 & 88.7 & **57.7** & 73.2 & 65.6 & **48.9** & 64.8 & 51.9 & 60.8 \\ (Liu et al., 2020) (CPN) & 46.3 & 52.2 & 47.3 & 50.7 & 55.5 & 67.1 & 49.2 & **46.0** & 60.4 & 71.1 & 51.5 & 50.1 & 54.5 & 40.3 & 43.7 & 52.4 \\ (Zou et al., 2020)(CPN) & 49.0 & 54.5 & 52.3 & 53.6 & 59.2 & 71.6 & 49.6 & 49.8 & 66.0 & 75.5 & 55.1 & 53.8 & 58.5 & 40.9 & 45.4 & 55.6 \\ (Xu and Takano, 2021)(CPN) & **45.2** & **49.9** & 47.5 & 50.9 & 54.9 & 66.1 & 48.5 & 46.3 & 59.7 & 71.5 & 51.4 & **48.6** & 53.9 & **39.9** & 44.1 & 51.9 \\ \hline Ours (CPN) & 47.9 & 50.0 & 47.1 & 51.3 & **51.2** & 59.5 & **48.7** & 46.9 & **56.0** & 61.9 & **51.1** & 48.9 & 54.3 & 40.0 & **42.9** & **50.5** \\ \hline (Martinez et al., 2017) (GT) & 37.7 & 44.4 & 40.3 & 42.1 & 48.2 & 54.9 & 44.4 & 42.1 & 54.6 & 58.0 & 45.1 & 46.4 & 47.6 & 36.4 & 40.4 & 45.5 \\ (Zhao et al., 2019) (GT) & 37.8 & 49.4 & 37.6 & 40.9 & 45.1 & 41.4 & 40.1 & 48.3 & 50.1 & 42.2 & 53.5 & 44.3 & 40.5 & 47.3 & 39.0 & 43.8 \\ (Liu et al., 2020) (GT) & 36.8 & 40.3 & 33.0 & 36.3 & 37.5 & 45.0 & 39.7 & 34.9 & 40.3 & 47.7 & 37.4 & 38.5 & 38.6 & 29.6 & 32.0 & 37.8 \\ (Xu and Takano, 2021) (GT) & 35.8 & 38.1 & 31.0 & 35.3 & 35.8 & 43.2 & 37.3 & 31.7 & 38.4 & 45.5 & 35.4 & 36.7 & 36.8 & 27.9 & 30.7 & 35.8 \\ (Zhao, Wang, and Tian, 2022) (GT) & **32.0** & **38.0** & 30.4 & 34.4 & **34.7** & 43.3 & **35.2** & **31.4** & 38.0 & 46.2 & 34.2 & 35.7 & 36.1 & 27.4 & 30.6 & 35.2 \\ \hline Ours (GT) & 32.9 & 38.3 & **28.3** & **33.8** & 34.9 & **38.7** & 37.2 & **30.7** & **34.5** & **39.7** & **33.9** & **34.7** & **34.3** & **26.1** & **28.9** & **33.8** \\ \hline \end{tabular} \end{table} Table 1: Quantitative evaluation results using MPJPE in millimeter on Human3.6M. No rigid alignment or transform is applied in post-processing. We split this table into 2 groups. The inputs for the top group methods are the detection 2D pose, SH denotes the 2D pose detected by Stacked Hourglass network [12], and CPN denotes the cascaded pyramid network [12]. 
The inputs for the bottom group are ground truth (GT) of 2D pose. Best results are showed in bold. \begin{table} \begin{tabular}{l|c|c c c|c c} \hline Methods & Training data & GS & noGS & Outdoor & ALL (PCK \(\uparrow\)) & ALL (AUC \(\uparrow\)) \\ \hline \hline (Martinez et al., 2017) & H36M & 49.8 & 42.5 & 31.2 & 42.5 & 17.0 \\ (Mehta et al., 2017) & H36M & 70.8 & 62.3 & 58.8 & 64.7 & 31.7 \\ (Yang et al., 2018) & H36M+MPII & - & - & - & 69.0 & 32.0 \\ (Zhou et al., 2017) & H36M+MPII & 71.1 & 64.7 & 72.7 & 69.2 & 32.5 \\ (Luo, Chu, and Yuille, 2020) & H36M & 71.3 & 59.4 & 65.7 & 65.6 & 33.2 \\ (Ci et al., 2019) & H36M & 74.8 & 70.8 & 77.3 & 74.0 & 36.7 \\ (Zhou et al., 2019) & H36M+MPII & 75.6 & 71.3 & 80.3 & 75.3 & 38.0 \\ (Xu and Takano, 2021) & H36M & 81.5 & 81.7 & 75.2 & 80.1 & 45.8 \\ (Zhao, Wang, and Tian, 2022) & H36M & 80.1 & 77.9 & 74.1 & 79.0 & 43.8 \\ \hline Ours & H36M & **86.2** & **84.7** & **81.9** & **84.1** & **53.7** \\ \hline \end{tabular} \end{table} Table 2: Results on the test set of MPI-INF-3DHP [12] by scene. The results are shown in PCK and AUC. mean per-joint position error (MPJPE) as evaluation metric. MPJPE computes the per-joints mean Euclidean distance between the predicted 3D joints and the ground truth after the origin (pelvis) alignment. For MPI-INF-3DHP, we employ 3D-PCK and AUC as evaluation metrics. Implement detailsIn our experiment, we set the dimension of embeddings to 96 and adopts 6 heads for self-attention with a dropout rate of 0.25. The MLP ratio of FFN is set to 1.5 to reduce the model parameters. We implement our method within the PyTorch framework. During the training stage, we adopt the Adam [13] optimizer. For both Stage I and Stage II, the learning rate is initialized to 0.001 and decayed by 0.96 per 4 epochs, and we train each stage for 25 epochs using a mini-batch size of 256. We initialize weights of the our model using the initialization method described in [1]. We also adopt Max-norm regularization to avoid overfitting. representation of difficult joints is effectively facilitated. In addition, by replacing the standard self-attention with our PO-SA, we also achieve 1.10mm (36.69mm to 35.59mm) improvement with only 0.01M model parameters increase, which reflects the benefits of enhancing the ability of modeling the topological interactions. Effect on uncertainty-guided refinementWe then inspect how uncertainty-guided refinement benefits performance. It can be seen from Table 4 that our first-stage prediction obtained directly by POT can achieve 35.59 mm in MPJPE, while adding UGRN for refinement can bring 0.83mm (35.59mm to 34.72mm) performance improvement, and UG-sampling can facilitate the learning procedure and further bring 0.9 mm (34.72mm to 33.82mm) gains. To demonstrate that the performance improvement is not brought by the increased model parameters, we also test other refinement model design using other kinds of self-attention, and the results are shown in Table 5. When we replacing UG-RA with standard MH-SA, the performance degrades from 34.72mm to 35.22mm. In addition, when using the proposed PO-SA in the UGRN, the performance also degrades (34.72mm to 35.07mm), which reflects that the uncertainty-related information is more important than distance-related information in the second refinement stage. Comparison on different parameters in POT and UGRNTable 6 reports how different parameters impact the performance and the complexity of our model. 
The results show that, enlarging the embedding dimension from 48 to 96 can boost the performance, but using dimensions larger than 96 cannot bring further benefits. In addition, we observe the best performance when using 12 and 3 transformer layers in POT encoder and UGRN, respectively, and no more gains can be obtained by stacking more layers. Therefore, we set the basic setting to \(L_{1}=12\), \(L_{2}=3\), and \(C=96\). Comparison on model complexityIn Table 7, We compare both the accuracy and the model complexity with other benchmarks on the Human3.6M dataset. We provide two configurations of our method, in which the embedding dimension of Our-S is 48 while that of Our-L is set to 96. Results show that our method can achieve better results with even much fewer parameters. Understanding the performance improvementIn Figure 4, we present the average estimation errors of different body joints according to its group index. It can be seen that, both our group position embedding and UGRN bring more performance improvement for group 4 and 5, in which joints are far from the root joint. The results confirm that our benefit mainly comes form the difficult joints. Qualitative resultsFigure 5 demonstrates some qualitative results on the Human3.6M dataset compared with Graformer [22]. It can be seen that our method can make accurate pose prediction, especially for the difficult joints that are far from the root. ## Conclusion In this paper, we proposed a two-stage transformer-based framework for 3D HPE. First, we introduce targeted improvements for the basic components of transformers and fabricate Pose-Oriented Transformer (POT). Specifically, we design a novel self-attention mechanism in which the topological connections of body joints can be well considered. We also split body joints into several groups according to their distance toward the root joint and provide additional learnable distance-related position embedding for each group. Then, the second stage Uncertainty-Guided Refinement Network (UGRN) is introduced to further refine pose predictions, by considering the estimated uncertainty of each joint with uncertainty-guided sampling strategy and self-attention mechanism. Extensive results on Human3.6M and MPI-INF-3DHP reveal the benefits of our design. Figure 5: Qualitative results on Human3.6M. ## Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grant 61932022, Grant 61931023, Grant 61971285, Grant 61831018, Grant 61871267, Grant 61720106001, Grant 62120106007, Grant 61972256, Grant T2122024, Grant 62125109, and in part by the Program of Shanghai Science and Technology Innovation Project under Grant 20511100100.
2302.00367
FLYEYE family tree, from smart fast cameras to MezzoCielo
We developed game-changing concepts for meter(s) class very-wide-field telescopes, spanning three orders of magnitude of the covered field of view. Multiple cameras and monocentric systems: from the Smart Fast Cameras (with a quasi-monocentric aperture), through the FlyEye, toward a MezzoCielo concept (both with a truly monocentric aperture). MezzoCielo (or "half of the sky") is the last developed concept for a new class of telescopes. Such a concept is based on a fully spherical optical surface filled with a low refractive index, and high transparency liquid surrounded by multiple identical cameras. MezzoCielo is capable to reach field of views in the range of ten to twenty thousand square degrees.
Roberto Ragazzoni, Silvio Di Rosa, Carmelo Arcidiacono, Marco Dima, Demetrio Magrin, Alain J. Corso, Jacopo Farinato, Maria Pelizzo, Giovanni L. Santi, Matteo Simioni, Simone Zaggia
2023-02-01T10:51:17Z
http://arxiv.org/abs/2302.00367v1
# Flyeye Family Tree, From Smart Fast Cameras to Mezzocielo ###### Abstract We developed game-changing concepts for meter(s) class very-wide-field telescopes, spanning three orders of magnitude of the covered field of view. Multiple cameras and monocentric systems: from the Smart Fast Cameras (with a quasi-monocentric aperture), through the FlyEye, toward a MezzoCielo concept (both with a truly monocentric aperture). Mezzocielo (or "half of the sky") is the last developed concept for a new class of telescopes. Such a concept is based on a fully spherical optical surface filled with a low refractive index, and high transparency liquid surrounded by multiple identical cameras. MezzoCielo is capable to reach field of views in the range of ten to twenty thousand square degrees. 1]Univ. of Padova, Dept. of Phys. & Astron., vic. Osservatorio 3, I-35122 Padova (Italy) 2]INAF - Astron. Obs. of Padova, vic. Osservatorio 5, I-35122 Padova (Italy) 3]CNR, Istituto di Fotonica e Nanotecnologie, via Trasea 7, 35131 Padova (Italy) 4]Univ. of Padova, Dept. of Information Engineering, via Gradenigo 6B, 35131 Padova (Italy) 5]Univ. of Padova, Centro di Ateneo Studi e Attivita Spaziali, via Venezia 15, 35131 Padova (Italy) 6]Email: [email protected] ## 1 Prologue Multiplexing is a key issue in astronomical instrumentation. This is somehow different from other experimental sciences, like particle physics, where a very powerful single machine can convey all the possible experiments doable with that specific power, leaving little room for duplication, for instance, of the same kind of accelerator. In contrast, as a purely observative science, astronomy benefits from the multiple observations of the same class of objects, leading, in the most simples form, to the need to cover a very wide Field of View (FoV) of the sky in order to investigate a variety of transient and unexpected phenomenon. It is noticeable that this feature coincides with the patrol of the sky for NEOs and space debris. In the beginning of this century, I had the opportunity to translate into reality the wide FoV cameras conceived for large telescope: namely a couple of Prime Focus correctors [1, 2, 3, 4] now installed at the foci of the two 8.4m parabolic mirrors of the Large Binocular Telescope [5]. This adventure ventured me and a small group of astronomical instrumentalists into handling heavy and large (about 80kg and 820mm in diameter) lenses built by fused silica (a rather fragile kind of glass) and BK7. We built two of these units optimized respectively for the bluer portion of the electromagnetic spectrum that still reach the ground and for the redder one with provision for the near InfraRed one (although this option has not yet been exploited). Although later similar instrumentation has been built [6] our couple of Prime Focus is unique on this class of telescopes for their very high efficiency in the U-band down to the wavelength of 350nm (as, for instance, on the smaller class 3.6 meter Canada France Hawaii u-band [7]) and, for the last 15 years, has remained the most productive instrument - in terms of published papers - onboard the telescope built by an US-Italian and German consortium. This, of course, marked an history of success, however the efforts profuse and the difficulties encountered have been impressive. Most of them, as usually happens, have not been predicted in advance. 
Indeed, although prime focuses are supposed to be - from several point of view - a kind of conventional instrument, in our specific case we pushed the limits toward the ultraviolet coverage and, thanks to the rather fast (F/1.14) foci of the primary mirrors, designed at an epoch in which no prime focal station Figure 1: The Blue channel of the LBT Prime Focus during the final integration phase in the lab. The main lens is an impressive 800mm class fused silica meniscus. The first author is depicted in the lower right side of the picture. was ever planned. Stimulated by this experience, we thought it was time to change the paradigm in the wide field astronomical instrumentation and we started developing a step beyond that could take advantage of the lessons learned. Therefore, we conceived a post telescope focal plane array of correctors built in a sort of replicated manner. Each individual unit covers a portion of the FoV small enough to assume constant off-axis aberrations over such a solid angle. The correctors were supposed to be built by a sort of aberrating plates that cancel each other if properly co-aligned. The relative rotation of these plates would translate into a changing aberration compensator unit. In this way the rotation of the ensemble of the two plates would identify the position angle of the non-symmetric portion of the aberration. This was somehow resembling systems for offsetting reference stars in telescope systems (like MAST [8] or SIMURIS [9], to position on the focal plane specific portion of the solar surface) and acts as the two main sections of a robotic arm. Although we developed such an approach [10, 11] in some details and planned a preliminary design in order to propose the construction of such an instrument engineered as a wide field spectrograph [12, 13, 14] to cover a significantly larger fraction of the sky at the focal plane of one of the VLTs units, we soon realized that the variety and combination of the rapidly changing aberrations across the intermediate FoV would have required more than the two plates originally conceived. In fact, much later, it was implemented taking advantage of the technological development, which, in the meantime, allowed substituting the passive plates by electro-optics unit [15, 16]. Unfortunately, this solution was neither available or low cost at the time of conception of what we called pompously a "Smart Fast Camera". Further to the lack of available technology, the concept suffers just because of the varying aberrations. Removed this limitation, it would retain the advantages of the mass production of the compensating cameras. ## 2 The Flyeye Concept The use of a primary spherical mirror retains together a number of advantages: the lack of a defined axis of symmetry (in fact the spherical mirror is a kind of monocentric device where the only symmetry is retained around the center of curvature), the easiness of polishing, and of segmentation (all the segments retain the same optical figure, in contrast with parabolic or hyperbolic mirrors, characterizing, for instance, the entrance apertures of, respectively, Cassegrain and Ritchey-Chretienne two mirrors telescope). All this comes to a price, namely the residual spherical aberration, often completely unacceptable. The latter is, however, constant over the ubiquitous line of sight over the FoV. 
Usually, the compensation of such an aberration is easily accomplished by an optical system made up of at least two elements: the first forms an image of the entrance pupil onto the second, which provides the spherical compensation needed. This approach has been widely used in radio telescopes [17] so large that it would be inconceivable or too expensive to establish a system to point the whole structure toward a certain direction, or where a low-cost large aperture is being implemented for a very narrow science case in the optical domain [18].

Figure 2: The Smart Fast Camera concept employs a large number of almost identical focal reducers with an intermediate pupil plane where the - varying over the Field of View - aberration is compensated by cleverly mounted fixed optical plates or electro-optical devices.

Figure 3: The FlyEye concept relies on the use of multiple identical correctors mounted on the intermediate focal plane of a common spherical primary mirror. The cameras look at the monocentric converging device through a number of folding mirrors located so as to minimize as much as possible the obstruction of such an approach. This obstruction - in contrast with the Schmidt telescope - is the only limit to the Field of View achievable in this context.

Multiplexing such a corrector has been the straightforward extension of the Smart Fast Camera concept to a common spherical primary mirror. It is noticeable that the field lenses are located somewhat after the intermediate (and heavily aberrated) focus of the spherical main mirror, where a number of folding mirrors conveniently redirect the beams to a small array of cameras. The FlyEye concept ([19, 20, 21, 22, 23, 24, 25, 26, 27]) slowly turned from conceptual drawings into blueprints and real prototypes. ## 3 Schmidt Confrontation The obvious competitor for this kind of approach is the classical Schmidt-like telescope. In this approach, the spherical aberration is corrected for all the employed lines of sight with a single refractive corrector located at the center of curvature of the telescope. Several variations on the theme are possible, including UV-compliant ones, in which the spherical aberration corrector should be a reflective device. While a number of different practical approaches are beyond the scope of this short manuscript, one should note that there is a conceptual difference between the Schmidt and the FlyEye approach. The Schmidt plate, in fact, retains an axis of symmetry, although - as a refractive device - the deviations off-axis are initially mild. The Schmidt plate is progressively seen in an oblique manner, leading to a sub-optimal compensation of the spherical, centrosymmetric aberration: the larger the off-axis angle, the less effective the compensation. In contrast, the Field of View of the FlyEye is only limited by the obstruction introduced by the optical elements close to the intermediate focus of the spherical mirror, namely the folding mirrors and - depending upon the details of the optical design - the field lenses and, of course, their mounts. In the Schmidt telescope, even without taking into account the vignetting issue caused by the focal plane detector, at a certain off-axis angle the quality and efficiency of the correction plate would become so deteriorated as to make the image quality unacceptable. ## 4 Monocentric Without Obstruction: Mezzocielo The FlyEye concept is, in a certain sense, revolutionary, as there is no conceptual limit to the Field of View.
The practical limit, however, is rather stringent, as in practice both the folding mirrors and the field lenses are inserted into, and obstruct, the entrance beam. Furthermore, if one wants a continuous, uninterrupted large patch of the sky properly reimaged onto an array of detectors, then one is forced to place additional folding mirrors much further out of focus than would be achievable with a smaller FoV. Despite this, FlyEyes are able to easily surpass Schmidt telescopes in terms of practically achievable FoV but, somewhat frustratingly, are unable to fully exploit their inherent, conceptually unlimited field potential. This is overcome using a monocentric refractive solution. It is not necessarily a new approach, but, in order to keep the spherical aberration controllable, the monocentric refractive system feasible, and the transparency high enough not to be the limiting parameter, we conceived a solution where a hollow spherical optical system is filled with industrial liquids of low refractive index and extremely high transparency. It is noticeable that such fluids turned out to already exist and are used for non-optical applications. As, in principle, this solution can achieve continuous observation of the whole sky available at any given location on Earth, we nicknamed it "MezzoCielo" (Italian for "half of the sky", [28, 29, 30, 31, 32]).

Figure 4: In a conventional Schmidt telescope there are two limits to the achievable Field of View. One is the central obstruction, depicted in red in these two panels of moderate (upper) and large (lower) off-axis acceptance angle. The other is represented by the sub-optimal compensation of the spherical aberration.

This concept (whose patent application has very recently been approved by the European Patent Office) is a game changer from several viewpoints. First of all, its cost is dominated by the focal plane detectors and cameras, although they are all identical and hence are expected to be built with a mass production approach. The use of CCDs would make this extremely expensive (although arrays of identical cameras with the same sky coverage and pixel size would lead to an identical cost with a much smaller effective area), hence the emerging CMOS detector technology is probably the most viable solution. Furthermore, as the telescope is nominally looking at the whole sky, pointing becomes a useless capability. Long exposures, if needed, could still benefit from a sort of equatorial movement of the camera system alone. Patrolling of transient phenomena probably does not need even this residual degree of freedom, making this class of telescopes free of any need for a mount. Of course, a number of issues remain to be investigated, including the requirements in terms of optical precision of the meniscus, which could lead, in turn, to the requirement of an active control of the solid elements. Even if the system were to work in seeing-limited mode, in fact, the co-addition of beams coming from different menisci would translate into a plate-scale precision requirement that would impose on the optics tolerances comparable to those of a diffraction-limited system, unless more sophisticated data handling approaches are implemented.
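As a purely illustrative back-of-envelope sketch of why the focal plane dominates both cost and data handling, the short script below estimates how many identical cameras and how many pixels would be needed to tile roughly half of the sky. The per-camera field of view and the pixel scale used here are placeholder assumptions, not MezzoCielo design values.

```python
import math

# Purely illustrative numbers (assumptions, not MezzoCielo design values).
HALF_SKY_DEG2 = 41253.0 / 2.0   # the whole sky covers ~41,253 deg^2, so ~20,600 deg^2 per hemisphere
CAMERA_FOV_DEG2 = 100.0         # assume each identical camera images ~10 deg x 10 deg
PIXEL_SCALE_ARCSEC = 1.0        # assumed angular size of one pixel on the sky

# Number of identical cameras, ignoring overlaps and packing losses.
n_cameras = math.ceil(HALF_SKY_DEG2 / CAMERA_FOV_DEG2)

# Total pixel count implied by sampling the hemisphere at the chosen pixel scale.
arcsec2_per_deg2 = 3600.0 ** 2
total_pixels = HALF_SKY_DEG2 * arcsec2_per_deg2 / PIXEL_SCALE_ARCSEC ** 2

print(f"cameras needed : ~{n_cameras}")
print(f"total pixels   : ~{total_pixels / 1e9:.0f} gigapixels")
```

Even at this coarse level, the estimate makes clear that the focal plane, rather than the optics, drives both the cost and the data volume of such a system.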
## 5 A proposal for an epilogue We are currently preparing a proposal to build a half-meter-class prototype and a cylindrical section of a full one-meter-class "MezzoCielo", in order to get enough experience and hands-on feedback to proceed to construct a wide-field telescope that would outperform most of the existing all-sky surveys. Probably, with the sole exception of what can be achieved, on a much longer revisit timescale, by the Vera Rubin Observatory, the patrolling of Near Earth Objects (NEOs) and space debris would benefit from a two-orders-of-magnitude larger instantaneously covered FoV and a much higher Signal to Noise Ratio (SNR) for tracklets, which could be tracked for a much longer time. This approach can give some gain only if the instantaneous SNR is larger than a given amount, probably making it less useful for the faintest sources. This could be compensated by larger monocentric systems. With the current fluid we are examining, which was not engineered on purpose, the size at which the transparency of the fluid makes a larger sphere no longer effective falls at a diameter that makes 4m-class telescopes conceivable. This would come with a number of additional technical difficulties that are not even listed here. If Moore's law applies to detectors and data handling systems as well, it is not difficult to forecast that this approach is the one with a vivid future on the short to medium timescale. The diffidence that the previously conceived solutions encountered should be a lesson from which to learn, and a reason to invest major efforts into the further development of the MezzoCielo concept.
2308.15057
Which Requirements Artifact Quality Defects are Automatically Detectable? A Case Study
[Context] The quality of requirements engineering artifacts, e.g. requirements specifications, is acknowledged to be an important success factor for projects. Therefore, many companies spend significant amounts of money to control the quality of their RE artifacts. To reduce spending and improve the RE artifact quality, methods were proposed that combine manual quality control, i.e. reviews, with automated approaches. [Problem] So far, we have seen various approaches to automatically detect certain aspects in RE artifacts. However, we still lack an overview what can and cannot be automatically detected. [Approach] Starting from an industry guideline for RE artifacts, we classify 166 existing rules for RE artifacts along various categories to discuss the share and the characteristics of those rules that can be automated. For those rules, that cannot be automated, we discuss the main reasons. [Contribution] We estimate that 53% of the 166 rules can be checked automatically either perfectly or with a good heuristic. Most rules need only simple techniques for checking. The main reason why some rules resist automation is due to imprecise definition. [Impact] By giving first estimates and analyses of automatically detectable and not automatically detectable rule violations, we aim to provide an overview of the potential of automated methods in requirements quality control.
Henning Femmer, Michael Unterkalmsteiner, Tony Gorschek
2023-08-29T06:36:27Z
http://arxiv.org/abs/2308.15057v1
# Which requirements artifact quality defects are automatically detectable? A case study ###### Abstract [Context:] The quality of requirements engineering artifacts, e.g. requirements specifications, is acknowledged to be an important success factor for projects. Therefore, many companies spend significant amounts of money to control the quality of their RE artifacts. To reduce spending and improve the RE artifact quality, methods were proposed that combine manual quality control, i.e. reviews, with automated approaches. [Problem:] So far, we have seen various approaches to automatically detect certain aspects in RE artifacts. However, we still lack an overview what can and cannot be automatically detected. [Approach:] Starting from an industry guideline for RE artifacts, we classify 166 existing rules for RE artifacts along various categories to discuss the share and the characteristics of those rules that can be automated. For those rules, that cannot be automated, we discuss the main reasons. [Contribution:] We estimate that 53% of the 166 rules can be checked automatically either perfectly or with a good heuristic. Most rules need only simple techniques for checking. The main reason why some rules resist automation is due to imprecise definition. [Impact:] By giving first estimates and analyses of automatically detectable and not automatically detectable rule violations, we aim to provide an overview of the potential of automated methods in requirements quality control. Requirement Engineering, Artifact Quality, Automated Methods ## I Introduction Requirements Engineering (RE) artifacts play a central role in many systems and software engineering projects. Due to that central role, the quality of RE artifacts is widely considered a success factor, both in academia, e.g. by Boehm [1] or Lawrence [2], and also by practitioners [3]. As a result, companies invest heavily into quality control of RE artifacts. Since RE artifacts are written mostly in natural language [4], quality control is usually applied manually, e.g. in the form of manual reviews. However, besides all of its advantages, manual quality control is slow, expensive and inconsistent, heavily dependent on the competence of the reviewer. One obvious approach to address this is combining manual reviews with automated approaches. The goal of a so-called _phased inspection_[5, 6] is to reduce the effort in manual reviews and to improve the review results by starting into the review with a better (e.g. readable) artifact. Therefore, various authors have focused on automatically detecting quality defects, such as ambiguous language (i.a. [7, 8, 9, 10]) or cloning [11]. However, it is still an open question to what degree quality defects can be detected automatically or require human expertise (i.e. manual work). In previous work [10], we took a bottom-up perspective by qualitatively analyzing which of the quality review results could be automatically detected. **Research Goal:** In this work, we take a top-down perspective by focusing on requirements writing guidelines from a large company. Furthermore, we systematically classify and quantify which proportion of the rules can be automated. ## II Related Work Researchers and practitioners have been working on supporting quality assurance with automated methods (at least) since the end of the 1990's [7]. We want to give only a brief, non-exhaustive summary here. Please refer to our previous work [10] for a more detailed analysis. 
**Defect types:** Most works in this area focus on the detection of various forms of ambiguity, e.g. [8, 12, 13, 14]. Other works try to detect syntactic [11] or even semantic [15] duplication. Further works focus on correct classifications [16] or on the question of whether an instance follows given structural guidelines, e.g. for user stories [9] or for use cases [17]. **Criteria:** The aforementioned works used different sets of criteria. Most prominent are definitions of ambiguity [18], previously summarized lists of criteria [19], or requirements standards [10, 20]. **Techniques:** So far, various techniques have been applied, including machine learning [16, 21] and ontologies [22]. However, Arendse and Lucassen [23] hypothesize that we might not need sophisticated methods for most aspects of quality. In this paper, we provide data regarding this hypothesis. All in all, few works have tried to take a different viewpoint and understand what _cannot_ be automatically checked. In previous work [10], we approached this question in a qualitative manner, by looking not at definitions, but at instances of defects. We did not quantify the portion of automatically discoverable defects, since this depends heavily on the requirements at hand (which defects does an author introduce and a reviewer find?). **Research Gap:** Various authors have shown how to automatically detect individual quality defects. In previous work [10] we qualitatively analyzed which requirements quality defects can be detected. In this work, we provide first evidence, based on requirements writing rules used in a large organization, on the proportion between automatically and not automatically detectable requirements quality issues. ## III Study Design We conducted this study in a research collaboration with the Swedish Transport Administration (STA), the government agency responsible for planning, implementing and maintaining long-term rail, road, shipping and aviation infrastructure in Sweden. In particular, we studied their requirements guidelines, which were developed by editors who review and quality assure specifications. A total of 129 rules were analyzed in this paper. While our long-term goal in this research collaboration is described in more detail elsewhere [24], the specific research goal of this paper is to _characterize requirements writing rules with respect to their potential to be automatically checked from the viewpoint of a requirements quality researcher in the context of an industrial requirements quality control process_. From this goal definition we derive our research questions: * How many rules for natural language requirements specifications can be automated? * To what degree can rules be categorized into groups and to what degree can these groups be eligible for automation? * What information is required to automatically detect rule violations? * Which rules resist automation and why? ### _Rule classification_ The lack of a classification schema for requirements writing rules prompted us to formulate the following schema (see Tbl. I). #### III-A1 Rule type We distinguish between the lexical, grammatical, structural and semantic rule type (see rules 160, 56, 78 and 81 in Tbl. I). A lexical rule refers to constraints on the use of certain terms or expressions that may induce ambiguity, or reduce understandability or readability. Similarly, a grammatical rule refers to constraints on sentence composition. A structural rule refers to the form in which information is presented and formatted.
Finally, a semantic rule refers to constraints on the text content and meaning. #### III-A2 Rule context We introduced this dimension to characterize in which context of the requirements specification the rule is relevant. An appropriate automated check flags only violations that occur in the correct context, e.g. in requirements (if they are separated from informative text), figures, tables, references, headings, enumerations, comments. #### III-A3 Information scope This dimension describes the scope that needs to be considered in order to decide whether the rule is violated or not. We defined five levels: word/phrase, sentence, section, document and global. For example, to check rule 56 in Tbl. I, it is enough to inspect a sentence. However, rule 24 requires access to information that is not in the requirements specification, hence we classified it as global information scope. This characterization provides an indication that can be used to estimate the relative effort required to implement the automated check of the rule. #### III-A4 Necessary information This dimension describes NLP-based and domain-specific information needed to detect rule violations. NLP-based information refers to language and document structure, such as Part-of-Speech (POS) tags, lemmas and word stems, morphological tags, parse trees and meta-data on formatting. Domain-specific information is only available in the specific domain in which the rules apply, e.g. lists of referenced documents or a domain model / ontology. For example, rule 50 in Tbl. I can be decided with POS tags, while rule 56 requires a parse tree that indicates where the subject is positioned in the sentence. #### III-A5 Detection accuracy This dimension provides a rough estimate, based on the experiences of previous work [20], of the expected accuracy for detecting rule violations. We have defined a five-level scale, illustrated in Fig. 1, spanning from deterministic, i.e. 100% detectable, to not detectable at all. Good heuristics feature both high recall and precision, while bad heuristics always trade off between precision and recall. For example, while assigning POS tags is a probabilistic algorithm, we classified rule 50 in Tbl. I as a good heuristic since this particular problem has been solved before, with demonstrably high precision and recall. We classified rule 81, on the other hand, as a bad heuristic since, while conceptually feasible, we lack an accurate solution, i.e. a technique to extract a domain model and use that to determine whether a requirement statement contains supplemental information. Then, there are also rules that we do not expect to be automatically detectable at all (rule 54), because they turn out to be challenging, even in manual reviews. We classified these not automatically detectable rules along their main reasons (categories resulting from previous work [10], see Tbl. III). ### _Data Collection, Classification and Analysis_ We received a total of 192 writing rules from STA, of which we filtered unapproved rule ideas (63), resulting in 129 original rules. In case a rule contained discernible sub-rules, we split them up to facilitate the classification, resulting in 166 classified rules. We then developed an initial version of the classification schema illustrated in Section III-A.

Fig. 1: The categories of detection accuracy as used in this study.
While all dimensions and the categories for type and detection accuracy were defined a priori, the categories for context, scope and necessary information were identified during the classification process. During this first workshop we classified 39 rules, stabilizing the schema and fostering our shared understanding. Then, the second author proceeded to classify the remaining 127 rules alone. The first author sampled 20 rules from this set, independently classified them and calculated the inter-rater agreement (\(\kappa=0.79\)), which is considered substantial [25]. The first author then reviewed all 127 rules, marked those where he disagreed, and finally consolidated all classifications with the second author in a second workshop. We then used the classifications of accuracy for RQ1, the type, context and scope for RQ2, the necessary information for RQ3, and the reasons for RQ4. ## IV Results ### RQ1: How many rules for natural language requirements specifications can be automated? In Fig. 2, we show the results from classifying the estimated detection accuracy of the rules. We estimate that 41% of the rules can be deterministically checked, meaning that an algorithm finds each violation. 34% of the rules are heuristic, with 12% of high accuracy and 11% each of medium and low accuracy. We estimate that the remaining 25% cannot be checked at the current state of the art and at the current state of the rule definitions.

Fig. 2: Frequency of rules falling into one of the detection accuracy categories.

_Discussion:_ Whether rules can be automatically detected is not a binary question. In fact, it depends on the context. However, we can put most rules into a certain category, indicating their potential to be automatically checked. We were surprised by the large number of rules that can be automated. This indicates the potential for automation, as we will discuss in future work. ### RQ2: To what degree can rules be categorized into groups and to what degree can these groups be eligible for automation? In Fig. 3, we show the results from classifying the automatically detectable rules by their type and estimated detection accuracy. The results indicate an estimated high detection accuracy for structural and lexical rules, medium accuracy for grammatical rules, and medium to low accuracy for semantic rules. Fig. 4 shows that most rules are at the level of words or phrasing or at the level of sentences. Lastly, Fig. 5 shows that most rules hold anywhere or specifically concern the requirements of the RE artifact. _Discussion:_ The further a rule goes into semantic aspects, the harder it is to detect violations. For structural rules, e.g. where a certain piece of information should be placed, there are a few rules for which violations are difficult to check automatically. For example, to understand whether a certain text should be tagged as a requirement requires context understanding. We describe further reasons for rules not being automatically detectable in RQ4. ### RQ3: What information is required to automatically detect rule violations? To understand what techniques are required to automatically detect violations of guideline rules, we classified each rule with the information required for this rule. Each required piece of information then leads to a certain technique. For example, if the lemmas of the words are required, we obviously need a lemmatization technique. Tbl. II shows the results for this analysis.
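Before discussing the frequencies, a minimal sketch may help to make these techniques concrete. The following illustrative checker for a lexical rule combines a plain regular-expression check on the raw text with a lemma-based lookup; the vague-term list, the pattern and the use of spaCy's small English model are assumptions made for illustration and are not taken from the STA guideline.

```python
import re
import spacy

# Illustrative vague terms; a real checker would load these from the guideline.
VAGUE_LEMMAS = {"appropriate", "sufficient", "adequate", "etc"}
TBD_PATTERN = re.compile(r"\b(TBD|TBC|to be (defined|decided))\b", re.IGNORECASE)

nlp = spacy.load("en_core_web_sm")

def check_requirement(text: str) -> list[str]:
    """Return a list of human-readable findings for one requirement statement."""
    findings = []

    # Pure text + regular expressions: sufficient for a large share of the rules.
    if TBD_PATTERN.search(text):
        findings.append("placeholder such as 'TBD' found")

    # Lemmatization: lets one rule definition match all inflected forms of a term.
    doc = nlp(text)
    for token in doc:
        if token.lemma_.lower() in VAGUE_LEMMAS:
            findings.append(f"vague term '{token.text}' found")

    return findings

if __name__ == "__main__":
    print(check_requirement("The system shall respond in an appropriate time (TBD)."))
```

Formatting information, the third kind of necessary information, would instead be derived from the document structure (e.g. heading levels and list markers) rather than from the sentence text itself.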
The three most common techniques are the following: In 47% of the cases, lemmatization is required to detect a violation of a rule. In a further 35% of cases only the pure text and regular expressions are needed. Next, formatting information is required in 22% of the cases. _Discussion:_ This analysis supports the hypothesis of Arendse and Lucassen [23] that in most cases we do not need sophisticated methods to detect violations of rules.

Fig. 3: Estimated detection accuracy for each category.

Fig. 4: Distribution of the scope of the automatically detectable rules.

Fig. 5: Context of the automatically detectable rules.

### RQ4: Which rules resist automation and why? When analyzing the _not automatically detectable_ rules of RQ1, the reasons were distributed as shown in Tbl. III (the classification extends previous work [10]). The major reason was, in our studied case, that the rules themselves are still imprecise or unclear. Examples of this are rules such as _"Requirements must be accurate, unambiguous, comprehensive, consistent, modifiable, traceable."_ (this was one single rule) or _"Requirements should contain enough information."_ These rules cannot be checked either manually or automatically. One could even argue that they convey little value. Such imprecise or unclear rules are the reason for 81% of the not automatically detectable rules (see Tbl. III). In 12% of the cases, automation would need profound domain knowledge to detect a violation. An example is that requirements about certain system parts must first state that these parts exist. However, to understand which parts this refers to, we would need to know the domain. This means that only domain experts can manually detect violations of these rules. In one case each, the rule requires deep semantic understanding of the text (e.g. to detect logical contradictions written in natural language in different paragraphs), of the system, or even of the process scope. _Discussion:_ Deep computational problems do not seem to be the major reason why we see no chance of checking a certain rule; rather, the rules themselves are imprecise. ## V Discussion ### _Share of automatically detectable defects_ In our study, we found that a substantial number of requirements writing rules can be automatically checked. This is a top-down perspective and as such helps to quantify the share of defects that can be automatically detected. However, this does not necessarily transfer to the share of defects found in reviews. This is for the following reasons: First, defects created by requirements engineers are not equally distributed over the guideline rules. Furthermore, the defects introduced by requirements engineers very much depend on the individual person, company, and project. Second, defects discovered by reviewers are not necessarily equally distributed over the guideline rules. Therefore, we argue for considering both perspectives, i.e. the share of defects based on guidelines and the share of defects existing in practice, when discussing the potential of automated requirements quality assurance. ### _The 100%-Recall Argument_ There is an ongoing debate in the scientific community about whether automated checks in quality assurance need 100% recall to be useful in practice. Some authors (i.a.
[26, 27, 28]) argue that if an approach does not achieve perfect recall, this leads to either the reviewer does not check the rule anymore, which would lead to unchecked defects, or the reviewer has to go through the whole document anyways, and thus, the automated analysis has no benefits. We disagree with this view for two reasons. First, we argue that in industrial practice, reviewers rarely go through the artifact rule by rule. Therefore, there is no such thing as _omitting a certain rule_. Reviewers see the guidelines rather as a supporting instrument, and thus anything that reminds them of certain rules, increases the quality. Our second argument also refers to the status quo today. The best automated quality support that is widely used are spell and grammar checks. Both do not have 100% recall. So, if recall is a problem, why do we use spell and grammar checks every day? In our experience from introducing automated analyses at various companies in industry, practitioners were more worried about precision than recall. They are convinced of the value (_"Anything helps!"_), and care more for acceptance with the end users. Here, the core aspect is usability in the form of few false positives, ergo: precision (cf. also similar discussing in static code analysis [29]). ### _Threats to Validity_ There are two major threats to validity. Regarding internal validity, we classified the rules according detection accuracy. We did so because it was not feasible within the scope of this work to do a precision & recall analysis for each guideline rule. However, the first author has been translating guideline rules into automated analyses for 4 years. Thus, we are confident that the results reflect the real precision and recall after implementation. In addition, we created rough categories to gain an overview, not a precise analysis for each rule. To evaluate this aspect, we independently classified a subset of 10% of the rules and calculated a weighted Cohen's kappa of the resulting classification (\(\kappa=0.79\)). This agreement fosters our confidence in the resulting classification. The second threat relates to external validity. Since we analyzed a large guideline used at STA, we do not know whether the results generalize from this partner. We have, however, previously informally checked a guideline from another industry partner in a different domain. Here we came to the same share of not automatically detectable rules (25%). Future work should broaden the study to different guidelines. ## VI Research Agenda The current paper provides an estimation of the extent to which industrial requirements quality rules can be automatically checked. We plan to continue our research as follows. **Complete the rule classification.** 34 of the studied rules were imprecise or unclear. Unfortunately, the authors of the writing guidelines were not available for feedback during the course of this study. We want to deepen our understanding on the nature of the imprecision of these rules. In addition, we had no access regarding the relevance, value, and frequency of violations of the rules. This could provide insights how rules that can be automatically checked potentially contribute to review effort reduction. In addition, the classification scheme used in RQ2 was beneficial for this study and worked fine regarding the first three categories (lexical, grammatical, structural). However, the scheme created some discussion around the semantic category. 
The reason is that most rules intertwine semantic and syntactic aspects: Since requirements artifacts are not automatically compiled like code, the point of syntactic rules is only to prevent semantic issues. Therefore, future work should extend this classification scheme to clarify this aspect, e.g. by decoupling the two aspects. **Implement and statically validate rules.** We have already begun to implement some rules that are based on dictionary lookups using an existing requirements smell detection framework [10]. While most of the rules can be implemented with simple techniques, we also plan to experiment with more advanced NLP techniques where we expect challenges in the detection accuracy. For example, violations to rule 81 in Table I could be detected by using topic models enhanced with domain knowledge [30]: requirements that contain distant topics or several closely related topics indicate candidates for rule violations. To validate the implemented rules, we can exploit the fact that at STA, the rules were developed based on experience, i.e. there exist versions of requirements that contain rule violations. We can fine-tune and validate the detection against this set. We also plan to provide an analysis of the potential benefits of using automated requirements quality control. To achieve this, we analyze historic requirements (where the current rules were not applied) and study the effort spent on discussing and repairing these violations. **Validation in Use.** We plan to evaluate the efficiency and effectiveness of automated requirements quality assurance in use, i.e. in the environment of STA with the support of their requirements editors. One important question to answer is whether we can control the number of false positives, a crucial aspect for the adoption of tool support in industry that has also been observed in other areas, such as static bug detection [29]. **Repository for requirements writing rules.** Finally, we, as a community, should establish a repository of precise general and validated requirements rules. Such a repository can be created by replicating the work proposed in this paper in different contexts and, at the same time, advance the techniques for detecting rule violations. ## VII Conclusions It is unclear what proportion of quality defects can be automatically detected. Therefore, in this work, we classify rules from a large, fine-grained requirements writing guideline from one of our industry partners. The results indicate that a surprisingly large proportion of rules (41%) can be automatically analyzed. 53% can be analyzed deterministically or with a good heuristic. One reason for this was that these rules contain many structural rules, which require just an analysis of formatting information or pure text. If we take also those rules into account where we have a medium heuristic, we could even tackle 64% of the rules. However, our analysis also shows that 36% of the rules have no or little chance to be automated. While being just first evidence, this analysis indicates that there is a substantial proportion of guideline rules (our intermediate for quality defects) that can be automatically checked. However, the analysis also indicates that there is little hope that we can completely replace manual reviewing with automated reviews. Combining automated and manual quality assurance, as proposed by others [5], and also ourselves [6] could be the promising compromise. 
## Acknowledgements This work was performed within the project Q-Effekt and ERSAK; it was funded by the German Federal Ministry of Education and Research (BMBF) under grant no. 01IS15003 A-B and by the Swedish Transport Administration. The authors assume responsibility for the content. The authors thank Jonas Eckhardt for comments on an earlier draft of this paper.
2309.02602
Geometric Mechanics of the Vertical Slice Model
The goals of the present work are to: (i) investigate the dynamics of oceanic frontogenesis by taking advantage of the geometric mechanics underlying the class of Vertical Slice Models (VSMs) of ocean dynamics; and (ii) illustrate the versatility and utility of deterministic and stochastic variational approaches by deriving several variants of wave-current interaction models which describe the effects of internal waves propagating within a vertical planar slice embedded in a 3D region of constant horizontal gradient of buoyancy in the direction transverse to the vertical plane.
Darryl D. Holm, Ruiao Hu, Oliver D. Street
2023-09-05T22:21:54Z
http://arxiv.org/abs/2309.02602v2
# Geometric Mechanics of the Vertical Slice Model ###### Abstract The goals of the present work are to: (i) investigate the dynamics of oceanic frontogenesis by taking advantage of the geometric mechanics underlying the class of Vertical Slice Models (VSMs) of ocean dynamics; and (ii) illustrate the versatility and utility of deterministic and stochastic variational approaches by deriving several variants of wave-current interaction models which describe the effects of internal waves propagating within a vertical planar slice embedded in a 3D region of constant horizontal gradient of buoyancy in the direction transverse to the vertical plane. ###### Contents * 1 Introduction * 2 Vertical slice models (VSMs) * 2.1 Lagrangian formulation of VSMs * 2.2 Hamiltonian formulation of VSMs * 2.3 Computational Simulations of VSMs * 3 Wave mean flow interaction (WMFI) * 3.1 Clebsch-Lin formulation of the EB VSM with wave dynamics * 3.2 The geometry of the generalised Lagrangian mean (GLM) approach to wave-current interaction * 3.3 A wave mean flow decomposition of dynamics in the vertical slice * 4 Stochastic advection by Lie transport (SALT) for VSMs * 4.1 The Euler-Boussinesq Eady model with SALT * 5 Conclusion and Outlook * A The asymptotic expansion

## 1 Introduction

Oceanic fronts form by extracting the gravitational potential energy available from the 'tilting' of buoyancy isoclines, represented as horizontal gradients of buoyancy. In turn, the formation of oceanic fronts facilitates the transfer of kinetic energy to small-scale mixing and transport [23, 42]. In particular, a region of constant horizontal gradient of temperature (implying a corresponding gradient of buoyancy) in a three-dimensional, vertically-stratified fluid governed by the Euler-Boussinesq equations can induce the formation of fronts emerging in vertical planar flows transverse to the direction of constant horizontal gradient of temperature [1, 9, 10, 47]. The present work derives _Vertical Slice Models_ (VSMs) to investigate the formation and subsequent evolution of the fronts emerging in a region of _constant_ horizontal gradient of temperature. These VSMs may also be useful for benchmarking numerical schemes, since in 2D they can be run quickly on a single workstation. Driven by their constant transverse horizontal gradient of temperature, the VSMs in [1, 9, 10, 47] possess a \(y\)-independent solution structure for the flows within the vertical plane which includes the dynamics of the \(y\)-component of the fluid velocity transverse to the vertical plane and the Coriolis force. The \(y\)-independent solution structure for VSM flows is still a solution of the full 3D Euler-Boussinesq fluid equations, as long as the horizontal gradient of the temperature transverse to the vertical slice remains constant. This solution property arises because the pressure gradient within the vertical plane only accesses the \(y\)-independent part of the temperature as it varies with time and space in the vertical plane. The VSM family can be derived in the Euler-Poincare (EP) framework of symmetry-reduced Lagrangians in Hamilton's variational principle [37]. The EP framework involves a constrained Hamilton's principle expressed in the Eulerian fluid description.
Their derivation in the EP framework establishes the following properties of each member of the VSM family: the Kelvin-Noether circulation theorem, conservation of potential vorticity on fluid parcels, a Lie-Poisson Hamiltonian formulation possessing conserved Casimirs arising from particle-relabelling symmetry, a conserved domain-integrated energy and an associated variational principle satisfied by the equilibrium solutions. **Aims of the present work.** New theoretical methods of analysing the dynamics of complex fluids advecting a variety of different co-evolving order parameters have been developing in the Euler-Poincare (EP) framework during the past twenty years. See, e.g, [26, 5, 16, 18] for discussions of these new theoretical methods. The EP framework also offers a means of extending the VSMs to include the new methods for complex fluids, as discussed below. The new theoretical methods designed for modelling complex fluid flows include the statistical and probabilistic methods needed for data science, as well as the geometric and analytical methods underlying the theory of nonlinear partial differential equations. For example, recent mathematical discoveries have revealed the Poisson structures of the Eulerian description of ideal (non-dissipative) complex fluids such as superfluids, spin glasses, liquid crystals, ferrofluids, etc. derived in [26, 5, 16, 18]. Complex fluids transport order parameters that co-evolve in the frame of the fluid motion, and the dynamics of these order parameters reacts back to affect the fluid motions that transport them. Motivated by these recent mathematical discoveries, we consider applying the variational methods of the geometric mechanics framework [38] to guide the investigation of the Poisson structures underlying the ocean science of wave and current interactions. We choose this variational approach because of its versatility in deriving the Poisson structures we seek, even in the presence of any random transport that admits the product rule and the chain rule, [27, 17, 12]. The inclusion of randomness into nonlinear ocean dynamics has been shown to be useful for uncertainty quantification and data assimilation methods to reduce uncertainty in computational simulations of ocean models, in particular by using the method of Stochastic Advection by Lie Transport (SALT), which preserves the semidirect-product Poisson structures of classical fluids, [27, 7, 12]. The Euler-Poincare and Hamilton-Pontryagin versions of Hamilton's principle have been extended from stochastic paths to _geometric rough paths_ in [12]. The present work illustrates the versatility and utility of the variational framework of geometric mechanics by deriving several variants of wave-current interaction models that describe internal waves propagating in the vertical slice model (VSM) with transverse flow that was originally derived in [9]. Frontogenesis in this model was demonstrated via numerical simulations in [47] and front formation in its solution behaviour has been analysed recently in [1]. From the viewpoint of modelling in ocean dynamics, the problem statement for VSMs is an augmentation of the standard incompressible Euler-Boussinesq equations with a constant gradient of temperature in the transverse direction. 
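For orientation, and in notation chosen here for reference rather than taken from [9], a standard form of the rotating incompressible Euler-Boussinesq equations being augmented reads \[\partial_{t}\mathbf{u}+\mathbf{u}\cdot\nabla\mathbf{u}+f\,\widehat{\mathbf{z}}\times\mathbf{u}=-\nabla p+\frac{g\,\vartheta}{\vartheta_{0}}\,\widehat{\mathbf{z}}\,,\qquad\nabla\cdot\mathbf{u}=0\,,\qquad\partial_{t}\vartheta+\mathbf{u}\cdot\nabla\vartheta=0\,,\] where \(\mathbf{u}\) denotes the 3D fluid velocity, \(p\) the pressure divided by the constant reference density, \(f\) the Coriolis parameter and \(\vartheta\) the potential temperature. The vertical slice models treated below specialise this system to solutions whose only dependence on the transverse coordinate \(y\) enters through a constant gradient \(\partial_{y}\vartheta=s\).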
Here, we will first review the derivation of VSMs based on the Euler-Poincare approach as in [9] and then re-derive it from a variational approach based on a _composition of smooth maps_ to describe the order-parameter dynamics taking place in the frame of the fluid motion. Subsequently, we will use the latter method to augment the VSM to enable inclusion of non-Boussinesq effects and also to add the effects of internal gravity waves propagating in the vertical slice. The variational approach we take in this paper introduces a certain composition of maps (C\(\circ\)M) into the well-known approach originally due to Clebsch [6]. Remarkably, the C\(\circ\)M approach produces an 'untangled' (block-diagonal) Poisson structure for total momentum and order-parameters, as well as an 'entangled' Poisson structure for the fluid momentum alone. The latter Poisson structure exhibits a semidirect-product action of the fluid velocity vector field on the order-parameter phase space, as well as an additional symplectic bracket among the order parameters. Mathematically, the 'untangled' version of its Poisson structure separates into the sum of a symplectic two-cocycle bracket in the phase-space of order parameters added to the standard semidirect-product Lie-Poisson bracket for classical fluid dynamics, as derived in the Lagrangian framework of Hamilton's variational principle [37]. By comparing the two versions of the Poisson structures for these fluid equations with advected co-evolving order parameters, one finds that they exhibit a mathematical equivalence that can be written as \(T^{*}Q/G\simeq T^{*}(Q/G)\oplus\mathfrak{g}^{*}\), where \(\oplus\) denotes the Whitney sum. Perhaps not surprisingly, this duality has persisted and been remarked about throughout the historical investigations of fluid dynamics involving variational principles with a composition of maps (C\(\circ\)M). The history of Clebsch's approach to the formulation of Hamilton's principle for classical fluid dynamics is reviewed in [43], where it is supplemented by an additional advection constraint due Lin [41] who introduced it in deriving Landau's two-fluid classical-quantum model of \(He_{2}\) superfluids. This Clebsch approach with the Lin constraint was supplemented further and applied to \(He_{3}\) superfluids with an additional spin order-parameter in [33], where of course additional two-cocycles were found. Later, it was applied to Yang-Mills charged fluids in [20] and to magneto-hydrodynamics (MHD) and other fluid plasma models, as well as nonlinear elasticity in [34]. The same augmented approach was applied in deriving fluid equations for special-relativistic plasmas coupled to electromagnetic fields in [25] and also for general-relativistic fluids in [24]. Remarkably, the symplectic two-cocycle arising in the Poisson structure for general-relativistic adiabatic fluid dynamics turned out to comprise the Minkowsky metric for space-time \(g_{\mu\nu}\) and its canonically conjugate momentum density \(\pi_{\mu\nu}\) in the Arnowitt-Deser-Misner theory of general relativity [24]. After this variational approach had been applied to complex fluids in [26], the mathematical essence of the Poisson structures in this approach was proven to arise via Lie group reduction by symmetry with respect to affine actions yielding precisely the \(T^{*}Q/G\simeq T^{*}(Q/G)\oplus\mathfrak{g}^{*}\) equivalence as its associated bundle reduction in [4, 16]. 
This identification and its main mathematical applications for complex fluids are summarised in detail in [16]. A related approach called "metamorphosis" was investigated for shape analysis in image registration with applications to computational anatomy in [39, 3]. While the mathematical setting of affine Poisson structures for this topic is quite rich [26, 5, 16, 18], the Clebsch-Lin composition of maps (C\(\circ\)M) variational approach derives the Poisson structure and identifies its corresponding affine Lie group action quite straightforwardly, as we shall demonstrate later by comparing the original Euler-Poincare derivation of the EB VSM to the Clebsch-Lin approach. Hereafter, the Clebsch-Lin approach with affine advection will be referred to simply as composition of maps (CoM), as in [30, 32]. The CoM approach can be applied for uncertainty quantification of ocean models by using the Stochastic Advection by Lie Transport (SALT) approach applied in [27, 7, 12] for the composition of several maps. The CoM approach is also compatible with traditional approaches in ocean modelling such as the Generalised Lagrangian Mean (GLM). In fact, the CoM approach has been used to provide a stochastic closure for GLM in [31], which will be illustrated further in the present work. To reprise, the purpose of the present work is to make a concrete application of the stochastic CoM approach for an enhancement of the VSMs of [9] to include the wave-current interaction (WCI) effects of internal gravity waves (IGW) propagating in a vertical slice of Euler-Boussinesq (EB) flow including a transverse velocity. This work is meant to be presented as explicitly as possible so that it can provide a useful foundation for further applications of geometric mechanics in ocean modelling. **Motivation of the paper:** In the present work, we consider the interaction of internal waves with Euler-Boussinesq flows in a vertical slice of fluid undergoing three dimensional volume-preserving flow whose motion transverse to a vertical plane advects Lagrangian particle labels that depend linearly on the transverse Eulerian coordinate. The constant transverse slope, \(s\), of the advected Lagrangian particle labels appears as a constant parameter in the slice dynamics along with a new canonically conjugate transverse momentum density, \(\pi_{T}\), dual in the slice to the potential temperature, which is advected in three dimensions as a Lagrangian particle label. This concept for a VSM with transverse flow was introduced in [9] and its solution behaviour has been simulated computationally in [47] and analysed recently in [1]. **Main goals of the paper:** The ocean modelling goal of the present paper is to include internal gravity wave (IGW) motion in the Vertical Slice Model (VSM) with transverse flow introduced in [9]. The mathematical goal accompanying the goal for data assimilation is also to determine the Poisson/Hamiltonian structure of the resulting model, and thereby formulate a new model of stochastic parameterisation of advective transport of potential use for quantifying uncertainty in ocean model simulations. **Plan of the paper:** * Section 2 reviews the Euler-Poincare derivation of the EB VSM in [9] and then formulates a new model of stochastic parameterisation based on the Poisson structure of the derived equations. * Section 2.1 recalls the Euler-Poincare structure of the Euler-Boussinesq Eady VSM model treated in [9]. The Clebsch-Lin derivation of the VSM model in the CoM context is also included.
* Section 2.2 treats the 'Entangled' and 'Untangled' versions of the Hamiltonian formulation for the vertical slice model. * Section 2.3 demonstrates some of the solution behaviours exhibited by the VSM derived in the previous sections by showing the results of computational simulations. * Section 3 treats further asymptotic expansion estimates of IGW interaction dynamics following the wave mean flow interaction (WMFI) closure due to [22], in which the Brunt-Vaisala buoyancy frequency is determined via the Hessian of the fluid pressure. * Section 3.1 treats the Clebsch-Lin derivation of the VSM in the Euler-Boussinesq approximation with the inclusion of a standard WKB model for the IGW Hamiltonian and assumes the existence of a constant Brunt-Vaisala (BV) buoyancy frequency. * Section 3.2 reviews the geometry of the generalised Lagrangian mean (GLM) approach to wave mean flow interaction (WMFI). * Section 3.3 introduces a wave mean flow decomposition of dynamics in the vertical slice of Euler-Boussinesq fluid. * Section 4 then formulates the stochastic parameterisation model for VSMs based on Stochastic Advection by Lie Transport (SALT). * Section 5 concludes with a discussion of the goals achieved in the paper and a survey of potential future work. * Appendix A contains an asymptotic expansion which reveals the form of the action for the dynamical system studied in Section 3.1. At each level of approximation in the sequence of sections in this paper, we identify the opportunities where stochasticity (SALT) may be introduced in a way that preserves the geometric Poisson structure of the model. The dynamical effects of the introduction of SALT into the deterministic VSMs derived here and their implications for data science and data assimilation are beyond the scope of the present paper. However, we expect that the framework for applications of the present approach for data calibration, uncertainty quantification and data assimilation considered previously in [7, 8] will certainly carry over to the stochastic internal gravity wave (IGW) effects considered here. ## 2 Vertical slice models (VSMs) The vertical slice model, as proposed by Eady in the 1940s [14, 15, 45], was born from a desire to understand instabilities in the atmospheric sciences. It has since proven to be a valuable model for testing numerical methods for geophysical flows, as well as remaining useful for understanding thermal front formation and propagation. Dynamics in a vertical slice model occurs within a vertical (\(x\)-\(z\)) plane, denoted by \(M\) in Figure 1. The slice model features a velocity field tangent to the slice that advects fluid material variables within the slice, as well as a velocity perpendicular to the slice representing transverse flow crossing through the slice at each point. In this section, the vertical slice model will be derived from a variational principle in the Eulerian representation and investigated further by passing to its Hamiltonian formulation. The key idea is to regard the tangential flow in the slice and the perpendicular flow transverse to the slice as the composition of two flow maps.

Figure 1: The vertical slice domain

### Lagrangian formulation of VSMs

Vertical slice models can be derived using variational principles that apply the Lie group structure of their transverse flow explicitly [9].
In particular, their 3D flow map is written in the form \[\phi_{t}(X,Y,Z)=(x(X,Z),\ y(X,Z)+Y,\ z(X,Y))\,,\quad\text{which implies}\quad\frac{\partial\phi_{t}}{\partial Y}= \begin{pmatrix}0\\ 1\\ 0\end{pmatrix}\,, \tag{2.1}\] where \((X,Y,Z)\) are the initial Lagrangian particle labels at time, \(t_{0}\), and \((x,y,z)\) denote the Eulerian location of the particle labels \((X,Y,Z)\) at a later time, \(t\). We will reserve the boldface notation to mean the spatial coordinates in the plane only, i.e. \(\mathbf{x}=(x,z)\). The subgroup of diffeomorphisms on \(\mathbb{R}^{3}\) of the form (2.1) is isomorphic to \(\operatorname{Diff}(M)\ltimes\mathcal{F}(M)\), where \(M\subseteq\mathbb{R}^{2}\) is the domain in the vertically sliced plane with coordinates \((x,z)\), and \(\mathcal{F}(M)\) denotes the space of smooth functions on \(M\), and the symbol \(\ltimes\) denotes semidirect product [9, 37]. The Lie algebra corresponding to this group is \(\mathfrak{X}(M)\ltimes\mathcal{F}(M)\), where \(\mathfrak{X}(M)\) is the space of vector fields on \(M\). Our Lie algebra consists of elements of the form \((\mathbf{u}_{S},u_{T})\), which can be interpreted as components of a vector field _within_ and _transverse to_ the slice, respectively. For a volume form, \(D\,d^{3}x\), advected by the three dimensional flow, under the assumptions that both \(D\) and the transverse component of the velocity field are independent of the Lagrangian label \(Y\), one finds that \(D\,d^{2}x\) is advected in the slice domain \(M\). In this situation, scalar functions, \(\vartheta\), advected in the planar slice domain \(M\) can admit a constant derivative in the \(y\) direction. That is, one may write the advected scalar function \(\vartheta\) as \[\vartheta(x,y,z,t)=\vartheta_{s}(x,z,t)+(y-y_{0})s\,, \tag{2.2}\] for some constant \(s\in\mathbb{R}\). Thus, the advection equation for \(\vartheta\) in three dimensions may be expressed as \[\partial_{t}\vartheta_{s}+\mathbf{u}_{S}\cdot\nabla\vartheta_{s}+u_{T}s=0\,, \quad\text{where}\quad\nabla:=(\partial_{x},\partial_{z})\,. \tag{2.3}\] Indeed, the semidirect-product action of the vector fields \((u_{S},u_{T})\in\mathfrak{X}(M)\ltimes\mathcal{F}(M)\) on the advected scalar function \((\vartheta_{s},s)\in\mathcal{F}(M)\times\mathbb{R}\) is given by \[\operatorname{ad}^{*}_{(u_{S},u_{T})}(\vartheta_{s},s)=\mathcal{L}_{(u_{S},u_ {T})}(\vartheta_{s},s)=(\mathbf{u}_{S}\cdot\nabla\vartheta_{S}+u_{T}s,0)\,, \tag{2.4}\] where \(\mathcal{L}_{\framebox{}}\) is the Lie derivative. We introduce the following diamond notation, \(\diamond\), which involves integrating the Lie derivative by parts. In particular, for an arbitrary vector field and scalar function \((v_{S},v_{T})\in\mathfrak{X}(M)\ltimes\mathcal{F}(M)\), we have \[\left\langle\frac{\delta\ell}{\delta(\vartheta_{s},s)}\,,\, \mathcal{L}_{(v_{S},v_{T})}(\vartheta_{s},s)\right\rangle =-\left\langle\frac{\delta\ell}{\delta(\vartheta_{s},s)}\diamond (\vartheta_{s},s)\,,\,(v_{S},v_{T})\right\rangle\,,\] \[\left\langle\frac{\delta\ell}{\delta D}\,,\,\mathcal{L}_{v_{S}}(D \,d^{2}x)\right\rangle =-\left\langle\left(\frac{\delta\ell}{\delta D}\diamond D,0\right),\,(v_{S},v_{T})\right\rangle\,.\] Notice that the diamond operation, \(\diamond\), is defined separately for each advected quantity, according to its infinitesimal transformation under the Lie derivative. 
Following the established literature [37], we will denote the space of advected quantities by \(V^{*}\) and assume that there exists a right representation of our Lie group action on this vector space. In this setup, we have the following Euler-Poincare theorem which enables the derivation of the corresponding equations of motion. **Theorem 2.1** (Euler-Poincare theorem for VSMs [9]).: _For the variables and advected quantities defined above, suppose we have a Lagrangian \(\ell(u_{S},u_{T},D,\vartheta_{s}):(\mathfrak{X}(M)\ltimes\mathcal{F}(M))\times V ^{*}\mapsto\mathbb{R}\). An application of Hamilton's Principle_ \[0=\delta S=\delta\int_{t_{0}}^{t_{1}}\ell(\mathbf{u}_{S},u_{T},D,\vartheta_{s} )\,,\] _subject to the following constraints for right action of the variational vector fields \((v_{S},v_{T})\)_ \[\delta(u_{S},u_{T}) =(\partial_{t}v_{S},\partial_{t}v_{T})-\mathrm{ad}_{(u_{S},u_{T})}( v_{S},v_{T})\,,\] \[\delta D\,d^{2}x =-\mathcal{L}_{v_{S}}(D\,d^{2}x)\,,\] \[\delta(\vartheta_{s},s) =-\mathcal{L}_{(v_{S},v_{T})}(\vartheta_{s},s)\,,\] _implies the Euler-Poincare equation_ \[\left(\partial_{t}+\mathrm{ad}_{(u_{S},u_{T})}^{*}\right)\frac{\delta\ell}{ \delta(u_{S},u_{T})}=\left(\frac{\delta\ell}{\delta D}\diamond D,0\right)+ \frac{\delta\ell}{\delta(\vartheta_{s},s)}\diamond(\vartheta_{s},s)\,. \tag{2.5}\] **Remark 2.1**.: _The Euler-Poincare equations corresponding to Theorem 2.1 may be written as_ \[(\partial_{t}+\mathcal{L}_{u_{S}})\left(\frac{1}{D}\frac{\delta \ell}{\delta u_{S}}\right)+\frac{1}{D}\frac{\delta\ell}{\delta u_{T}}du_{T} =d\left(\frac{\delta\ell}{\delta D}\right)-\frac{1}{D}\frac{ \delta\ell}{\delta\vartheta_{s}}d\vartheta_{s}\,, \tag{2.6}\] \[(\partial_{t}+\mathcal{L}_{u_{S}})\left(\frac{1}{D}\frac{\delta \ell}{\delta u_{T}}\right) =-\frac{1}{D}\frac{\delta\ell}{\delta\vartheta_{s}}s\,,\] (2.7) \[(\partial_{t}+\mathcal{L}_{u_{S}})\vartheta_{s}+u_{T}s =0\,,\] (2.8) \[(\partial_{t}+\mathcal{L}_{u_{S}})(D\,d^{2}x) =0\,. \tag{2.9}\] **Remark 2.2** (The Euler-Boussinesq Eady model).: _An application of Theorem 2.1 to the Lagrangian_ \[\ell[u_{S},u_{T},D,\vartheta_{s},p]=\int_{M}\frac{D}{2}(|\mathbf{u}_{S}|^{2}+ u_{T}^{2})+Dfu_{T}x+\frac{g}{\vartheta_{0}}D\left(z-\frac{H}{2}\right) \vartheta_{s}+p(1-D)\,d^{2}x\,, \tag{2.10}\] _yields the Euler-Boussinesq Eady equations [9],_ \[\partial_{t}\mathbf{u}_{S}+\mathbf{u}_{S}\cdot\nabla\mathbf{u}_{S} =fu_{T}\widehat{x}+\frac{g}{\vartheta_{0}}\vartheta_{s}\widehat{ z}-\nabla p\,, \tag{2.11}\] \[\partial_{t}u_{T}+\mathbf{u}_{S}\cdot\nabla u_{T} =-f\mathbf{u}_{S}\cdot\widehat{x}-\frac{g}{\vartheta_{0}}\left(z- \frac{H}{2}\right)s\,,\] (2.12) \[\partial_{t}\vartheta_{s}+\mathbf{u}_{S}\cdot\nabla\vartheta_{s} =0\,,\] (2.13) \[\nabla\cdot\mathbf{u}_{S} =0\,. \tag{2.14}\] **Remark 2.3** (In what sense is the VSM for the Euler-Boussinesq Eady equations three dimensional?).: _All of the equations in the EB Eady model above are evaluated in two dimensions on the vertical slice. In what sense, then, does the transverse velocity \(u_{T}\) confer any sense of three dimensionality? The answer to this question stems from the constant slope \(s\) of the advected scalar function \(\vartheta\) in (2.2). Consider 3D advected Lagrangian labels \((L_{1}(x_{1},x_{2},t),L_{2}(x_{1},x_{2},t),L_{3}(x_{1},x_{2},x_{3},t))\) with \((x_{1},x_{2},x_{3})=(x,z,y)\) as in (2.2). 
In that case, the 3rd column of the Jacobian matrix \(J_{ij}=\partial L_{i}/\partial x_{j}\) for the inverse map (Euler-to-Lagrange) would have entries \(J_{i3}=(\partial L_{1}/\partial x_{3},\partial L_{2}/\partial x_{3},\partial L_{3}/\partial x_{3})=(0,0,s)\). Consequently, imposing the relation \(\det J=1\) in 3D would imply the relation \(\det J=1/s\) on the 2D vertical slice. Hence, the 2D velocity would be divergence-free, i.e., \(\nabla\cdot\mathbf{u}_{S}=0\) in equation (2.14) and the transverse velocity \(u_{T}\) would confer a sense of three dimensionality for the class of flows with this type of Lagrangian label dependence for the inverse flow map. In particular, the corresponding Lagrange-to-Euler 3D flow map is written in equation (2.1)._

#### Clebsch-Lin derivation of VSMs by C\(\circ\)M

In the above summary of the Lagrangian formulation of VSMs, the coadjoint action given by equation (2.4) is nonstandard. Thus, we seek to describe VSMs without the derivation relying on this action, and instead constrain the required relationships using a Lagrange multiplier. The formulation undertaken here rederives the VSM by using the composition-of-maps (C\(\circ\)M) approach for Hamilton's variational principle for the symmetry reduced Eulerian representation of fluid dynamics [32]. We begin by formulating the Clebsch-Lin form of Hamilton's principle corresponding to the EB Eady VSM (2.6) - (2.9) by augmenting the action to be the sum of the EB Eady VSM Lagrangian (2.10) and a constraint on the dynamics of the transverse velocity \(u_{T}\). The variational principle then reads \[\begin{split} 0=\delta S[u_{S},u_{T},D,\vartheta_{s},p,\pi_{T}]=& \delta\int_{a}^{b}\int_{M}\tfrac{1}{2}D|\mathbf{u}_{S}|^{2}+\tfrac{1}{2}Du_{T}^{2}+Du_{T}fx+\frac{g}{\vartheta_{0}}D\left(z-\frac{H}{2}\right)\vartheta_{s}-p(D-1)\\ &\qquad-\pi_{T}\left(\partial_{t}\vartheta_{s}+\mathbf{u}_{S}\cdot\nabla\vartheta_{s}+su_{T}\right)d^{2}x\,dt\,.\end{split} \tag{2.15}\] Here, we have the constrained Euler-Poincare variations \(\delta u_{S}=\partial_{t}\xi-\mathrm{ad}_{u_{S}}\xi\) and \(\delta(Dd^{2}x_{s})=-\,\mathcal{L}_{\xi}(Dd^{2}x_{s})\), where \(\xi\in\mathfrak{X}(M)\) is arbitrary and vanishes at the boundaries. Furthermore, all other variations, \(\delta\pi_{T},\delta u_{T},\delta\vartheta_{s}\) and \(\delta p\), are taken to be arbitrary. In (2.15), the Lagrange multiplier \(\pi_{T}\) takes the form of a momentum density; it is identified as the total momentum normal to an area element on the slice in the horizontal direction transverse to the vertical slice. Variations of the action produce the following variational derivatives \[\begin{split} 0=&\int_{a}^{b}\int_{M}B\,\delta D+\delta\mathbf{u}_{S}\cdot\left(D\mathbf{u}_{S}-\pi_{T}\nabla\vartheta_{s}\right)+\delta u_{T}\left(D(u_{T}+fx)-s\pi_{T}\right)-\delta p(D-1)\\ &\qquad\qquad+\delta\vartheta_{s}\left(\partial_{t}\pi_{T}+\mathrm{div}(\pi_{T}\mathbf{u}_{S})+D\gamma(z)\right)-\delta\pi_{T}\left(\partial_{t}\vartheta_{s}+\mathbf{u}_{S}\cdot\nabla\vartheta_{s}+su_{T}\right)d^{2}x_{s}dt\end{split} \tag{2.16}\] where for convenience and brevity in notation we define \[B:=\frac{\delta\ell}{\delta D}=\tfrac{1}{2}|\mathbf{u}_{S}|^{2}+\tfrac{1}{2}u_{T}^{2}+u_{T}fx+\gamma(z)\vartheta_{s}-p\quad\text{and}\quad\gamma(z):=\frac{g}{\theta_{0}}\left(z-\frac{H}{2}\right)\,, \tag{2.17}\] in which \(B\) is the Bernoulli function.
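The Bernoulli function in (2.17) is simply the pointwise \(D\)-derivative of the integrand of (2.15); the short symbolic sketch below (our addition) confirms this, and in particular that the \(\pi_{T}\) constraint term, which carries no explicit factor of \(D\), does not contribute.

```python
import sympy as sp

D, p, f, x, z, H, g, theta0 = sp.symbols('D p f x z H g theta_0')
u1, u3, uT, theta_s = sp.symbols('u1 u3 u_T theta_s')

gamma = (g / theta0) * (z - H / 2)

# Pointwise integrand of the Clebsch-Lin action (2.15); the pi_T constraint
# term is independent of D and is omitted since its D-derivative vanishes.
integrand = (sp.Rational(1, 2) * D * (u1**2 + u3**2) + sp.Rational(1, 2) * D * uT**2
             + D * uT * f * x + D * gamma * theta_s - p * (D - 1))

# The D-derivative reproduces the Bernoulli function B in (2.17).
B = sp.diff(integrand, D)
expected = (sp.Rational(1, 2) * (u1**2 + u3**2) + sp.Rational(1, 2) * uT**2
            + uT * f * x + gamma * theta_s - p)
print(sp.simplify(B - expected))  # -> 0
```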
We also have the continuity equation for the advected areal density \(Dd^{2}x_{s}\), written in its calculus form or in its Lie derivative form, respectively, as \[\partial_{t}D+\mathrm{div}(D\mathbf{u}_{S})=0\,,\quad\text{or}\quad\left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(Dd^{2}x_{s}\right)=0\,, \tag{2.18}\] in which \(\mathcal{L}_{u_{s}}\) is the Lie-derivative with respect to the area-preserving vector field \(u_{s}=\mathbf{u}_{S}\cdot\nabla\), with time-dependent planar vector components \(\mathbf{u}_{S}(x,z,t)\in\mathbb{R}^{2}\). Upon substituting the Euler-Poincare variations into the variational results for \(\delta S\) above in (2.16) and integrating by parts following [37], one finds the equation of motion and auxiliary equations in Lie derivative form, as \[\begin{split}\left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(D^{-1}\mathbf{M}\cdot d\mathbf{x}\right)&=dB\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(Dd^{2}x_{s}\right)&=0\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(\pi_{T}d^{2}x_{s}\right)&=-D\gamma(z)d^{2}x_{s}\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\vartheta_{s}&=-su_{T}\,.\end{split} \tag{2.19}\] Here, one defines the following momentum variables, in which the density \(D\) will be constrained to \(D=1\) by the Lagrange multiplier \(p\) in (2.16), \[\begin{split}\mathbf{M}&:=D\mathbf{u}_{S}-\pi_{T}\nabla\vartheta_{s}\,,\\ \pi_{T}&:=s^{-1}D(u_{T}+fx)\,.\end{split} \tag{2.20}\]

**Theorem 2.2** (Kelvin-Noether theorem for the Euler-Boussinesq VSM).: _The Euler-Boussinesq vertical slice model in equation (2.19) satisfies_ \[\frac{d}{dt}\oint_{\gamma_{t}}D^{-1}\mathbf{M}\cdot d\mathbf{x}=\oint_{\gamma_{t}}dB=0\,, \tag{2.21}\] _where \(\gamma_{t}:C^{1}\mapsto M\) is a closed loop moving with the flow \(\gamma_{t}=\phi_{t}\gamma_{0}\) generated by the vector field \(u_{s}=\dot{\phi}_{t}\phi_{t}^{-1}\)._

**Remark 2.4** (PV conservation for the VSM).: _Because \(d(D^{-1}\mathbf{M}\cdot d\mathbf{x})=\widehat{\mathbf{y}}\cdot\mathrm{curl}(D^{-1}\mathbf{M})\,d^{2}x_{s}\), \(d^{2}B=0\) and \(D=1\), the first equation in (2.19) implies potential vorticity advection (conservation of PV on fluid parcels)_ \[(\partial_{t}+\mathcal{L}_{u_{s}})\,d\left(D^{-1}\mathbf{M}\cdot d\mathbf{x}\right)=0\quad\Longrightarrow\quad(\partial_{t}+\mathbf{u}_{S}\cdot\nabla)\,(\widehat{\mathbf{y}}\cdot\mathrm{curl}\mathbf{u}_{S}+J(\pi_{T},\vartheta_{s}))=0\,, \tag{2.22}\] _where \(J(a,b)dx\wedge dz=da\wedge db\) defines the Jacobian operation for functions \((a,b)\) of \((x,z)\in\mathcal{S}\). If we also define \(\mathbf{u}_{S}:=\nabla^{\perp}\psi_{s}\) for a stream function \(\psi_{s}\), since \(D=1\) implies \(\mathrm{div}\mathbf{u}_{S}=0\), then \(\widehat{\mathbf{y}}\cdot\mathrm{curl}\mathbf{u}_{S}=\Delta\psi_{s}\) for the Laplacian operator \(\Delta\) on the vertical \((x,z)\) slice and one may write PV conservation on fluid parcels for the VSM as_ \[(\partial_{t}+\mathbf{u}_{S}\cdot\nabla)\,q=\partial_{t}q+J(\psi_{s},q)=0\quad\text{with}\quad q:=\Delta\psi_{s}+J(\pi_{T},\vartheta_{s})\,. \tag{2.23}\]
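As a concrete illustration of the PV diagnostic in (2.22)-(2.23), here is a small finite-difference sketch (our addition; the doubly periodic grid and the sample fields are illustrative assumptions, since the actual slice domain is periodic in \(x\) but bounded in \(z\)).

```python
import numpy as np

def jacobian(a, b, dx, dz):
    """Centred-difference approximation of J(a, b) = a_x b_z - a_z b_x."""
    a_x = (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / (2 * dx)
    a_z = (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / (2 * dz)
    b_x = (np.roll(b, -1, axis=0) - np.roll(b, 1, axis=0)) / (2 * dx)
    b_z = (np.roll(b, -1, axis=1) - np.roll(b, 1, axis=1)) / (2 * dz)
    return a_x * b_z - a_z * b_x

def potential_vorticity(psi, pi_T, theta_s, dx, dz):
    """PV diagnostic q = Laplacian(psi) + J(pi_T, theta_s), as in (2.23)."""
    lap = ((np.roll(psi, -1, axis=0) - 2 * psi + np.roll(psi, 1, axis=0)) / dx**2
           + (np.roll(psi, -1, axis=1) - 2 * psi + np.roll(psi, 1, axis=1)) / dz**2)
    return lap + jacobian(pi_T, theta_s, dx, dz)

# Toy fields on a doubly periodic grid.
nx, nz = 64, 64
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
z = np.linspace(0, 2 * np.pi, nz, endpoint=False)
X, Z = np.meshgrid(x, z, indexing="ij")
q = potential_vorticity(np.sin(X) * np.cos(Z), np.cos(X), np.sin(Z),
                        x[1] - x[0], z[1] - z[0])
```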
### Hamiltonian formulation of VSMs

Recall that the Lagrangian for the Euler-Boussinesq Eady model is given by \[\ell[u_{S},u_{T},D,\vartheta_{s},p]=\int_{M}\frac{D}{2}(|\mathbf{u}_{S}|^{2}+u_{T}^{2})+Dfu_{T}x+\frac{g}{\vartheta_{0}}D\left(z-\frac{H}{2}\right)\vartheta_{s}+p(1-D)\,d^{2}x\,.\] ((2.10) revisited) Following the approach taken in a previous variational derivation of the model [9], we may define momentum variables with respect to both \(u_{S}\) and \(u_{T}\) as \[m_{S}=\frac{\delta\ell}{\delta u_{S}}\,,\quad\text{and}\quad m_{T}=\frac{\delta\ell}{\delta u_{T}}=s\pi_{T}\,, \tag{2.24}\] where \(\pi_{T}\) is the momentum density introduced in (2.15) whose relationship with \(m_{T}\) will be discussed later in this section. Continuing with the \(m_{T}\) definition of momentum density for the transverse velocity \(u_{T}\), for the Euler-Boussinesq Eady system, we have \[m_{S}=\mathbf{m}_{S}\cdot d\mathbf{x}\otimes d^{2}x=D\mathbf{u}_{S}\cdot d\mathbf{x}\otimes d^{2}x\,,\quad\text{and}\quad m_{T}=Du_{T}+Dfx\,. \tag{2.25}\] Defining \(\gamma(z):=\frac{g}{\vartheta_{0}}(z-\frac{H}{2})\), the Hamiltonian may be calculated as \[h[m_{S},m_{T},D,\vartheta_{s}] =\int_{M}\mathbf{m}_{S}\cdot\mathbf{u}_{S}+m_{T}u_{T}\,d^{2}x-\ell[u_{S},u_{T},D,\vartheta_{s},p] \tag{2.26}\] \[=\int_{M}\frac{|\mathbf{m}_{S}|^{2}}{D}+m_{T}\left(\frac{m_{T}}{D}-fx\right)-\frac{D}{2}\bigg{(}\,\Big{|}\frac{\mathbf{m}_{S}}{D}\Big{|}^{2}+\left(\frac{m_{T}}{D}-fx\right)^{2}\bigg{)}\] \[\qquad\qquad-Dfx\left(\frac{m_{T}}{D}-fx\right)-D\gamma(z)\vartheta_{s}+p(D-1)\,d^{2}x\] \[=\int_{M}\frac{|\mathbf{m}_{S}|^{2}}{2D}+\frac{m_{T}^{2}}{2D}-m_{T}fx+\frac{D}{2}(fx)^{2}-D\gamma(z)\vartheta_{s}+p(D-1)\,d^{2}x\] \[=\int_{M}\frac{|\mathbf{m}_{S}|^{2}}{2D}+\frac{(m_{T}-Dfx)^{2}}{2D}-D\gamma(z)\vartheta_{s}+p(D-1)\,d^{2}x\,.\] It can be easily verified that this agrees with the conserved energy known in the literature [9, 46] \[E=\int_{M}\frac{D}{2}\left(|\mathbf{u}_{S}|^{2}+u_{T}^{2}\right)-D\gamma(z)\vartheta_{s}\,d^{2}x\,. \tag{2.27}\] With respect to the Hamiltonian, \(h\), defined above, the equations of motion can be written in the following Lie-Poisson form \[\frac{\partial}{\partial t}\begin{bmatrix}m_{S}\\ D\\ m_{T}\\ \vartheta_{s}\end{bmatrix}=-\begin{bmatrix}\operatorname{ad}_{\square}^{*}m_{S}&\square\diamond D&\square\diamond m_{T}&\square\diamond\vartheta_{s}\\ \mathcal{L}_{\square}D&0&0&0\\ \mathcal{L}_{\square}m_{T}&0&0&-s\\ \mathcal{L}_{\square}\vartheta_{s}&0&s&0\end{bmatrix}\begin{bmatrix}\delta h/\delta m_{S}\\ \delta h/\delta D\\ \delta h/\delta m_{T}\\ \delta h/\delta\vartheta_{s}\end{bmatrix}. \tag{2.28}\]

**Remark 2.5** (Interpretations of the Poisson matrix (2.28)).: _The Poisson matrix appearing in (2.28) is "tangled" in the sense that it consists of the canonical Lie-Poisson structure on the dual of the semi-direct product Lie algebra \(\mathfrak{s}=\mathfrak{X}(M)\ltimes(\mathcal{F}(M)\oplus\mathcal{F}(M)\oplus\text{Den}(M))\) coupled to an \(s\)-weighted symplectic structure on the cotangent bundle \(T^{*}\mathcal{F}(M)\simeq\mathcal{F}(M)\otimes\text{Den}(M)\). The \(s\) factor on the canonical symplectic structure is due to the choice of momentum density \(m_{T}\).
Choosing the canonical momentum \(\pi_{T}=m_{T}/s\), one can transform the weighted symplectic structure to the canonical structure as demonstrated in the next paragraph._ 'Untangled' Hamiltonian formAs was discussed in a previous work in the context of reduction by stages [32], there exists a so-called _untangling_ map which allows us to untangle the Lie-Poisson structure by shifting the momentum variable. Indeed, consider the terms corresponding to the Legendre transformation in the previous section. Indeed, the sum of the Lagrangian and the Hamiltonian is \[\ell+h=\langle\mathbf{m}_{S}\,,\,\mathbf{u}_{S}\rangle+\langle m_{T}\,,\,u_{T }\rangle\.\] By considering the advection equation for \(\vartheta_{s}\), we have that \[\begin{split}\ell+h&=\langle\mathbf{m}_{S}\,,\, \mathbf{u}_{S}\rangle+\left\langle m_{T}\,,\,-\frac{\partial_{t}\vartheta_{s} +\mathbf{u}_{S}\cdot\nabla\vartheta_{s}}{s}\right\rangle\\ &=\left\langle\mathbf{m}_{S}-\frac{m_{T}\nabla\vartheta_{s}}{s} \,,\,\mathbf{u}_{S}\right\rangle-\left\langle\frac{m_{T}}{s}\,,\,\partial_{t }\vartheta_{s}\right\rangle=:\langle\mathbf{M}\,,\,\mathbf{u}_{S}\rangle- \left\langle\pi_{T}\,,\,\partial_{t}\vartheta_{s}\right\rangle\,,\end{split} \tag{2.29}\] where the pair of momentum variables, \(\mathbf{M}\) and \(\pi_{T}\) have the same definition as in (2.20). Furthermore, the velocity variables, \(\mathbf{u}_{S}\) and \(u_{T}\), can be written in terms of these momenta as \[\mathbf{u}_{S}=\frac{\mathbf{M}+\pi_{T}\nabla\vartheta_{s}}{D}\,,\quad\text{ and}\quad u_{T}=\frac{s\pi_{T}}{D}-fx\,,\] and the Hamiltonian may be derived as follows. \[\begin{split}\tilde{h}[\mathbf{M},\pi_{T},D,\vartheta_{s}]& =\int_{M}\mathbf{M}\cdot\mathbf{u}_{S}-\pi_{T}\partial_{t}\vartheta_{s}- \ell[u_{S},u_{T},D,\vartheta_{s},p]\,d^{2}x\\ &=\int_{M}\mathbf{M}\cdot\mathbf{u}_{S}+\pi_{T}(\mathbf{u}_{S} \cdot\nabla\vartheta_{s}+su_{T})-\ell[u_{S},u_{T},D,\vartheta_{s},p]\,d^{2}x \\ &=\int_{M}\mathbf{M}\cdot\left(\frac{\mathbf{M}+\pi_{T}\nabla \vartheta_{s}}{D}\right)+\pi_{T}\left(\left(\frac{\mathbf{M}+\pi_{T}\nabla \vartheta_{s}}{D}\right)\cdot\nabla\vartheta_{s}+s\bigg{(}\frac{s\pi_{T}}{D}- fx\bigg{)}\right)-\ell\,d^{2}x\\ &=\int_{M}\frac{|\mathbf{M}|^{2}}{D}+\frac{2\pi_{T}\mathbf{M} \cdot\nabla\vartheta_{s}}{D}+\frac{\pi_{T}^{2}|\nabla\vartheta_{s}|^{2}}{D}+ \frac{s^{2}\pi_{T}^{2}}{D}-sfx\pi_{T}-\frac{D}{2}\bigg{|}\frac{\mathbf{M}+\pi _{T}\nabla\vartheta_{s}}{D}\bigg{|}^{2}\\ &\qquad\qquad-\frac{D}{2}\bigg{(}\frac{s\pi_{T}}{D}-fx\bigg{)}^{ 2}-Dfx\bigg{(}\frac{s\pi_{T}}{D}-fx\bigg{)}-D\gamma(z)\vartheta_{s}+p(D-1)\,d ^{2}x\\ &=\int_{M}\frac{|\mathbf{M}+\pi_{T}\nabla\vartheta_{s}|^{2}}{2D} +\frac{(s\pi_{T}-Dfx)^{2}}{2D}-D\gamma(z)\vartheta_{s}+p(D-1)\,d^{2}x\,.\end{split} \tag{2.30}\] The equations of motion for the Hamiltonian defined in terms of M and \(\pi_{T}\) have the following _untangled_ Lie-Poisson structure \[\frac{\partial}{\partial t}\begin{bmatrix}\mathrm{M}\\ D\\ \pi_{T}\\ \vartheta_{s}\end{bmatrix}=-\begin{bmatrix}\mathrm{ad}_{\square}^{*}\mathrm{M}& \square\diamond D&0&0\\ \mathcal{L}_{\square}D&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}\delta\tilde{h}/\delta\mathrm{M}\\ \delta\tilde{h}/\delta D\\ \delta\tilde{h}/\delta\pi_{T}\\ \delta\tilde{h}/\delta\vartheta_{s}\end{bmatrix}. \tag{2.31}\] The Poisson structure appearing in equation (2.31) is untangled and it is the canonical Poisson structure on the dual space \((\mathfrak{X}^{*}(M)\ltimes\mathrm{Den}(M))\oplus T^{*}\mathcal{F}(M)\). 
That is, it is the sum of the Lie-Poisson matrix on the dual of the semi-direct product Lie algebra \(\mathfrak{X}(M)\ltimes\mathcal{F}(M)\) and the canonical symplectic matrix on \(T^{*}\mathcal{F}(M)\). One can verify the equivalence of the tangled and untangled Hamiltonian forms of the Lie-Poisson equations, (2.28) and (2.31) respectively, by direct calculation. Indeed, the momentum equation is a result of the relationship \[(\partial_{t}+\mathcal{L}_{u_{S}})(\pi_{T}d\vartheta_{s})=-m_{T}d\frac{\delta h}{\delta m_{T}}+\frac{\delta h}{\delta\vartheta_{s}}d\vartheta_{s}=-\frac{\delta h}{\delta\vartheta_{s}}\diamond\vartheta_{s}-\frac{\delta h}{\delta m_{T}}\diamond m_{T}\,,\] and it can be easily verified that the above Lie-Poisson system is equivalent to the Euler-Boussinesq Eady equations (2.11)-(2.14) for the Hamiltonian, \(\tilde{h}\), given by equation (2.30).

### Computational Simulations of VSMs

This section demonstrates some of the variety of solution behaviours of the VSM derived in the previous sections by showing the dynamics of front formation in computational simulations of the VSM equations. For this section only, we rewrite the VSM equations (2.11) - (2.14) using the buoyancy variable \(b_{s}(x,z,t)\), which is related to the potential temperature \(\vartheta_{s}\) by the relation \(b_{s}=\frac{g}{\vartheta_{0}}\vartheta_{s}\). We assume the buoyancy decomposes as \(b_{s}(x,z,t)=\overline{b}_{s}(z)+b_{s}^{\prime}(x,z,t)\), where \(\overline{b}_{s}(z)\) is the background buoyancy which is linear in \(z\). Upon defining the buoyancy frequency as \(N^{2}=\partial\overline{b}_{s}/\partial z\), the VSM equations can be written as \[\begin{split}\partial_{t}\mathbf{u}_{S}+\mathbf{u}_{S}\cdot\nabla\mathbf{u}_{S}&=fu_{T}\widehat{x}+b_{s}^{\prime}\widehat{z}-\nabla p\,,\\ \partial_{t}u_{T}+\mathbf{u}_{S}\cdot\nabla u_{T}&=-f\mathbf{u}_{S}\cdot\widehat{x}-s\frac{g}{\vartheta_{0}}\left(z-\frac{H}{2}\right)\,,\\ \partial_{t}b_{s}^{\prime}+\mathbf{u}_{S}\cdot\nabla b_{s}^{\prime}+N^{2}\mathbf{u}_{S}\cdot\widehat{z}+s\frac{g}{\vartheta_{0}}u_{T}&=0\,,\\ \nabla\cdot\mathbf{u}_{S}&=0\,.\end{split} \tag{2.32}\] The numerical method we employ here to solve the VSM equations (2.32) is the compatible finite element method [13] based on the finite element discretisation for VSMs discussed in [47]. We summarise the numerical methods as follows. The vertical slice domain \(M\) is discretised on the mesh \(\Omega\) using quadrilateral finite elements. For this discretisation, we approximate the prognostic variables \((\mathbf{u}_{S},u_{T},b^{\prime}_{s},p)\), respectively, in terms of the finite element spaces \[(\mathring{RT}_{1}(\Omega),DG_{1}(\Omega),CG_{2}(\Omega_{h})\otimes DG_{1}(\Omega_{v}),DG_{1}(\Omega))\,.\] Here, the various finite element function spaces are denoted as follows: \(RT_{1}(\Omega)\) denotes the Raviart-Thomas space of polynomial degree \(1\); \(DG_{1}(\Omega)\) denotes the discontinuous finite element space of polynomial degree \(1\); and \(CG_{2}(\Omega)\) denotes the continuous finite element space of polynomial degree \(2\). The space \(\mathring{RT}_{1}(\Omega)\) is the subspace of \(RT_{1}(\Omega)\) defined by \(\mathring{RT}_{1}(\Omega)=\{\mathbf{u}\in RT_{1}(\Omega):\mathbf{u}\cdot\mathbf{n}=0\quad\text{on}\quad\partial\Omega\}\).
The finite element space for the buoyancy, \(CG_{2}(\Omega_{h})\otimes DG_{1}(\Omega_{v})\), is the continuous finite element space in the horizontal direction and tensor product discontinuous finite element space in the vertical direction, so as to replicate Charney-Phillips grid staggering. For the temporal discretisation, we use a semi-implicit scheme in which we modify the explicit third order strong stability preserving Runge-Kutta (SSPRK3) scheme by introducing an implicit time average applied to the forcing terms, as well as to the advecting velocity field \(\mathbf{u}_{S}\) in the advection terms in all of the evolution equations. The resulting nonlinear system is solved using a Picard iteration scheme in which the magnitude of the residual is enforced to be less than a prescribed tolerance. The physical parameters used in the example simulation are obtained by adapting the parameters used in [47] to oceanic, rather than atmospheric, conditions. The computational domain is a rectangle \([-L,L]\times[0,H]\) where \(L=10000m\) and \(H=100m\). The boundary conditions are periodic on the lateral boundaries and the in-slice velocity \(\mathbf{u}_{S}\) has no normal component on the top and bottom boundaries. These boundary conditions are sufficient in the absence of explicit diffusion terms. The aspect ratio is given by \(\sigma:=H/L=10^{-2}\), which is applicable to ocean submesoscale dynamics. The rotation frequency \(f=10^{-4}s^{-1}\), gravity \(g=10ms^{-2}\), squared buoyancy frequency \(N^{2}:=\frac{\partial\overline{b}_{s}}{\partial z}=\frac{g}{\vartheta_{0}}\frac{\partial\vartheta_{s}}{\partial z}=10^{-6}s^{-2}\), the constant \(y\) derivative of potential temperature \(s=\partial\vartheta/\partial y\) and reference temperature \(\vartheta_{0}\) are set such that \(gs/\vartheta_{0}=10^{-7}\). The initial condition for \(b^{\prime}_{s}\) is taken to be a perturbation away from the stable, stratified buoyancy field, which takes the form given in [47], \[b^{\prime}_{s}(x,z,0)=aN\left(-\left[1-\frac{1}{4}\coth\left(\frac{1}{4}\right)\right]\sinh Z\cos\left(\frac{\pi x}{L}\right)-\frac{n}{4}\cosh Z\sin\left(\frac{\pi x}{L}\right)\right)\,, \tag{2.33}\] where \(a=7.5\) is the amplitude of the perturbation, \(Z\) is a modified vertical coordinate, and the constant \(n\) is defined as \[n=2\left(\left[\frac{1}{4}-\tanh\left(\frac{1}{4}\right)\right]\left[\frac{1}{4}-\coth\left(\frac{1}{4}\right)\right]\right)^{1/2}\,.\] In Figure 2, the left hand column and the right hand column show snapshots of the transverse velocity \(u_{T}\) and the buoyancy \(b^{\prime}_{s}\), respectively, at days \(4\), \(7\), \(9\) and \(14\) of the simulation.
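Before turning to the results, the sketch below (our addition) shows the explicit SSPRK3 building block in Shu-Osher form; the implicit time averaging of the forcing and advecting-velocity terms and the Picard loop described above are omitted, and the one-dimensional advection problem is purely illustrative.

```python
import numpy as np

def ssprk3_step(q, rhs, dt):
    """One explicit SSPRK3 step (Shu-Osher form) for dq/dt = rhs(q)."""
    q1 = q + dt * rhs(q)
    q2 = 0.75 * q + 0.25 * (q1 + dt * rhs(q1))
    return q / 3.0 + (2.0 / 3.0) * (q2 + dt * rhs(q2))

# Illustrative use: periodic 1D advection at constant speed c, standing in for
# the advective updates of (u_S, u_T, b_s') in the scheme described above.
nx, c, dt = 200, 1.0, 0.002
dx = 1.0 / nx
xgrid = np.linspace(0.0, 1.0, nx, endpoint=False)
q = np.exp(-200.0 * (xgrid - 0.5) ** 2)
rhs = lambda q: -c * (np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)
for _ in range(100):
    q = ssprk3_step(q, rhs, dt)
```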
The model displays signs of frontogenesis in the \(u_{T}\) field at day \(4\), when sharp gradients start consolidating near the center of the domain. At the same time, the buoyancy field is tracking the region of large gradients in \(u_{T}\), and large buoyancy gradients can be found in the north east and south west regions. At day \(7\), the front can be seen clearly in both the \(u_{T}\) and \(b^{\prime}_{s}\) fields, and it is tilted eastwards starting from the bottom of the domain. The front weakens over time and is then regenerated with a westward tilt on day \(9\), as is visible in the \(u_{T}\) snapshot. The tilted front then rotates clockwise and displays a westward tilt on day \(14\), with a magnitude similar to that of the front on day \(7\). The weakening and reformation of the front continue over another \(7\) day cycle. Consequently, we conclude that the VSM dynamics shows a quasi-periodic formation of fronts. Figure 3 shows snapshots of the horizontal component of the velocity field \(\mathbf{u}_{S}\) and the pressure field \(p\) at day \(14\). The presence of the low pressure zone in the center of the pressure field, aligned with the front in the \(u_{T}\) field, suggests a cyclone/anticyclone pair. This is further suggested by the lack of horizontal velocity in the frontal regions. Thus, the Lagrangian fluid particles do not cross the front. Instead, they travel vertically upwards and downwards on the left and right of the frontal regions, respectively. The difference in sign of \(u_{T}\) across the front then completes the interpretation of this configuration of the VSM simulation as a cyclone/anticyclone pair.

## 3 Wave mean flow interaction (WMFI)

Describing the interacting dynamics between a mean flow and the fluctuating motion around it is a central problem in geophysical fluid dynamics. For fluid flows on a planetary scale, the average behaviour of the system is of particular interest, since it can be computationally resolved. However, the interactions between the mean and fluctuating components of the flow are integral to understanding the formation and subsequent dynamics of instabilities. These instabilities can dramatically influence the nature of the mean flow and suitable parameterisations are required to model them effectively [19]. This section explores WMFI within the context of VSMs.

### Clebsch-Lin formulation of the EB VSM with wave dynamics

To formulate a general coupling between the EB VSM dynamics and wave activity, let us first assume that the wave dynamics is described by a wave Hamiltonian \(H_{W}(N,\phi)\). Here the wave degrees of freedom \((N,\phi)\) are the wave action density and the wave phase, respectively, with wave vector \(\mathbf{k}=\nabla\phi\), describing a single-frequency wave propagating on the vertical slice domain \(\mathcal{S}\) with area element \(d^{2}x_{s}=dxdz\). The coupling between the EB VSM degrees of freedom and the wave degrees of freedom occurs in Hamilton's variational principle, where the phase-space Lagrangian of the wave motion is boosted into the frame of the in-slice fluid motion, in the same fashion as the out-of-slice fluid variables. That is, we augment the Clebsch-Lin form of the Hamilton variational principle in equation (2.15) to have

Figure 2: Snapshots of the transverse velocity field \(u_{T}\) (left) and the buoyancy field \(b^{\prime}_{s}\) (right) of the numerical simulation of the VSM (2.32) at days \(4,7,9\) and \(14\).
The solutions are periodic on the lateral boundaries and the in-slice velocity \(\mathbf{u}_{S}\) has no normal component on the top and bottom boundaries, so the buoyancy field is advected horizontally at the top and bottom of the domain. the following \[\begin{split} 0=\delta S&=\delta\int_{a}^{b}\int_{\mathcal{S}}\ell(\mathbf{u}_{S},u_{T},D,\vartheta_{s},\phi;p,\pi_{T},N)\\ &=\delta\int_{a}^{b}\int_{\mathcal{S}}\left(\tfrac{1}{2}D|\mathbf{u}_{S}|^{2}+\tfrac{1}{2}Du_{T}^{2}+Du_{T}fx+D\gamma(z)\vartheta_{s}-p(D-1)\right.\\ &\qquad\qquad\qquad\left.-\pi_{T}\left(\partial_{t}\vartheta_{s}+\mathbf{u}_{S}\cdot\nabla\vartheta_{s}+su_{T}\right)-N(\partial_{t}\phi+\mathbf{u}_{S}\cdot\nabla\phi)\,d^{2}x_{s}+H_{W}(N,\mathbf{k})\right)dt\\ =\int_{a}^{b}\int_{\mathcal{S}}& B\,\delta D+\delta\mathbf{u}_{S}\cdot\left(D\mathbf{u}_{S}-\pi_{T}\nabla\vartheta_{s}-N\nabla\phi\right)+\delta u_{T}\left(D(u_{T}+fx)-s\pi_{T}\right)-\delta p(D-1)\\ &\qquad+\delta\vartheta_{s}\left(\partial_{t}\pi_{T}+\mathrm{div}(\pi_{T}\mathbf{u}_{S})+D\gamma(z)\right)-\delta\pi_{T}\left(\partial_{t}\vartheta_{s}+\mathbf{u}_{S}\cdot\nabla\vartheta_{s}+su_{T}\right)\\ &\qquad+\delta\phi\left(\partial_{t}N+\mathrm{div}\left(N\mathbf{u}_{S}+\frac{\delta H_{W}}{\delta\mathbf{k}}\right)\right)-\delta N\left(\partial_{t}\phi+\mathbf{u}_{S}\cdot\nabla\phi-\frac{\delta H_{W}}{\delta N}\right)\,\,d^{2}x_{s}dt\,.\end{split} \tag{3.1}\] Here, we have arbitrary variations for the two pairs of Clebsch variables \((\vartheta_{s},\pi_{T}),(\phi,N)\) and the associated out of slice velocity \(u_{T}\); we also have constrained variations for the Euler-Poincare variables \[\delta u_{s}=\partial_{t}\xi-\mathrm{ad}_{u_{s}}\xi\quad\text{and}\quad\delta(Dd^{2}x_{s})=-\,\mathcal{L}_{\xi}(Dd^{2}x_{s})\,, \tag{3.2}\] for arbitrary variations \(\xi\). The constrained variations of \(Dd^{2}x_{s}\) imply the advection of \(Dd^{2}x_{s}\), written in coordinates as \[\partial_{t}D+\mathrm{div}(D\mathbf{u}_{S})=0\,,\quad\text{or}\quad(\partial_{t}+\mathcal{L}_{u_{s}})\left(Dd^{2}x_{s}\right)=0\,. \tag{3.3}\] As before in (2.17), we define the Bernoulli function, \(B\), with \(g\), \(\theta_{0}\), \(H\) constants, as \[B:=\frac{\delta\ell}{\delta D}=\tfrac{1}{2}|\mathbf{u}_{S}|^{2}+\tfrac{1}{2}u_{T}^{2}+u_{T}fx+\gamma(z)\vartheta_{s}-p\quad\text{and}\quad\gamma(z):=\frac{g}{\theta_{0}}\left(z-\frac{H}{2}\right)\,. \tag{3.4}\] Let us briefly remark on the terms appearing in the Lagrangian (3.1). The first line is the fluid Lagrangian for EB fluids in the vertical slice [37]. The first term in the second line is the phase-space Lagrangian that couples the thermal degree of freedom in the slice to the transverse fluid flow through the slice via the pairing \(-\int_{\mathcal{S}}\pi_{T}\nabla\vartheta_{s}\cdot\mathbf{u}_{S}\,d^{2}x\). Similarly, the second and third terms in the second line form the phase-space Lagrangian that couples the wave dynamics to the in-slice fluid flow via the coupling \(-\int_{\mathcal{S}}N\nabla\phi\cdot\mathbf{u}_{S}\,d^{2}x\). These two coupling terms in the Lagrangian in (3.1) introduce additional momentum components to the fluid momentum in the slice.
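To see explicitly how these coupling terms feed into the momentum, the following symbolic sketch (our addition) takes the pointwise \(\mathbf{u}_{S}\) and \(u_{T}\) derivatives of the integrand of (3.1) and recovers the momentum variables that will be collected in (3.6).

```python
import sympy as sp

D, piT, N, f, x, s, p = sp.symbols('D pi_T N f x s p')
u1, u3, uT = sp.symbols('u1 u3 u_T')            # in-slice and transverse velocities
dth_x, dth_z = sp.symbols('dth_x dth_z')        # components of grad(theta_s)
dphi_x, dphi_z = sp.symbols('dphi_x dphi_z')    # components of grad(phi) = k
dth_t, dphi_t = sp.symbols('dth_t dphi_t')      # time derivatives in the constraints
gamma, theta_s = sp.symbols('gamma theta_s')

# Pointwise integrand of the wave-coupled Clebsch-Lin Lagrangian (3.1).
integrand = (sp.Rational(1, 2) * D * (u1**2 + u3**2) + sp.Rational(1, 2) * D * uT**2
             + D * uT * f * x + D * gamma * theta_s - p * (D - 1)
             - piT * (dth_t + u1 * dth_x + u3 * dth_z + s * uT)
             - N * (dphi_t + u1 * dphi_x + u3 * dphi_z))

# u_S-derivatives give the in-slice momentum D u_S - pi_T grad(theta_s) - N grad(phi).
print(sp.simplify(sp.diff(integrand, u1) - (D * u1 - piT * dth_x - N * dphi_x)))  # -> 0
print(sp.simplify(sp.diff(integrand, u3) - (D * u3 - piT * dth_z - N * dphi_z)))  # -> 0
# u_T-derivative gives D(u_T + f x) - s pi_T, the coefficient of delta u_T in (3.1).
print(sp.simplify(sp.diff(integrand, uT) - (D * (uT + f * x) - s * piT)))         # -> 0
```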
Figure 3: Snapshots of the horizontal component of the in-slice velocity field \(\mathbf{u}_{S}\cdot\widehat{\mathbf{x}}\) (left) and the pressure field \(p\) (right) of the numerical simulation of the VSM (2.32) at day 14.

Upon substituting the Euler-Poincare variations (3.2) into the variational results for \(\delta S\) above in (3.1) and integrating by parts, one finds the equation of motion and auxiliary equations in Lie derivative form, as \[\begin{split}\left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(D^{-1}\widetilde{\mathbf{M}}\cdot d\mathbf{x}\right)&=dB\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(\pi_{T}d^{2}x_{s}\right)&=-D\gamma(z)d^{2}x_{s}\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\vartheta_{s}&=-su_{T}\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(Nd^{2}x_{s}\right)&=-\mathrm{div}\left(\frac{\delta H_{W}}{\delta\mathbf{k}}\right)\,d^{2}x_{s}\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\phi&=\frac{\delta H_{W}}{\delta N}\,,\\ \left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(Dd^{2}x_{s}\right)&=0\,,\quad\text{with}\quad D=1\,.\end{split} \tag{3.5}\] Here we have the incompressibility constraint \(D=1\) enforced by the Lagrange multiplier \(p\), and we have the following definitions for the momentum variables \[\begin{split}\widetilde{\mathbf{M}}&:=D\mathbf{u}_{S}-\pi_{T}\nabla\vartheta_{s}-N\mathbf{k}\,,\\ \pi_{T}&:=s^{-1}D(u_{T}+fx)\,.\end{split} \tag{3.6}\] Compared with the standard EB VSM in (2.20), we note that the untangled in-slice fluid momentum \(\widetilde{\mathbf{M}}\) also has contributions from the additional wave degrees of freedom.

**Theorem 3.1** (Kelvin-Noether theorem for the Euler-Boussinesq VSM).: _The Euler-Boussinesq vertical slice model in equation (3.5) satisfies_ \[\frac{d}{dt}\oint_{\gamma_{t}}D^{-1}\widetilde{\mathbf{M}}\cdot d\mathbf{x}=\oint_{\gamma_{t}}dB=0\,, \tag{3.7}\] _where \(\gamma_{t}:C^{1}\mapsto M\) is a closed loop moving with the flow \(\gamma_{t}=\phi_{t}\gamma_{0}\) generated by the vector field \(u_{s}=\dot{\phi}_{t}\phi_{t}^{-1}\)._

**Remark 3.1** (PV conservation for the VSM with waves).: _Under similar considerations as for PV conservation for the EB VSM (without waves) in Remark 2.4, i.e. \(d(D^{-1}\widetilde{\mathbf{M}}\cdot d\mathbf{x})=\widehat{\mathbf{y}}\cdot\mathrm{curl}(D^{-1}\widetilde{\mathbf{M}})\,d^{2}x_{s}\), \(d^{2}B=0\) and \(D=1\), the first equation in (3.5) implies potential vorticity advection (conservation of PV on fluid parcels)_ \[\left(\partial_{t}+\mathcal{L}_{u_{s}}\right)d\left(D^{-1}\widetilde{\mathbf{M}}\cdot d\mathbf{x}\right)=0\quad\Longrightarrow\quad\left(\partial_{t}+\mathbf{u}_{S}\cdot\nabla\right)\left(\widehat{\mathbf{y}}\cdot\mathrm{curl}\mathbf{u}_{S}+J(\pi_{T}/D,\vartheta_{s})+J(N/D,\phi)\right)=0\,, \tag{3.8}\] _where, as before, \(J(a,b)dx\wedge dz=da\wedge db\). We may write PV conservation on fluid parcels for the VSM with wave dynamics as_ \[\left(\partial_{t}+\mathbf{u}_{S}\cdot\nabla\right)\widetilde{q}=\partial_{t}\widetilde{q}+J(\psi_{s},\widetilde{q})=0\quad\text{with}\quad\widetilde{q}:=\Delta\psi_{s}+J(\pi_{T}/D,\vartheta_{s})+J(N/D,\phi)\,, \tag{3.9}\] _where \(\psi_{s}\) is the stream function for the slice velocity in the same manner as presented in Remark 2.4._ Next, let us rewrite the Kelvin circulation theorem in equation (3.7) in terms of the fluid momentum alone, \[\mathbf{m}_{S}=\widetilde{\mathbf{M}}+\pi_{T}\nabla\vartheta_{s}+N\nabla\phi=D\mathbf{u}_{S}\,.
\tag{3.10}\] As we shall see in the Hamiltonian formulation, the mapping from the total momentum \(\widetilde{\mathbf{M}}\) to the in-slice fluid momentum \(\mathbf{m}_{S}\) is known as the _tangling map_. The motion equation for the fluid momentum \(\mathbf{m}_{S}\) in (3.10) is given by \[\left(\partial_{t}+\mathcal{L}_{u_{s}}\right)\left(\mathbf{m}_{S}\cdot d\mathbf{x}\otimes d^{2}x_{s}\right)=\left(DdB-D\gamma(z)d\vartheta_{s}-\frac{s}{2}d\pi_{T}^{2}-\mathrm{div}\left(\frac{\delta H_{W}}{\delta\mathbf{k}}\right)d\phi+Nd\left(\frac{\delta H_{W}}{\delta N}\right)\right)\otimes d^{2}x_{s}\,, \tag{3.11}\] where we have used the evolution equations of \(\pi_{T}\nabla\vartheta_{s}\) and \(N\nabla\phi\), which follow from the equations in (3.5). Noting that \(Dd^{2}x\) is still advected, from (3.11) one obtains the equation for the circulation 1-form \(D^{-1}\mathbf{m}_{S}\) needed for the Kelvin theorem, \[(\partial_{t}+\mathcal{L}_{u_{s}})\left(D^{-1}\mathbf{m}_{S}\cdot d\mathbf{x}\right)=dB-\gamma(z)d\vartheta_{s}-\frac{s}{2D}d\pi_{T}^{2}-\frac{1}{D}\mathrm{div}\left(\frac{\delta H_{W}}{\delta\mathbf{k}}\right)d\phi+\frac{N}{D}d\left(\frac{\delta H_{W}}{\delta N}\right)\,. \tag{3.12}\] The final two terms in equation (3.12) can be evaluated in similar form to the 3D WKB closure of the GLM EB equations in [22, 28], where we further specialise to the 2D slice domain. Namely, \[H_{W}=-\int_{M}N\omega(\mathbf{k})\,d^{2}x\,,\quad\text{so that}\quad\frac{\delta H_{W}}{\delta N}\Big{|}_{\mathbf{k}}=-\,\omega(\mathbf{k})\,,\quad\text{and}\quad\frac{\delta H_{W}}{\delta\mathbf{k}}\Big{|}_{N}=-N\frac{\partial\omega(\mathbf{k})}{\partial\mathbf{k}}\,. \tag{3.13}\] In Subsection 3.3, we will use a more concrete wave Hamiltonian \(H_{W}\), derived by using the WKB closure of GLM.

**Remark 3.2** (Kelvin circulation theorem for WKB wave-current interaction).: _The motion equation (3.11) implies the following Kelvin circulation theorem for WKB wave-current interaction in the EB vertical slice model dynamics with internal gravity waves,_ \[\begin{split}\frac{d}{dt}\oint_{c(u_{s})}&\frac{1}{D}\mathbf{m}_{S}\cdot d\mathbf{x}=\oint_{c(u_{s})}(\partial_{t}+\mathcal{L}_{u_{s}})\left(\frac{1}{D}\mathbf{m}_{S}\cdot d\mathbf{x}\right)\\ &=\oint_{c(u_{s})}d\left(B-\frac{s\pi_{T}^{2}}{2D}\right)-\gamma(z)d\vartheta_{s}+\underbrace{\frac{1}{D}\bigg{(}\mathbf{k}\,\mathrm{div}\left(N\frac{\partial\omega(\mathbf{k})}{\partial\mathbf{k}}\right)+N\nabla\omega(\mathbf{k})\bigg{)}}_{\text{IGW Forcing}}\cdot d\mathbf{x}\,,\end{split} \tag{3.14}\] _where \(c(u_{s})\) is a material loop moving with the flow velocity \(\mathbf{u}_{S}(x,z,t)\) in the vertical slice. We remark that the notation \(\nabla\omega(\mathbf{k})\) denotes the spatial gradient of \(\omega\) to avoid confusion. In (3.14), the fluid quantities \(B\) and \(\mathbf{m}_{S}\) are defined in equations (3.4) and (3.10). The wave quantity \(\frac{\partial\omega(\mathbf{k})}{\partial\mathbf{k}}\) is defined in equation (3.13). Besides the usual effect of fluid circulation generated by horizontal components of temperature gradients, one sees the generation of fluid circulation caused by internal gravity wave quantities, which can be evaluated using the expressions in equation (3.13)._

Hamiltonian formulation of the EB VSM with wave dynamics. The Hamiltonian formulation of the EB VSM with wave dynamics tracks closely that of the EB VSM without waves, since the wave degrees of freedom have a similar Hamiltonian structure to the transverse fluid momentum and temperature.
As the wave Hamiltonian \(H_{W}\) is separate from the EB VSM Lagrangian, one can define the tangled Hamiltonian \(\tilde{h}_{T}(\mathbf{m}_{S},D,\pi_{T},\vartheta_{s},N,\phi)\) by summing \(H_{W}\) with the tangled Hamiltonian for the EB VSM in (2.26) to have \[\begin{split}\widetilde{h}_{T}(\mathbf{m}_{S},D,\pi_{T},\vartheta_{s},N,\phi;p):=\int_{\mathcal{S}}\frac{|\mathbf{m}_{S}|^{2}}{2D}+\frac{(s\pi_{T}-Dfx)^{2}}{2D}-D\gamma(z)\vartheta_{s}+p(D-1)-H_{W}(N,\phi)\,d^{2}x_{s}\,.\end{split} \tag{3.15}\]

**Remark 3.3**.: _We remark that the minus sign of the wave Hamiltonian \(H_{W}\) appearing in (3.15) is due to the overall minus sign chosen for the phase space Lagrangian \(-N\left(\partial_{t}\phi+\mathbf{u}_{S}\cdot\nabla\phi\right)+H_{W}\) in (3.1)._

The variations of this Hamiltonian are given by \[\begin{split}\delta\widetilde{h}_{T}:=\int_{\mathcal{S}}\mathbf{u}_{S}&\cdot\delta\mathbf{m}_{S}-B\,\delta D+\left(\mathbf{u}_{S}\cdot\nabla\vartheta_{s}+su_{T}\right)\delta\pi_{T}-\left(\mathrm{div}(D\mathbf{u}_{S})+D\gamma(z)\right)\delta\vartheta_{s}\\ &\qquad+\left(\mathbf{u}_{S}\cdot\nabla\phi+\frac{\delta H_{W}}{\delta N}\right)\delta N-\left(\mathrm{div}(N\mathbf{u}_{S})-\frac{\delta H_{W}}{\delta\phi}\right)\delta\phi+(D-1)\,\delta p\,d^{2}x_{s}\,,\end{split} \tag{3.16}\] where the expression \(B\) is given by, cf. equations (2.16) and (2.17), \[\begin{split}\frac{\delta\widetilde{h}_{T}}{\delta D}&=p-\frac{1}{2D^{2}}|\widetilde{\mathbf{M}}+\pi_{T}\nabla\vartheta_{s}+N\nabla\phi|^{2}-\frac{s^{2}}{2D^{2}}\pi_{T}^{2}+\tfrac{1}{2}(fx)^{2}-\gamma(z)\vartheta_{s}\\ &=p-\tfrac{1}{2}|\mathbf{u}_{S}|^{2}-\tfrac{1}{2}u_{T}^{2}-u_{T}fx-\gamma(z)\vartheta_{s}=-\frac{\delta\ell}{\delta D}=-B\,.\end{split} \tag{3.17}\] The corresponding Hamiltonian equations are obtained from the following matrix operator, which comprises an \(s\)-weighted symplectic \(2\times 2\) block in \((\pi_{T},\vartheta_{s})\) in the centre and a symplectic \(2\times 2\) block in \((N,\phi)\) on the lower right, which are coupled to the Lie-Poisson \(2\times 2\) block in \((\mathbf{m}_{S},D)\) on the upper left through the semi-direct product structure. Namely, in coordinate form, we have \[\frac{\partial}{\partial t}\begin{bmatrix}m_{Sj}\\ D\\ \pi_{T}\\ \vartheta_{s}\\ N\\ \phi\end{bmatrix}=-\begin{bmatrix}m_{Sk}\partial_{j}+\partial_{k}m_{Sj}&D\partial_{j}&\pi_{T}\partial_{j}&-\vartheta_{s,j}&N\partial_{j}&\phi_{,j}\\ \partial_{k}D&0&0&0&0&0\\ \partial_{k}\pi_{T}&0&0&-s&0&0\\ \vartheta_{s,k}&0&s&0&0&0\\ \partial_{k}N&0&0&0&0&-1\\ \phi_{,k}&0&0&0&1&0\end{bmatrix}\begin{bmatrix}\delta\widetilde{h}_{T}/\delta m_{Sk}={u_{s}}^{k}\\ \delta\widetilde{h}_{T}/\delta D=-B\\ \delta\widetilde{h}_{T}/\delta\pi_{T}=su_{T}\\ \delta\widetilde{h}_{T}/\delta\vartheta_{s}=-D\gamma(z)\\ \delta\widetilde{h}_{T}/\delta N=-\delta H_{W}/\delta N\\ \delta\widetilde{h}_{T}/\delta\phi=-\delta H_{W}/\delta\phi\end{bmatrix}\,.
\tag{3.18}\] To untangle the canonical symplectic structure in the pairs of variables \((\pi_{T},\vartheta_{s})\) and \((N,\phi)\) from the semi-direct product Lie-Poisson structure of \((\mathbf{m}_{S},D)\), we consider the untangled Hamiltonian \(\widetilde{h}_{UT}\) in terms of the total momentum \(\widetilde{\mathbf{M}}\) \[\begin{split}&\widetilde{h}_{UT}(\widetilde{\mathbf{M}},D,\pi_{T},\vartheta_{s},N,\phi)\\ &:=\int_{\mathcal{S}}\frac{1}{2D}|\widetilde{\mathbf{M}}+\pi_{T}\nabla\vartheta_{s}+N\nabla\phi|^{2}+\frac{1}{2D}\big{(}s\pi_{T}-Dfx\big{)}^{2}-D\gamma(z)\vartheta_{s}+p(D-1)-H_{W}(N,\phi)\,d^{2}x_{s}\,.\end{split} \tag{3.19}\] Then, the Hamiltonian equations corresponding to (3.19) can be written in terms of the following block-diagonal matrix operator form \[\frac{\partial}{\partial t}\begin{bmatrix}\widetilde{M}_{j}\\ D\\ \pi_{T}\\ N\\ \phi\end{bmatrix}=-\begin{bmatrix}\widetilde{M}_{k}\partial_{j}+\partial_{k}\widetilde{M}_{j}&D\partial_{j}&0&0&0&0\\ \partial_{k}D&0&0&0&0&0\\ 0&0&0&-1&0&0\\ 0&0&1&0&0&0\\ 0&0&0&0&0&-1\\ 0&0&0&0&1&0\end{bmatrix}\begin{bmatrix}\delta\widetilde{h}_{UT}/\delta\widetilde{M}_{k}={u_{s}}^{k}\\ \delta\widetilde{h}_{UT}/\delta D=-B\\ \delta\widetilde{h}_{UT}/\delta\pi_{T}=\mathbf{u}_{S}\cdot\nabla\vartheta_{s}+su_{T}\\ \delta\widetilde{h}_{UT}/\delta\vartheta_{s}=-\operatorname{div}(D\mathbf{u}_{S})-D\gamma(z)\\ \delta\widetilde{h}_{UT}/\delta N=\mathbf{u}_{S}\cdot\nabla\phi-\delta H_{W}/\delta N\\ \delta\widetilde{h}_{UT}/\delta\phi=-\operatorname{div}(N\mathbf{u}_{S})-\delta H_{W}/\delta\phi\end{bmatrix}\,, \tag{3.20}\] where we note that the Lie-Poisson \(2\times 2\) block in \((\widetilde{\mathbf{M}},D)\) on the upper left, a symplectic \(2\times 2\) block in \((\pi_{T},\vartheta_{s})\) in the centre, and another symplectic \(2\times 2\) block in \((N,\phi)\) on the lower right are completely separate.

### The geometry of the generalised Lagrangian mean (GLM) approach to wave-current interaction

One particular wave Hamiltonian of interest is that corresponding to a closure of GLM. Following the initial contribution of Andrews and McIntyre [2], a number of papers have emerged which seek to describe the generalised Lagrangian mean (GLM) approach to wave mean flow interaction (WMFI) from a geometric perspective. Notable contributions to this field were made in the context of an Euler-Boussinesq fluid [22], for general Euler-Poincare systems on semidirect product Lie algebras [29], and, more recently, on any real Riemannian manifold [21].

A review of WMFI in an Euler-Boussinesq fluid. Gjaja and Holm [22] closed the GLM approach to WMFI [2] for stratified rotating incompressible fluid flow in the Euler-Boussinesq (EB) approximation. This was accomplished by assuming a complex vector Wentzel-Kramers-Brillouin (WKB) representation of the internal wave amplitude and performing an asymptotic expansion of Hamilton's principle for the EB model to several orders in the ratio of wave-amplitude to wave-length, \(\alpha\), and the phase fluctuation ratio, \(\epsilon\). Recently [32], the GH WKB closure of GLM at quadratic order \(O(\alpha^{2})\) in the wave amplitude, \(\alpha\), has been expressed as a composition of two maps and derived through the semidirect product [37] variational structure of an Euler-Boussinesq fluid.
Understanding this approach by viewing the flow map as a composition of maps gives this model an intrinsic connection to the previous work of the authors on wave-current interaction from the viewpoint of a composition of two maps [30, 31, 32]. In the geometric description of the Landau paradigm, which separates a turbulent flow into fluctuations around a mean flow, the flow map considered here is expressed as the composition of a fluctuation map acting via pullback on a mean flow map. Taking the time derivative of the Lagrangian trajectory corresponding to this map results in two velocities, the _Lagrangian mean_ and _Lagrangian disturbance_ velocities, as introduced in [2]. To achieve the aforementioned closure [22, 32], a WKB approximation is made in the displacement part of the Lagrangian trajectory. Performing the resulting approximations and expansions in Hamilton's Principle gives a closed system of equations for the mean flow and wave motion, featuring both the wave effects on currents and the effect of the currents on the waves. In particular, a dispersion relation was found for the Doppler-shifted frequency which extends that of the classical setting. In the classical setting it is assumed that vertical derivatives of the pressure dominate and a _buoyancy frequency_ is introduced to represent this effect. However, in the dispersion relation found for wave mean flow interactions [22, 32], the effect of the fluid pressure on the (Doppler-shifted) wave frequency is featured in its entirety. As we will see in Proposition 3.1, a similar dispersion relation can be found in the vertical slice framework, and it reduces to the standard theory of two dimensional internal gravity waves under the classical assumptions.

### A wave mean flow decomposition of dynamics in the vertical slice

As in the three dimensional case, we begin by considering a flow map which is a composition between a mean flow and a fluctuating part. In particular, we represent the spatial path of the flow map as \[\begin{pmatrix}x_{t}^{\xi}\\ y_{t}^{\xi}\\ z_{t}^{\xi}\end{pmatrix}=g_{t}\begin{pmatrix}X\\ Y\\ Z\end{pmatrix}=(Id+\alpha\xi_{t})\circ\bar{g}_{t}\begin{pmatrix}X\\ Y\\ Z\end{pmatrix}=\begin{pmatrix}x_{t}\\ y_{t}\\ z_{t}\end{pmatrix}+\alpha\mathbf{\xi}_{t}\begin{pmatrix}x_{t}\\ y_{t}\\ z_{t}\end{pmatrix}\,, \tag{3.21}\] where \((x^{\xi},y^{\xi},z^{\xi})\) are the coordinates of the full trajectory and \((x,y,z)\) are the _mean_ coordinates. As in the standard case, we will reserve the bold notation to mean the in-slice coordinates \(\mathbf{x}^{\xi}=(x^{\xi},z^{\xi})\), \(\mathbf{x}=(x,z)\), and \(\mathbf{X}=(X,Z)\). Here, the mean flow map, \(\bar{g}_{t}\), is assumed to be of the form (2.1) and corresponds to the _Lagrangian mean velocity_ \[\dot{\bar{g}}\bar{g}^{-1}\mathbf{x}_{t}=\overline{\mathbf{u}}_{t}(x_{t},z_{t})=(\overline{\mathbf{u}}_{S},\overline{u}_{T})\,, \tag{3.22}\] in which the bar denotes the Lagrangian mean. The fluctuation term \(\mathbf{\xi}\) in (3.21) is defined to be an array of three scalar displacements in position defined on the vertical slice domain as functions of \(x\) and \(z\), i.e. elements of \(\mathcal{F}(M)\). The array of displacements \(\mathbf{\xi}\) can be split into its slice and transverse components as \(\mathbf{\xi}=(\xi_{1},\xi_{2},\xi_{3})=(\mathbf{\xi}_{S},\xi_{T})\).
Due to these assumptions and with the above notation, equation (3.21) can be rewritten as a pair of equations \[\mathbf{x}_{t}^{\xi}=g_{t}\mathbf{X} =(Id+\alpha\xi_{t})\circ\bar{g}_{t}\mathbf{X}=\mathbf{x}_{t}+\alpha\boldsymbol{\xi}_{S}(\mathbf{x}_{t},t)\,,\] \[y_{t}^{\xi}=g_{t}Y =(Id+\alpha\xi_{t})\circ\bar{g}_{t}Y=y_{t}+\alpha\xi_{T}(\mathbf{x}_{t},t)\,.\] In this set-up, the full map \(g_{t}\) is also a map of the type described by equation (2.1) and thus can be described by an element of \(\mathrm{Diff}(M)\ltimes\mathcal{F}(M)\). Indeed, since \(\bar{g}_{t}\) is of the form (2.1) and each component of \(\boldsymbol{\xi}\) is a function of \(x\) and \(z\) only, we have \[\begin{pmatrix}x_{t}^{\xi}\\ y_{t}^{\xi}\\ z_{t}^{\xi}\end{pmatrix}=(Id+\alpha\xi_{t})\begin{pmatrix}x_{t}(X,Z)\\ y_{t}(X,Z)+Y\\ z_{t}(X,Z)\end{pmatrix}=\begin{pmatrix}x_{t}(X,Z)+\alpha\xi_{1}(x_{t}(X,Z),z_{t}(X,Z))\\ y_{t}(X,Z)+Y+\alpha\xi_{2}(x_{t}(X,Z),z_{t}(X,Z))\\ z_{t}(X,Z)+\alpha\xi_{3}(x_{t}(X,Z),z_{t}(X,Z))\end{pmatrix}\,,\] which is of the form (2.1). Taking the time derivative of the full map \(g_{t}\) reveals the structure of its tangent velocity field \[\begin{split}\mathbf{u}(x_{t}^{\xi},y_{t}^{\xi},z_{t}^{\xi},t)=\frac{d(x_{t}^{\xi},y_{t}^{\xi},z_{t}^{\xi})}{dt}&=\overline{\mathbf{u}}(x_{t},z_{t},t)+\alpha\left(\partial_{t}\boldsymbol{\xi}(x_{t},z_{t},t)+\overline{\mathbf{u}}_{S}(x_{t},z_{t},t)\cdot\nabla\boldsymbol{\xi}(x_{t},z_{t},t)\right)\\ &=\overline{\mathbf{u}}(x_{t},z_{t},t)+\alpha(\partial_{t}+\mathcal{L}_{\overline{\mathbf{u}}_{S}})\begin{pmatrix}\xi_{1}\\ \xi_{2}\\ \xi_{3}\end{pmatrix}\,.\end{split} \tag{3.23}\] Notice that \(\overline{\mathbf{u}}\) and \(\boldsymbol{\xi}\) are both three dimensional objects defined on our two dimensional domain, \(M\). We then make the assumption that the fluctuating part takes the form \[\boldsymbol{\xi}(\mathbf{x},t)=\mathbf{a}(\epsilon\mathbf{x},\epsilon t)e^{i\phi(\epsilon\mathbf{x},\epsilon t)/\epsilon}+\mathbf{a}^{*}(\epsilon\mathbf{x},\epsilon t)e^{-i\phi(\epsilon\mathbf{x},\epsilon t)/\epsilon}\,, \tag{3.24}\] which is motivated by a WKB approximation. The pressure therefore also decomposes into mean and fluctuating parts as \[p(\mathbf{x}^{\xi},t)=p_{0}(\mathbf{x}^{\xi},t)+\sum_{j\geq 1}\alpha^{j}\left(b_{j}(\epsilon\mathbf{x}^{\xi},\epsilon t)e^{ij\phi(\epsilon\mathbf{x}^{\xi},\epsilon t)/\epsilon}+b_{j}^{*}(\epsilon\mathbf{x}^{\xi},\epsilon t)e^{-ij\phi(\epsilon\mathbf{x}^{\xi},\epsilon t)/\epsilon}\right)\,. \tag{3.25}\] Since we have introduced a wave phase variable, \(\phi(\mathbf{x},t)\), we can define the wave vector, frequency, and Doppler-shifted frequency in the standard way \[\mathbf{k}=\nabla\phi\,,\quad\omega=-\partial_{t}\phi\,,\quad\text{and}\quad\widetilde{\omega}=-(\partial_{t}+\mathbf{u}_{S}\cdot\nabla)\phi\,. \tag{3.26}\]
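A small symbolic check (our addition) of the definitions in (3.26): the Doppler-shifted frequency differs from \(\omega\) by \(\mathbf{u}_{S}\cdot\mathbf{k}\), as expected.

```python
import sympy as sp

x, z, t, u1, u3 = sp.symbols('x z t u1 u3')   # u1, u3: in-slice velocity at a fixed point
phi = sp.Function('phi')(x, z, t)             # wave phase

k1, k2 = sp.diff(phi, x), sp.diff(phi, z)     # wave vector k = grad(phi)
omega = -sp.diff(phi, t)                      # frequency
omega_tilde = -(sp.diff(phi, t) + u1 * sp.diff(phi, x) + u3 * sp.diff(phi, z))

# Doppler-shift identity implied by (3.26): omega_tilde = omega - u_S . k
print(sp.simplify(omega_tilde - (omega - (u1 * k1 + u3 * k2))))  # -> 0
```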
As demonstrated in Appendix A, the action corresponding to the Euler-Boussinesq Eady model under these approximations is \[\begin{split} S[\overline{\mathbf{u}}_{S},\overline{u}_{T},D,\theta_{s},p_{0},b,\mathbf{a}]&=\int_{t_{0}}^{t_{1}}\ell[\overline{\mathbf{u}}_{S},\overline{u}_{T},D,\theta_{s},p_{0},b,\mathbf{a}]\,dt\\ &=\int_{t_{0}}^{t_{1}}\int_{M}\frac{D}{2}\bigg{(}\big{|}\overline{\mathbf{u}}_{S}\big{|}^{2}+\overline{u}_{T}^{2}+2\alpha^{2}\widetilde{\omega}^{2}\left(\mathbf{a}_{S}\cdot\mathbf{a}_{S}^{*}+a_{T}a_{T}^{*}\right)\bigg{)}\\ &\qquad\qquad+Df\overline{u}_{T}x+Df\alpha^{2}i\widetilde{\omega}\left(-a_{T}a_{1}^{*}+a_{T}^{*}a_{1}\right)+\frac{g}{\theta_{0}}D\left(z-\frac{H}{2}\right)\theta_{s}\\ &\qquad\qquad-D\alpha^{2}i\left(b\mathbf{k}\cdot\mathbf{a}_{S}^{*}-b^{*}\mathbf{k}\cdot\mathbf{a}_{S}\right)-D\alpha^{2}a_{i}^{*}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\\ &\qquad\qquad+\left(1-D\right)p_{0}+\mathcal{O}(\alpha^{2}\epsilon)\,d^{2}x\,dt\,,\end{split} \tag{3.27}\] where we sum over \(i,j\in\{1,2\}\). By constraining the relationship between \(N\), \(\phi\), and \(\widetilde{\omega}\), we have \[\begin{split} S[\overline{\mathbf{u}}_{S},\overline{u}_{T},D,\theta_{s},p_{0},b,\mathbf{a}]&=\int_{t_{0}}^{t_{1}}\ell[\overline{\mathbf{u}}_{S},\overline{u}_{T},D,\theta_{s},p_{0},b,\mathbf{a}]\,dt\\ &=\int_{t_{0}}^{t_{1}}\int_{M}\frac{D}{2}\bigg{(}\big{|}\overline{\mathbf{u}}_{S}\big{|}^{2}+\overline{u}_{T}^{2}+2\alpha^{2}\widetilde{\omega}^{2}\left(\mathbf{a}_{S}\cdot\mathbf{a}_{S}^{*}+a_{T}a_{T}^{*}\right)\bigg{)}\\ &\qquad\qquad+Df\overline{u}_{T}x+Df\alpha^{2}i\widetilde{\omega}\left(-a_{T}a_{1}^{*}+a_{T}^{*}a_{1}\right)+\frac{g}{\theta_{0}}D\left(z-\frac{H}{2}\right)\theta_{s}\\ &\qquad\qquad-D\alpha^{2}i\left(b\mathbf{k}\cdot\mathbf{a}_{S}^{*}-b^{*}\mathbf{k}\cdot\mathbf{a}_{S}\right)-D\alpha^{2}a_{i}^{*}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}+(1-D)p_{0}\,d^{2}x\\ &\qquad\qquad+\alpha^{2}\left\langle N\,,\,-\frac{\partial}{\partial\epsilon t}\phi-\overline{\mathbf{u}}_{S}\cdot\nabla_{\epsilon\mathbf{x}}\phi-\widetilde{\omega}\right\rangle+\mathcal{O}(\alpha^{2}\epsilon)\,dt\,.\end{split} \tag{3.28}\] Taking variations in (3.28) yields the relations recorded in (3.29), where we have introduced a reduced notation for the variational derivative with respect to the density, \(D\), \[\begin{split}\varpi:=\frac{\delta\ell}{\delta D}=&\ \frac{|\overline{\mathbf{u}}_{S}|^{2}+\overline{u}_{T}^{2}}{2}+f\overline{u}_{T}x+\frac{g}{\theta_{0}}\left(z-\frac{H}{2}\right)\theta_{s}+\alpha^{2}\widetilde{\omega}^{2}\left(\mathbf{a}_{S}\cdot\mathbf{a}_{S}^{*}+a_{T}a_{T}^{*}\right)-p_{0}\\ &\qquad\qquad+\alpha^{2}fi\widetilde{\omega}\left(-a_{T}a_{1}^{*}+a_{T}^{*}a_{1}\right)-\alpha^{2}i\left(b\mathbf{k}\cdot\mathbf{a}_{S}^{*}-b^{*}\mathbf{k}\cdot\mathbf{a}_{S}\right)-\alpha^{2}a_{i}^{*}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\,.\end{split} \tag{3.30}\] As explained in Appendix A, the advected quantities featuring in the action (3.27) are defined along the mean part of the Lagrangian trajectory. For example, a scalar advected quantity satisfies \(a_{t}(\mathbf{x}_{t}^{\xi})=a_{t}((Id+\alpha\xi_{t})\circ\bar{g}_{t}\mathbf{x}_{0})=a_{0}(\mathbf{x}_{0})\). Thus, by defining \(a_{t}^{\xi}(\mathbf{x}_{t})=a_{t}((Id+\alpha\xi_{t})\mathbf{x}_{t})\), we see that the quantity \(a_{t}^{\xi}\) is advected by the _mean_ flow.
This idea can be generalised to advected quantities with a different geometric form by examining how their basis transforms (see Appendix A), and the advected quantities, \(\theta_{s}\) and \(Dd^{2}x\), found in the action (3.27) are of this form. The Euler-Poincare equations given in Remark 2.1 may therefore be applied to our wave current interaction action, where the transport velocity is the in-slice Lagrangian mean velocity. The remaining variations will separately give the wave dynamics.

The total momentum equation. The _total momentum_ of the system is the 1-form density defined by \[M=\mathbf{M}\cdot d\mathbf{x}\otimes d^{2}x:=\frac{\delta\ell}{\delta\overline{u}_{S}}=D\overline{\mathbf{u}}_{S}\cdot d\mathbf{x}\otimes d^{2}x-\alpha^{2}N\mathbf{k}\cdot d\mathbf{x}\otimes d^{2}x\,. \tag{3.31}\] Assembling the variational derivatives of the Lagrangian (with respect to \(\overline{u}_{S}\), \(\overline{u}_{T}\), \(D\), and \(\theta_{s}\)) into the Euler-Poincare equation (2.6), we have \[(\partial_{t}+\mathcal{L}_{\overline{u}_{S}})\left[\frac{1}{D}\left(D\overline{\mathbf{u}}_{S}-\alpha^{2}N\mathbf{k}\right)\cdot d\mathbf{x}\right]+\frac{1}{D}\left(D\overline{u}_{T}+Dfx\right)d\overline{u}_{T}=d\varpi-\frac{1}{D}\left[\frac{g}{\theta_{0}}D\bigg{(}z-\frac{H}{2}\bigg{)}\right]d\theta_{s}\,.\] For the wave action density, \(N\), defined by the variation with respect to the Doppler-shifted frequency, we have \[\frac{N}{D}=2\widetilde{\omega}\left(\mathbf{a}_{S}\cdot\mathbf{a}_{S}^{*}+a_{T}a_{T}^{*}\right)+if(-a_{T}a_{1}^{*}+a_{T}^{*}a_{1})\,, \tag{3.32}\] and we thus have \[\begin{split}(\partial_{t}+\mathcal{L}_{\overline{u}_{S}})\left(\frac{\boldsymbol{M}\cdot d\mathbf{x}}{D}\right)&=d\left(\frac{|\overline{\mathbf{u}}|^{2}}{2}-p_{0}\right)+f\overline{u}_{T}\widehat{x}\cdot d\mathbf{x}+\frac{g}{\theta_{0}}\theta_{s}\widehat{z}\cdot d\mathbf{x}\\ &\quad+\alpha^{2}d\left(\widetilde{\omega}\frac{N}{D}-\widetilde{\omega}^{2}(\mathbf{a}_{S}\cdot\mathbf{a}_{S}^{*}+a_{T}a_{T}^{*})-a_{i}^{*}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\right)\,.\end{split} \tag{3.33}\] In vector calculus notation, using the identity \(\mathcal{L}_{u}(\boldsymbol{A}\cdot d\mathbf{x})=(-\boldsymbol{u}\times\text{curl}\boldsymbol{A}+\nabla(\boldsymbol{u}\cdot\boldsymbol{A}))\cdot d\mathbf{x}\), the equation of motion is \[\begin{split}\partial_{t}\frac{\boldsymbol{M}}{D}-\overline{\mathbf{u}}_{S}\times\text{curl}\frac{\boldsymbol{M}}{D}&+\nabla\left(\frac{|\overline{\mathbf{u}}_{S}|^{2}}{2}+p_{0}\right)-f\overline{u}_{T}\widehat{x}-\frac{g}{\theta_{0}}\theta_{s}\widehat{z}\\ &+\alpha^{2}\nabla\left(-\omega\frac{N}{D}+\widetilde{\omega}^{2}(\mathbf{a}_{S}\cdot\mathbf{a}_{S}^{*}+a_{T}a_{T}^{*})+a_{i}^{*}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\right)=0\,,\end{split} \tag{3.34}\] where the term \(\nabla(\overline{\mathbf{u}}_{S}\cdot(\boldsymbol{M}/D))\) arising from the Lie derivative has been combined with the remainder of the equation, in particular serving to remove the Doppler shift from the frequency appearing in the first term of order \(\alpha^{2}\).

The mean flow momentum equation. We decompose the total momentum (3.31) into a mean flow momentum and wave momentum \[\boldsymbol{M}=D\overline{\mathbf{u}}_{S}-\alpha^{2}N\mathbf{k}=:\boldsymbol{m}-\mathbf{p}\,. \tag{3.35}\]
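The vector-calculus identity used to pass from (3.33) to (3.34) can be verified symbolically; the sketch below (our addition) checks, component by component in three dimensions, that \(\mathcal{L}_{u}(\boldsymbol{A}\cdot d\mathbf{x})=(-\boldsymbol{u}\times\mathrm{curl}\,\boldsymbol{A}+\nabla(\boldsymbol{u}\cdot\boldsymbol{A}))\cdot d\mathbf{x}\).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
u = sp.Matrix([sp.Function(f'u{i}')(x, y, z) for i in (1, 2, 3)])
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z) for i in (1, 2, 3)])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in coords])
curl = lambda V: sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                            sp.diff(V[0], z) - sp.diff(V[2], x),
                            sp.diff(V[1], x) - sp.diff(V[0], y)])

# Components of the Lie derivative of the 1-form A . dx along u:
# (L_u (A . dx))_i = u_j dA_i/dx_j + A_j du_j/dx_i  (summation over j).
lie = sp.Matrix([sum(u[j] * sp.diff(A[i], coords[j]) + A[j] * sp.diff(u[j], coords[i])
                     for j in range(3)) for i in range(3)])

rhs = -u.cross(curl(A)) + grad(u.dot(A))

print((lie - rhs).applyfunc(sp.simplify))  # -> zero vector
```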
From the variational principle we have equations for \(N\) and \(\phi\), \[\partial_{t}N+\text{div}(N\overline{\mathbf{u}}_{S}) =-i\text{div}(Db\mathbf{a}_{S}^{*}-Db^{*}\mathbf{a}_{S})\,, \tag{3.36}\] \[\partial_{t}\phi+\overline{\mathbf{u}}_{S}\cdot\nabla\phi =-\widetilde{\omega}\,, \tag{3.37}\] which, when combined, give an equation for the wave momentum \[(\partial_{t}+\mathcal{L}_{\overline{\mathbf{u}}_{S}})(\mathbf{p}\cdot d\mathbf{x})=\alpha^{2}(\partial_{t}+\mathcal{L}_{\overline{\mathbf{u}}_{S}})(Nd\phi)=-\alpha^{2}Nd\widetilde{\omega}-\alpha^{2}\text{div}\left(N\boldsymbol{v}_{g}\right)d\phi\,, \tag{3.38}\] where \[\boldsymbol{v}_{g}:=\frac{iD}{N}\left(\mathbf{a}_{S}^{*}b-\mathbf{a}_{S}b^{*}\right)\,.\] Combining this equation with the Euler-Poincare equation (3.33) gives the equation for the mean flow momentum \[\begin{split}\partial_{t}\overline{\mathbf{u}}_{S}-\overline{\mathbf{u}}_{S}\times\text{curl}\overline{\mathbf{u}}_{S}&=-\nabla\left(\frac{|\overline{\mathbf{u}}_{S}|^{2}}{2}+p_{0}\right)+f\overline{u}_{T}\widehat{x}+\frac{g}{\theta_{0}}\theta_{s}\widehat{z}\\ &\quad-\alpha^{2}\nabla\left(-\widetilde{\omega}\frac{N}{D}+\widetilde{\omega}^{2}(\mathbf{a}_{S}\cdot\mathbf{a}_{S}^{*}+a_{T}a_{T}^{*})+a_{i}^{*}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\right)\\ &\quad-\frac{\alpha^{2}}{D}\left(N\nabla\widetilde{\omega}+\mathbf{k}\text{div}\left(N\boldsymbol{v}_{g}\right)\right)\,.\end{split} \tag{3.39}\] Notice that by moving the time derivative of the wave momentum, \((\partial_{t}+\mathcal{L}_{\overline{\mathbf{u}}_{S}})(\mathbf{p}\cdot d\mathbf{x})\), to the right hand side of the equation of motion, we no longer have the term in our Lie derivative which removed the Doppler shift from the frequency in equation (3.34). Thus, all occurrences of the wave frequency in equation (3.39) are Doppler shifted.

The equation for the mean transverse velocity. The Euler-Poincare equation (2.7) from Remark 2.1, when combined with Hamilton's Principle (3.29), immediately gives the equation for the mean transverse velocity \[\partial_{t}\overline{u}_{T}+\overline{\mathbf{u}}_{S}\cdot\nabla\overline{u}_{T}+f\overline{\mathbf{u}}_{S}\cdot\widehat{x}=-\frac{g}{\theta_{0}}\bigg{(}z-\frac{H}{2}\bigg{)}s\,. \tag{3.40}\]

**Remark 3.4**.: _This equation is identical to that present in the standard Euler-Boussinesq Eady model [9]. This is to be expected, since each element of the fluctuating term \(\boldsymbol{\xi}\) in (3.21) is a function on \(M\) and thus has no derivative in the direction transverse to the slice. As such, the wave effect on currents occurs only on the velocity field \(\overline{\mathbf{u}}_{S}\) within the vertical slice._

The advection equations and incompressibility. The variation in \(p_{0}\) implies that \(D=1\) up to order \(\alpha^{2}\epsilon\). If we combine this with the advection equation for the mass density, we have incompressibility \[\begin{gathered} D=1\\ \partial_{t}D+\operatorname{div}(D\overline{\mathbf{u}}_{S})=0 \end{gathered}\quad\Longrightarrow\quad\nabla\cdot\overline{\mathbf{u}}_{S}=0\,. \tag{3.41}\] These equations are to be considered in tandem with the advection equation for the scalar buoyancy, \[\partial_{t}\theta_{s}+\overline{\mathbf{u}}_{S}\cdot\nabla\theta_{s}+\overline{u}_{T}s=0\,.
\tag{3.42}\]

The wave dynamics. Using the variations \(\delta a_{1}\), \(\delta a_{2}\), \(\delta a_{T}\), \(\delta b\) and the stationarity condition, the linear set of equations \[\begin{gathered}\widetilde{\omega}^{2}a_{1}-if\widetilde{\omega}a_{T}-ibk_{1}-a_{j}\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{j}}=0\,,\\ \widetilde{\omega}^{2}a_{2}-ibk_{2}-a_{j}\frac{\partial^{2}p_{0}}{\partial x_{2}\partial x_{j}}=0\,,\\ \widetilde{\omega}^{2}a_{T}+if\widetilde{\omega}a_{1}=0\,,\\ \mathbf{k}\cdot\mathbf{a}_{S}=0\,,\end{gathered} \tag{3.43}\] implies a linear dispersion relation for the Doppler shifted wave frequency \(\widetilde{\omega}\) as follows. Taking the dot product of the first two equations with \(\mathbf{k}\) and using \(\mathbf{k}\cdot\mathbf{a}_{S}=0\) gives the equation for \(b\) in terms of \(a_{1}\) \[ib|\mathbf{k}|^{2}=-f^{2}a_{1}k_{1}-k_{i}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\,,\] where we have substituted \(a_{T}\) in terms of \(a_{1}\) using \(\widetilde{\omega}a_{T}=-ifa_{1}\). Then, we have a set of linear equations involving \(a_{1}\) and \(a_{2}\) \[\begin{gathered}\widetilde{\omega}^{2}a_{1}-f^{2}a_{1}-a_{j}\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{j}}+\frac{k_{1}}{|\mathbf{k}|^{2}}\left(f^{2}a_{1}k_{1}+k_{i}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\right)=0\,,\\ \widetilde{\omega}^{2}a_{2}-a_{j}\frac{\partial^{2}p_{0}}{\partial x_{2}\partial x_{j}}+\frac{k_{2}}{|\mathbf{k}|^{2}}\left(f^{2}a_{1}k_{1}+k_{i}a_{j}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\right)=0\,,\end{gathered} \tag{3.44}\] which can be assembled into a matrix form \[\begin{pmatrix}\widetilde{\omega}^{2}-f^{2}+\frac{k_{1}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{1}}+\frac{f^{2}k_{1}^{2}}{|\mathbf{k}|^{2}}-\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{1}}&\frac{k_{1}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{2}}\\ \frac{k_{2}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{1}}+\frac{f^{2}k_{1}k_{2}}{|\mathbf{k}|^{2}}-\frac{\partial^{2}p_{0}}{\partial x_{2}\partial x_{1}}&\widetilde{\omega}^{2}+\frac{k_{2}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac{\partial^{2}p_{0}}{\partial x_{2}\partial x_{2}}\end{pmatrix}\begin{pmatrix}a_{1}\cr a_{2}\end{pmatrix}=0\,. \tag{3.45}\]

**Proposition 3.1**.: _The solvability condition for \(a_{1}\) and \(a_{2}\) implies the following dispersion relation_ \[\widetilde{\omega}^{2}=\frac{f^{2}k_{2}^{2}}{|\mathbf{k}|^{2}}+\left(\delta_{ij}-\frac{k_{i}k_{j}}{|\mathbf{k}|^{2}}\right)\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\,. \tag{3.46}\]

Proof.: The solvability condition for \(a_{1}\) and \(a_{2}\) is that the determinant of the matrix in equation (3.45) vanishes. To compute this determinant, we will separate it into terms multiplying \(\widetilde{\omega}\), those multiplying \(f\) (but not \(\widetilde{\omega}\)), and those featuring neither \(f\) nor \(\widetilde{\omega}\).
\[\begin{split} 0&=\det\begin{pmatrix} \widetilde{\omega}^{2}-f^{2}+\frac{k_{1}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^ {2}p_{0}}{\partial x_{i}\partial x_{1}}+\frac{f^{2}k_{1}^{2}}{|\mathbf{k}|^{2} }-\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{1}}&\frac{k_{1}k_{i}}{| \mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac{ \partial^{2}p_{0}}{\partial x_{1}\partial x_{2}}\\ \frac{k_{2}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i} \partial x_{1}}+\frac{f^{2}k_{1}k_{2}}{|\mathbf{k}|^{2}}-\frac{\partial^{2}p_{ 0}}{\partial x_{2}\partial x_{1}}&\widetilde{\omega}^{2}+\frac{k_{2}k_{i}}{| \mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac{ \partial^{2}p_{0}}{\partial x_{2}\partial x_{2}}\end{pmatrix}\\ =:A[\widetilde{\omega},\widetilde{\omega}f]+B[f]+C[1]\,.\end{split} \tag{3.47}\] We see that the terms corresponding to \(A[\widetilde{\omega},\widetilde{\omega}f]\) produce the desired dispersion relation. We first demonstrate that the other terms coming from the determinant, \(B[f]\) and \(C[1]\), do not contribute. Beginning by evaluating terms which multiply the Coriolis parameter but do not feature \(\widetilde{\omega}^{2}\), we have \[B[f] =-\frac{f^{2}k_{2}^{2}}{|\mathbf{k}|^{2}}\left(\frac{k_{2}k_{i}} {|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac {\partial^{2}p_{0}}{\partial x_{2}\partial x_{2}}\right)-\frac{f^{2}k_{1}k_{2} }{|\mathbf{k}|^{2}}\left(\frac{k_{1}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2 }p_{0}}{\partial x_{i}\partial x_{2}}-\frac{\partial^{2}p_{0}}{\partial x_{1} \partial x_{2}}\right)\] \[=\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{2}}\left(\frac {f^{2}k_{1}k_{2}}{|\mathbf{k}|^{2}}-\frac{f^{2}k_{1}^{2}k_{2}}{|\mathbf{k}|^{4 }}-\frac{f^{2}k_{1}^{2}k_{2}}{|\mathbf{k}|^{4}}\right)+\frac{\partial^{2}p_{0} }{\partial x_{2}\partial x_{2}}\left(\frac{f^{2}k_{2}^{2}}{|\mathbf{k}|^{2}}- \frac{f^{2}k_{2}^{4}}{|\mathbf{k}|^{4}}-\frac{f^{2}k_{1}^{2}k_{2}^{2}}{| \mathbf{k}|^{4}}\right)\] \[=0\,.\] Similarly, we may evaluate the terms which feature neither \(f\) nor \(\widetilde{\omega}\) \[C[1] =\left(\frac{k_{1}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0} }{\partial x_{i}\partial x_{1}}-\frac{\partial^{2}p_{0}}{\partial x_{1} \partial x_{1}}\right)\left(\frac{k_{2}k_{i}}{|\mathbf{k}|^{2}}\frac{\partial^ {2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac{\partial^{2}p_{0}}{\partial x_{2} \partial x_{2}}\right)\] \[\qquad\qquad-\left(\frac{k_{2}k_{i}}{|\mathbf{k}|^{2}}\frac{ \partial^{2}p_{0}}{\partial x_{i}\partial x_{1}}-\frac{\partial^{2}p_{0}}{ \partial x_{2}\partial x_{1}}\right)\left(\frac{k_{1}k_{i}}{|\mathbf{k}|^{2}} \frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac{\partial^{2}p_{0}}{ \partial x_{1}\partial x_{2}}\right)\] \[=\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{1}}\frac{ \partial^{2}p_{0}}{\partial x_{2}\partial x_{2}}\bigg{(}1-\frac{k_{1}^{2}}{| \mathbf{k}|^{2}}-\frac{k_{2}^{2}}{|\mathbf{k}|^{2}}\bigg{)}\] \[\quad+\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{1}}\frac{ \partial^{2}p_{0}}{\partial x_{1}\partial x_{2}}\bigg{(}\frac{k_{1}^{3}k_{2}}{| \mathbf{k}|^{4}}-\frac{k_{1}k_{2}}{|\mathbf{k}|^{2}}-\frac{k_{1}^{3}k_{2}}{| \mathbf{k}|^{4}}+\frac{k_{2}k_{1}}{|\mathbf{k}|^{2}}\bigg{)}\] \[\quad+\frac{\partial^{2}p_{0}}{\partial x_{1}\partial x_{2}}\frac{ \partial^{2}p_{0}}{\partial x_{1}\partial x_{2}}\bigg{(}\frac{k_{1}^{2}k_{2}^{2}}{| \mathbf{k}|^{4}}-\frac{k_{1}^{2}k_{2}^{2}}{|\mathbf{k}|^{4}}+\frac{k_{2}^{2}}{| 
\mathbf{k}|^{2}}+\frac{k_{1}^{2}}{|\mathbf{k}|^{2}}-1\bigg{)}\] \[=0\,.\] Thus, we have \(A[\widetilde{\omega},\widetilde{\omega}f]=0\), and hence \[\widetilde{\omega}^{4} =-\widetilde{\omega}^{2}\left(\frac{k_{2}k_{i}}{|\mathbf{k}|^{2}} \frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{2}}-\frac{\partial^{2}p_{0}}{ \partial x_{2}\partial x_{2}}\right)-\widetilde{\omega}^{2}\left(-f^{2}+\frac{k_{1}k _{i}}{|\mathbf{k}|^{2}}\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{1}}+ \frac{f^{2}k_{1}^{2}}{|\mathbf{k}|^{2}}-\frac{\partial^{2}p_{0}}{\partial x_{1} \partial x_{1}}\right)\] \[=\widetilde{\omega}^{2}\frac{f^{2}k_{2}^{2}}{|\mathbf{k}|^{2}} +\widetilde{\omega}^{2}\left(\delta_{ij}-\frac{k_{i}k_{j}}{|\mathbf{k}|^{2}} \right)\frac{\partial^{2}p_{0}}{\partial x_{i}\partial x_{j}}\,.\] This produces the dispersion relation (3.46) after dividing by \(\widetilde{\omega}^{2}\). **Remark 3.5**.: _The dispersion relation (3.46) takes a more familiar form when reverting many of our modelling assumptions back to a more standard form. In particular, we have assumed that the waves are coupled to the mean flow and the frequency is Doppler shifted. We have assumed further that the pressure here is the complete pressure required to ensure that the flow is incompressible. If we instead consider the non-shifted frequency and make the standard assumption that the pressure increases with depth such that \(p_{0}\) only has derivatives in \(z\) and is related to the buoyancy frequency, \(\mathcal{N}\), in the standard way, the dispersion relation changes and we have_ \[\left.\begin{aligned} &\widetilde{\omega}\mapsto\omega\\ & p_{0}=\frac{\mathcal{N}^{2}z^{2}}{2}\end{aligned}\right\} \quad\implies\quad\omega^{2}=\frac{k_{2}^{2}f^{2}+k_{1}^{2}\mathcal{N}^{2}}{| \mathbf{k}|^{2}}\,. \tag{3.48}\] _Under these assumptions the dispersion properties of the waves found in our model are a generalisation of the classical theory for internal gravity waves. This is clear, since the above dispersion relation may be found in standard texts on fluid dynamics [45]._ ## 4 Stochastic advection by Lie transport (SALT) for VSMs As discussed at the beginning of Section 3, parameterisations of fast fluctuations are essential in the numerical simulation of geophysical fluid dynamics. A more modern approach is through the application of stochastic parameterisation schemes, where one obtains a statistical representation of uncertainty during ensemble forecasting simulations. In this section, we briefly consider a stochastic parameterisation scheme known as SALT [27] applied to the family of VSMs. Recall that the Euler-Poincare equations resulting from Theorem 2.1 can be written as the following Lie-Poisson equations on \((\mathfrak{X}(M)\ltimes\mathcal{F}(M))^{*}\times V^{*}\), \[\frac{\partial}{\partial t}\begin{bmatrix}m_{S}\\ D\\ m_{T}\\ \theta_{s}\end{bmatrix}=-\begin{bmatrix}\text{ad}_{\square}^{*}m_{S}&\square \diamond D&\square\diamond m_{T}&\square\diamond\vartheta_{s}\\ \mathcal{L}_{\square}D&0&0&0\\ \mathcal{L}_{\square}m_{T}&0&0&-s\\ \mathcal{L}_{\square}\vartheta_{s}&0&s&0\end{bmatrix}\begin{bmatrix}\delta h /\delta m_{S}\\ \delta h/\delta D\\ \delta h/\delta m_{T}\\ \delta h/\delta\vartheta_{s}\end{bmatrix}\,,\] ((2.28) revisited) where the reduced Hamiltonian, \(h(m_{S},D,m_{T},\theta_{s})\), is related to the Lagrangian through the Legendre transform \[h(m_{S},D,m_{T},\vartheta_{s})=\langle m_{S}\,,\,u_{S}\rangle+\langle m_{T}\,, \,u_{T}\rangle-\ell(u_{S},u_{T},D,\vartheta_{s},p)\,. 
\tag{4.1}\] **Remark 4.1**.: _As for the velocity field, we will denote the momentum by \(m_{S}\), and the coefficients of the momentum with respect to the geometric basis by \(\mathbf{m}_{S}\)._ As noted in [44], particular care needs to be taken in the stochastic variational description of incompressible fluids. That is, the variation in \(D\) gains a stochastic part corresponding to the stochastic part of the pressure. To achieve this, we first define \(H\) to mean the Hamiltonian without the deterministic pressure contribution. We then augment this Hamiltonian with stochastic transport velocity fields, \(\xi_{Si}\) and \(\xi_{Ti}\), and the stochastic pressure constraint, as \[H\,dt-\text{d}p(D-1)+\sum_{i}\left\langle m_{S}\,,\,\xi_{Si}\right\rangle \circ dW_{t}^{i}+\sum_{i}\left\langle m_{T}\,,\,\xi_{Ti}\right\rangle\circ dW _{t}^{i}\,. \tag{4.2}\] This augmentation leads to the stochastic Lie-Poisson equations for vertical slice dynamics, expressed in their geometric form as, cf. equation (2.28), \[\text{d}\begin{bmatrix}m_{S}\\ D\\ m_{T}\\ \vartheta_{s}\end{bmatrix}=-\begin{bmatrix}\text{ad}_{\square}^{*}m_{S}& \square\diamond D&\square\diamond m_{T}&\square\diamond\vartheta_{s}\\ \mathcal{L}_{\square}D&0&0&0\\ \mathcal{L}_{\square}m_{T}&0&0&-s\\ \mathcal{L}_{\square}\vartheta_{s}&0&s&0\end{bmatrix}\begin{bmatrix}\text{d} x_{S}:=u_{S}\,dt+\sum_{i}\xi_{Si}\circ dW_{t}^{i}\\ \delta H/\delta D\,dt-\text{d}p\\ \text{d}x_{T}:=u_{T}\,dt+\sum_{i}\xi_{Ti}\circ dW_{t}^{i}\\ \delta H/\delta\vartheta_{s}\,dt\end{bmatrix}. \tag{4.3}\] Thus, we have \[(\mathrm{d}+\mathcal{L}_{\mathrm{d}x_{S}})m_{S} =-\mathrm{d}x_{T}\diamond m_{T}\,dt-\frac{\delta H}{\delta\vartheta_{s }}\diamond\vartheta_{s}\,dt-\frac{\delta H}{\delta D}\diamond D\,dt+\mathrm{d}p \diamond D\,, \tag{4.4}\] \[(\mathrm{d}+\mathcal{L}_{\mathrm{d}x_{S}})m_{T} =\frac{\delta H}{\delta\theta_{s}}s\,dt\,,\] (4.5) \[(\mathrm{d}+\mathcal{L}_{\mathrm{d}x_{S}})\vartheta_{s} =-s\,\mathrm{d}x_{T}\,,\] (4.6) \[(\mathrm{d}+\mathcal{L}_{\mathrm{d}x_{S}})(Dd^{2}x) =0\,. \tag{4.7}\] ### The Euler-Boussinesq Eady model with SALT We first Legendre transform the Lagrangian for the Euler-Boussinesq Eady model, and remove the deterministic pressure terms, to get \[H(m_{S},m_{T},\theta_{s},D)=\int_{M}\frac{1}{2D}\left(|\mathbf{m}_{S}|^{2}+m_{T}^{2 }\right)-Dfm_{T}x-\frac{g}{\theta_{0}}D\left(z-\frac{H}{2}\right)\theta_{s}\, d^{2}x\,. \tag{4.8}\] The equation (4.4) corresponding to the above Hamiltonian is \[(\mathrm{d}+\mathcal{L}_{\mathrm{d}x_{S}})u_{S}^{\flat}+(u_{T}+fx)d(\mathrm{ d}x_{T})=-d\left(\frac{\delta H}{\delta D}\right)\,dt+\frac{1}{D}\frac{\delta H }{\delta\vartheta_{s}}d\vartheta_{s}\,dt-d(\mathrm{d}p)\,, \tag{4.9}\] where we have divided by the volume form and used the fact that \(\frac{1}{D}\frac{\delta\ell}{\delta u_{T}}=\frac{m_{T}}{D}=u_{T}+fx\). 
In vector calculus notation, we have \[\mathrm{d}\mathbf{u}_{S}+\mathbf{u}_{S}\cdot\nabla\mathbf{u}_{S} \,dt+\sum_{i}\mathbf{\xi}_{Si}\cdot\nabla\mathbf{u}_{S}\circ dW_{t}^{i}+\sum_{i}u_ {j}\nabla\xi_{Si}^{j}\circ dW_{t}^{i}+\sum_{i}u_{T}\nabla\xi_{Ti}\circ dW_{t}^ {i}+\sum_{i}fx\nabla\xi_{Ti}\circ dW_{t}^{i}\] \[=fu_{T}\widehat{x}\,dt+\frac{g}{\vartheta_{0}}\vartheta_{s} \widehat{z}\,dt-\nabla\mathrm{d}p\,, \tag{4.10}\] \[\mathrm{d}u_{T}+\mathbf{u}_{S}\cdot\nabla u_{T}\,dt+\sum_{i}\mathbf{ \xi}_{Si}\cdot\nabla u_{T}\circ dW_{t}^{i} =-f\mathbf{u}_{S}\cdot\widehat{x}\,dt-\frac{g}{\vartheta_{0}} \left(z-\frac{H}{2}\right)s\,dt\,,\] (4.11) \[\mathrm{d}\vartheta_{s}+\mathbf{u}_{S}\cdot\nabla\vartheta_{s}\, dt+\sum_{i}\mathbf{\xi}_{Si}\cdot\nabla\vartheta_{s}\circ dW_{t}^{i}+u_{T}s\,dt+\sum_{i} \xi_{Ti}s\circ dW_{t}^{i} =0\,,\] (4.12) \[\nabla\cdot\mathbf{u}_{S} =\nabla\cdot\mathbf{\xi}_{Si} =0\,. \tag{4.13}\] If we use the reduced notation \(\mathrm{d}\mathbf{x}_{S}\) and \(\mathrm{d}\mathbf{x}_{T}\), and introduce the notation \(\mathbf{u}=(\mathbf{u}_{S},u_{T})\) and \(\mathbf{\xi}=(\mathbf{\xi}_{S},\xi_{T})\), then the equations can be written in the following more compact form \[\mathrm{d}\mathbf{u}_{S}+\mathrm{d}\mathbf{x}_{S}\cdot\nabla \mathbf{u}_{S}+\sum_{i}u_{j}\nabla\xi^{j}\circ dW_{t}^{i}+\sum_{i}fx\nabla\xi _{Ti}\circ dW_{t}^{i} =fu_{T}\widehat{x}\,dt+\frac{g}{\vartheta_{0}}\vartheta_{s} \widehat{z}\,dt-\nabla\mathrm{d}p\,, \tag{4.14}\] \[\mathrm{d}u_{T}+\mathrm{d}\mathbf{x}_{S}\cdot\nabla u_{T} =-f\mathbf{u}_{S}\cdot\widehat{x}\,dt-\frac{g}{\vartheta_{0}} \left(z-\frac{H}{2}\right)s\,dt\,,\] (4.15) \[\mathrm{d}\vartheta_{s}+\mathrm{d}\mathbf{x}_{S}\cdot\nabla \vartheta_{s}+s\,\mathrm{d}\mathbf{x}_{T} =0\,,\] (4.16) \[\nabla\cdot\mathbf{u}_{S}=\nabla\cdot\mathbf{\xi}_{Si} =0\,. \tag{4.17}\] **Theorem 4.1** (Kelvin-Noether).: _For a vertical slice model with SALT as introduced in equation (4.3), we have that_ \[\mathrm{d}\oint_{\gamma_{t}}\left(s\frac{m_{S}}{D}-\frac{m_{T}}{D}\nabla \vartheta_{s}\right)\cdot d\mathbf{x}=0\,, \tag{4.18}\] _where \(\gamma_{t}:C^{1}\mapsto M\) is a closed loop moving with the flow \(\mathrm{d}x_{S}\)._ Proof.: The proof of this theorem follows directly from the Lie-Poisson equations (4.3). Indeed, \[\mathrm{d}\oint_{\gamma_{t}}\left(s\frac{m_{S}}{D}-\frac{m_{T}}{D} \nabla\vartheta_{s}\right)\cdot d\mathbf{x} =\oint_{\gamma_{t}}(\mathrm{d}+\mathcal{L}_{\mathrm{d}x_{S}}) \left[\left(s\frac{m_{S}}{D}-\frac{m_{T}}{D}\nabla\vartheta_{s}\right)\cdot d \mathbf{x}\right]\] \[=\oint_{\gamma_{t}}s\left(-d\left(\frac{\delta H}{\delta D} \right)\,dt+s\,d\mathrm{d}p+\frac{1}{D}\frac{\delta H}{\delta\vartheta_{s}}d \vartheta_{s}\,dt-\frac{1}{D}m_{T}d(\mathrm{d}x_{T})\right)\] \[\qquad\qquad-\frac{s}{D}\frac{\delta}{\delta\vartheta_{s}}d \vartheta_{s}\,dt+\left(\frac{1}{D}m_{t}\right)d(s\,\mathrm{d}x_{T})\] \[=0\,,\] as required. **Remark 4.2**.: _The statement of the Kelvin-Noether theorem, when written in terms of the Lagrangian, is_ \[\mathrm{d}\oint_{\gamma_{t}}\left(s\bigg{(}\frac{1}{D}\frac{\delta\ell}{ \delta u_{S}}\bigg{)}-\bigg{(}\frac{1}{D}\frac{\delta\ell}{\delta u_{T}}\bigg{)} \nabla\vartheta_{s}\right)\cdot d\mathbf{x}=0\,.\] **Remark 4.3**.: _In this section, we have included SALT noise in the vertical slice model. It is possible to go further, and include SALT noise in the WMFI models discussed in the previous section. 
Indeed, SALT noise can be coupled to each momentum term in equation (3.10), just as we have coupled noise terms to the slice and transverse momenta in this section._ ## 5 Conclusion and Outlook The ocean modelling goal of the present paper has been to include internal gravity wave (IGW) motion in the Vertical Slice Model (VSM) with transverse flow introduced in [9]. The accompanying mathematical goal, in support of data assimilation, has been to determine the Poisson/Hamiltonian structure of the resulting model and thereby to formulate a new, highly versatile stochastic parameterisation of advective transport, which is of potential use for quantifying uncertainty in a wide variety of ocean model simulations; the stochastic VSM derived here illustrates this variational approach to ocean modelling. Of course, many fundamental mathematical questions about the stochastic VSM derived here remain open. Indeed, even the fundamental properties of existence, uniqueness and well-posedness have not yet been proven for the stochastic VSM, although stability, well-posedness and a blow-up criterion have been established for the deterministic case in [1]. Nonetheless, the stochastic VSMs do belong to the SALT class of fluid models and therefore they have the capability to quantify uncertainty. In fact, the SALT models are also known to be effective in systematically reducing the uncertainty in the spread of their stochastic ensembles when the data science methods of calibration and assimilation in [7, 8] are applied. The application of the uncertainty reduction capabilities of the SALT models, and of their data calibration and assimilation results, to these stochastic VSMs remains to be demonstrated elsewhere. ## Acknowledgements We are grateful to our friends, colleagues and collaborators for their advice and encouragement in the matters treated in this paper. We especially thank C. Cotter, I. Gjaja, J.C. McWilliams, C. Tronci, P. Bergold, and J. Woodfield for many insightful discussions of results corresponding to the ones derived here for the VSMs, and for earlier work together in deriving models of complex fluids, turbulence, plasma dynamics, vertical slice models and the quantum-classical hydrodynamic description of molecules. DH and RH were partially supported during the present work by Office of Naval Research (ONR) grant award N00014-22-1-2082, "Stochastic Parameterization of Ocean Turbulence for Observational Networks". DH and OS were partially supported during the present work by European Research Council (ERC) Synergy grant "Stochastic Transport in Upper Ocean Dynamics" (STUOD) - DLV-856408.
2301.01529
Envy-free dynamic pricing schemes
A combinatorial market consists of a set of indivisible items and a set of agents, where each agent has a valuation function that specifies for each subset of items its value for the given agent. From an optimization point of view, the goal is usually to determine a pair of pricing and allocation of the items that provides an efficient distribution of the resources, i.e., maximizes the social welfare, or is as profitable as possible for the seller, i.e., maximizes the revenue. To overcome the weaknesses of mechanisms operating with static prices, a recent line of research has concentrated on dynamic pricing schemes. In this model, agents arrive in an unspecified sequential order, and the prices can be updated between two agent-arrivals. Though the dynamic setting is capable of maximizing social welfare in various scenarios, the assumption that the agents arrive one after the other eliminates the standard concept of fairness. In this paper, we study the existence of optimal dynamic prices under fairness constraints in unit-demand markets. We propose four possible notions of envy-freeness of different strength depending on the time period over which agents compare themselves to others: the entire time horizon, only the past, only the future, or only the present. For social welfare maximization, while the first definition leads to Walrasian equilibria, we give polynomial-time algorithms that always find envy-free optimal dynamic prices in the remaining three cases. In contrast, for revenue maximization, we show that the corresponding problems are APX-hard if the ordering of the agents is fixed. On the positive side, we give polynomial-time algorithms for the setting when the seller can choose the order in which agents arrive.
Kristóf Bérczi, Laura Codazzi, Julian Golak, Alexander Grigoriev
2023-01-04T10:47:28Z
http://arxiv.org/abs/2301.01529v1
# Envy-free dynamic pricing schemes ###### Abstract A combinatorial market consists of a set of indivisible items and a set of agents, where each agent has a valuation function that specifies for each subset of items its value for the given agent. From an optimization point of view, the goal is usually to determine a pair of pricing and allocation of the items that provides an efficient distribution of the resources, i.e., maximizes the social welfare, or is as profitable as possible for the seller, i.e., maximizes the revenue. To overcome the weaknesses of mechanisms operating with static prices, a recent line of research has concentrated on dynamic pricing schemes. In this model, agents arrive in an unspecified sequential order, and the prices can be updated between two agent-arrivals. Though the dynamic setting is capable of maximizing social welfare in various scenarios, the assumption that the agents arrive one after the other eliminates the standard concept of fairness. In this paper, we study the existence of optimal dynamic prices under fairness constraints in unit-demand markets. We propose four possible notions of envy-freeness of different strength depending on the time period over which agents compare themselves to others: the entire time horizon, only the past, only the future, or only the present. For social welfare maximization, while the first definition leads to Walrasian equilibria, we give polynomial-time algorithms that always find envy-free optimal dynamic prices in the remaining three cases. In contrast, for revenue maximization, we show that the corresponding problems are APX-hard if the ordering of the agents is fixed. On the positive side, we give polynomial-time algorithms for the setting when the seller can choose the order in which agents arrive. **Keywords:** Algorithms, Dynamic pricing scheme, Envy-free allocations, Revenue maximization, Social welfare maximization ## 1 Introduction The wide availability of surveys and market research helps sellers to predict the interest of customers in different products, which opens up the possibility of applying user-specific pricing processes. While this information helps customers in their purchase decisions, it makes the pricing process even more challenging for sellers. In this context, some simple economic rules are straightforward: low prices typically attract more customers but yield a small revenue per item, while high prices generate a greater revenue per item but attract fewer customers. As sellers aim at maintaining customers' satisfaction while achieving high revenue, the need for effective pricing strategies is increasing. We consider combinatorial markets, in which a set of indivisible items is to be distributed among a set of agents. Each agent has a valuation for each subset of items that measures how much receiving the bundle would be worth to the agent. An allocation assigns a subset of items to each agent so that every item is assigned to at most one of them. In a posted price mechanism, the seller can set the prices of the items individually, and the utility of an agent for a given bundle of items is the agent's value for the bundle minus the total price of all contained items - the utility hence measures the agent's happiness when buying all the items of the bundle at the given prices. An allocation is considered to be envy-free if no agent would prefer to be assigned a different bundle of items. In this paper, we study resource allocation problems in a dynamic environment from two perspectives.
First, we focus on how to set prices so that a market equilibrium maximizes the overall social welfare, that is, the total sum of the agents' values. Second, we consider how to set prices so that the seller's profit is maximized. To the best of our knowledge, the present work is the first one extending the concept of envy-freeness to the dynamic setting. Previous work. Achieving optimal social welfare through simple mechanisms has been the center of attention for a long time due to its far-reaching applications. In particular, posted price mechanisms became a key approach to allocate resources, hence finding optimal pricing schemes is a fundamental question in combinatorial markets. A pair of pricing and allocation is called a Walrasian equilibrium if all the items are assigned to someone and each agent receives a bundle that maximizes her utility - the definition automatically implies that the corresponding allocation maximizes social welfare. The idea of Walrasian equilibria was introduced already in the late 1800s [16], and the existence of such an equilibrium was verified for gross substitutes valuations by Kelso and Crawford [14]. However, it was pointed out by Cohen-Addad et al. [8] and independently by Hsu et al. [13] that the existence of Walrasian allocations strongly depends on the tie-breaking process, usually carried out by a central coordinator. If the agents arrive one after the other and choose an arbitrary bundle of items that maximizes their utility, then the absence of a tie-breaking rule may result in a suboptimal allocation with respect to the social welfare. To overcome these difficulties, Cohen-Addad et al. [8] introduced the notion of dynamic prices, which proved to be a powerful tool in designing markets without a central tie-breaking coordinator. In the proposed model, agents arrive in an unspecified sequential order and the item prices can be updated before the arrival of the next agent. Their main result is a polynomial-time dynamic pricing scheme that achieves optimal social welfare in unit-demand markets. This work initiated the study of dynamic pricing schemes, and the existence of optimal dynamic prices was settled for three agents with multi-demand valuations by Berger, Eden and Feldman [5], for bi-demand valuations by Berczi, Berczi-Kovacs and Szogi [3], for two agents with certain matroid rank valuations by Berczi, Kakimura and Kobayashi [4], and recently for four agents with multi-demand valuations by Pashkovich and Xie [15]. The market clearing condition can lead to a Walrasian equilibrium with low revenue for the seller, even if prices are as high as possible. In the seminal work of Guruswami et al. [12], envy-free pricing was introduced as a relaxation of Walrasian equilibrium by dropping the requirement on the clearance of the market, but keeping the same fairness condition. In their model, the goal is to maximize the revenue when each agent has a demand and valuation for each bundle of items and each item has limited supply. The authors showed that maximizing the revenue is APX-hard already for unit-demand markets, and provided logarithmic approximations in the number of customers for the unit-demand and single-parameter cases. A similar hardness was proved independently by Aggarwal et al. [1]. Subsequently, several versions of the problem have been shown to have poly-logarithmic inapproximability, see e.g. the works of Briest [6] and Chalermsook, Laekhanukit and Nanongkai [7]. Bansal et al. [2] adapted the concept of envy-freeness to a pricing over time scheme.
In their model, there is a single item with unlimited supply, and each agent is associated with a time interval over which she will consider buying a copy of the item, together with a maximum value the agent is willing to pay for it. The seller's goal is to set the prices at every time unit so as to maximize revenue from the sale of copies of the item over the time period. Our results. The original motivation behind dynamic pricing schemes was to shift the tie-breaking process from the central coordinator to the customers, as in reality customers choose bundles of items without caring about the social optimum. As shown by the above-mentioned results, the dynamic setting is indeed capable of maximizing social welfare without the need for a central coordinator. On the other hand, this approach has an implication for the fairness of the final allocation that is usually not emphasized. The model assumes that the customers' sole objective is to pick a bundle of items maximizing their utility with respect to the prices available at their arrival, and they are not concerned with prices at earlier and/or later times. This means that envy-freeness is ensured only locally, and the final allocation together with the prices at which the items were bought do not necessarily form an envy-free solution over the whole time horizon. Our first contribution is initiating the study of dynamic pricing schemes under global fairness constraints. We extend the concept of envy-freeness to the dynamic setting in unit-demand markets, proposing four possible notions of different strength depending on whether the agents are concerned about the prices throughout the entire horizon, only in the past, only in the future, or only at their arrival. Note that the last case corresponds to the standard setting of dynamic pricing problems. We prove that, while ensuring envy-freeness for the entire time horizon basically brings the problem back to the case of static prices, the optimum social welfare can be achieved through envy-free dynamic prices in the remaining cases. When it comes to revenue maximization, the results on the poly-logarithmic inapproximability of the optimal profit call for new approaches. Based on the success of dynamic pricing schemes in social welfare maximization, a natural idea is to combine the dynamic model with revenue maximization. The work of Bansal et al. [2] proposes a setup that resembles this idea. However, their model has a single item with unlimited supply, and agents are sold the item at the minimum price during their bid interval, which results in an allocation that is, again, envy-free only locally. Our second contribution is the analysis of the revenue maximization problem in a dynamic setting where fairness is defined using one of the above-mentioned four possibilities. We show that, in contrast to welfare maximization, the flexibility of dynamic prices does not help in this case, and hence most of the problems are APX-hard. Most previous work on dynamic pricing schemes assumed that the customers arrive one after the other in an unspecified order. Apart from this standard case, we also consider two further options in all the above-mentioned scenarios: when the customers arrive in a predetermined order, and when the seller has the opportunity to determine their order. Practical motivation. Each notion of envy-freeness considered in this paper represents a natural concept of fairness that appears in everyday life.
In certain markets, agents who arrive late do not perceive unfairness for prices that were posted earlier. In such situations, the seller tries to ensure that agents are not penalised by arriving too early. In other cases, discounting prices over time is a common strategy to guarantee clearance of the market. In such markets, agents are more inclined to accept that prices are low when the overall stock is low, and hence not to perceive the pricing procedure as unfair. These are reasonable assumptions on customer behaviour, and they are captured by our notions of envy-freeness in which agents are concerned about prices only in the future and only in the past, respectively. In various market scenarios customers are obliged to register their arrival and interest in certain items. These cases can be further differentiated depending on whether buyers register for free time slots or they are assigned by the seller. Accordingly, the proposed assumptions on the arrival of agents reflect realistic settings that sellers face. The rest of the paper is organized as follows. Basic notation and definitions are given in Section 2. Social welfare maximization is considered in Section 3. The main results of the paper are presented in Sections 3.2 and 3.3, where we give polynomial-time algorithms for social welfare maximization using ex-post and ex-ante envy-free dynamic prices, respectively. Results on revenue maximization are discussed in Section 4. Finally, in Section 5 we summarize the paper and list open problems that are subjects of future research. ## 2 Preliminaries Basic notation. We denote the sets of _real_ and _non-negative real numbers_ by \(\mathbb{R}\) and \(\mathbb{R}_{+}\), respectively. For a positive integer \(t\), we use \([t]\) to denote the set \(\{1,\ldots,t\}\). Let \(\mathcal{I}\) be a ground set. For two subsets \(X,Y\subseteq\mathcal{I}\), their _symmetric difference_ is \(X\triangle Y\coloneqq(X\setminus Y)\cup(Y\setminus X)\). When \(Y\) consists of a single element \(y\), then the _difference_ \(X\setminus\{y\}\) and _union_ \(X\cup\{y\}\) are abbreviated by \(X-y\) and \(X+y\), respectively. For a function \(f\colon\mathcal{I}\to\mathbb{R}\), the total sum of its values over \(X\) is denoted by \(f(X)\coloneqq\sum_{s\in X}f(s)\). For \(X=\emptyset\), we define \(f(\emptyset)=0\). Graphs. We denote a _bipartite graph_ by \(G=(\mathcal{I},\mathcal{A};E)\), where \(\mathcal{I}\) and \(\mathcal{A}\) are the vertex classes and \(E\) is the set of edges. By _edge-weights_, we mean a function \(w\colon E\to\mathbb{R}_{+}\). For a subset \(X\subseteq\mathcal{I}\cup\mathcal{A}\), the _subgraph of \(G\) induced by \(X\)_ is the graph obtained from \(G\) by deleting all the vertices not contained in \(X\), together with the edges incident to them. We denote an edge of the graph going between \(a\in\mathcal{A}\) and \(i\in\mathcal{I}\) by \(ai\). By orienting the edges of a bipartite graph, we get a _directed graph_ \(D=(\mathcal{I},\mathcal{A};F)\), where \(F\) is the set of arcs. A directed graph is called _strongly connected_ if every vertex is reachable from every other vertex through a directed path. A _strongly connected component_ of a directed graph is a subgraph that is strongly connected and is maximal with respect to this property. By contracting each strongly connected component of a directed graph to a single vertex, one obtains an acyclic directed graph.
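These notions are straightforward to compute in practice. The following minimal sketch (assuming Python with the networkx package; the toy digraph and all names are our own illustrative choices) lists the strongly connected components of a small directed graph and confirms that contracting them yields an acyclic graph.

```python
# Sketch: strongly connected components and their contraction (condensation).
# The toy digraph below is arbitrary and only serves as an illustration.
import networkx as nx

D = nx.DiGraph([(1, 2), (2, 3), (3, 1),   # vertices 1, 2, 3 form one strongly connected component
                (3, 4), (4, 5), (5, 4)])  # an arc into a second component formed by 4 and 5

components = list(nx.strongly_connected_components(D))
condensation = nx.condensation(D)                      # one vertex per component
print(components)                                      # e.g. [{4, 5}, {1, 2, 3}]
print(nx.is_directed_acyclic_graph(condensation))      # True: contraction yields an acyclic digraph
print(list(nx.topological_sort(condensation)))         # an ordering compatible with the arcs between components
```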
Therefore, the strongly connected components of a directed graph have a so-called _topological ordering_ in which every arc going between components goes from an earlier component to a later one. Market model. A combinatorial market consists of a set \(\mathcal{I}\) of _indivisible items_ and a set \(\mathcal{A}\) of _agents_. Throughout the paper, we denote by \(m\coloneqq|\mathcal{I}|\) and \(n\coloneqq|\mathcal{A}|\) the numbers of items and agents, respectively. An _allocation_ \(\mathbf{X}\) assigns each agent \(a\) a subset \(X_{a}\) of items so that each item is assigned to at most one agent. In a _unit-demand market_, each agent \(a\in\mathcal{A}\) has a valuation \(v_{a}\colon\mathcal{I}\to\mathbb{R}_{+}\) over individual items and she desires only a single good, that is, we consider allocations \(\mathbf{X}\) with \(|X_{a}|\leq 1\) for \(a\in\mathcal{A}\) - in such cases we denote the item obtained by agent \(a\) by \(x_{a}\). We always assume that the agents' valuations are known in advance. Furthermore, we assume that \(v_{a}(\emptyset)=0\) for all agents \(a\in\mathcal{A}\). Given prices \(p(i)\) for each item \(i\in\mathcal{I}\), the _utility_ of agent \(a\) for item \(i\) is \(u_{a}(i)\coloneqq v_{a}(i)-p(i)\). Then the _social welfare_ corresponding to the allocation is \(\sum_{a\in\mathcal{A}}v_{a}(x_{a})\), while the _revenue_ of the seller is \(\sum_{a\in\mathcal{A}}p(x_{a})\). In a _static pricing scheme_, the seller sets the price \(p(i)\) of each item \(i\in\mathcal{I}\) in advance. Two fundamental problems in combinatorial markets are to find a pair of pricing vector \(p\colon\mathcal{I}\to\mathbb{R}_{+}\) and allocation \(\mathbf{X}\) such that the social welfare or the revenue is maximized. In contrast, in a _dynamic pricing scheme_ the agents arrive one after the other, and the seller can update the prices between their arrivals based on the remaining sets of items and agents. The order in which agents arrive is represented by a bijection \(\sigma\colon\mathcal{A}\to[n]\). The sets of agents, items and prices available before the arrival of the \(t\)th agent are denoted by \(\mathcal{A}_{t}\), \(\mathcal{I}_{t}\) and \(p_{t}\), respectively. The utility of agent \(a\) for item \(i\) at time step \(t\) is then defined as \(u_{a,t}(i)\coloneqq v_{a}(i)-p_{t}(i)\). The next agent always chooses an item that maximizes her utility. After the last buyer has left, the pricing scheme terminates and results in pricing vectors \(\mathbf{p}=(p_{1},\ldots,p_{n})\) and an allocation \(\mathbf{X}=(x_{1},\ldots,x_{n})\), where \(p_{t}\) is the price vector available at the arrival of the \(t\)th agent and \(x_{t}\) is the item allocated to her. Note that \(x_{t}\) might be an empty set if the utility of the agent is non-positive for each item in \(\mathcal{I}_{t}\). We call a dynamic pricing scheme _optimal_ if the final allocation maximizes the objective, that is, the social welfare or the revenue, irrespective of the order in which the agents arrived. In what follows, we define different variants of the model. Depending on whether ties between items are broken by the seller or the agents, we distinguish two cases: (C1) _Seller-chooses._ If there are several items maximizing the utility of the current agent, then the seller decides which one to allocate to her. (C2) _Agent-chooses._ If there are several items maximizing the utility of the current agent, then she decides which one to take. In terms of finding an optimal pricing, problem (C1) is easier.
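To make the arrival process concrete, the following minimal sketch (in Python; the function name `simulate_dynamic_pricing`, the toy market and the convention that an agent buys whenever her best utility is non-negative are our own illustrative choices, not prescriptions of the model) simulates one run of a dynamic pricing scheme: before each arrival the seller posts prices through a user-supplied rule, and the arriving agent takes a remaining item of maximum utility.

```python
# Minimal sketch of one run of a dynamic pricing scheme in a unit-demand market.
# All names are illustrative; ties are broken by the iteration order of max(),
# i.e. by one particular seller-chooses rule in the sense of (C1).

def simulate_dynamic_pricing(valuations, price_rule, order):
    """valuations: dict agent -> dict item -> value v_a(i);
    price_rule: function (remaining_agents, remaining_items, t) -> dict item -> price p_t;
    order: list of agents, i.e. the arrival sequence given by sigma."""
    remaining_items = set(next(iter(valuations.values())).keys())
    remaining_agents = list(order)
    allocation, welfare, revenue = {}, 0.0, 0.0
    for t, agent in enumerate(order, start=1):
        prices = price_rule(remaining_agents, remaining_items, t)   # prices p_t posted before arrival t
        remaining_agents.remove(agent)
        allocation[agent] = None
        if not remaining_items:
            continue
        # the agent picks a remaining item maximizing u_{a,t}(i) = v_a(i) - p_t(i)
        best = max(remaining_items, key=lambda i: valuations[agent][i] - prices[i])
        if valuations[agent][best] - prices[best] < 0:              # every remaining item has negative utility
            continue
        allocation[agent] = best
        welfare += valuations[agent][best]
        revenue += prices[best]
        remaining_items.remove(best)
    return allocation, welfare, revenue


# Example run with static prices on a toy market with two agents and two items.
vals = {"a1": {"i1": 3.0, "i2": 1.0}, "a2": {"i1": 2.0, "i2": 2.0}}
static_prices = lambda agents, items, t: {"i1": 1.5, "i2": 0.5}
print(simulate_dynamic_pricing(vals, static_prices, ["a2", "a1"]))  # ({'a2': 'i2', 'a1': 'i1'}, 5.0, 2.0)
```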
Indeed, problem (C1) is easier because, given an optimal pricing for (C2), the seller can always decide to allocate the item that was chosen by the agent. Previous works generally assumed that agents arrive in an unspecified order. Besides this, we consider two further variants based on the control and information of the arrival process: 1. _Unspecified._ The agents arrive in a fixed order that the seller has no information on. 2. _Predetermined._ The agents arrive in a fixed order that the seller knows in advance. 3. _Alterable._ The order of the agents is determined by the seller. Our model differs from earlier ones mainly in that we seek optimal pricing schemes under fairness constraints. In the static setting, a pair of pricing \(p\) and allocation \(\mathbf{x}\) is _envy-free_ if \(x_{a}\in\arg\max\{u_{a}(i)\mid i\in\mathcal{I}\}\) holds for each agent \(a\in\mathcal{A}\). The dynamic setting naturally suggests variants in which envy-freeness is defined over a subset of time steps. Let \(T_{a}\subseteq[n]\) be a subset of time steps for each agent \(a\in\mathcal{A}\). Then price vectors \(\mathbf{p}=(p_{1},\ldots,p_{n})\) and allocation \(\mathbf{X}=(x_{1},\ldots,x_{n})\) form an envy-free allocation if \(x_{a}\in\arg\max\{u_{a,t}(i)\mid t\in T_{a},i\in\mathcal{I}_{t}\}\) for each agent \(a\in\mathcal{A}\). We propose four possible notions of envy-freeness of different strength depending on the time period over which agents compare themselves to others: (F1) _Strong envy-freeness._ Agents consider prices for the whole time horizon, that is, \(T_{a}=\{1,\ldots,n\}\) for \(a\in\mathcal{A}\). (F2) _Ex-post envy-freeness._ Agents consider prices available after and at their arrival, that is, \(T_{a}=\{\sigma(a),\ldots,n\}\) for \(a\in\mathcal{A}\). (F3) _Ex-ante envy-freeness._ Agents consider prices available before and at their arrival, that is, \(T_{a}=\{1,\ldots,\sigma(a)\}\) for \(a\in\mathcal{A}\). (F4) _Weak envy-freeness._ Agents consider prices at their arrival, that is, \(T_{a}=\{\sigma(a)\}\) for \(a\in\mathcal{A}\). Using this terminology, the optimal dynamic pricing schemes discussed in [3, 4, 5, 8, 15] provide weakly envy-free solutions. It is worth mentioning that, though at first sight they might seem to be symmetric, the ex-post and ex-ante cases turn out to behave quite differently. As for the _objective function_, we either consider the _social welfare_ \(W(\mathbf{X})=\sum_{a\in\mathcal{A}}v_{a}(x_{a})\) or the _revenue_ of all sold items \(R(\mathbf{p},\mathbf{X})=\sum_{a\in\mathcal{A}}p_{\sigma(a)}(x_{a})\). These variants and the results presented in the paper are summarized in Table 1. The results are split horizontally by the type of envy-freeness considered, while the columns are indexed by the type of the ordering of the agents. Algorithmic results hold irrespective of how agents break ties, while hardness results hold even if ties are broken by the seller. It is worth noting that the \(O(\log(n))\)-approximation algorithm of Guruswami et al. [12] extends to all variants of envy-free pricing where the objective is to maximize the revenue. **Remark 1**.: In strongly envy-free pricing models, optimizing with respect to social welfare or revenue may lead to non-deterministic solutions when ties are broken by agents. In such cases, 'optimality' of a pricing scheme is not well-defined. This is well-illustrated by the classic example of Cohen-Addad et al.
[8] with three items \(i_{1},i_{2},i_{3}\) and three agents \(a_{1},a_{2},a_{3}\) having valuations \(v_{a_{j}}(i_{j})=v_{a_{j}}(i_{j+1})=1\) and \(v_{a_{j}}(i_{j+2})=0\) for \(j\in[3]\), where indices are meant in a cyclic order. Since each item has the same value for two of the agents, strong envy-freeness implies that the seller should set the prices uniformly and cannot update them. Assume that \(p(i_{1})=p(i_{2})=p(i_{3})=1\) and that agent \(a_{3}\) arrives first and chooses item \(i_{3}\). If \(a_{1}\) arrives next, she is indifferent between \(i_{1}\) and \(i_{2}\), but the achieved social welfare or revenue heavily depends on her decision. To overcome these difficulties, one could consider different objective functions, such as worst-case or average social welfare or revenue. However, this lies beyond the scope of this paper and we postpone these questions as subjects of future research. Weighted coverings. A unit-demand combinatorial market can be represented by a complete edge-weighted bipartite graph \(G=(\mathcal{I},\mathcal{A};E)\), where vertex classes \(\mathcal{I}\) and \(\mathcal{A}\) correspond to the sets of items and agents, respectively. For any item \(i\in\mathcal{I}\) and agent \(a\in\mathcal{A}\), the weight of the edge \(ia\) is \(w(ia)\coloneqq v_{a}(i)\). Then there is a one-to-one correspondence between allocations maximizing social welfare and maximum weight matchings of \(G\). Even more, the maximum weight of a matching is clearly an upper bound on the maximum revenue achievable through any pricing mechanism. These observations motivate investigating dynamic pricing schemes through the lens of maximum weight matchings. Let us denote the vertex set of \(G\) by \(V\coloneqq\mathcal{I}\cup\mathcal{A}\). A function \(\pi\colon V\to\mathbb{R}\) is a _weighted covering_ if \(\pi(i)+\pi(a)\geq w(ia)\) holds for every edge \(ia\in E\). The _total value_ of the covering is \(\pi(V)=\sum_{v\in V}\pi(v)\). A weighted covering of minimum total value is called _optimal_. A cornerstone result of graph optimization is due to Egervary [9], who provided a min-max characterization for the maximum weight of a matching in a bipartite graph. **Theorem 2** (Egervary [9]).: _Let \(G=(\mathcal{I},\mathcal{A};E)\) be a bipartite graph and \(w\colon E\to\mathbb{R}\) be a weight function on the set of edges. Then the maximum weight of a matching is equal to the minimum total value of a non-negative weighted covering \(\pi\) of \(w\)._ Given a weighted covering \(\pi\), item \(i\in\mathcal{I}\) and agent \(a\in\mathcal{A}\), the edge \(ai\) is called _tight with respect to \(\pi\)_ if \(\pi(i)+\pi(a)=w(ia)\). The _subgraph of tight edges_ is then denoted by \(G_{\pi}=(\mathcal{I},\mathcal{A};E_{\pi})\). We call an edge \(ia\in E\) _legal_ if there exists a maximum weight matching containing it, and in such a case we say that \(i\) is _legal_ for \(a\). It is known that legal edges are always tight with respect to any optimal weighted covering, while the converse does not always hold, that is, a tight edge is not necessarily legal. However, [3, Lemma 5] showed that a careful choice of \(\pi\) ensures that the sets of tight and legal edges coincide. **Lemma 3** (Berczi, Berczi-Kovacs and Szogi [3]).: _The optimal \(\pi\) attaining the minimum in Theorem 2 can be chosen such that_ (a) _an edge_ \(ai\) _is tight with respect to_ \(\pi\) _if and only if it is legal, and_ (b)
\(\pi(v)=0\) _for some_ \(v\in V\) _if and only if there exists a maximum weight matching_ \(M\) _with_ \(d_{M}(v)=0\)_._ _Furthermore, such a \(\pi\) can be determined in polynomial time._ Finally, we will use the following technical lemma, see [3, Lemma 1]. **Lemma 4**.: _Given a bipartite graph \(G=(\mathcal{I},\mathcal{A};E)\) corresponding to a unit-demand combinatorial market, we may assume that all items are covered by every maximum weight matching of \(G\)._ ## 3 Maximizing the social welfare Since a Walrasian equilibrium maximizes social welfare and ensures envy-freeness at the same time, the existence of optimal dynamic pricing schemes under fairness constraints is settled when ties are broken by the seller. The seminal paper of Cohen-Addad et al. [8] initiated the study of the agent-chooses case, and provided an algorithm for determining a weakly envy-free solution in unit-demand markets. A different proof of the same result was later given by Berczi, Berczi-Kovacs and Szogi [3]. Their algorithm starts with a minimum weighted covering provided by Lemma 3 in the edge-weighted bipartite graph representing the market, and sets the initial prices according to the covering values. As a result, the first agent \(a\) chooses an item \(i\) such that \(ai\) is tight, hence \(i\) is legal for \(a\). Based on this, it may seem that, keeping the same prices throughout, the resulting allocation will eventually be a maximum weight matching. However, after item \(i\) is taken, an edge that was legal before might become non-legal. To overcome this, the weighted covering needs to be updated in the remaining graph at each time step, which causes the price of an item to fluctuate over time. For that reason, the algorithm does not extend to the ex-post and ex-ante envy-free cases. To prevent the fluctuation of prices, we do not recompute the weighted covering at each time step from scratch. Instead, we fix a single weighted covering at the very beginning, and then we always slightly modify it to control the agents' choices in such a way that (A) no matter which agent arrives next, if she is covered by every maximum weight matching in the current graph then she picks an item that is legal for her, otherwise she either picks an item that is legal for her or does not take an item at all, and (B) the price changes from time step \(t-1\) to \(t\) are limited to non-increases in the ex-post and to non-decreases in the ex-ante case. The first property implies that the final allocation corresponds to a maximum weight matching of \(G\), hence it maximizes social welfare. The second property ensures that the resulting allocation meets the requirements of ex-post or ex-ante envy-freeness. ### 3.1 Preparations Consider the edge-weighted bipartite graph \(G=(\mathcal{I},\mathcal{A};E)\) representing the market. By Lemma 4, we may assume that all items are covered by every maximum weight matching of \(G\). Take a weighted covering \(\pi\) provided by Lemma 3. Throughout this section, tightness of an edge is always meant with respect to \(\pi\). Recall that \(G_{\pi}\) denotes the subgraph of tight edges. At time step \(t\), the sets of remaining agents and items are denoted by \(\mathcal{A}_{t}\) and \(\mathcal{I}_{t}\), respectively. We denote the subgraph of \(G_{\pi}\) induced by the vertices \(\mathcal{I}_{t}\cup\mathcal{A}_{t}\) by \(G_{t}=(V_{t},E_{t})\).
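As a brief computational aside, the following sketch (assuming Python with numpy and scipy; the function names and the toy weights are ours and purely illustrative) computes the two sides of Theorem 2 on a small market: a maximum weight matching via the Hungarian method, and a minimum weighted covering by solving the covering linear program directly. Note that a generic optimal covering obtained this way certifies optimality of the matching, but it need not enjoy the additional properties guaranteed by Lemma 3.

```python
# Sketch: maximum weight matching versus minimum weighted covering (Theorem 2).
import numpy as np
from scipy.optimize import linear_sum_assignment, linprog

def max_weight_matching(w):
    """w[a, i] = v_a(i) >= 0; returns the matching value and the matched (agent, item) pairs."""
    rows, cols = linear_sum_assignment(w, maximize=True)
    pairs = [(a, i) for a, i in zip(rows, cols) if w[a, i] > 0]   # drop zero-weight assignments
    return sum(w[a, i] for a, i in pairs), pairs

def min_weighted_covering(w):
    """Minimize sum(pi) subject to pi(a) + pi(i) >= w[a, i] and pi >= 0.
    Variables are ordered as [agents..., items...]."""
    n_a, n_i = w.shape
    A_ub, b_ub = [], []
    for a in range(n_a):
        for i in range(n_i):
            row = np.zeros(n_a + n_i)
            row[a], row[n_a + i] = -1.0, -1.0        # -(pi(a) + pi(i)) <= -w(a, i)
            A_ub.append(row)
            b_ub.append(-w[a, i])
    res = linprog(c=np.ones(n_a + n_i), A_ub=np.array(A_ub), b_ub=np.array(b_ub))
    return res.x[:n_a], res.x[n_a:], res.fun

w = np.array([[3.0, 1.0],
              [2.0, 2.0]])                           # rows are agents, columns are items
value, matching = max_weight_matching(w)
pi_agents, pi_items, total = min_weighted_covering(w)
print(matching, value, round(total, 6))              # the two optimal values coincide, as in Theorem 2
```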
For a maximum weight matching \(M_{t}\) of \(G_{t}\), let \(x^{t}_{a}\) denote the item to which \(a\) is matched in \(M_{t}\) if such an item exists, otherwise define \(x^{t}_{a}\) to be the empty set. Note that the edges \(ax^{t}_{a}\) are obviously legal in \(G_{t}\). Given such a matching \(M_{t}\), we construct a directed graph \(D_{t}\) as follows. We add another copy of every edge in \(M_{t}\) to \(G_{t}\); these copies are referred to as dummy edges. Then we orient the original copies in \(M_{t}\) from \(\mathcal{I}_{t}\) to \(\mathcal{A}_{t}\), and orient all the remaining edges - including the dummy ones - from \(\mathcal{A}_{t}\) to \(\mathcal{I}_{t}\). We denote the strongly connected components of the resulting directed graph by \(C^{t}_{1},\ldots,C^{t}_{q_{t}}\), indexed according to a topological ordering, see Figure 1 for an example. The following technical claims will be used later. **Claim 5**.: _Let \(a\in\mathcal{A}_{t}\) and \(i\in\mathcal{I}_{t}\) be such that they are in the same strongly connected component of \(D_{t}\) and \(ai\) is tight. Then \(i\) is legal for \(a\) in \(G_{t}\)._ Proof.: If the edge is oriented from \(i\) to \(a\), then \(ia\in M_{t}\) and the statement clearly holds. Otherwise, the edge is oriented from \(a\) to \(i\). Since every arc of a strongly connected digraph is contained in a directed cycle, it suffices to show that any directed cycle \(C\) consists only of legal edges. This follows from the fact that all the edges of \(G_{t}\) are tight, hence \(M_{t}\triangle C\) is also a maximum weight matching of \(G_{t}\), implying that it consists of legal edges. In both the ex-post and ex-ante cases, our proof builds on the following idea. It is not difficult to check that if one sets the price of each item to its \(\pi\) value, then agents strictly prefer items to which they are connected through a tight edge over items for which the corresponding edge is not tight, see Claim 6. However, as time passes, a tight edge is no longer necessarily legal. In order to prevent the next agent from choosing such an edge, we will slightly change the prices in a case-specific way. To do this, let \(\delta>0\) be a constant for which \[\delta<\frac{1}{2}\min\bigl{\{}\min\{\pi(a)+\pi(i)-v_{a}(i)\mid ia\in E\text{ is not tight}\},\ \min\{\pi(i)\mid i\in\mathcal{I}\}\bigr{\}}.\] Note that such a \(\delta\) exists by Lemmas 3(b) and 4. We also choose a small constant \(0<\varepsilon<\delta/(n2^{n})\) that will be used later on. The next claim shows that tight edges lead to greater utility than non-tight ones. **Claim 6**.: _Let \(a\in\mathcal{A}_{t}\) and \(i,i^{\prime}\in\mathcal{I}_{t}\) be such that \(ai\) is tight, \(ai^{\prime}\) is not tight. If \(p_{t}(i)\leq\pi(i)+\delta\) and \(p_{t}(i^{\prime})\geq\pi(i^{\prime})-\delta\), then \(a\) strictly prefers \(i\) over \(i^{\prime}\)._ Proof.: Assume that \(p_{t}(i)\leq\pi(i)+\delta\) and \(p_{t}(i^{\prime})\geq\pi(i^{\prime})-\delta\). Since \(ai\) is tight and \(ai^{\prime}\) is not tight, we have \(v_{a}(i)=\pi(a)+\pi(i)\) and \(v_{a}(i^{\prime})+2\delta<\pi(a)+\pi(i^{\prime})\) by the definition of \(\delta\). Thus we get \[u_{a,t}(i^{\prime})\leq v_{a}(i^{\prime})-(\pi(i^{\prime})-\delta)<\pi(a)-\delta=v_{a}(i)-(\pi(i)+\delta)\leq u_{a,t}(i).\] Hence the utility of \(a\) for item \(i\) is strictly larger than for item \(i^{\prime}\), concluding the proof. Finally, the following two technical claims follow easily from the definitions.
**Claim 7**.: _Let \(a\in\mathcal{A}_{t}\) and \(i\in\mathcal{I}_{t}\) be such that \(ai\) is not tight. If \(p_{t}(i)\geq\pi(i)-\delta\), then \(u_{a,t}(i)<\pi(a)\)._ Proof.: As \(ai\) is not tight, the definition of \(\delta\) implies \(u_{a,t}(i)=v_{a}(i)-p_{t}(i)<(\pi(a)+\pi(i)-\delta)-(\pi(i)-\delta)=\pi(a)\) as stated. **Claim 8**.: _Let \(a\in\mathcal{A}_{t}\) and \(i\in\mathcal{I}_{t}\) be such that \(ai\) is tight. If \(p_{t}(i)=\pi(i)+\omega\) for some real number \(\omega\), then \(u_{a,t}(i)=\pi(a)-\omega\)._ Proof.: As \(ai\) is tight, we get \(u_{a,t}(i)=v_{a}(i)-p_{t}(i)=(\pi(a)+\pi(i))-(\pi(i)+\omega)=\pi(a)-\omega\) as stated. Figure 1: Illustration of the constructions for the ex-post and ex-ante cases. Circles and squares correspond to agents and items, respectively. Thick edges denote a maximum weight matching \(M_{t}\), while \(C_{1}^{t},\dots,C_{5}^{t}\) are the strongly connected components of the directed graph \(D_{t}\). The set of agents not matched in \(M_{t}\) is denoted by \(R_{t}\coloneqq\{a\in\mathcal{A}_{t}\mid x_{a}^{t}=\emptyset\}\). Note that \(\pi(a)=0\) for each \(a\in R_{1}\) by Lemma 3(b). In fact, we will maintain this property for each set \(R_{t}\) throughout the pricing process. From this point on, we discuss the ex-post and ex-ante cases separately, as the prices must be set differently in the two cases. ### 3.2 Ex-post envy-free pricing The algorithm in [3] for the unit-demand case updates the prices at each time step using an arbitrary weighted covering in the remaining graph. In order to adapt a similar idea for the ex-post case, we do this in a controlled manner which ensures that prices do not increase over time. **Theorem 9**.: _There exists a welfare-maximizing dynamic pricing scheme for the unit-demand ex-post envy-free pricing problem even if ties are broken by the agents. Furthermore, the optimal prices can be determined in polynomial time._ Proof.: Throughout the algorithm, we maintain a maximum weight matching \(M_{t}\) in the graph \(G_{t}\) of remaining agents and items. Recall that \(R_{t}\) denotes the set of agents not covered by the matching \(M_{t}\). Initially, \(\pi(a)=0\) for \(a\in R_{1}\) by Lemma 3(b), and we will maintain this property for each \(a\in R_{t}\) throughout. We describe a general phase \(t\in[n]\) of the pricing process. We define the set \[S_{t}\coloneqq\{v\in V_{t}\mid\text{there exists a directed path to $v$ from an agent $a\in R_{t}$}\},\] see Figure 1(b) for an example. In particular, \(R_{t}\subseteq S_{t}\). Note that either \(C_{j}^{t}\subseteq S_{t}\) or \(C_{j}^{t}\cap S_{t}=\emptyset\) for each strongly connected component \(C_{j}^{t}\) of \(D_{t}\). The prices are then updated as follows: \[p_{t}(i)=\begin{cases}\pi(i)+\delta/2^{t}+j\varepsilon&\text{if $i\in C_{j}^{t}$ such that $C_{j}^{t}\subseteq S_{t}$},\\ \pi(i)-\delta(1-1/2^{t})+j\varepsilon&\text{if $i\in C_{j}^{t}$ such that $C_{j}^{t}\cap S_{t}=\emptyset$}.\end{cases}\] By the choice of \(\delta\) and \(\varepsilon\), the prices are non-negative. Let \(a\in\mathcal{A}_{t}\) denote the agent who arrives at time step \(t\). The next three claims together show that the prices satisfy property (A). **Claim 10**.: _If \(a\in V_{t}\setminus S_{t}\), or \(a\in S_{t}\setminus R_{t}\) and \(\pi(a)>0\), then she chooses an item that is legal for her in \(G_{t}\)._ Proof.: Since \(a\in V_{t}\setminus R_{t}\), the matching \(M_{t}\) covers \(a\) and hence \(x_{a}\) is non-empty. Let \(i\in\mathcal{I}_{t}\) be an arbitrary item distinct from \(x_{a}\).
If \(ai\) is not tight, then the agent strictly prefers \(x_{a}\) over \(i\) by Claim 6. If \(ai\) is tight but \(i\) is in a different strongly connected component than \(a\), then the index of the component of \(i\) is strictly larger than that of \(a\). Furthermore, by the definition of the set \(S_{t}\), either both of them are in \(S_{t}\) or \(a\notin S_{t}\) and \(i\in S_{t}\). These together imply that \(p_{t}(x_{a})-\pi(x_{a})<p_{t}(i)-\pi(i)\), hence the agent strictly prefers \(x_{a}\) over \(i\) by Claim 8. Finally, if \(ai\) is tight and \(i\) is in the same strongly connected component as \(a\), then \(ai\) is legal by Claim 5. The utility of \(a\) is non-negative for such items by the choice of \(\delta\) and the assumptions on \(a\), hence the claim follows. **Claim 11**.: _If \(a\in S_{t}\setminus R_{t}\) and \(\pi(a)=0\), then she takes no item at all and there exists a maximum weight matching in \(G_{t}\) that does not cover \(a\)._ Proof.: Let \(i\in\mathcal{I}_{t}\) be an arbitrary item. If \(ai\) is not tight, then the utility of \(a\) for \(i\) is negative by Claim 7. If \(ai\) is tight, then \(i\in S_{t}\) by the definition of the set \(S_{t}\). This implies \(p_{t}(i)>\pi(i)\) by the definition of the prices, hence the utility of \(a\) for \(i\) is negative by Claim 8. However, in such cases there exists a directed path \(P\) to \(a\) from an agent \(a^{\prime}\in R_{t}\). The fact that \(P\) consist of tight edges together with \(\pi(a)=\pi(a^{\prime})=0\) imply that \(M_{t}\triangle P\) is also a maximum weight matching in \(G_{t}\). **Claim 12**.: _If \(a\in R_{t}\), then she takes no item at all._ Proof.: By assumption, we have \(\pi(a)=0\). Let \(i\in\mathcal{I}_{t}\) be an arbitrary item. If \(ai\) is not tight, then the utility of \(a\) for \(i\) is negative by Claim 7. If \(ai\) is tight, then \(i\in S_{t}\) by the definition of the set \(S_{t}\). This implies \(p_{t}(i)>\pi(i)\) by the definition of the prices, hence the utility of \(a\) for \(i\) is negative by Claim 8. The matching \(M_{t}\), and implicitly the set \(R_{t}\), is updated as follows. If \(a\in V_{t}\setminus S_{t}\), or \(a\in S_{t}\setminus R_{t}\) and \(\pi(a)>0\), then \(a\) takes an item \(i\) from her strongly connected component. If \(ai\in M_{t}\), then set \(M_{t+1}\coloneqq M_{t}\setminus\{ai\}\). Otherwise, let \(C\) be a directed cycle of \(D_{t}\) containing the arc \(ai\), and set \(M_{t+1}\coloneqq(M_{t}\triangle C)\setminus\{ai\}\). In this case, we have \(R_{t+1}=R_{t}\). If \(a\in S_{t}\) and \(\pi(a)=0\), then consider the directed path \(P\) to \(a\) from an agent \(a^{\prime}\in R_{t}\), and set \(M_{t+1}\coloneqq(M_{t}\triangle P)\setminus\{ai\}\), implying \(R_{t+1}=R_{t}\setminus\{a^{\prime}\}\). Finally, if \(a\in R_{t}\), then \(M_{t+1}\coloneqq M_{t}\), hence \(R_{t+1}=R_{t}\). It remains to verify that the pricing scheme satisfies property (B), which is done by the following statement. **Claim 13**.: _The price of any item does not increase over time._ Proof.: At each phase of the algorithm, the price of an item \(i\) is obtained by shifting its original weighted covering value. Though the structure of the directed graph \(D_{t}\) and therefore the index of the strongly connected component containing \(i\) might change from phase to phase, the choice of \(\varepsilon\) ensures that \(\pi(i)+\delta/2^{t}>\pi(i)+\delta/2^{t+1}+n\varepsilon\) and \(\pi(i)-\delta(1-1/2^{t})>\pi(i)-\delta(1-1/2^{t+1})+n\varepsilon\). 
Hence, in order to verify the claim, it suffices to show that \(S_{t+1}\subseteq S_{t}\). Note that \(M_{t+1}\) is chosen in such a way that no arc of \(D_{t+1}\) leaves the set \(S_{t}\cap\mathcal{I}_{t+1}\). Indeed, \(D_{t+1}\) is obtained from \(D_{t}\) by possibly reorienting a directed cycle or a directed path that lies completely in \(S_{t}\), and then deleting an agent and possibly an item. These steps do not result in a directed arc leaving \(S_{t}\cap\mathcal{I}_{t+1}\), implying \(S_{t+1}\subseteq S_{t}\). By Claims 10-12, if the next agent is covered by the matching \(M_{t}\) then she either chooses an item that is legal for her, or \(M_{t+1}\) is also a maximum weight matching of \(G_{t}\). Otherwise, she does not take any of the items. This implies that the resulting allocation corresponds to a maximum weight matching of \(G\) and hence maximizes social welfare. By Claim 13, the prices do not increase over time. This implies that the solution is ex-post envy-free, concluding the proof of the theorem. ### 3.3 Ex-ante envy-free pricing To give an algorithm for the ex-ante case, we adapt the proof of Theorem 9. However, to ensure that the final prices and allocation form an ex-ante envy-free solution, the prices have to be updated differently. **Theorem 14**.: _There exists a welfare-maximizing dynamic pricing scheme for the unit-demand ex-ante envy-free pricing problem even if ties are broken by the agents. Furthermore, the optimal prices can be determined in polynomial time._ Proof.: Similarly to the ex-post case, we maintain a maximum weight matching \(M_{t}\) in the graph \(G_{t}\) of remaining agents and items, and denote by \(R_{t}\) the set of agents not covered by the matching \(M_{t}\). Initially, \(\pi(a)=0\) for \(a\in R_{1}\) by Lemma 3(b), and we will maintain this property for each \(a\in R_{t}\) throughout. We describe a general phase \(t\in[n]\) of the pricing process. We define the set \[S_{t}\coloneqq\{v\in V_{t}\mid\text{there exists a directed path from $v$ to an agent $a\in\mathcal{A}_{t}\setminus R_{t}$ with $\pi(a)=0$}\},\] see Figure 1(c) for an example. Note that either \(C_{j}^{t}\subseteq S_{t}\) or \(C_{j}^{t}\cap S_{t}=\emptyset\) for each strongly connected component \(C_{j}^{t}\) of \(D_{t}\). The prices are then updated as follows: \[p_{t}(i)=\begin{cases}\pi(i)-\delta/2^{t}+j\varepsilon&\text{if $i\in C_{j}^{t}$ such that $C_{j}^{t}\subseteq S_{t}$},\\ \pi(i)+\delta(1-1/2^{t})+j\varepsilon&\text{if $i\in C_{j}^{t}$ such that $C_{j}^{t}\cap S_{t}=\emptyset$}.\end{cases}\] By the choice of \(\delta\) and \(\varepsilon\), the prices are non-negative. Let \(a\in\mathcal{A}_{t}\) denote the agent who arrives at time step \(t\). The next three claims together show that the prices satisfy property (A). **Claim 15**.: _If \(a\in V_{t}\setminus R_{t}\), then she chooses an item that is legal for her in \(G_{t}\)._ Proof.: Since \(a\in V_{t}\setminus R_{t}\), the matching \(M_{t}\) covers \(a\) and hence \(x_{a}\) is non-empty. Let \(i\in\mathcal{I}_{t}\) be an arbitrary item distinct from \(x_{a}\). If \(ai\) is not tight, then the agent strictly prefers \(x_{a}\) over \(i\) by Claim 6. If \(ai\) is tight but \(i\) is in a different strongly connected component than \(a\), then the index of the component of \(i\) is strictly larger than that of \(a\). Furthermore, by the definition of the set \(S_{t}\), either both or none of them are contained in \(S_{t}\).
These together imply that \(p_{t}(x_{a})-\pi(x_{a})<p_{t}(i)-\pi(i)\), hence the agent strictly prefers \(x_{a}\) over \(i\) by Claim 8. Finally, if \(ai\) is tight and \(i\) is in the same strongly connected component as \(a\), then \(ai\) is legal by Claim 5. The utility of \(a\) is non-negative for such items by the choice of \(\delta\), hence the claim follows. **Claim 16**.: _If \(a\in R_{t}\setminus S_{t}\), then she takes no item at all._ Proof.: By assumption, we have \(\pi(a)=0\). Let \(i\in\mathcal{I}_{t}\) be an arbitrary item. If \(ai\) is not tight, then the utility of \(a\) for \(i\) is negative by Claim 7. If \(ai\) is tight, then \(i\notin S_{t}\) by the definition of the set \(S_{t}\). This implies \(p_{t}(i)>\pi(i)\) by the definition of the prices, hence the utility of \(a\) for \(i\) is negative by Claim 8. **Claim 17**.: _If \(a\in R_{t}\cap S_{t}\), then she either chooses an item that is legal for her in \(G_{t}\), or takes no item at all._ Proof.: By assumption, we have \(\pi(a)=0\). Let \(i\in\mathcal{I}_{t}\) be an arbitrary item. If \(ai\) is not tight, then the utility of \(a\) for \(i\) is negative by Claim 7. If \(ai\) is tight but \(i\notin S_{t}\), then \(p_{t}(i)>\pi(i)\) by the definition of the prices, hence the utility of \(a\) for \(i\) is negative by Claim 8. If \(ai\) is tight and \(i\in S_{t}\), then there exists a directed path \(P\) from \(a\) to an agent \(a^{\prime}\) in \(D_{t}\) which is covered by \(M_{t}\) and \(\pi(a^{\prime})=0\). The fact that \(P\) consists of tight edges together with \(\pi(a)=\pi(a^{\prime})=0\) imply that \(M_{t}\triangle P\) is also a maximum weight matching in \(G_{t}\), therefore \(i\) is legal for \(a\). The matching \(M_{t}\), and implicitly the set \(R_{t}\), is updated as follows. If \(a\in V_{t}\setminus R_{t}\), then \(a\) takes an item \(i\) from her strongly connected component. If \(ai\in M_{t}\), then set \(M_{t+1}\coloneqq M_{t}\setminus\{ai\}\). Otherwise, let \(C\) be a directed cycle of \(D_{t}\) containing the arc \(ai\), and set \(M_{t+1}\coloneqq(M_{t}\triangle C)\setminus\{ai\}\). In this case, we have \(R_{t+1}=R_{t}\). If \(a\in R_{t}\setminus S_{t}\) or \(a\in R_{t}\cap S_{t}\) but \(a\) takes no item, then set \(M_{t+1}\coloneqq M_{t}\), implying \(R_{t+1}=R_{t}\setminus\{a\}\). Finally, if \(a\in R_{t}\cap S_{t}\) and \(a\) takes an item \(i\), then consider the directed path \(P\) from \(a\) to an agent \(a^{\prime}\) which is covered by \(M_{t}\) and \(\pi(a^{\prime})=0\), and set \(M_{t+1}\coloneqq(M_{t}\triangle P)\setminus\{ai\}\). Since we have \(R_{t+1}=R_{t}\setminus\{a\}\cup\{a^{\prime}\}\) and \(\pi(a)=0\), the property that each agent in \(R_{t+1}\) has \(\pi\) value \(0\) holds. It remains to verify that the pricing scheme satisfies property (B), which is done by the following statement. **Claim 18**.: _The price of any item does not decrease over time._ Proof.: At each phase of the algorithm, the price of an item \(i\) is obtained by shifting its original weighted covering value. Though the structure of the directed graph \(D_{t}\) and therefore the index of the strongly connected component containing \(i\) might change from phase to phase, the choice of \(\varepsilon\) ensures that \(\pi(i)-\delta/2^{t}+n\varepsilon<\pi(i)-\delta/2^{t+1}\) and \(\pi(i)+\delta(1-1/2^{t})+n\varepsilon<\pi(i)+\delta(1-1/2^{t+1})\). Hence, in order to verify the claim, it suffices to show that \(S_{t+1}\subseteq S_{t}\). 
Note that \(M_{t+1}\) is chosen in such a way that no arc of \(D_{t+1}\) enters the set \(S_{t}\cap\mathcal{I}_{t+1}\). Indeed, \(D_{t+1}\) is obtained from \(D_{t}\) by possibly reorienting a directed cycle or a directed path that lies completely in \(S_{t}\), and then deleting an agent and possibly an item. These steps do not result in a directed arc entering \(S_{t}\cap\mathcal{I}_{t+1}\), implying \(S_{t+1}\subseteq S_{t}\). By Claims 15-17, if the next agent is covered by the matching \(M_{t}\) then she chooses an item that is legal for her. Otherwise, she either chooses an item that is legal for her, or does not take any of the items. This implies that the resulting allocation corresponds to a maximum weight matching of \(G\) and hence maximizes social welfare. By Claim 18, the prices do not decrease over time. This implies that the solution is ex-ante envy-free, concluding the proof of the theorem. ## 4 Maximizing the revenue When it comes to revenue maximization in the static setting, the problem is not only hard to solve but also hard to approximate within a reasonable factor, and the difficulty stems from the lack of strong upper bounds. Clearly, the maximum weight of a matching is an upper bound on the total revenue achievable through pricing mechanisms, but the gap between the optimal revenue and the maximum weight of a matching can be as large as a factor of order \(\log(n)\). Indeed, consider a market with items \(\mathcal{I}=\{i_{1},\ldots,i_{n}\}\) and agents \(\mathcal{A}=\{a_{1},\ldots,a_{n}\}\). Let the valuations be defined as \(v_{a_{j}}(i_{k})\coloneqq 1/j\) for \(1\leq j\leq n\) and \(j\leq k\leq n\), and \(0\) otherwise, see Figure 2 for an illustration. Then there is a unique maximum weight matching between agents and items that consists of the edges \(i_{j}a_{j}\) for \(1\leq j\leq n\) with total weight \(\sum_{j=1}^{n}1/j\). For any pair of envy-free static pricing and allocation, if an agent \(a_{j}\) receives an item \(i_{k}\) at some price \(p(i_{k})\), then the price of all the other items must be at least \(p(i_{k})\) to ensure that agent \(a_{j}\) is not envious. On the other hand, the price \(p(i_{k})\) cannot be greater than \(1/j\) as otherwise the utility of agent \(a_{j}\) for item \(i_{k}\) is negative. These observations together imply that the price of each item sold is at most \(1/j\), where \(j\) is the largest index for which agent \(a_{j}\) receives an item. Hence the total revenue is at most \(j\cdot 1/j=1\), leading to the stated gap of order \(\log(n)\). In what follows, we turn our attention to the revenue maximization problem in a dynamic environment with fairness constraints. ### Hardness results When the solution is required to be strongly envy-free, the dynamic setting does not make a difference compared to the static one, as shown by the following theorem. **Theorem 19**.: _Maximizing the revenue in the unit-demand strongly envy-free dynamic pricing problem is APX-hard, even if the agents' ordering is chosen and ties are broken by the seller._ Proof.: Let \(\sigma\) be an ordering of the agents, \(p_{1},\ldots,p_{n}\) be prices, and \(\mathbf{x}\) be an allocation that maximizes the revenue. Recall that \(x_{a}\) denotes the item received by agent \(a\in\mathcal{A}\) if it exists, otherwise \(x_{a}\) is the empty set. We show that there exist static prices and an envy-free allocation resulting in the same revenue. 
As the reverse always holds, that is, any envy-free allocation with respect to a static pricing can be seen as a strongly envy-free solution in the dynamic setting, this proves the theorem. We define a static pricing as follows: for each agent \(a\in\mathcal{A}\) with \(x_{a}\neq\emptyset\), set \(p(x_{a})\coloneqq p_{\sigma(a)}(x_{a})\), that is, we define the price of an allocated item to be the price at which it was sold. For the remaining items, set the price to \(+\infty\). We claim that the allocation \(\mathbf{x}\) is envy-free with respect to the static pricing \(p\), and hence has the same revenue as the dynamic solution. Indeed, this follows from the definition of strong envy-freeness, as \(x_{a}\in\arg\max\{v_{a}(i)-p_{t}(i)\mid t\in[n],i\in\mathcal{I}\}\) and \(p(x_{a})=p_{\sigma(a)}(x_{a})\) imply \(x_{a}\in\arg\max\{v_{a}(i)-p(i)\mid i\in\mathcal{I}\}\) for each \(a\in\mathcal{A}\). Unfortunately, the problem remains hard for weaker notions of envy-freeness. Our proof follows the main idea of the proof of Guruswami et al. [12] for the APX-hardness of revenue maximization. **Theorem 20**.: _Maximizing the revenue in the unit-demand ex-post and ex-ante envy-free dynamic pricing problems is APX-hard, even if the agents' ordering is known in advance and ties are broken by the seller._ Proof.: The proof is by reduction from Vertex Cover in 3-regular graphs. Given a 3-regular graph \(G=(V,E)\), Vertex Cover asks for a minimum number of vertices that includes at least one endpoint of every edge of the graph. This problem was shown to be APX-hard in [10]. Let \(G=(V,E)\) be a 3-regular graph with \(n\) vertices and \(m\) edges. We construct a pricing instance with item set \(\mathcal{I}\) and agent set \(\mathcal{A}\) consisting of \(4n\) items and \(m+n\) agents, respectively. For each vertex \(z\in V\), we add four vertex-items \(z_{1},z_{2},z_{3},z_{4}\) to \(\mathcal{I}\) and one vertex-agent to \(\mathcal{A}\) that, by abuse of notation, we also denote by \(z\). The valuation of the agent is then defined as \(v_{z}(z_{i})=2\) for \(i\in[4]\) and \(0\) for any other item. Furthermore, for each edge \(e=zw\in E\), we add an edge-agent \(e\) to \(\mathcal{A}\) with valuation \(v_{e}(z_{i})=v_{e}(w_{i})=1\) for \(i\in[4]\) and \(0\) for any other item; for an example, see Figure 3. We first consider the ex-post case. Assume that the ordering of the agents is such that edge-agents arrive first, followed by vertex-agents. We claim that for such an ordering, there exists an ex-post envy-free pricing scheme and allocation that results in a total revenue of \(m+2n-k\) if and only if there exists a vertex cover of size \(k\) in \(G\). Since \(m=3n/2\) and the minimum vertex cover has size at least \(m/3=n/2\), a constant factor gap in the size of a vertex cover translates into a constant factor gap in the optimal profit for the pricing instance, which yields the desired APX-hardness result. To see the 'if' direction, let \(C\subseteq V\) be a vertex cover of \(G\) of size \(k\). For each vertex \(z\in V\), let \(p_{t}(z_{i})\coloneqq 1\) if \(z\in C\) and \(p_{t}(z_{i})\coloneqq 2\) otherwise for \(i\in[4]\) and \(t\in[n+m]\) - note that the prices do not change over time, implying that the final solution is ex-post envy-free.

Figure 2: Illustration of the \(O(\log(n))\) gap between the optimal revenue and the maximum weight of a matching in the case of envy-free static pricing.
According to the ordering, edge-agents arrive first, and each of them takes one of the vertex-items corresponding to one of its endpoints that lies in \(C\), for a price of \(1\). Then vertex-agents arrive, and take a copy of the vertex-items corresponding to them for a price of \(2\). Note that, since the graph is \(3\)-regular and four vertex-items were added for each vertex, each agent receives an item, and hence the total revenue is \(m+2n-k\). To see the 'only if' direction, consider dynamic prices \(p_{1},\ldots,p_{n+m}\) and an ex-post envy-free allocation \(\mathbf{x}\) that maximizes the revenue for the ordering considered. It is not difficult to check that the pricing vectors can be assumed to take values \(1\) and \(2\) only. Let \(e=zw\) be the first edge-agent, if one exists, who does not get any item. That is, all the remaining vertex-items from \(z_{1},\ldots,z_{4},w_{1},\ldots,w_{4}\) are priced at \(2\) upon the arrival of \(e\). If we reduce the price of one of these items, say \(z_{i}\), then we have to do the same modification for all the remaining vertex-items corresponding to \(z\) and for all the remaining time steps to ensure ex-post envy-freeness from the point of view of vertex-agent \(z\). This way, we lose a revenue of \(1\) coming from vertex-agent \(z\), but we gain this back by making a profit of \(1\) from edge-agent \(e\). By this observation, we may assume that for each vertex \(z\in V\) and time step \(t\in[n+m]\), either \(p_{t}(z_{i})=1\) for \(i\in[4]\) or \(p_{t}(z_{i})=2\) for \(i\in[4]\), and that vertices belonging to the former class form a vertex cover of \(G\). This implies that the revenue is at most \(m+2n-k\). If the ordering of the agents is such that vertex-agents arrive first, followed by edge-agents, then a similar argument shows the hardness of the ex-ante case.

Figure 3: A \(3\)-regular graph \(G\) with \(n=4\) and \(m=6\), where grey vertices form a minimum vertex-cover of size \(k=3\). In the corresponding pricing instance, edges incident to vertex-agents and edge-agents have weights \(2\) and \(1\), respectively. When edge-agents arrive first, then setting the prices to \(1\) on grey elements, \(2\) otherwise, and allocating the items according to thick edges results in an ex-post envy-free solution with revenue \(11=m+2n-k\).

### Algorithms

In the previous section, we showed that if the seller has no control over the order in which agents arrive, then even the seemingly more flexible framework of dynamic pricing is not enough for maximizing the revenue in the ex-post and ex-ante envy-free settings. On the positive side, if the ordering can be chosen by the seller, then an optimal pricing scheme can be determined efficiently. In what follows, we give polynomial-time algorithms for determining an ordering of the agents together with the price vectors so that the final allocation is ex-post or ex-ante envy-free and maximizes the revenue irrespective of how ties are broken by the agents. In both cases, we compare the solution to the maximum weight of a matching in the corresponding edge-weighted bipartite graph, which is clearly an upper bound for the revenue. Note that, in the agent-chooses case, an agent can decide not to take an item with utility \(0\) for her. For keeping the description of the algorithms simple, we assume that in such cases the agent decides to take the item. 
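Both algorithms in this subsection are built from a maximum weight matching \(M\) and a weighted covering \(\pi\). These objects can be obtained with standard tools: by LP duality, an optimal dual solution of the bipartite matching LP satisfies the covering inequalities and is tight on the edges of \(M\), although the additional properties required by Lemma 3 may need further care. The following minimal sketch, assuming SciPy and NumPy and not reflecting the authors' code, computes both for a toy market:

```python
# Sketch: maximum weight matching M and a weighted covering pi for a small market.
import numpy as np
from scipy.optimize import linear_sum_assignment, linprog

def matching_and_covering(v):
    n_a, n_i = v.shape
    # pad with zero columns so that leaving an agent unmatched is allowed
    rows, cols = linear_sum_assignment(np.hstack([v, np.zeros((n_a, n_a))]), maximize=True)
    matching = {a: i for a, i in zip(rows, cols) if i < n_i and v[a, i] > 0}

    # dual LP:  min sum(pi)  s.t.  pi[a] + pi[i] >= v[a, i],  pi >= 0
    n = n_a + n_i
    A_ub, b_ub = [], []
    for a in range(n_a):
        for i in range(n_i):
            row = np.zeros(n)
            row[a] = row[n_a + i] = -1.0
            A_ub.append(row)
            b_ub.append(-v[a, i])
    res = linprog(np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n, method="highs")
    return matching, res.x[:n_a], res.x[n_a:]

if __name__ == "__main__":
    v = np.array([[3.0, 1.0], [2.0, 2.0]])          # valuations of two agents for two items
    M, pi_a, pi_i = matching_and_covering(v)
    weight = sum(v[a, i] for a, i in M.items())
    print(M, weight, pi_a.sum() + pi_i.sum())        # covering value equals the weight of M
```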
In the general case, where an agent may refuse an item of utility \(0\), one can decrease the prices of the items in each step by a small \(\varepsilon>0\), thus obtaining a revenue arbitrarily close to the maximum weight of a matching. First we consider the ex-post case. **Theorem 21**.: _If the ordering of the agents can be chosen by the seller, then there exists a revenue-maximizing dynamic pricing scheme for the unit-demand ex-post envy-free pricing problem even if ties are broken by the agents. Furthermore, the optimal ordering and prices can be determined in polynomial time._ Proof.: Consider the edge-weighted bipartite graph \(G=(\mathcal{I},\mathcal{A};E)\) representing the market, where the weight of an edge \(ai\) is \(v_{a}(i)\) for \(i\in\mathcal{I}\), \(a\in\mathcal{A}\). By Lemma 4, we may assume that all the items are covered by every maximum weight matching of \(G\). Let \(M\subseteq E\) be an arbitrary maximum weight matching, and for each agent \(a\), let \(x_{a}\) denote the item to which \(a\) is matched in \(M\) if such an item exists, otherwise define \(x_{a}\) to be the empty set. Furthermore, take a weighted covering \(\pi\) provided by Lemma 3. Note that \(M\) consists of tight edges. We define the ordering \(\sigma\) of the agents as follows: agents not covered by \(M\) arrive first, and then the remaining agents arrive in a decreasing order according to their \(\pi\) value, where ties are broken arbitrarily. Now we describe how to set the prices at each time step. Define the prices to be \(+\infty\) until all the agents not covered by \(M\) have left. Then, at each time step, consider the next agent \(a\) and set the price of all remaining items to \(+\infty\) except for \(x_{a}\), for which we set the price to \(v_{a}(x_{a})\). Clearly, the agent will take the item \(x_{a}\), hence the resulting allocation corresponds to \(M\) and has total revenue equal to the maximum weight of a matching. It remains to verify that the pricing and the allocation thus obtained provide an ex-post envy-free solution. To see this, consider the arrival of an agent \(a\in\mathcal{A}\). As all the remaining items have been priced at \(+\infty\) so far except for \(x_{a}\), which is priced at \(v_{a}(x_{a})\), it is enough to show that \(a\) does not envy an item that was taken before her arrival. Those items were also priced at \(+\infty\) except for the time step when they were taken by the corresponding agent. So let \(a^{\prime}\) be an agent who arrived before \(a\) and took the item \(x_{a^{\prime}}\), that is, \(x_{a^{\prime}}\neq\emptyset\). Since \(\pi\) is a weighted covering, \(M\) consists of tight edges, and \(\pi(a^{\prime})\geq\pi(a)\), we get \[u_{a,\sigma(a^{\prime})}(x_{a^{\prime}}) =v_{a}(x_{a^{\prime}})-p_{\sigma(a^{\prime})}(x_{a^{\prime}})\] \[=v_{a}(x_{a^{\prime}})-v_{a^{\prime}}(x_{a^{\prime}})\] \[\leq(\pi(a)+\pi(x_{a^{\prime}}))-(\pi(a^{\prime})+\pi(x_{a^{\prime}}))\] \[=\pi(a)-\pi(a^{\prime})\] \[\leq 0\] \[=v_{a}(x_{a})-v_{a}(x_{a})\] \[=u_{a,\sigma(a)}(x_{a}),\] which means that agent \(a\) does not envy the item \(x_{a^{\prime}}\). A similar proof works for the ex-ante setting as well. However, the proof is slightly more complicated, as maintaining ex-ante envy-freeness requires a careful choice of prices. As a result, the revenue of the final allocation is not exactly the maximum weight of a matching in the associated bipartite graph, but can be arbitrarily close to that. For simplicity, we still refer to such a pricing as 'optimal' in the statement of the theorem. 
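Before turning to the ex-ante case, the ordering and price schedule used in the proof of Theorem 21 can be simulated on the toy market from the previous sketch. The matching and the agent-side covering values are hard-coded here (they could be computed as above); this is an illustration, not the authors' implementation:

```python
# Sketch of the Theorem 21 scheme: uncovered agents first, then covered agents in
# decreasing order of pi; everything priced at +inf except x_a, at its valuation.
import numpy as np

v = np.array([[3.0, 1.0],      # valuations of agent a0
              [2.0, 2.0]])     # valuations of agent a1
x = {0: 0, 1: 1}               # maximum weight matching M (weight 5)
pi = {0: 2.0, 1: 1.0}          # agent-side covering values (item side: 1 and 1)

order = [a for a in range(len(v)) if a not in x] + sorted(x, key=lambda a: -pi[a])

remaining, revenue = set(range(v.shape[1])), 0.0
for a in order:
    prices = {i: np.inf for i in remaining}            # everything unaffordable ...
    if a in x:
        prices[x[a]] = v[a, x[a]]                      # ... except x_a, at its valuation
    choice = max(remaining, key=lambda i: v[a, i] - prices[i], default=None)
    if choice is not None and v[a, choice] - prices[choice] >= 0:
        revenue += prices[choice]
        remaining.discard(choice)

print(revenue)   # 5.0: the revenue equals the maximum weight of a matching
```

Greedy agents end up taking exactly their matched items, so the collected revenue equals the maximum weight of a matching, as in the argument above.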
**Theorem 22**.: _If the ordering of the agents can be chosen by the seller, then there exists a revenue-maximizing dynamic pricing scheme for the unit-demand ex-ante envy-free pricing problem even if ties are broken by the agents. Furthermore, the optimal ordering and prices can be determined in polynomial time._ Proof.: Consider the edge-weighted bipartite graph \(G=(\mathcal{I},\mathcal{A};E)\), a maximum weight matching \(M\), \(x_{a}\) for \(a\in\mathcal{A}\), and weighted covering \(\pi\) as in the proof of Theorem 21. Let \(0<\delta<\min\bigl{\{}\min\{\pi(a)+\pi(i)-v_{a}(i)\mid ia\in E\text{ is not tight}\},\ \min\{\pi(i)\mid i\in\mathcal{I}\}\bigr{\}}\). Note that, by Lemma 3 and the assumption that each item is covered by every maximum weight matching of \(G\), such a \(\delta\) exists. Furthermore, let \(0<\varepsilon<\delta/2^{n}\). We define the ordering \(\sigma\) of the agents as follows: agents covered by \(M\) arrive first in an increasing order according to their \(\pi\) values where ties are broken arbitrarily, followed by the remaining agents. Now we describe how to set the prices at each time step. If an agent \(a\) arrives for which \(x_{a}\neq\emptyset\), then for each item \(i\in\mathcal{I}\) set its price to \(\pi(a)+\pi(i)-\delta/2^{\sigma(a)}+\varepsilon\) except for \(x_{a}\), for which we set the price to \(\pi(a)+\pi(x_{a})-\delta/2^{\sigma(a)}\). If \(x_{a}=\emptyset\), then define the prices to be \(+\infty\). By the definition of the ordering and the values \(\delta\) and \(\varepsilon\), the prices remain non-negative and do not decrease over time, hence the resulting allocation is automatically ex-ante envy-free. It suffices to show that each agent \(a\) chooses \(x_{a}\) upon arrival. Indeed, if this holds, then that results in a profit of \(\pi(a)+\pi(x_{a})-\delta/2^{\sigma(a)}\geq v_{a}(x_{a})-\delta\), where we used the fact that the edge \(x_{a}a\) is tight by Lemma 3(a). By choosing \(\delta\) small enough, the total revenue of the final allocation can be arbitrarily close to the weight of \(M\). Consider any remaining item \(i\) distinct from \(x_{a}\). As \(\pi\) is a weighted covering and \(x_{a}a\) is tight, we get \[u_{a,\sigma(a)}(i) =v_{a}(i)-p_{\sigma(a)}(i)\] \[=v_{a}(i)-(\pi(a)+\pi(i)-\delta/2^{\sigma(a)}+\varepsilon)\] \[<\delta/2^{\sigma(a)}\] \[=v_{a}(x_{a})-(\pi(a)+\pi(x_{a})-\delta/2^{\sigma(a)})\] \[=u_{a,\sigma(a)}(x_{a}).\] This means that \(x_{a}\) is the unique maximizer of the utility of \(a\) at time step \(\sigma(a)\) and has positive utility for \(a\), hence agent \(a\) takes \(x_{a}\) as stated. Finally, we settle the existence of weakly envy-free solutions when the ordering of the agents is fixed but known in advance. **Theorem 23**.: _If the ordering of the agents is known in advance, then there exists a revenue-maximizing dynamic pricing scheme for the unit-demand weakly envy-free pricing problem even if ties are broken by the agents. Furthermore, the optimal prices can be determined in polynomial time._ Proof.: Let \(\sigma\) denote the fixed ordering of the agents. Define an edge-weighted bipartite graph \(G=(\mathcal{I},\mathcal{A};E)\), maximum weight matching \(M\subseteq E\), and \(x_{a}\) for \(a\in\mathcal{A}\) as in the proof of Theorem 21. At the arrival of agent \(a\), set the price of all remaining items to \(+\infty\) except for \(x_{a}\), for which we set the price to \(v_{a}(x_{a})\). The agent clearly takes \(x_{a}\) at the maximum possible price, hence the resulting allocation and pricing are optimal. 
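The ex-ante schedule from the proof of Theorem 22 admits a similar sketch on the same toy market; \(\delta\) and \(\varepsilon\) below are chosen to satisfy the constraints stated in the proof, and the final revenue falls short of the maximum matching weight only by a term that vanishes as \(\delta\to 0\). Again, this is only an illustration:

```python
# Sketch of the Theorem 22 schedule: prices pi(a) + pi(i) - delta/2^t (+ eps off x_a),
# with covered agents arriving in increasing order of pi.
import numpy as np

v = np.array([[3.0, 1.0],
              [2.0, 2.0]])
x = {0: 0, 1: 1}                                   # maximum weight matching M
pi_agent, pi_item = {0: 2.0, 1: 1.0}, {0: 1.0, 1: 1.0}
delta, eps = 0.1, 0.001

# all agents are matched in this toy market; unmatched agents would arrive last
# and face +inf prices, cf. the proof
order = sorted(range(len(v)), key=lambda a: pi_agent[a])

remaining, revenue, prev = set(range(v.shape[1])), 0.0, {}
for t, a in enumerate(order, start=1):
    prices = {i: pi_agent[a] + pi_item[i] - delta / 2**t + eps for i in remaining}
    prices[x[a]] -= eps                            # x_a is the unique utility maximiser
    assert all(prices[i] >= prev.get(i, 0.0) for i in prices)   # prices never decrease
    prev.update(prices)
    choice = max(remaining, key=lambda i: v[a, i] - prices[i])
    revenue += prices[choice]
    remaining.discard(choice)

print(revenue)   # 4.925: within 3*delta/4 of the maximum matching weight 5
```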
## 5 Conclusions

In this paper, we studied the existence of optimal dynamic prices under fairness constraints in unit-demand markets. We proposed four possible notions of envy-freeness depending on the time period over which agents compare themselves to others, and settled the existence of optimal dynamic prices in various settings. We close the paper by mentioning a few open problems. While we concentrated on social welfare and revenue maximization problems, a natural question is to consider alternative objective functions such as the average or the max-min social welfare and revenue. Besides being interesting on their own, such functions may be used to overcome the difficulties mentioned in Remark 1. A recent line of research has investigated the problem of balancing fairness and efficiency in markets, see e.g. [11]. It would be interesting to see how dynamic envy-free pricing behaves under such objective functions. Finally, the complexity of weak envy-free revenue maximization with unspecified order remains open. This variant is of special interest, since it naturally connects revenue maximization with the recent popular strand of research on dynamic pricing schemes.

Acknowledgement. The work was supported by DAAD with funds of the Bundesministerium für Bildung und Forschung (BMBF), the Lendület Programme of the Hungarian Academy of Sciences - grant number LP2021-1/2021 and by the Hungarian National Research, Development and Innovation Office - NKFIH, grant number FK128673.
2310.14103
Revisiting Instruction Fine-tuned Model Evaluation to Guide Industrial Applications
Instruction Fine-Tuning (IFT) is a powerful paradigm that strengthens the zero-shot capabilities of Large Language Models (LLMs), but in doing so induces new evaluation metric requirements. We show LLM-based metrics to be well adapted to these requirements, and leverage them to conduct an investigation of task-specialization strategies, quantifying the trade-offs that emerge in practical industrial settings. Our findings offer practitioners actionable insights for real-world IFT model deployment.
Manuel Faysse, Gautier Viaud, Céline Hudelot, Pierre Colombo
2023-10-21T20:04:55Z
http://arxiv.org/abs/2310.14103v1
# Revisiting Instruction Fine-tuned Model Evaluation to Guide Industrial Applications ###### Abstract Instruction Fine-Tuning (IFT) is a powerful paradigm that strengthens the zero-shot capabilities of Large Language Models (LLMs), but in doing so induces new evaluation metric requirements. We show LLM-based metrics to be well adapted to these requirements, and leverage them to conduct an investigation of task-specialization strategies, quantifying the tradeoffs that emerge in practical industrial settings. Our findings offer practitioners actionable insights for real-world IFT model deployment. ## 1 Introduction Adapting pre-trained language models (LMs) for specific applications is central in industrial NLP to unlock task-specific performance gains and strengthen model alignment with industry requirements. A paradigm gaining traction is the use of instruction fine-tuned (IFT) models, LMs capable of following arbitrary instructions expressed in natural language (Wei et al., 2022; Sanh et al., 2022; Ouyang et al., 2022). Researchers primarily concentrate on improving general-purpose IFT models to be used as versatile agents capable of executing instructions expressed in natural language (Li et al., 2023; Zhou et al., 2023; Xu et al., 2023). In an industrial setting, prompting ChatGPT to improve the wording of an email, or to assist with a code snippet would be instances of this zero-shot utilization scenario, which we define as \(\mathcal{S}_{0}\). Critical industrial LLM applications may however not always align with \(\mathcal{S}_{0}\), and often prioritize two other settings. The first scenario, \(\mathcal{S}_{1}\), requires extending a generalist IFT model's capabilities to new specific tasks not included in the original instruction training set. The second scenario, \(\mathcal{S}_{2}\), centers around converting IFT models into specialized models proficient _exclusively_ on specific tasks. In \(\mathcal{S}_{1}\) for instance, a large company may want an LLM assistant for internal employee use, and decide to extend an openly available Chat model by training it to write memos with a specific templating scheme, to respond to internal FAQs, and to use internal coding tools, all the while retaining the original chat assistant's general purpose abilities. In \(\mathcal{S}_{2}\), that same company is only interested in a given specific task; extracting specific information from business documents, and specializes an IFT model for that purpose, aiming to leverage prompting and the generalization capabilities of the model for a more data-efficient training. In this paper, we thoroughly examine \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) by investigating the learning dynamics of specializing IFT models through a practical lens. To ensure the reliability of our tooling and the rigor of our conclusions, we first undertake a critical assessment of the current evaluation practices employed for IFT models. Formally, our contributions are: **Contribution 1.** IFT models are designed to handle tasks of diverse natures and varying difficulties. However, current metrics used to measure their performance are often task-specific (Zellers et al., 2019; Gao et al., 2021), or rely on automatic metrics designed for other intended purposes (Papineni et al., 2002; Lin, 2004). To address this limitation, we introduce two new requirements for metrics used to evaluate IFT models: Comparability Across Task (CAT) and Task and Format Agnostism (TFA). 
CAT imposes for metric scores to exhibit consistency across a diverse set of generative tasks, in contrast to the sole traditional focus of consistency within a specific task. TFA defines the need for metrics to demonstrate robustness to variations in the output formats. By highlighting the shortcomings of existing metrics in meeting CAT and TFA, we present compelling evidence that using LLMs as scoring agents is a viable evaluation alternative of IFT models. **Contribution 2.** We approach our examination of \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) from a practical perspective and focus on the trade-off between data availability and overall performance. Our analysis uncovers two distinct phases of learning during IFT model specialization: learning to format, and learning to solve tasks. Subsequently, we showcase how practitioners can (i) leverage synthetic data to facilitate learning the desired formatting aspects and (ii) use IFT models to reduce the need of expert data in industrial scenarios. Our study provides practical insights and actionable recommendations to practitioners looking to deploy IFT models in production settings.1 Footnote 1: Code and evaluation datasets are available on [https://github.com/ManueIsf/IFTEval](https://github.com/ManueIsf/IFTEval). ## 2 Re-evaluating IFT Model Evaluation ### What Should Good Scorers Measure? In scenarios \(\mathcal{S}_{0}\), \(\mathcal{S}_{1}\), and \(\mathcal{S}_{2}\), IFT models are trained to perform generative tasks. Unlike models designed for single tasks with known output formats, IFT models have the capacity to generate diverse valid responses across different tasks and formats (Ouyang et al., 2022). The novel capabilities of IFT models impose new considerations when selecting an automatic evaluation metric. **Comparability across tasks (CAT).** Standard evaluation metrics aim to fulfill one key requirement: coherence within each task with respect to human judgment (Specia et al., 2010). However, due to the multi-task nature of IFT models, the scores should also be comparable across different tasks (Colombo et al., 2022; Himmi et al., 2023). In other words, the scoring scale should be absolute and coherent with human preferences on all tasks. To measure the CAT we will mix samples of different tasks and compute the Spearman correlation (\(\rho\)) of their score with human judgment2. This requirement is essential in scenarios \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\) to measure model performance across different tasks, and make informed decisions regarding the trade-offs between model variants. Footnote 2: Common when benchmarking metrics (Bhandari et al., 2020; Colombo et al., 2021; Chun et al., 2022; Staerman et al., 2021; Fabbri et al., 2021; Colombo et al., 2021), we extend the tool to inter-task settings. **Task and Format-Agnostism (TFA).** Evaluation metrics should be robust to artifacts associated with the output format and to the nature of the evaluated task (Liang et al., 2022). Implementing task-specific scoring metrics is not a scalable solution for generalist IFT models. To measure TFA, we compute the relative target task improvement between models prompted in a zero-shot manner and models that mastered the task format (trained on 1000 target task samples). Comparing each metric's TFA to human-reported performance improvements allows to grasp the extent to which mastery of the task format influences the metric performance, independently of intrinsic task performance. 
In industrial scenarios, this requirement is essential as it ensures minimal bias in the evaluation due to training data formatting artifacts. In practice, many datasets that industrial actors may add to the training instruction set (\(\mathcal{S}_{1}\)), or fully train a model on (\(\mathcal{S}_{2}\)) have specific response formatting that differs from what a zero-shot model will answer, leading to a potentially large formatting bias. **Comparability intra-task (CIT).** While in no way a novel requirement, it is essential for metrics to measure performance consistently within a given task. We verify this by computing the Spearman \(\rho\) correlation coefficient between samples of a specific task and human judgments. In all industrial scenarios for IFT LLMs, rigorous model evaluation is necessarily linked to evaluation metrics that comply with both CAT and TFA, as well as the standard CIT measures. ### Existing Metrics **Current Evaluation.** Currently, two dominant paradigms emerge for assessing the performance of IFT models: (i) relying on reference-matching scoring metrics such as ROUGE-L (Lin, 2004), or normalized log-probabilities of class labels in few-shot classification benchmarks (Hendrycks et al., 2021; Gao et al., 2021; Zellers et al., 2019), and (ii) model ranking frameworks, based on pairwise preference comparisons of response quality judged by humans or LLM evaluators (Chiang et al., 2023; Dubois et al., 2023; Gudibande et al., 2023). Language Model based scoring has been shown to be a promising alternative on specific tasks, such as summarization (Liu et al., 2023; Colombo et al., 2022) or translation (Kocmi and Federmann, 2023; Xu et al., 2023). Our work extends these findings to showcase the multi-task scoring capabilities of LLMs with respect to CAT and TFA. **LMs as Viable Scoring Mechanisms.** Given the inherently open nature of IFT model generation, we adopt a reference-free setting to ensure unbiased evaluation. We present an input prompt and the corresponding generated response to the LLM3, prompting it to assign a score on a scale of 0 to 10, subsequently scaling it between 0 and 1 to facilitate comparisons with other evaluation metrics. **Baseline Metrics.** We assess the fulfillment of both CAT and TFA by comparing the proposed metrics against well-established _reference-based_ metrics, including ROUGE4, BScore (Zhang et al., 2020), and SBERT (Reimers and Gurevych, 2019), as well as a _machine learned_ metric, the OpenAssistant Reward Model (RM) (Kopf et al., 2023) trained on human preferences. Footnote 4: ROUGE-1 is used here, it is best on one-word long labels ### Experimental setup **Training an IFT model.** IFT models are trained by fine-tuning a base model on a large instruction corpus, collected either through human annotations (Ouyang et al., 2022; Kopf et al., 2023) or concatenating task-specific datasets (Sanh et al., 2022; Mishra et al., 2022). In line with recent work (Chiang et al., 2023; Wang et al., 2023; Peng et al., 2023), we leverage synthetic data as the base instruction set in our IFT models (Taori et al., 2023). **Benchmarking automatic metrics.** To benchmark the metrics, we rely on a combination of synthetic and real data. For _synthetic data_, we use the Alpaca GPT4 dataset (Taori et al., 2023), and tag the data in 13 task categories (see Sec. A.1) (_e.g._, logic, code, rewrite). For _human data_, we focus on tasks with industrial interests. 
Specifically, we include Natural Language Inference (Williams et al., 2018; Wang et al., 2019), Question Answering (Rajpurkar et al., 2016), NER (Tjong Kim Sang and De Meulder, 2003), and Sentiment Classification (Socher et al., 2013; Agirre et al., 2013)). To build our metric evaluation dataset, we train and run LLaMA-7B models (Touvron et al., 2023) on varied data mixtures and target tasks. For rigor, we also report scores on the summarization with human feedback dataset from (Stiennon et al., 2022) (SUM).5 Footnote 5: More details in Sec. B.1) ### Experimental results To better understand the limitation of existing metrics we conduct both single-task analysis to ensure that metrics are able to score tasks reliably as well as multi-task analysis, which is the standard setting for IFT models. Results are reported in Tab. 1. **CIT Analysis.** From Tab. 1(left, SUM), we observe that the average correlation with human scores for evaluated summaries are higher for LLM models than with traditionally used metrics. Intra-task correlations on all other _human data_ tasks, averaged in CIT lead to similar conclusions. **CAT Analysis.** Tab. 1(left) shows that all metrics, _with the exception of the GPT4-based metric_, exhibit weak or no correlation in the context of inter-task consistency. While it is true that existing metrics demonstrate the ability to differentiate between good and bad samples within a single task (CIT), their _performance falls short when confronted with the open setting imposed by IFT models_. **TFA Analysis.** On non-LLM-based metrics, performance gains reported between zero-shot models, and models trained on 1000 target-task samples (Tab. 1(left), TFA) largely exceed the 12.0 % relative improvement of human scores, and demonstrate how format, once learned, unrealistically boosts reference-based metrics which are heavily impacted by format. **Metric Similarity Analysis.** Fig. 1 displays metric correlation at the sample level on the synthetic dataset. The results align with (Chen et al., 2022), indicating a moderate to high correlation between BERT-based metrics and ROUGE-L. However, all metrics exhibit a low correlation with GPT4, indicating different response features are scored. **Zoom on GPT4.** Tab.1(right) shows a strong correlation between the results of GPT4-based metrics and the corresponding LLM task abilities reported in Wei et al. (2022) (Logic and Coding are non-trivial for LLMs, Writing tasks are relatively easier). However, reference-based metrics such as ROUGE suggest the opposite, as they are biased by the high syntactic overlap between model outputs and reference answers in these categories. The GPT3.5 scorer also highly rates results on the Logical Reasoning tasks, contrarily to GPT4. This is due to its lesser ability to spot logical inconsistencies in the evaluated responses (Bubeck et al., 2023), hinting that evaluator models must be capable at the evaluated task themselves in order to produce meaningful scores. Our findings highlight the inadequacy of existing metrics in quantifying the performance of IFT models, while emphasizing GPT4 as a promising candidate. This performance gap in evaluation capabilities is primarily explained by GPT4's reduced dependence to reference answers, leading to a more coherent and absolute evaluation scale CAT, and an improved robustness to variations in output formatting TFA. _The GPT4 scorer's powerful capabilities unlock the study of novel settings traditional metrics would struggle with_(Schaeffer et al., 2023). 
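For concreteness, the quantities analyzed above (CIT, CAT and TFA, together with the 0-10 judge rescaling of Sec. 2.2) can be computed from per-sample records of the form (task, metric score, human score). The sketch below is illustrative only: the field names, the judge prompt and the `call_llm` callback are assumptions, not the paper's actual evaluation code.

```python
# Illustrative sketch of the CIT / CAT / TFA computations and of the 0-10 judge rescaling.
import re
from collections import defaultdict
import numpy as np
from scipy.stats import spearmanr

def cit(records):
    """Comparability intra-task: Spearman rho within each task, averaged over tasks."""
    per_task = defaultdict(list)
    for task, metric, human in records:
        per_task[task].append((metric, human))
    rhos = [spearmanr([m for m, _ in rows], [h for _, h in rows])[0]
            for rows in per_task.values() if len(rows) > 2]
    return float(np.mean(rhos))

def cat(records):
    """Comparability across tasks: Spearman rho on the pooled, mixed-task samples."""
    return spearmanr([m for _, m, _ in records], [h for _, _, h in records])[0]

def tfa(zero_shot_score, score_after_1000_samples):
    """Relative target-task improvement once the output format has been learned;
    a format-agnostic metric should stay close to the human-reported +12%."""
    return (score_after_1000_samples - zero_shot_score) / zero_shot_score

def judge_score(instruction, response, call_llm):
    """Reference-free LLM scoring: ask for a 0-10 grade and rescale to [0, 1]."""
    reply = call_llm(
        "Rate the following answer on a scale of 0 to 10 (10 = perfect). "
        f"Reply with the number only.\nInstruction: {instruction}\nAnswer: {response}"
    )
    found = re.search(r"\d+(\.\d+)?", reply)
    return min(max(float(found.group()), 0.0), 10.0) / 10.0 if found else 0.0
```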
## 3 IFT Models for Industrial Applications ### \(\mathcal{S}_{1}\): Improving Specific Tasks In this section, we delve into \(\mathcal{S}_{1}\), which specifically aims to extend an IFT model's capabilities to better perform on specific instructions. **Setting.** We fine-tune a base 7B-LLM model (Pythia Biderman et al. (2023), Bloom Scao et al. (2022), Falcon Penedo et al. (2023), or LLaMA) using synthetic instructions. In each training iteration, we introduce a selected number of \(N\)_real_ target task samples into the synthetic dataset. We evaluate the model performance on an independent test subset of the target task.6 Footnote 6: More experimental details are given in Sec. C.1.1. **Mastering Format to Foster Understanding.** Fig. 2 shows target task performance as the number of target task samples introduced within the base training set increases. Across all tasks and models, _specialization is biphasic_: first, _task output format_ is learned while overall performance remains constant, or even slightly decreases. Only once the format has been mastered, as noted by the spike of the Exact Match, does the model improve upon its _underlying task performance_(Human and GPT4 scores increase), eventually surpassing the original zero-shot performance. It is worth noting that this analysis is made possible by format-agnostic scorers (TFA) that can accurately decouple output format from underlying model performance. **Measuring Model Forgetting.** Performance on a test split of the Alpaca data shows little to no performance degradation (<1%) caused by the inclusion of new tasks to the training mix (Sec. C.1.2). **Leveraging Synthetic Data to Learn to Format.** Our findings suggest a straightforward approach to optimizing the use of real examples: _employ synthetic examples to assist the model in mastering the desired format before relying on real samples to enhance overall model performance_. We repeat the previous experiment, replacing the \(N\) human-annotated target task training samples (\(H\)), by GPT4 synthetically-generated samples (\(S\)), or synthetic samples with random labels (\(R\)) (Fig. 3) Exact Match shows synthetic or randomly labeled data can indeed be used to learn the desired format, although the better quality human data eventually yields better results with more samples. In (\(S\)+\(H\)), we train on 100 synthetic samples, then on \(N\) human-annotated samples. This technique enables the model to master the format before being trained on high-quality data, largely improving human annotated data sample efficiency. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Scores & SUM & CAT & CIT & TFA & Score & TFA \\ \hline ROUG & 0.28 & 0.22 & 0.57 & +51.9 \(\pm\) & 0.00 \\ BScore & 0.21 & 0.22 & 0.13 & +49.0 \(\pm\) & 0.00 \\ BSERT & 0.25 & 0.29 & 0.43 & +86.3 \(\pm\) & 0.00 \\ RM & 0.20 & 0.28 & 0.29 & +44.6 \(\pm\) & 0.00 \\ \hline GPT4 & **0.45** & **0.68** & **0.77** & +2.1 \(\pm\) & 0.00 \\ GPT3.5 & 0.42 & 0.19 & 0.48 & +9.5 \(\pm\) & 0.00 \\ \hline Human & 0.54 & - & - & +12.0 \(\pm\) & 0.00 \\ \hline \hline \end{tabular} \end{table} Table 1: (_Left) \(\rho\) **correlation between human scores and metrics** on the summarization task (SUM), on the _human data_ tasks individually then averaged in (CIT), and on the concatenated human tasks (CAT) to form inter-task settings. (TFA) denotes relative improvement after 1000 target task samples are added to the training set. 
_(Right)_**Metric scores** averaged per _synthetic data_ category Figure 1: **Spearman \(\rho\) between metrics on synthetic data.** Figure 2: Incorporating \(0\leq N\leq 1000\) real task samples into IFT model training ### \(\mathcal{S}_{2}\): IFT models as Task-Specific Solvers **Setting.** We use four model architectures and, for each architecture, we employ the base model to train an IFT model variant using the synthetic Alpaca dataset. We then fine-tune both the base models and their IFT variants on a subset of \(N\) samples drawn from the target task. This setup simulates an industrial scenario in which limited data is available to specialize a model on a unique task, and assesses whether there are benefits to instruction-tuning a base model before fine-tuning it on the target task. **Results**. Fig. 4 demonstrates that IFT models exhibit enhanced performance in low-data scenarios (when \(10\leq N\leq 200\)). Intuitively, IFT models are better able to leverage the task description given in the prompts, thus enabling boosted zero-shot performance (Scao and Rush, 2021). This complements and steers model training when finetuned with small numbers of samples. When more samples are available (\(N\geq 200\)), the task pattern is sufficiently clear and the benefits of prompting and of the IFT training phase disappear. This finding aligns with the results presented in Sec. 3.1, emphasizing the potential of synthetic datasets to enhance data efficiency in industrial scenarios. ## 4 Key Takeaways for Practitioners **Leveraging LLM for evaluation.** Evaluating IFT models is challenging, as it mandates _comparability across tasks_ and _format-agnostism_, which standard metrics struggle with. While LLM scoring is not ideal (Limitations in Sec. 4), it is a strong option practitioners should add to their arsenal. **Leveraging Synthetic Data for Efficient Learning.** LLM-based evaluation uncovers the fact that leveraging synthetic data provides a quick and cost-effective approach to mastering format in low data regimes, with no performance degradation. This methodology proves viable across various scenarios, presenting an opportunity to more efficiently leverage potentially limited amounts of expert annotated data available. ### Limitations While this paper has argued in favor of using LLM as scorers, important drawbacks remain. The best-performing scorer at the moment, GPT4 is a proprietary, black-box access model, and no guarantees exist that it will remain accessible unchanged over time, leading to reproducibility issues and data privacy concerns. Since model and training data internals are not open-knowledge, analysis of scoring errors and internal model biases is also limited. Promising openly available alternative models are being developed, either general purpose LLMs aiming to disrupt the hegemony of GPT4 (Touvron et al., 2023; Bai et al., 2023), or smaller models specialized for automatic evaluation, often attempting to distillate GPT4's scoring abilities by training on GPT4 generated scores or scoring explanations (Xu et al., 2023; Liu et al., 2023). In the latter category, the Prometheus scoring model (Kim et al., 2023), based on LLama2, claims scoring performances on par with GPT4 in terms of human score correlations over a variety of tasks and benchmarks. 
Eventually, strong Open-Source LLMs should alleviate most of the concerns raised by relying on proprietary black-box models and we hope this work, by shedding light on the importance of LLM scoring, motivates these efforts to build open models with strong scoring abilities. Figure 4: GPT4 score on SST-2 test set after finetuning with \(0\leq N\leq 1000\) samples on a (base) LM or an IFT model. Further experiments can be found in Sec. C.2.2. Figure 3: Incorporating \(0\leq N\leq 1000\) (H)uman, (Sy)ynthetic and (R)andomly labeled synthetic data samples in IFT training set. (S+H) is trained on 100 synthetic samples, then \(N\) human data samples. ## Ethics Statement While this work intends to evaluate scorers across many different tasks and settings, it is essentially English-centric, and no conclusions are drawn about the robustness of LLM scorers in other languages. LLM scoring may also be affected by internal model bias acquired through pretraining or subsequent finetuning, and while efforts are made by OpenAI to mitigate bias, critical applications of LLM evaluation should consider that truly objective evaluation is not attainable. All data and base models used in this work originate from publicly available sources. The GPT4 Alpaca dataset is a variant of (Taori et al., 2023) built from synthetic data only, collected through the OpenAI API. The non-synthetic data are sourced from manually annotated, widely used datasets for NLP benchmarking. This work does not transgress any usage restrictions associated with these data sources. Base models used are either available through fully open-source licenses (Falcon, Pythia), or licenses with no restrictions for research purposes (LLaMA, Bloom). We estimate our experiments consumed 5500 GPU V100 hours, using a low-carbon compute cluster, amounting to about 950 kg of CO2 over the course of the project. To reduce the impact to a maximum, all runs are done through the efficient Low-Rank Adaptation training strategy (Hu et al., 2021), and only trained adapter weights are stored to minimize bandwidth and memory usage. API calls to external APIs are cached to minimize redundancies. ## Acknowledgements This work is partially supported by Illumina Technology, and by a grant from ANRT France. This work was performed using HPC resources from GENCI-IDRIS (Grant 2023-AD011014185). Secondary compute contributions were made on HPC resources (Grant 2023-AD103256) and (Grant 2023-AD101838).
2308.11231
Emerging γ-soft-like spectrum in 196Pt in the SU3-IBM (I)
Recently, it has been argued that a spherical-like spectrum emerges in the SU3-IBM, opening up new approaches to understand the {\gamma}-softness in realistic nuclei. In a previous paper, {\gamma}-softness with degeneracy of the ground and quasi-{\gamma} bands was observed. In this paper, another special point connected with the middle degenerate point is discussed, which is found to be related with the properties of 196Pt. This emergent {\gamma}-softness has also been shown to be important for understanding the prolate-oblate asymmetric shape phase transition. The low-lying spectra, B(E2) values and quadrupole moments in 196Pt are discussed showing that the new model can account for several observed features. This is the first part of the discussions on the {\gamma}-soft-like spectrum of 196Pt.
Tao Wang, Bing-cheng He, Chun-xiao Zhou, Dong-kang Li, Lorenzo Fortunato
2023-08-22T07:09:54Z
http://arxiv.org/abs/2308.11231v3
# Emerging \(\gamma\)-softness in \({}^{196}\)Pt in the SU3-IBM ###### Abstract Recently, it has been argued that a new \(\gamma\)-soft rotational spectrum emerges in the interacting boson model with SU(3) higher-order interactions, opening up new approaches to understand the \(\gamma\)-softness in realistic nuclei. In a previous paper, \(\gamma\)-softness with degeneracy of the ground and quasi-\(\gamma\) bands is observed, which displays a O(5) partial dynamical symmetry. In this paper, another special point connected with the middle degenerate point is discussed, which is found to be related with the properties of \({}^{196}\)Pt. This emergent \(\gamma\)-softness has also been shown to be important for understanding the prolate-oblate asymmetric shape phase transition. The low-lying spectra, \(B(E2)\) values and quadrupole moments in \({}^{196}\)Pt are discussed showing that the new model can account for several observed features. ## I Introduction Recently, an extension of the interacting boson model with SU(3) higher-order interactions (SU3-IBM for short) was proposed to describe the spherical-like \(\gamma\)-soft spectra in \({}^{110}\)Cd [1], to explain the puzzling \(B(E2)\) anomaly [2; 3], to discuss the prolate-oblate asymmetric shape phase transition in Hf-Hg region [4], and to provide an E(5)-like description for \({}^{82}\)Kr [5]. O(6) higher-order interactions were found to be unable to explain the \(B(E2)\) anomaly [6]. These works imply that the SU(3) symmetry dominates the deformation of a nucleus, and the \(\gamma\)-softness in realistic nuclei is an emergent phenomenon, which has a deep relationship with the SU(3) symmetry. Cd isotopes show a new \(\gamma\)-soft rotational behavior, which is unexpected in standard nuclear structure studies [8; 9; 10; 11; 12; 13; 14]. The \(B(E2)\) anomaly also seems uncommon, which rejects conventional theoretical explanations, including the interacting boson model (IBM-2) calculations based on the SkM\({}^{*}\) energy-density functional and the symmetry-conserving configuration mixing (SCCM) calculations [15; 16; 17; 18]. The successful explanation of these two abnormal phenomena makes the new theory of SU3-IBM very attractive, and further exploration of the applications of the theory becomes very valuable, especially on various phenomena related to \(\gamma\)-softness in nuclear spectra. What needs to be emphasized is that these two anomalous phenomena cannot be explained by the interacting boson model with the O(6) \(\gamma\)-softness [6]. Recent studies on the prolate-oblate asymmetric shape phase transition revealed that the key ingredient of the new model SU3-IBM is to describe the oblate shape with the SU(3) third-order interaction [4]. \(\gamma\)-softness comes from the competition between the prolate shape and the oblate shape, thus in the SU3-IBM the new \(\gamma\)-softness is an emergent phenomenon. The IBM provides an elegant approach to describe the low-lying collective excited behaviors in nuclear structure [19]. In the simplest IBM-1, the basic building constituents are the \(s\) and \(d\) bosons with angular momentum \(l=0\) and \(l=2\) respectively, and the collective states of a nucleus can be spanned by the su(6) algebra. Up to two-body interactions, a consistent-\(Q\) (CQ) Hamiltonian adopted in this model is [20; 21] \[\hat{H}_{1}=c[(1-\eta)\hat{n}_{d}-\frac{\eta}{N}\hat{Q}_{\chi}\cdot\hat{Q}_{ \chi}]. 
\tag{1}\] Here \(\hat{n}_{d}\) is the \(d\)-boson number operator, \(\hat{Q}_{\chi}=[d^{l}\times\tilde{s}+s^{\dagger}\times\tilde{d}]^{(2)}+\chi[d^ {l}\times\tilde{d}]^{(2)}\) is the generalized quadrupole operator, \(N\) is the total boson number, \(c\) is a scale parameter and \(0\leq\eta\leq 1,-\frac{\sqrt{7}}{2}\leq\chi\leq\frac{\sqrt{7}}{2}\) are parameters that allow to span a full range of different nuclear spectra. Although the formalism is simple, it can describe the spherical (\(\eta=0\), the U(5) limit), prolate (\(\eta=1\), \(\chi=-\frac{\sqrt{7}}{2}\), the SU(3) limit), oblate (\(\eta=1\), \(\chi=\frac{\sqrt{7}}{2}\), the SU(3) case) and \(\gamma\)-unstable (\(\eta=1\), \(\chi=0\), the O(6) limit) nuclei. This Hamiltonian is extensively used in fitting realistic nuclear spectra and discussing the shape phase transitions between different shapes [19; 20; 21]. More than a decade ago, one of the authors (L. Fortunato) and his collaborators generalized the simple formalism (1), and a cubic-\(Q\) interaction is introduced as follows [22] \[\hat{H}_{2}=c[(1-\eta)\hat{n}_{d}-\frac{\eta}{N}(\hat{Q}_{\chi}\cdot\hat{Q}_{ \chi}+\frac{\kappa_{3}}{N}[\hat{Q}_{\chi}\times\hat{Q}_{\chi}\times\hat{Q}_{ \chi}]^{(0)})], \tag{2}\] where \(\kappa_{3}\) is the coefficient of the cubic term. In the SU(3) limit, when \(\chi=-\frac{\sqrt{7}}{2}\), the cubic interaction can describe an oblate shape (SU(3) oblate), which is different from the previous \(\overline{\text{SU}(3)}\) oblate shape in Hamiltonian (1). This indicates that the previous description of the oblate shape with \(\overline{\text{SU}(3)}\) symmetry can be replaced by the SU(3) symmetry and a new evolutional path from the prolate shape to the oblate shape can be established within only the SU(3) limit, see the bottom black line in Fig. 1. Thus an analytically solvable prolate-oblate shape phase transitional description within the SU(3) limit can be provided, see Ref. [23], which offers a rare example for finite-\(N\) first-order quantum shape transition. The phase transitional point is also a degenerate point [23], which implies a hidden symmetry [1]. This hidden symmetry is responsible for the whole new progress in [1; 2; 3; 4; 5; 6]. Moreover, in this extended Hamiltonian \(\hat{H}_{2}\), there is only a very tiny region of rigid triaxiality in the large-\(N\) limit at \(\chi=-\frac{\sqrt{5}}{2}\) when the parameter changes from the U(5) limit to the SU(3) degenerate point, see the green line in Fig. 1. These new results presented by Ref. [22; 23] encourage us to understand the existing experimental phenomena from a new perspective. Some new and unexpected results have emerged recently. A new shape triangle can be drawn (see Fig. 1), which is similar to the Casten triangle related to the Hamiltonian (1) [24]. In the SU3-IBM new theory [1], new \(\gamma\)-soft triaxial rotation is found, which is different from the O(6) \(\gamma\)-unrelated rotational mode in Hamiltonian (1). The shape transitional behaviors from the U(5) limit to the SU(3) degenerate point was numerically explored (green line in Fig. 1). The key observation is that, spherical-like \(\gamma\)-soft triaxial rotational spectra actually exists (see Fig. 5 (a)), which may be the candidate to solve the spherical nucleus puzzle [11; 12; 25]. Within the parameter region of the green line in Fig. 
1, we find the unexpected result that there is an accidental degeneracy of the corresponding energy levels between the ground and quasi-\(\gamma\) bands such that they form an exactly degenerate multiplet, which represents a O(5) partial dynamical symmetry. It should be pointed out that this partial dynamical symmetry is not the same thing as the previous studies in [26]. It will be clear, from the ensuing discussion, that while this degenerate multiplet corresponds to that found in the SO(5) symmetry with quantum number \(\tau=2\), the next one (\(\tau=3\)) is not exactly degenerate, a feature that is often observed in actual nuclear spectra. Historically, higher-order interactions in IBM-1 were introduced to describe \(\gamma\)-rigid triaxial deformation and interactions \([d^{\dagger}d^{\dagger}d^{\dagger}]^{(L)}\cdot[\bar{d}\bar{d}\bar{d}]^{(L)}\) can play a key role for triaxiality of the ground state [27; 28]. An important progress related with our works is investigating SU(3) symmetry-conserving higher-order interactions [29]. Subsequently, within the SU(3) limit, an algebraic realization of the rigid asymmetric rotor was established [30; 31]. Recently, this realization has been used to explain the \(B(E2)\) anomaly [3]. SU(3) third-order and fourth-order interactions are also discussed in Ref. [32; 33; 34; 35; 36]. Higher-order terms are also important in partial dynamical symmetry [26]. Higher-order interaction (\(\hat{Q}_{0}\times\hat{Q}_{0}\times\hat{Q}_{0}\))\({}^{(0)}\) can present a rotational spectrum [37], where \(\hat{Q}_{0}\) is the quadrupole operator in the O(6) limit. This result was further studied by [38; 39]. However the O(6) symmetry was questioned in [6]. In these series of new developments [1; 2; 3; 4; 5; 6; 22; 23; 30; 31], SU(3) higher-order interactions begin to show an extremely important role, albeit at a phenomenological level. These higher-order interactions have already been shown to be relevant to some realistic anomalies in nuclear structure [1; 2; 3], so introducing these terms is of practical significance. The \(\gamma\)-soft shape was first described in Ref. [40], where the geometric Hamiltonian is not dependent on the \(\gamma\) variable. In the IBM, the \(\gamma\)-soft case can be described by the O(6) limit [41; 42] and the nucleus of \({}^{196}\)Pt was the first candidate for the O(6) spectra. However there was still some debates about it [43; 44]. In the IBM-2 [19], triaxial shape can be described even with up to two-body interactions [45; 46; 47; 48]. Three-body interactions \([d^{\dagger}d^{\dagger}d^{\dagger}]^{(L)}\cdot[\bar{d}\bar{d}\bar{d}]^{(L)}\) are also used in the IBM-2 to investigate the \(\gamma\) triaxiality [49]. In the sdg-IBM, \(l=4\) g bosons can be introduced and hexadecapole deformation can be discussed [50]. Except for the IBM, triaxial shapes are also investigated by many existing nuclear models [20; 25; 51; 52; 53; 54; 55; 56; 20]. Although \({}^{196}\)Pt seems to adapt to the description in terms of the O(6) symmetry, some noticeable deviations still exist and cannot be described at a satisfactory level in the IBM. The first drawback is that it has a large electric quadrupole moment [43], \(Q_{2^{+}_{1}}\)=0.62(8), pointing towards the oblate side. The second is that the staggering feature of \(\gamma\) band breaks the O(5) symmetry, which seems to be intermediate between the \(\gamma\)-soft and \(\gamma\)-rigid. 
Figure 1: New shape triangle: the top point of the triangle presents the U(5) limit, which is a spherical shape. The two bottom points and the black line between them are all within the SU(3) limit. The left bottom point presents the SU(3) prolate shape, and the right one presents the SU(3) oblate shape. The third is the positions of the \(0^{+}_{2}\), \(0^{+}_{3}\), \(0^{+}_{4}\) states, which cannot be reproduced well [58]. Recently, two important results in the SU3-IBM were also found [4; 5]. In [4], the SU3-IBM is used to describe the prolate-oblate shape phase transition in an asymmetric way, which can well explain the shape transitions from \({}^{180}\)Hf to \({}^{200}\)Hg, including the nucleus \({}^{196}\)Pt. It was found that, in this shape phase transition, another special point that shows accidental degeneracy features can be located near the middle of the degenerate line, and it can be used to describe the properties of \({}^{196}\)Pt [4]. In [5], a shape transitional behavior like the one from the U(5) limit to the O(6) limit in the IBM is found by introducing the SU(3) fourth-order interaction, which can describe the E(5)-like \(\gamma\)-softness in \({}^{82}\)Kr. Following the ideas of the previous research, further exploration of the applications of the SU3-IBM is necessary. \({}^{196}\)Pt is the focus here. The three obvious deficiencies discussed above can be well overcome simultaneously. This nucleus is usually regarded as a typical example with O(6) symmetry in Hamiltonian (1). We have found that the new model can give a more reasonable description, and our work provides a new understanding of the \(\gamma\)-softness in \({}^{196}\)Pt and other similar nuclei, which is related to the SU(3) symmetry. Thus these results in the SU3-IBM ([1; 2; 3; 4; 5; 6] and this paper) together confirm the validity of the new idea. ## II Hamiltonian In the SU3-IBM, the \(d\) boson number operator \(\hat{n}_{d}\) must be included, which can describe a spherical shape. This is vital for the pairing interaction between the valence nucleons. Other interacting terms are all SU(3)-conserving invariants. The traditional second-order interaction can describe the prolate shape. Ref. [22] pointed out that the third-order Casimir operator can describe the oblate shape. Other higher-order interactions should be considered for some peculiar phenomena, such as the \(B(E2)\) anomaly and some unusual experimental data which cannot be described by previous theories [2; 3]. In [5], the square of the second-order Casimir operator is found to be vital for the \(\gamma\)-softness of realistic nuclei. In this paper, the third-order invariant operator and the square of the second-order invariant operator are introduced into the interactions, as in Ref. [5] (the fourth-order interaction is only a supplementary term here). Although this is a simple formalism in the SU3-IBM, it shows many new interesting phenomena. Thus the Hamiltonian discussed in this paper is \[\hat{H} = c\left[(1-\eta)\hat{n}_{d}+\eta\left(-\frac{\hat{C}_{2}[\text{SU}(3)]}{2N}\right.\right. 
\tag{3}\] \[\left.\left.+\kappa\frac{\hat{C}_{3}[\text{SU}(3)]}{2N^{2}}+\xi\frac{\hat{C}_{2}^{2}[\text{SU}(3)]}{2N^{3}}\right)\right],\] where \(0\leq\eta\leq 1\), \(c\) is a global energy scale parameter, \(N\) is the boson number, \(\kappa\) is the coefficient of the cubic term, \(\kappa=\frac{9\kappa_{3}}{2\sqrt{35}}\), \(\xi\) is the coefficient of the fourth-order interaction, and \(\hat{C}_{2}[\text{SU}(3)]\) and \(\hat{C}_{3}[\text{SU}(3)]\) are the second-order and third-order SU(3) Casimir operators, respectively. If the fourth-order term is not considered, the Hamiltonian (3) can be described by the new shape triangle in Fig. 1. In the SU(3) limit the two Casimir operators can be related to the second- and third-order quadrupole interactions as follows \[\hat{C}_{2}[\text{SU}(3)]=2\hat{Q}\cdot\hat{Q}+\frac{3}{4}\hat{L}\cdot\hat{L}, \tag{4}\] \[\hat{C}_{3}[\text{SU}(3)]=-\frac{4}{9}\sqrt{35}[\hat{Q}\times\hat{Q}\times\hat{Q}]^{(0)}-\frac{\sqrt{15}}{2}[\hat{L}\times\hat{Q}\times\hat{L}]^{(0)}. \tag{5}\] For a given SU(3) irrep \((\lambda,\mu)\), the eigenvalues of the two Casimir operators under the group chain U(6) \(\supset\) SU(3) \(\supset\) O(3) are given as \[\langle\hat{C}_{2}[\text{SU}(3)]\rangle=\lambda^{2}+\mu^{2}+\lambda\mu+3\lambda+3\mu, \tag{6}\] \[\langle\hat{C}_{3}[\text{SU}(3)]\rangle=\frac{1}{9}(\lambda-\mu)(2\lambda+\mu+3)(\lambda+2\mu+3). \tag{7}\] If \(\kappa=\frac{3N}{2N+3}\), the second term in Hamiltonian (3) describes the SU(3) degenerate point (\(\xi=0\)). It should be noticed that the location of the SU(3) degenerate point along the variable \(\kappa\) is related to the boson number \(N\) [4]. For \({}^{196}\)Pt, \(N=6\), it is found at \(\kappa=1.2\). In the large-\(N\) limit, \(\kappa\to 1.5\). At this degenerate point, the SU(3) irreps satisfying the condition \(\lambda+2\mu=2N\) are all degenerate. Figure 2: Partial low-lying level evolution along the green line in Fig. 1 for \(N=6\). ## III \(\gamma\)-soft spectra for the point \(B\) Fig. 2 presents the partial low-lying level evolutions from the U(5) limit to the SU(3) degenerate point for \(N=6\) as a function of \(\eta\). The choice of the boson number \(N=6\) corresponds to \({}^{196}\)Pt. (In the previous paper [1], \(N=7\) is discussed for \({}^{110}\)Cd.) It is clear that the four lowest \(0^{+}\) states are all degenerate if \(\eta=1.0\). The key finding is that the \(4^{+}_{1}\) state and the \(2^{+}_{2}\) state are degenerate, as well as the triplet of states \(6^{+}_{1}\), \(4^{+}_{2}\), \(3^{+}_{1}\). This degeneracy can hold for some higher levels, which indicates an O(5) partial dynamical symmetry. Unfortunately the reason for this degeneracy is still unknown. This unexpected \(\gamma\)-softness was found in Ref. [1]. The SU(3) degenerate point is found at \(\kappa=1.2\); knowing its location is undoubtedly useful for understanding the new \(\gamma\)-softness. Fig. 3 plots the level evolution of the \(4^{+}_{1}\), \(2^{+}_{2}\) and \(0^{+}_{2}\) states for the parameter \(\kappa\) from 1.0 to 1.6 when \(\eta=0.5\), \(\xi=0\) and \(N=6\). Obviously there are two crossing points between the \(4^{+}_{1}\) and \(2^{+}_{2}\) states at \(\kappa_{A}=1.188\) and \(\kappa_{B}=1.404\). The position relationships of these three states are very important for understanding the \(\gamma\)-softness in realistic nuclei. The left one is the point \(A\), and the right point is denoted by \(B\), which is the special point discussed in this paper. The location of the point \(B\) is \(\kappa_{B}=1.404\). 
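As a quick numerical illustration of the degeneracy condition quoted in Sec. II (this sketch is ours, not part of the original analysis), the eigenvalue formulas (6) and (7) can be evaluated directly: with \(\eta=1\), \(\xi=0\) and \(\kappa=3N/(2N+3)\), every irrep satisfying \(\lambda+2\mu=2N\) acquires the same energy.

```python
# Minimal check of the SU(3) degenerate point (sketch, assuming only Eqs. (3), (6), (7)).
def C2(lam, mu):                 # eigenvalue of C_2[SU(3)], Eq. (6)
    return lam**2 + mu**2 + lam*mu + 3*lam + 3*mu

def C3(lam, mu):                 # eigenvalue of C_3[SU(3)], Eq. (7)
    return (lam - mu) * (2*lam + mu + 3) * (lam + 2*mu + 3) / 9.0

N = 6                            # boson number used for 196Pt
kappa = 3 * N / (2 * N + 3)      # degenerate-point value, = 1.2 for N = 6

for lam in range(2 * N, -1, -2): # irreps with lam + 2*mu = 2N
    mu = (2 * N - lam) // 2
    e = -C2(lam, mu) / (2 * N) + kappa * C3(lam, mu) / (2 * N**2)
    print(f"(lam, mu) = ({lam:2d}, {mu}):  E/(c*eta) = {e:.3f}")
# For N = 6 every line prints -6.000, i.e. the irreps (12,0), (10,1), ..., (0,6)
# form a single degenerate multiplet, as stated above.
```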
Point \(A\) is biased toward the prolate side and point \(B\) toward the oblate side. In the \(\gamma\)-soft region between the two points, point \(B\) is closest to the oblate shape. \({}^{196}\)Pt is a \(\gamma\)-soft nucleus with a large positive quadrupole moment, so it is natural to investigate whether the spectrum at point \(B\) can be used to describe this nucleus. It should be noticed that the value \(\kappa_{A}=1.188\) is somewhat smaller than the value of the SU(3) degenerate point, 1.2; thus the real degenerate line having the O(5) partial dynamical symmetry between the U(5) limit and the SU(3) degenerate point is not the directly connected green line in Fig. 1 [22], which is also discussed in [4]. There is a sudden shape change through the SU(3) degenerate point from the prolate shape to the oblate shape [23]. However, in the large-\(N\) limit [22], the point \(A\) is the critical point between the prolate shape and the \(\gamma\)-rigid triaxial shape, while the point \(B\) is the critical point between the \(\gamma\)-rigid triaxial shape and the oblate shape. Thus the positions of these points are different from each other [4]. However, for small \(N\), this connected line is a good approximation [4]. In Fig. 1, the blue line passes through the point \(B\). It should be noticed that, for any \(N\geq 4\), this special point \(B\) exists. Fig. 4 (a) presents the partial low-lying level evolutions along the blue line via the point \(B\). The degeneracy between the ground band and the \(\gamma\)-band is somewhat broken, which can also be seen in the energy spectra of the point \(B\) shown in Fig. 5 (b). Figure 3: Level evolution of the \(4^{+}_{1}\), \(2^{+}_{2}\) and \(0^{+}_{2}\) states for the parameter \(\kappa\) from 1.0 to 1.6 when \(\eta=0.5\), \(\xi=0\) and \(N=6\). Two crossing points \(A\) and \(B\), where accidental degeneracy occurs, can be observed. Compared with Fig. 2, the levels of the \(0^{+}_{2}\) and \(2^{+}_{3}\) states move up a little and \(6^{+}_{1}\), \(4^{+}_{2}\), \(3^{+}_{1}\) and \(0^{+}_{2}\) are nearly degenerate. Thus the low-lying part of the spectra is similar to that shown in the O(6) limit [19]. For point \(B\), the energies of the \(0^{+}_{2}\) and \(0^{+}_{3}\) states are 1.1637c and 2.1472c in Fig. 4 (a). The ratio of the two states \(R^{\prime}=E_{0^{+}_{3}}/E_{0^{+}_{2}}=1.845\) is much larger than the experimental value of 1.236. In Ref. [5], it is found that the introduction of the fourth-order interaction \(\hat{C}^{2}_{2}[\mathrm{SU(3)}]\) can reduce the energy difference between the \(0^{+}_{2}\) and \(0^{+}_{3}\) states, even to zero. Fig. 6 presents the level evolutions of the \(0^{+}_{2}\) and \(0^{+}_{3}\) states when \(\xi\) increases from 0 to 0.3 for \(\eta=0.5\) and \(\kappa=1.404\). It should be noticed that the fourth-order interaction is only a supplementary term here [4], so the value of \(\xi\) is small. The distance between the two states first decreases and then increases. The minimum value is around 0.144, at which \(R^{\prime}=1.188\), smaller than the experimental value. In this paper, for a better fitting of the quadrupole moment, we select \(\xi=0.05\). Fig. 4 (b) presents the partial low-lying level evolutions when \(\eta\) varies from 0 to 1 for \(\kappa=1.404\) and \(\xi=0.05\). The most significant change is that the position of the \(0^{+}_{2}\) state increases from 1.16c to 1.47c. The corresponding spectra can be seen in Fig. 5 (c), which seem somewhat irregular. 
But the low-lying part looks very similar to the spectra of realistic \(\gamma\)-soft nuclei. For comparison, Fig. 5 (a) presents the spectra of the point \(A\) for \(N=6\), which is not shown in [1] and seems different from the two figures below. In Fig. 5 (a) the low-lying part is very similar to the vibrational spectra of a rigid spherical nucleus. The key difference is that some bands (\(0^{+}_{4}\), \(2^{+}_{8}\), \(0^{+}_{6}\) as bandheads) are greatly elevated. When moving from the point \(A\) to the point \(B\), and then with the addition of the fourth-order term, the degeneracies are gradually broken and the familiar \(\gamma\)-soft feature emerges, while the regularity of the spectra is weakened. Figure 5: Energy spectra of (a) the point \(A\), (b) the cousin point \(B\) and (c) adding the fourth-order interaction when \(\xi=0.05\) for \(N=6\). Figure 6: Level evolution of the \(0^{+}_{2}\) and \(0^{+}_{3}\) states for the parameter \(\xi\) from 0 to 0.30 when \(\eta=0.5\), \(\kappa=1.404\) and \(N=6\). ## IV \(B(E2)\) values and quadrupole moments for the point \(B\) The \(B(E2)\) values are vital for understanding collective behaviors. In common experience in nuclear structure studies, we often expect a definite relationship between the energy spectra and the corresponding \(B(E2)\) values, especially in the IBM. However, this expectation may lead to wrong conclusions. In the spherical nucleus puzzle [9; 11; 12], the energy spectra of the Cd isotopes are similar to those of rigid spherical vibrations, but the \(B(E2)\) values are experimentally found to violate the expectations. Thus new perspectives on the shape evolution from the magic-number nucleus to deformation need to be developed. In the \(B(E2)\) anomaly [15; 16; 17; 18], this case becomes more obvious. From the level evolutions of the Pt-Os-W isotopes with neutron number, the energy spectra of \({}^{172}\)Pt, \({}^{168,170}\)Os, \({}^{166}\)W seem normal, but their \(B(E2)\) values completely exceed expectations. Thus collective behaviors cannot be determined solely by the energy spectra. Various nuclear spectroscopic methods are needed [12]. For understanding \(\gamma\)-softness, the \(B(E2)\) values are also necessary. Especially now that the new \(\gamma\)-softness has been proposed [1], distinguishing the different kinds of \(\gamma\)-softness becomes more and more important in the description of the properties of realistic nuclei. The \(E2\) operator is defined as \[\hat{T}(E2)=e\hat{Q}, \tag{8}\] where \(e\) is the boson effective charge. The evolution of the \(B(E2;2^{+}_{1}\to 0^{+}_{1})\), \(B(E2;0^{+}_{2}\to 2^{+}_{1})\), \(B(E2;0^{+}_{2}\to 2^{+}_{2})\), \(B(E2;0^{+}_{3}\to 2^{+}_{1})\), and \(B(E2;0^{+}_{3}\to 2^{+}_{2})\) values are plotted along the blue line in Fig. 1 for \(N=6\). In Fig. 7 (a) the \(B(E2;2^{+}_{1}\to 0^{+}_{1})\) value remains nearly the same. For \(\eta=1.0\), it describes an oblate shape [23], thus the \(B(E2;2^{+}_{1}\to 0^{+}_{1})\) value is suppressed. With increasing \(\eta\), the values of \(B(E2;0^{+}_{2}\to 2^{+}_{1})\) and \(B(E2;0^{+}_{3}\to 2^{+}_{2})\) are reduced while the \(B(E2;0^{+}_{2}\to 2^{+}_{2})\) value becomes larger. The trends are similar to the ones along the green line with degeneracy, and the \(\gamma\)-softness can emerge. When the fourth-order interaction is introduced in Fig. 7 (b), at \(\eta=0.5\), the value of \(B(E2;0^{+}_{2}\to 2^{+}_{2})\) can be reduced. Fig. 
8 shows the quadrupole moments of the \(2^{+}_{1}\) state for \(\xi=0\) (the solid blue line) and \(\xi=0.05\) (the solid red line). It is shown that, for the blue line, when \(\eta\geq 0.372\), the value becomes positive, which means an oblate deformation. For the red line, it bends to the oblate side. Figure 8: The evolution of the quadrupole moment of the \(2^{+}_{1}\) state along the blue line in Fig. 1 for \(N=6\) when \(\xi=0\) (the solid blue line) and \(\xi=0.05\) (the solid red line). ## V Theoretical fitting of \({}^{196}\)Pt Without considering other higher-order interactions in the SU(3) limit, the properties of the point \(B\) or adding the fourth-order interaction are used to fit the structure of \({}^{196}\)Pt. Although the precision needs to be improved, the fitting results seem excellent. For \(\xi=0\) at point \(B\), the overall energy parameter \(c\) in \(\hat{H}\) is 0.9753 MeV to make the energy value of the \(0^{+}_{2}\) state equal to the experimental one. The \(L^{2}\) term is also added to fit the \(2^{+}_{1}\) state, which is 0.00803 MeV. The theoretical spectra of point \(B\) shown in Fig. 9 (b) are compared with the experimental data shown in Fig. 9 (a). The theory and the experiment correspond well qualitatively, and we can see that the position relationships of each energy level are also consistent. The rotational-like \(\gamma\)-band in \({}^{196}\)Pt is an interesting problem, and the theoretical spectra have similar structures. \(0^{+}_{4}\) and \(0^{+}_{5}\) states also fit well. The main drawback is that the \(0^{+}_{3}\), \(2^{+}_{5}\) and \(2^{+}_{6}\) are somewhat higher, that is, the energy difference between the \(0^{+}_{3}\) and \(0^{+}_{2}\) states is somewhat larger than the experimental result. This is the typical feature of the new \(\gamma\)-soft rotational mode [1]. For \(\xi=0.05\), better fitting results can be obtained, where the characteristics of the \(\gamma\) band are consistent with the actual situation and the energies of the \(0^{+}_{3}\) and \(0^{+}_{4}\) states are also reduced. For reducing the energies of the higher-levels, Pan _et al._ presented a new method to provide an excellent fitting result for \({}^{194}\)Pt [59], which may be used to improve the fitting precision in the SU3-IBM. Table 1 lists the \(B(E2)\) values of some low-lying states in \({}^{196}\)Pt, the point \(B\) (Res.1), adding the fourth-order term (Res.2), the O(6) partial dynamical symmetry model (PDS) [58] and the modified soft-rotor model (MSR) [60]. These three models are all related to higher-order interactions in IBM. In the O(6) partial dynamical symmetry, one three-body interactions that is partially solvable in O(6) symmetry can be constructed, which can mix the \(\Sigma=4\) and \(\Sigma=2\), but it does not change the case \(\Sigma=6\) for \(N=6\) (\(\Sigma\) is the O(6) label). In the modified soft-rotor model, the higher-order interactions are used to fit the Pt isotopes, which is inspired by the O(6) higher-order symmetry description of \({}^{194}\)Pt [59]. For Res.1, this \(\gamma\)-soft description of the point \(B\) can show a good consistency with the experimental data qualitatively [61]. From the overall fitting results, it looks like somewhat worse than the other two theories, but the result is still good considering that the parameters of the point \(B\) are not adjustable. When the fourth-order interactions are introduced for \(\xi=0.05\), the fit can be greatly improved. 
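As a small arithmetic cross-check (ours, using only numbers quoted above: the point-\(B\) energies \(E_{0^{+}_{2}}=1.1637c\) and \(E_{0^{+}_{3}}=2.1472c\), the scale \(c=0.9753\) MeV, and the experimental ratio 1.236), the absolute energies and the ratio \(R^{\prime}\) follow directly:

```python
# Hedged arithmetic sketch based only on values quoted in the text (point B, xi = 0).
c = 0.9753                         # MeV, fixed so that the 0+_2 energy matches experiment
e02, e03 = 1.1637 * c, 2.1472 * c
print(f"E(0+_2) = {e02:.3f} MeV, E(0+_3) = {e03:.3f} MeV")    # about 1.135 and 2.094 MeV
print(f"R' = {2.1472 / 1.1637:.3f}")                          # 1.845, as quoted above
print(f"E(0+_3) implied by the experimental ratio 1.236: {1.236 * e02:.3f} MeV")  # ~1.40 MeV
```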
To better substantiate this conclusion, a quantitative analysis is mandatory. The first quantity that we might study is the staggering parameter \(S(J)\) in \(\gamma\)-band energies [62; 63] defined as \[S(J)=\frac{(E_{J}-E_{J-1})-(E_{J-1}-E_{J-2})}{E_{2^{+}_{1}}}, \tag{9}\] which quantifies how adjacent levels within a \(\gamma\) band are grouped. Fig. 10 presents the \(S(J)\) for \(J=4,5,6\) in the experimental data, Res.1 and Res.2 in the SU3-IBM, MSR [60], CQ [60] and O(6) symmetry [58]. Black squares are the experimental results. CQ shows the typical \(\gamma\)-soft feature of strong staggering. O(6), MSR and Res.1 display similar trends, while Res.2 gives the best fitting. Figure 9: Energy spectra of (a) \({}^{196}\)Pt, and of the Hamiltonian in Eq. (3) at the point \(B\) (b) and when adding the fourth-order interaction when \(\xi=0.05\) for \(N=6\) (c). The second is on the positions of the \(0^{+}_{2}\), \(0^{+}_{3}\) and \(0^{+}_{4}\) states in the spectra. \(R^{\prime}=E_{0^{+}_{3}}/E_{0^{+}_{2}}\) is the energy ratio between the \(0^{+}_{2}\) and \(0^{+}_{3}\) states. \(R=E_{0^{+}_{4}}/E_{0^{+}_{3}}-2\) is used in [58]. Table 2 presents \(R^{\prime}\) and \(R\) for the experimental data, Res.1, Res.2, MSR [60], CQ [60], O(6) symmetry [58] and PDS [58]. For \(R^{\prime}\), Res.2, MSR and CQ can offer reasonable results. For \(R\), Res.1, Res.2 and PDS give the best results that are consistent with the experimental data. In [58], although the introduction of the higher-order interaction can fit \(R\) well, it increases \(R^{\prime}\). Res.2 is the only theory that makes both values more consistent. What needs to be mentioned is that the \(0^{+}_{4}\) state in \({}^{194}\)Pt is an intruder state [59]; thus the experimental \(0^{+}_{5}\) (or \(0^{+}_{4}\)) state in \({}^{196}\)Pt may also be an intruder state. Experimental data on the quadrupole moments are rare, and these moments are very sensitive to the specific nuclear structure model. Table 3 presents the quadrupole moments of some low-lying states in \({}^{196}\)Pt, together with the results of Res.1, Res.2, the MSR model [60] and CQ [60]. The average variance of the fit, \(\Delta Q\), is calculated. Our results are more consistent with the experimental data. The quadrupole moments of these states in the O(6) limit are all zero, with \(\Delta Q=0.64\). The quadrupole moments can be a very useful indicator for judging the success of different nuclear models. The results of these three quantitative calculations favour the description in terms of the SU3-IBM over previous theories. ial shapes, while the deformed shape with a strong triaxial instability is demonstrated on \({}^{166}\)Er, which has been shown to be related to a self-organization mechanism [64; 65]. Holding an obstinately simple view of collective motions in nuclei seems very undesirable. Further experimental information on the two puzzles may lead us to a more accurate description of nuclear structure. This article can be seen as an extension of the previous works [1; 4]. In these papers, the SU3-IBM is proposed, which is inspired by the interesting findings in Refs. [22; 23]. When only the U(5) limit and the SU(3) limit are concerned, a \(\gamma\)-soft rotational mode can also occur as an emergent phenomenon if the SU(3) higher-order interactions are introduced into the common formalism. This is very different from the \(\gamma\)-softness related to the O(6) limit. In the traditional IBM, O(6) symmetry is exactly solvable, and \(\gamma\)-unstable spectra can be expected. 
Especially the SU3-IBM can describe the \(B(E2)\) anomaly, which make the model particularly useful. Fig. 11 presents partial energy spectra of \({}^{120}\)Cd, \({}^{196}\)Pt and \({}^{132}\)Ba, for their boson number are all \(N=6\). Ref. [10] obtains the spectra of \({}^{120}\)Cd, in which the \(0^{+}_{3}\) state at the three-phonon level is absent. The energy of the \(0^{+}_{3}\) state of \({}^{120}\)Cd is predicted by our theory, which is around 2300 KeV, or even higher. Thus this is really a new \(\gamma\)-soft rotational mode [8; 9; 12], despite it looks much like the rigid spherical vibrational excitation mode. If we consider that the IBM can truly describe the collective excitations in realistic nuclei, the only way to describe the new \(\gamma\)-soft behaviors is to introduce the SU(3) higher-order interactions. The theory works better than expected [1; 2; 3]. \({}^{196}\)Pt and \({}^{132}\)Ba are two typical traditional \(\gamma\)-soft nuclei. In the IBM, this kind of \(\gamma\)-softness is related to the O(6) symmetry, or O(5) symmetry, such as E(5) critical point description [67; 68]. In fact, this case is not related to the deformation \(\gamma\) parameter. The SU3-IBM follows such a principle that triaxiality arises from the competition between prolate shape and oblate shape. In this paper, we show that the new emerging \(\gamma\)-softness can be also related to the traditional \(\gamma\)-soft spectra and their \(B(E2)\) behaviors. When other SU(3) higher-order interactions are introduced, the fitting of spectra appears excellent. This will be further discussed in following papers. In previous discussions, spectra and low-lying \(B(E2)\) values of various \(\gamma\)-soft rotational modes in different nuclear structure theories are very similar, so distinguishing various \(\gamma\)-softness in different theories is becoming extremely important. New \(\gamma\)-softness in the normal states of Cd nuclei can be only described by the SU3-IBM. However the spherical nucleus puzzle is still full of debates [13; 14; 69; 70]. A complete description of \({}^{110,112}\)Cd including configuration mixing, like Ref. [71; 72; 73; 74; 75], is in progress. \(B(E2)\) values between higher levels are useful, such as the \(0^{+}_{3}\), \(2^{+}_{4}\) and \(2^{+}_{5}\) states. Quadrupole moments of the low-lying states may provide great value in distinguishing among the various models. Though a lot of theoretical work still needs to be done, we expect that many observations in \(\gamma-\)soft nuclei might be described by the SU3-IBM in a unified way. E(5)-like \(\gamma\)-softness in \({}^{82}\)Kr is found in the new model [5]. Each interaction in the SU3-IBM has a clear geometric meaning, and different from common considerations in microscopic theory, such as SD-pair shell model [76; 77]. In the microscopic theory, various deformations resulting from the proton-neutron interactions can give rise to different deformation shapes, including the \(\gamma\)-soft rotational mode [78; 79]. In the SU3-IBM, this is not so, and specific interaction corresponds to certain shape, and the \(\gamma\)-softness is an emergent phenomenon, which can not be expected before numerical calculations. How to understand this point is an interesting problem. From the existing results, the SU3-IBM is closer to the geometric collective model [51]. It seems important to investigate further the emerging \(\gamma\)-softness in the geometric model [80; 81]. 
Since the seminal works by Elliott [82; 83; 84], the SU(3) symmetry has played a key role in the description of rotational spectra from the perspective of the microscopic shell model [85; 86; 87; 88; 89; 90; 36]. It was also found that the SU(3) symmetry plays a key role in the description of shell-like quartetting of nucleons, and the SU(3) third-order Casimir operator \(\hat{C}_{3}[\)SU(3)\(]\) is needed to describe the experimental spectra in order to distinguish between the prolate and oblate shapes [98; 99; 100; 101]. In the previous studies, the SU(3) symmetry has been related to the prolate shape. In our new findings ([1; 2; 3; 4; 5; 6] and this paper), the SU(3) symmetry actually dominates all the quadrupole deformations. In particular, in this model, one can obtain large quadrupole moments, even with spectra that resemble a gamma-soft situation, a feature that cannot be obtained with partial dynamical symmetry models. We expect that this new perspective can be further used in the SU(3) shell model [88; 89]. Along the dashed red line through the points \(A\) and \(B\), the shape phase transition from the prolate shape to the oblate shape has been studied [4], and this is an asymmetric evolution, which is different from the symmetric one in \(\hat{H}_{1}\) [102; 103]. Figure 11: Partial energy spectra in \({}^{120}\)Cd normal states, \({}^{196}\)Pt and \({}^{132}\)Ba [66]. The energy of the \(0^{+}_{3}\) state in the \({}^{120}\)Cd normal states is predicted by the authors. The study of the properties of \({}^{196}\)Pt in this paper further supports this perspective, and a more detailed investigation of the prolate-oblate shape phase transition is also needed. In addition, in Ref. [5], the E(5)-like energy spectrum was also found in the new model, where there are also some new findings pointing to the \(\gamma\)-softness, which are further used to understand \({}^{196}\)Pt. We look forward to further exploring the relationship between the \(\gamma\)-softness and the rigid triaxial-rotor mode [104]. ## VII Conclusions Recently, the interacting boson model with SU(3) higher-order interactions (SU3-IBM) was proposed [1; 2; 3] to resolve the spherical nucleus puzzle [8; 9; 10; 11; 12; 13; 14] and the \(B(E2)\) anomaly [15; 16; 17; 18]. Although this model can be obtained by only considering the U(5) limit and the SU(3) limit, the \(\gamma\)-soft rotational mode can emerge with (quasi-)O(5) partial dynamical symmetry. These results extend our view of the interacting boson model (IBM). The successful descriptions of the two abnormal phenomena are also changing our view of nuclear structure and shape evolution. As said by Heyde and Wood: "sphericity is a special case of deformation" and "the reference frame must be fundamentally one of a deformed many-body system" [11]. Along the same lines, with the present paper we show that \(\gamma\)-softness is also a special case of deformation. The emerging \(\gamma\)-softness may play a key role in this shift of perspective. Following the previous studies [1; 4], the emerging \(\gamma\)-soft rotational mode can be explored to explain the properties of \({}^{196}\)Pt. In our studies, a special point, which is near the middle of the degenerate line connecting the U(5) limit and the SU(3) degenerate point, is explored. The purpose of this paper is only to explore the relationship between the emerging \(\gamma\)-softness and the \(\gamma\)-soft properties in realistic nuclei. 
Further detailed fitting will be done in future when other SU(3) higher-order interactions are introduced. Further investigation of the \(\gamma\)-rigid triaxiality is also important in the SU3-IBM. This is a delicate topic [49]. The phase diagram of the SU3-IBM will be given in future, and it can offer meaningful guidance to the rigid triaxiality. 6-\(d\) interaction may be also valuable [28]. Distinguishing between the different \(\gamma\)-softness and discussing the differences between the \(\gamma\)-softness and the rigid triaxiality are topics that require further exploration. Finally, a direct discussion and fitting of the new model on the oblate nuclei, such as \({}^{196-204}\)Hg, is no doubt extremely important for understanding the SU3-IBM and for establishing the relationship between the new model and the SU(3) shell model. ## VIII Acknowledgment This research is supported by the Educational Department of Jilin Province, China (JJKH20210526KJ). C.-x.Z. gratefully acknowledges support from the Project Supported by Scientific Research Fund of Hunan Provincial Education Department, China (21A0427).
2305.06969
A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges
The widespread adoption of Machine Learning systems, especially in more decision-critical applications such as criminal sentencing and bank loans, has led to increased concerns about fairness implications. Algorithms and metrics have been developed to mitigate and measure these discriminations. More recently, works have identified a more challenging form of bias called intersectional bias, which encompasses multiple sensitive attributes, such as race and gender, together. In this survey, we review the state-of-the-art in intersectional fairness. We present a taxonomy for intersectional notions of fairness and mitigation. Finally, we identify the key challenges and provide researchers with guidelines for future directions.
Usman Gohar, Lu Cheng
2023-05-11T16:49:22Z
http://arxiv.org/abs/2305.06969v2
# A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges ###### Abstract The widespread adoption of Machine Learning systems, especially in more decision-critical applications such as criminal sentencing and bank loans, has led to increased concerns about fairness implications. Algorithms and metrics have been developed to mitigate and measure these discriminations. More recently, works have identified a more challenging form of bias called intersectional bias, which encompasses multiple sensitive attributes, such as race and gender, together. In this survey, we review the state-of-the-art in intersectional fairness. We present a taxonomy for intersectional notions of fairness and mitigation. Finally, we identify the key challenges and provide researchers with guidelines for future directions. ## 1 Introduction Machine learning (ML) has been increasingly used in high-stake applications such as loans, criminal sentencing, and hiring decisions with reported fairness implications for different demographic groups [1]. Measuring and mitigating discrimination in ML/AI systems has been studied extensively [14]. Such works have focused on two specific categories of algorithmic fairness: Group or individual fairness. The majority of early group fairness research was focused on one dimension of group identity, e.g., race or gender. This setting is defined as _independent_ groups fairness [23]. However, recent works have identified a more nuanced case of group unfairness that spans multiple subgroups based on Crenshaw's theory of "intersectionality" [12] called _intersectional group fairness_. At a high level, intersectionality states that interaction along multiple dimensions of identity produces unique and differing levels of discrimination for various possible subgroups, e.g., a Black woman's experience of discrimination differs from both women and Black people in general. Finally, _gerrymandering_ groups are the union of independent and intersectional groups. Figure 1 shows an example of these group fairness definitions using "gender" and "race". By categorizing people _only_ into distinct overlapping groups, independent group fairness fails to consider the discrimination people face at the intersection of such groups. This has been well-studied in philosophy and social psychology (e.g., [1, 1]), but recent works also demand urgency to do so in ML fairness. Specifically, an ML predictor might be fair w.r.t the _independent groups_ but not _intersectional groups_. For example, [1] identified accuracy disparities that were more significant for Black Women in gender classification algorithms, compared to independent groups. In NLP, works have evaluated popular generative models [13, 14] and also identified such cases of intersectional bias. Compared to the binary view of fairness in the independent case, the problem of intersectional fairness poses unique challenges. For instance, for what level of granularity of intersectional groups should fairness be guaranteed? On the other hand, smaller subgroups have higher data sparsity, resulting in higher uncertainty [15]. Furthermore, an intersectional identity often amplifies biases that might not exist in its constituent groups (e.g., Black woman vs. Black or Woman), rendering traditional mitigation techniques ineffective. 
To this end, an emerging body of work, e.g., subgroup fairness [13] and multicalibration [16], has proposed various notions of intersectional fairness and mitigation techniques that provide a level of guarantee against intersectional discrimination. Multiple extensive surveys on fairness in ML have been conducted, such as [12] and [1]. However, they mainly consider the _independent_ group fairness and individual fairness while only briefly discussing _intersectional_ cases. To bridge this gap, we review the existing fairness literature on intersectional and gerrymandering groups. In particular, we examine existing notions of intersectional fairness in ML and AI and investigate the techniques that enable fair learning for intersectionality. Figure 1: Definitions of group fairness [23]. Our main contributions are: 1. We propose the first taxonomy (Fig. 2) for the notions of intersectional fairness and fair learning methods for mitigating intersectional discrimination. 2. We thoroughly examine representative intersectional fairness notions and learning methods and discuss their limitations. 3. We conclude with the main challenges faced and point out the open problems in the area. ## 2 Notions of Intersectional Fairness Intersectionality, as opposed to group fairness based on independent protected groups (e.g., gender), postulates that the sum of human experiences with discrimination cannot be limited to individual groups alone [14]. Predictors can appear fair when evaluated on independent groups but not at their intersections [1]. Satisfying traditional group fairness for intersectionality is infeasible due to potentially infinite overlapping subgroups. This section reviews fairness notions for intersectionality that limit the number of subgroups by balancing the requirements of group fairness and the stronger notion of individual fairness. **Notations**. Each individual is denoted by a tuple \((x,y)\) where \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\) denote the instance and ground-truth label, respectively. Let \(A=\{s_{1}....s_{n}\}\) be the set of size \(n\) protected attributes, \(f\) a predictor, and \(f(x)\) the predictor output. ### Subgroup Fairness A pioneering work [13] proposes a stronger notion of group fairness, called _subgroup fairness_, that holds over a large number of _structured_ subgroups that can be learned efficiently. In particular, statistical parity (SP) subgroup fairness limits the number of subgroups by disregarding those with limited representations in the data and relaxes the requirement of statistical parity. Let \(\mathcal{C}=\{c:\mathcal{X}\rightarrow\{0,1\}\}\) be a collection of characteristic functions where \(c(s)=1\) indicates that an individual with protected attribute \(s\) is in subgroup \(c\). **Definition 1**.: \(f(x)\) _is \(\gamma-\)SP subgroup fair if \(\forall\ c\in\mathcal{C}\):_ \[|P(f(x)=1)-P(f(x)=1|c(s)=1)|\\ \times P(c(s)=1)\leq\gamma, \tag{1}\] The \(\gamma\)-SP is determined by the worst-case group \(c\in\mathcal{C}\). The first term in Eq. 1 is a penalty on the difference in probability between the positive outcome for a specific subgroup \(c\) and for the entire population. The smaller the difference, the fairer the outcome. The second term reweighs the difference by the proportion of the size of each subgroup in relation to the population. Consequently, the unfairness of smaller-sized groups is down-weighted in the final \(\gamma-\)SP estimation. 
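To make Eq. (1) concrete, here is a minimal sketch (our own illustration, not code from [13]) of estimating the smallest \(\gamma\) for which a set of binary predictions is \(\gamma\)-SP subgroup fair with respect to a candidate collection of intersectional subgroups:

```python
# Sketch: worst weighted statistical-parity gap over candidate subgroups (Eq. (1)).
import numpy as np

def gamma_sp(y_pred, subgroups):
    """y_pred: array of 0/1 predictions; subgroups: dict name -> boolean membership mask."""
    base = np.mean(y_pred)                       # P(f(x) = 1) over the whole population
    worst, worst_name = 0.0, None
    for name, mask in subgroups.items():
        size = np.mean(mask)                     # P(c(s) = 1), the subgroup's share
        if size == 0:
            continue
        gap = abs(base - np.mean(y_pred[mask]))  # |P(f=1) - P(f=1 | c(s)=1)|
        if gap * size > worst:                   # small groups are down-weighted by `size`
            worst, worst_name = gap * size, name
    return worst, worst_name

# toy example: intersections of two binary attributes
rng = np.random.default_rng(0)
gender, race = rng.integers(0, 2, 1000), rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = {f"g={g},r={r}": (gender == g) & (race == r) for g in (0, 1) for r in (0, 1)}
print(gamma_sp(y_pred, groups))
```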
Thus, it may not adequately protect small subgroups, even if they have high levels of unfairness. Similarly, subgroup fairness can be applied to the false positive (FP) rate. ### Calibration-based Fairness Calibration in binary prediction tasks refers to the accuracy of a predictor's confidence in its predictions [13]. It ensures that the predicted probability distribution for each output class \(f(x)=v\) is equal to the actual data probability distribution, i.e., the true expectation is equal to \(v\). For example, if six out of ten samples are positive, the underlying probability and expected predicted probability should also be 0.6. Independently, [1] proposes multicalibration, which requires all subgroups to be well-calibrated, assuming access to a class of efficiently-learnable characteristic functions. Formally: **Definition 2**.: _Given a parameter \(\alpha\in[0,1]\), \(f(x)\) is (\(\mathcal{C},\alpha\))-multicalibrated if for all predicted values \(v\in[0,1]\), \(\forall c\in\mathcal{C}\)_ \[|\mathbb{E}[c(x)\cdot(y-v)|f(x)=v]|\leq\alpha. \tag{2}\] The parameter \(\alpha\) allows for a less stringent requirement on calibration, i.e., a small miscalibration error \(\alpha\) is allowed. Intuitively, a rich class \(\mathcal{C}\) will contain groups beyond independent cases, such as intersectional groups, leading to stronger fairness guarantees. _Multiaccuracy_ [15] replaces calibration with accuracy constraints to propose a weaker fairness notion, which requires a predictor to be at least \(\alpha\)-accurate: \(|\mathbb{E}[c(x)\cdot(f(x)-y(x))]|\leq\alpha\ \forall c\in\mathcal{C}\). Compared to multicalibration, multiaccuracy is less computationally expensive as it is not conditioned on the calibration of each output class across a rich class of intersecting subgroups. These notions define two extremes between efficiency and strong fairness guarantees. To find a balance between the two, [1] introduces a hierarchy of weighted multicalibration. Formally: **Definition 3**.: _Given \(\mathcal{C}\) and a weight class \(\mathcal{W}\), \(f(x)\) is (\(\mathcal{C},\mathcal{W},\alpha)-\)multicalibrated, if \(\forall c\in\mathcal{C}\) and \(\forall w\in\mathcal{W}\)_ \[|\mathbb{E}[c(x)\cdot w(f(x))\cdot(y-f(x))]|\leq\alpha. \tag{3}\] The choice of class \(\mathcal{W}\) can lead to multiple variations of the multicalibration notions. _Low-degree multicalibration_ is defined by taking the weight functions \(w(f(x))\) to be polynomials of degree \(k-1\). As shown in Figure 3, when \(k=1\), \(w(f(x))\) is constant, and we get the efficient albeit weaker fairness notion of _multiaccuracy_. At higher polynomial degrees, it converges to _multicalibration_. For a class of 1-Lipschitz functions, we get the \((C,\alpha)\)-_smooth-multicalibration_. Finally, if the predictor is calibrated on prediction intervals instead of calibrating on each predicted value, we arrive at _full-multicalibration_. As such, this hierarchy interpolates the space between multiaccuracy and multicalibration, increasing the strength of fairness guarantees and complexity at higher levels. Figure 2: The taxonomy for notions of intersectional fairness and fair learning methods. ### Metric-based Fairness Another line of work [20] addressed the computational concerns of satisfying fairness for a possibly large number of intersectional groups by relaxing the notion of the seminal work of [17] on individual fairness. 
Individual fairness requires that, given a similarity metric, if the distance between a pair of individuals is small, a predictor should output similar classification distributions. Inspired by this, [20] proposes a relaxed generalization of individual fairness which allows a small fairness error, called _approximate-metric_ fairness. Similar to the subgroup fairness and multicalibration works, the relaxation allows the use of efficient learning algorithms that protect every sufficiently-large subgroup. However, unlike those works, the subgroups are not defined a priori. Formally: **Definition 4**.: _For a small \(\alpha\in[0,1]\) and \(\gamma\in[0,1]\), \(f(x)\) is (\(\alpha,\gamma\))-approximately metric fair w.r.t similarity metric \(d\) and data distribution \(\mathcal{D}\) if_ \[\operatorname*{\mathbb{P}}_{(x,x^{{}^{\prime}})\sim\mathcal{D}}[|f(x)-f(x^{{}^{\prime}})|\geq d(x,x^{{}^{\prime}})+\gamma]\leq\alpha, \tag{4}\] where \((x,x^{{}^{\prime}})\) are two individuals sampled from the dataset. The parameters \(\alpha\) and \(\gamma\) allow for small errors in similarity and the metric-fairness measures. Approximate-metric fairness requires that individual fairness holds for all but a small fraction \(\alpha\) of pairs of individuals. Consequently, it protects all subgroups of size greater than \(\alpha\), as members within the subgroups are treated similarly to those outside. Approximate-metric fairness assumes that the similarity metric is already known for individuals. To relax the assumption, [14] introduces _metric-multifairness_, which supports any similarity metric and requires that similar _subgroups_ are treated similarly based on the average distance between individuals in those groups. Formally: **Definition 5**.: _For a small constant \(\gamma>0\) and an unknown similarity metric \(d\), \(f(x)\) is \((\mathcal{C},d,\gamma)\)-metric multifair if_ \[\operatorname*{\mathbb{E}}_{(x,x^{{}^{\prime}})\sim A}[|f(x)-f(x^{{}^{\prime}})|]\leq\operatorname*{\mathbb{E}}_{(x,x^{{}^{\prime}})\sim A}[d(x,x^{{}^{\prime}})]+\gamma. \tag{5}\] More specifically, it requires that individuals in subgroups are treated differently only if they differ substantially from the average difference between individuals within the subgroup. There exist other works (e.g., [14]) that define metric-based fairness in online learning; however, we limit ourselves to the offline setting due to space constraints. ### Differential Fairness Anti-discrimination laws [16] in the United States declare an outcome as biased if the ratio of probabilities of a favorable outcome between an advantaged and a disadvantaged group is less than 0.8. _Differential Fairness_ (DF) [20] extends this rule to protect multidimensional intersectional categories. But instead of using a fixed threshold at 80%, DF uses a sliding scale, similar to the concept of "differential privacy" [17], to measure the unfairness of a predictor w.r.t intersectional groups. **Definition 6**.: \(f(x)\) _is \(\epsilon-\)differentially fair if_ \[e^{-\epsilon}\leq\frac{P(f(x)=y|s_{i})}{P(f(x)=y|s_{j})}\leq e^{\epsilon}, \tag{6}\] holds for all tuples \((s_{i},s_{j})\in A\times A\) where \(P(s_{i}),P(s_{j})>0\). For small values of \(\epsilon\), the DF criterion states that the probabilities of favored outcomes will be similar for any combination of intersectional groups. Unlike other notions, DF ensures fairness for all possible groups, regardless of their size. To estimate the probabilities in Eq. 6, empirical counts for each subgroup can be used. 
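As a minimal sketch of this empirical-count estimate (our illustration; it assumes binary predictions, one intersectional group label per sample, and simply skips zero-count outcomes):

```python
# Sketch: smallest epsilon satisfying Eq. (6), estimated from empirical counts.
import numpy as np
from itertools import combinations

def empirical_epsilon(y_pred, group_ids):
    """y_pred: array of 0/1 predictions; group_ids: intersectional group label per sample."""
    rates = {}
    for g in np.unique(group_ids):
        p = np.mean(y_pred[group_ids == g])     # P(f(x) = 1 | group g)
        rates[g] = (1.0 - p, p)                 # outcome probabilities for y = 0 and y = 1
    eps = 0.0
    for gi, gj in combinations(rates, 2):       # all pairs of intersectional groups
        for y in (0, 1):
            pi, pj = rates[gi][y], rates[gj][y]
            if pi > 0 and pj > 0:               # zero counts are skipped in this naive version
                eps = max(eps, abs(np.log(pi / pj)))
    return eps
```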
However, it suffers from data sparsity at higher intersections of groups. This can be addressed by using a Dirichlet prior. Finally, [20] also proposes _DF-bias amplification_, which measures the discrimination of a predictor by taking the difference of the DF of the dataset (\(\epsilon_{1}\)) and the predictor (\(\epsilon_{2}\)). Furthermore, [20] extended the DF notion to other standard group fairness notions such as Statistical Parity, Equality of Opportunity, False Positive Rate Parity, and equalized odds. The ratio in Eq. 6 is replaced with the specific group fairness definition for which the \(\epsilon\) is measured. Figure 3: Hierarchy of multicalibration that interpolates from multiaccuracy (MA) to multicalibration (MC) [1]. ### Max-Min Fairness The _Max-min_ (or _min-max_) notion of fairness is based on the Rawlsian principle of distributive justice [13]. This principle allows for inequalities but aims to maximize the minimum utility across different protected groups. Given a predictor and a fairness metric, it aims to maximize the fairness of the worst-off subgroup. Max-Min Fairness [1] is extended to intersectional cases by measuring the fairness of any combination of intersectional subgroups using existing fairness definitions and then taking the ratio of the maximum and minimum values from this list of subgroups. A ratio below 1 indicates a disparity between groups, with greater disparity if the ratio is closer to 0. This ratio can be applied to any existing fairness or performance measures like AUC. However, this definition also suffers from data sparsity when the number of dimensions of intersectionality increases. ### Probabilistic Fairness Differential fairness uses a Dirichlet prior of uniform parameter \(\alpha\) to resolve the issue of intersectional groups having zero counts in the data. The parameter affects this empirical count approach and may miss high-risk subgroups not represented in the data. To solve this, _Probabilistic Fairness_ (Molina and Loiseau, 2022) relaxes the requirement of guaranteeing fairness for all subgroups using a probabilistic approach. Formally, **Definition 7**.: _For \(\epsilon\geq 0\) and \(\delta\in[0,1]\), a predictor is \((\epsilon,\delta)\)-probably intersectionally fair if_ \[P(U\geq\epsilon)\leq\delta, \tag{7}\] where \(U=u(f(x),s,s^{{}^{\prime}})\) measures unfairness for a randomly chosen prediction and two protected groups (\(s\neq s^{{}^{\prime}}\)) to compare them. Probabilistic fairness captures the expected size \(\delta\) of the population for which the predictor discriminates more than \(\epsilon\). ### Discussions In contrast to traditional fairness metrics, intersectional fairness notions encapsulate a greater number of more granular subgroups and intersectional identities. Existing notions of intersectional fairness depend on the level of unfairness experienced by the most disadvantaged group across all intersections of individual groups. These notions only differ in their approaches for identifying and limiting the vast number of such subgroups to efficiently measure fairness. One risk with designing intersectional notions is that they arbitrarily limit subgroups based on various methods that can efficiently identify them, hence participating in the same fairness gerrymandering they attempt to solve (Kong, 2022). 
For instance, subgroup fairness uses a weight term to prove generalization guarantees w.r.t the underlying population; however, by doing so, it down-weights minority groups, which fails to adequately protect them. This highlights the need for a broader involvement of stakeholders to design notions and not simply rely on computational methods. While Max-Min and Differential Fairness truly encapsulate all such groups without disregarding any subgroup, they suffer from data sparsity. Probabilistic approaches might be favorable in such scenarios. Finally, future work could explore the applicability of these definitions in other domains (e.g., recommender systems, NLP, and so on) and define notions for continuous attributes. ## 3 Improving Intersectional Fairness ML systems have been shown to exhibit unfairness due to biases in data (Mehrabi _et al._, 2021) and algorithms (Buolamwini and Gebru, 2018; Gohar _et al._, 2022). In response, there have been great efforts to mitigate bias and improve fairness in ML systems. However, comparatively fewer fair learning algorithms have been proposed to address the unique challenges of intersectional fairness. Here, we review the two lines of approaches for intersectional fairness learning: _Intersectional Fairness with Demographics_ and _Intersectional Fairness Without Demographics_. ### Intersectional Fairness with Demographics A surge of methods has been proposed to mitigate bias by learning fair models (Mehrabi _et al._, 2021). These techniques are generally applied to the training data (pre-processing), the learning algorithm (in-processing), or the predictions (post-processing). Intersectional fairness with demographics mostly falls into the last two categories using the specific intersectional fairness notions we discussed in Section 2. **Subgroup fairness via auditing.** A number of works (Kearns _et al._, 2018; Kim _et al._, 2018; Kim _et al._, 2019; Hebert-Johnson _et al._, 2018) use auditing to learn fair predictors w.r.t a large number of subgroups. This approach involves an auditor, with access to i.i.d. samples \(X\) from an unknown distribution, that assesses the fairness of a predictor using a fairness metric and identifies subgroups with high unfairness. Then a learning algorithm tries to minimize error subject to that fairness constraint. Separately, (Hebert-Johnson _et al._, 2018; Kearns _et al._, 2018) both prove that the task of learning such a fair model is equivalent to auditing an arbitrary predictor w.r.t a class of subgroups \(\mathcal{C}\), which is computationally equivalent to weak agnostic learning of \(\mathcal{C}\). Utilizing this approach, the seminal work of (Kearns _et al._, 2018) proposes a zero-sum game between an _Auditor_ and a _Learner_ for the subgroup fairness notion. In this setting, the zero-sum game is a Fictitious Play using a cost-sensitive classification oracle (Agarwal _et al._, 2018). Instead of auditing during training, (Hebert-Johnson _et al._, 2018) use a post-processing iterative boosting algorithm by combining all \(c\in\mathcal{C}\) until the model is \(\alpha-\)calibrated. Multiaccuracy (Kim _et al._, 2019) extends this approach to learning a multi-accurate predictor that guarantees accurate predictions w.r.t \(\mathcal{C}\). Inspired by these approaches, (Kim _et al._, 2018) propose a variant of stochastic gradient descent that can be leveraged using auditing to post-process a predictor. 
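The following toy sketch (ours; the cited algorithms use learned auditors, calibrated updates and careful step sizes rather than this simplification) conveys the basic audit-and-update loop over a fixed candidate collection of subgroups:

```python
# Simplified audit-and-update sketch in the spirit of multiaccuracy post-processing.
import numpy as np

def postprocess(scores, y, subgroups, alpha=0.01, max_rounds=100):
    """scores: predicted probabilities; y: 0/1 labels; subgroups: dict name -> boolean mask."""
    scores = scores.astype(float).copy()
    for _ in range(max_rounds):
        # audit: population-weighted mean residual E[c(x)(f(x) - y)] per candidate subgroup
        viol = {name: np.mean(mask * (scores - y)) for name, mask in subgroups.items()}
        name, v = max(viol.items(), key=lambda kv: abs(kv[1]))
        if abs(v) <= alpha:                              # every subgroup is alpha-accurate
            break
        mask = subgroups[name]                           # update: remove that subgroup's mean bias
        scores[mask] -= np.mean(scores[mask] - y[mask])
        scores = np.clip(scores, 0.0, 1.0)
    return scores
```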
**Learning beyond surrogate fairness notions.** Techniques discussed so far are strictly based on surrogate fairness notions that are adapted to apply to many subgroups e.g., SP-subgroup fairness is a surrogate of the statistical parity notion. Next, we discuss works that go beyond tailor-made intersectional fairness notions. The approach outlined in (Shui _et al._, 2022) focuses on addressing group sufficiency for many subgroups (including intersectional groups) in ML predictors. Group sufficiency states that given the prediction \(f(x)\), the conditional expectation of the ground-truth label (\(\mathbb{E}[Y|(f(x),A]\)) is similar across different subgroups. They first derive an upper bound of the group sufficiency gap and propose a bi-level optimization approach with a randomized algorithm that generates the output distribution of the predictor. In the lower level, subgroup-specific output distribution is learned using a small sample of each subgroup's labeled data. Then, the final output distribution is updated at the upper level to ensure it is close to all subgroup-specific output distributions. Another work called _GroupFair_[15] proposes Bayes-optimal predictors that are fair for all subgroups w.r.t loss, using a weighted Empirical Risk Minimization (ERM) oracle [1]. Recently, [14] took another step towards a more generalized mitigation approach that does not depend on self-defined fairness notions by capturing linear and non-linear dependence between predictions and intersectional groups using mutual information [13]. Finally, [15], combines and extends previous works [1, 1] to include intersectional cases, by utilizing differential fairness. This involves randomly flipping predictions and a loss function which allows users to find the optimal fairness-accuracy trade-off. ### Intersectional Fairness without Demographics This line of research addresses intersectional biases without using protected attribute information due to privacy laws. Additionally, data sparsity in smaller subgroups and normative concerns about using synthetic data generation techniques [16] make this a compelling approach to tackle intersectional biases. It exploits the correlations between protected attributes and non-protected attributes to approximate the subgroup information. Most of the existing works are in-processing methods. One such line of work [11, 1] aims to maximize the minimum utility for all subgroups by using Rawlsian's Max-Min theory. Unlike parity-based fairness notions, this principle argues in favor of reducing worst-case risk. The fairness objective to minimize the worst-case loss can be formulated as: \[L_{\max}(f)=\max_{s\in S}\mathbb{E}[l(f;X)]. \tag{8}\] where \(\mathbb{E}[l(f;X)]\) is the expected loss for a loss function e.g., log loss. One approach [11] uses _distributionally robust optimization_ (DRO) to minimize the worst case loss for any subgroup. DRO achieves this by minimizing the loss over all distributions that are close to the input distribution. Their approach considers the worst-case loss over all distributions with \(\chi^{2}-\)divergence less than \(r\), where \(r\) is the radius of a chi-squared ball (\(B(P,r)\)) around the input probability distribution \(P\). The DRO function is defined as: \[L_{DRO}(f,r)=\underset{Q\in B(P,r)}{\text{sup}}\mathbb{E}_{\mathbb{Q}}[l(f;X)]. \tag{9}\] Specifically, DRO attempts to reduce the possibly exponential number of subgroups by only considering worst-case distributions that exceed a given size \(\alpha\). 
A key distinction here is that the objective of the learning algorithm does not depend on \(\alpha\). Consequently, all subgroups have equal representation in the loss function to be minimized. However, this method can potentially optimize noisy outliers, reducing its effectiveness. To address the limitations of DRO, [1] propose an Adversarial Reweighting-based approach that relies on the notion of _computationally-identifiable_ groups [1]. They design a minimax game between a _learner_ and _adversary_: the _learner_ is trained to minimize the expected loss while an adversarial neural network is tasked to learn identifiable regions where the learner has significant errors. Their results show that the regions with high errors correspond to various intersectional groups such as _black-female_. The other recent method [13] based on the Max-Min objective proposes a Pareto-efficient [12] learning algorithm to provide a performance guarantee for unidentified protected class w.r.t. to user-defined minimum group size. Although these works target minimizing the worst-case performance of _any_ unknown subgroup of a minimum size, experimental results show that they also improve fairness for some intersectional groups. ### Discussion One of the main drawbacks of current works is that the majority of them rely on specific surrogate fairness notions that we discussed in Section 2. Furthermore, these works ignore certain subgroups that do not conform to specific statistical requirements, e.g., computationally identifiable, that reinforces fairness gerrymandering. One approach to tackle this can be to learn latent representations of intersectional groups that can be then de-biased using geometric approaches [1]. Intersectional fairness without demographics is a promising direction to mitigate intersectional bias, but the current works are limited to the Max-Min notion. While there are certain applications (e.g., healthcare) where improving the utility of the worst-case groups is an important goal, many other applications can be required by law to ensure parity for all subgroups. Similarly, it is critical to evaluate the effectiveness of these methods on different intersectional groups present in the data. For instance, [1] relies on computational identifiability, which depends on correlations with unprotected attributes. Such methods might fail for intersectional groups that do not have strong demographic signals present in unprotected attributes, hence, failing to protect such groups. Future research can explore learning predictive patterns for underrepresented intersectional groups by leveraging common patterns shared with related groups. For example, Black Females and Black Males might have common structural patterns [16]. ## 4 Applications Most works discussed above are generally focused on classification tasks with i.i.d data. In this section, we review the application of intersectional fairness in other domains of AI. ### Natural Language Processing Numerous works (e.g. [13]) have observed that the societal bias inherent in real-world corpora translates to discrimination in NLP models. More recently, there has been a greater effort to focus on benchmarking and debiasing NLP models along intersectional lines. Benchmark.Several studies have examined bias in sentiment analysis systems, such as [17], and found that such systems discriminate based on intersections of gender and race. A closely related study by [14] across multiple languages confirms these biases. 
Contextualized word embedding models, including GPT-2 and BERT, have been analyzed for gender and race intersections at sentence level [15] and at contextualized word level [16]. These works report higher discrimination at the intersection of race and gender (e.g., Black females) compared to either group alone. Separately, [13] evaluates BERT for discrimination against people with disabilities along similar intersectional groups. To automatically identify intersectional biases in static word embeddings, [12] introduces Contextualized Embedding Association Test (CEAT) to measure intersectional bias in contextualized settings. Finally, [17] expands upon these works to incorporate intersections of religion, sexuality, and political affiliations to investigate representational and allocational harms concerning occupational stereotypes in language models. To quantify the scope of the intersectional bias problem in NLP, [15] performs a comprehensive evaluation of state-of-the-art NLP models and debiasing strategies for intersectional bias, benchmarking ten downstream tasks and five demographic groups. These studies highlight the importance of considering a diverse set of intersecting groups in discussions around bias in language models, especially user-facing large language models. Mitigation.Relatively few works have focused on debiasing along intersectional dimensions. The earliest work by [2] evaluates two debiasing techniques and shows that debiasing methods based on independent groups are prone to gerrymandering. To address the issue of limited data for intersectional groups, [3] introduces JoSEC, a debiasing approach that leverages the nonlinear geometry of subspace representations to learn intersectional subspace without using predefined word sets. Unlike the linear correlation assumption, they posit that the individual subspaces intersect over a single dimension where the intersectional group subspace resides. ### Ranking Systems Another common application domain is ranking systems. Fair ranking refers to the method of ensuring that ranking and recommender systems are equitable for all parties involved, including users, providers, and the items being ranked [12]. Here we review such works that explore the problem through an intersectional lens. Top-\(K\) Ranking.In the context of fair top-\(k\) selection, [1] examines discrimination along twelve intersections of socioeconomic status, high-school type, and zip code regions for college admissions and proposes an algorithm to select candidates with high utility whilst giving more representation to disadvantaged intersectional groups. Another promising approach [20] uses a causal framework for fair ranking across intersections of gender and race. They compute model-based counterfactuals and rank the resulting scores accordingly. Counterfactual fairness denotes that a prediction is fair if the outcome of an AI system does not change when a single variable is changed and all else remains the same [18]. Fair Rank Aggregation.A similar problem of fair rank aggregation requires combining various rankings to create a consensus ranking, but this can be biased against individual protected attributes like gender, race, and intersectional groups. To resolve this, [1] proposes a group fairness criterion for consensus ranking that ensures fairness for individual groups and their intersections. 
The unified fairness notion ensures minimal statistical parity difference between pairs of candidate rankings for individual and intersectional groups: \(ARP_{a_{k}}\leq\Delta\ (\forall a_{k}\in A)\) and \(IRP\leq\Delta\), where \(\Delta\) represents the desired closeness to statistical parity (zero ensures parity), and \(ARP\) and \(IRP\) represent rank parity for individual and intersectional groups, respectively. They use empirical counts to measure unfairness for each group using this metric, which is prone to data sparsity. ### Auditing and Visualization Auditing evaluates the fairness of AI systems after training a predictor. It is useful to detect discrimination against a large number of possibly intersecting subgroups. Auditing can identify such subpopulations and make the model more transparent by highlighting its failures. One such work [13] leverages a decision-tree-based auditing model to identify bias against dark-skinned women in an image dataset. Other works such as [10] and [15] utilize this approach to train fair predictors w.r.t. a large number of subgroups. Some works have created visualization tools to detect potentially discriminatory data subsets. FairVis [1] is a visual analytics tool for experts to utilize their domain knowledge in generating subgroups and augmenting automated detection and mitigation strategies. The tool uses clustering analysis to identify statistically similar subgroups and then computes important features and fairness metrics using entropy. Another such tool [1] identifies intersectional biases encoded in word embeddings. Given a pre-trained word embedding, it computes a bias score (using cosine distance) for each subgroup (e.g., male/female for a binary gender) and predefined word sets. A discriminatory word is considered to be associated with an intersectional group if it strongly associates with each of its individual groups according to the bias score. However, this approach may overlook cases like "Hair Weaves", which are associated with intersectional groups (Black Female) but not individual subgroups (Black or Female). A software engineering approach [12] seeks to ensure adequate representation for relevant intersectional groups within the dataset using coverage. They define intersectional groups using patterns, e.g., {Gender=Female, Race=Black}, and then require that each intersectional subgroup contains at least a minimum threshold \(\tau\) of instances. Finally, [1] finds intersectional bias in the dataset by dividing it into more granular groups until a subgroup with significant loss is found. ### Discussion Current applications of intersectionality have been focused on NLP and ranking. Some other promising applications include recommender systems [11], graph embeddings [10], computer vision [13], and so on. It is imperative that existing ML systems are holistically evaluated under the intersectional framework to aid in developing inclusive and fair ML systems. In NLP, a limitation of current works is the assumption that demographic information is available. Given increasing regulatory and privacy concerns, more research is needed to understand potential correlations in data that can be leveraged to tackle intersectional biases. ## 5 Datasets and Evaluation Metrics Available Datasets. Data scarcity is a big challenge for intersectional fairness as the number of dimensions increases. In Table 1, we summarize some popular datasets with adequate intersectional groups across different AI domains.
We hope our consolidated summary provides researchers with convenient access to datasets with rich subgroup information. Evaluation Metrics. Most studies use the intersectional notions they define for the learning algorithm as the evaluation metrics. Beyond that, worst-case classification accuracy and AUC are broadly used when demographic information is unavailable. These worst-case metrics have also been adopted in NLP and image classification tasks. ## 6 Summary and Open Problems In this survey, we review recent advances in the fairness of ML systems from an intersectional perspective. Intersectionality poses unique challenges that traditional bias mitigation algorithms and metrics cannot effectively address. We review different definitions of group fairness, present a taxonomy of intersectional fairness notions and mitigation methods, and review the literature on intersectionality in other AI domains. Next, we briefly discuss open problems and potential future research directions. Data Sparsity. The lack of representative data for marginalized subgroups is a significant challenge. Alternative approaches that do not rely on demographic information may be employed, but these methods do not guarantee that bias against missing subgroups will be addressed. Therefore, a concerted effort to create more inclusive datasets is needed. Selecting subgroups. Most works propose fairness notions that guarantee fairness only for a limited number of subgroups that are considered statistically meaningful (computationally feasible). This approach fails to protect minority subgroups that do not conform to these statistical requirements, and relying solely on such computational methods reinforces fairness gerrymandering [14]. Hence, it is crucial to involve diverse stakeholders to ensure that the needs and perspectives of different intersectional groups are met. Generalized mitigation approaches. Existing works on mitigating intersectional bias propose learning algorithms based on specific surrogate fairness notions. These cannot be generalized to other predictors to be used as plug-in mitigation tools. Learning latent representations for intersectional groups, so that debiased data can be used with any predictor and for any classification task, is a potential direction. Intersectional fairness beyond parity. Current research overlooks the under-representation of intersectional groups by solely focusing on achieving parity [14]. While parity is useful, an unequal distribution may be fairer in certain cases; for instance, equalizing hiring rates cannot fix the under-representation of Black females in tech. Therefore, more research on non-distributive intersectional fairness is needed. Generating test cases for auditing. Generating test cases to audit predictors for intersectional biases is another important direction. With the added complexity of intersectionality, it would be beneficial to evaluate previous testing tools [2] and design new tools to test user-facing models for intersectional bias. This can help identify intersectional subgroups against which the predictor is discriminatory. Beyond mitigation. To effectively address intersectional bias in ML systems, it is crucial to understand its propagation throughout the ML development cycle, from data collection to algorithms. Exploring causal approaches to understanding intersectional bias is one such interesting direction.
Evaluating fairness notions.Intersectional notions proposed for handling biases have mostly been explored theoretically, with little evidence of their effectiveness on real-world datasets, especially in evaluating the subgroups they fail to protect. Though there are no simple solutions for dealing with intersectional biases in ML, we must measure and benchmark such biases to tackle this problem effectively. ## Acknowledgements This material is based upon work supported by the Cisco Research Gift Grant. \begin{table} \begin{tabular}{l l l} \hline \hline **Type** & **Dataset** & **Demographics** \\ \hline \multirow{4}{*}{Tabular} & Adult & Gender, Age, Race \\ & Student & Gender, Age, Alcohol, Relationship \\ & Law School & Gender, Age, Race, Income \\ & Compass & Gender, Race \\ \hline \multirow{4}{*}{NLP} & Psychometrics & Gender, Age, Race, Income, Education \\ & MTC & Gender, Age, Race \\ & FIPI & Gender, Age, Race, Income, Education \\ & MBTI & Gender, Age \\ \hline \multirow{2}{*}{Ranking} & MEPS & Gender, Race, Age \\ & MovieLens & Gender, Age, Occupation \\ \hline \multirow{2}{*}{Image} & CelebA & Gender, Age, Race \\ & UTKFace & Gender, Age, Ethnicity \\ \cline{1-1} & PPB & Gender, Race \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of popular datasets across different AI domains that contain multiple intersectional groups. Law School [12] and Compass [13], are also used in Ranking. Adult [14], Student [21], Multilingual Twitter Corpus (MTC) [15], Five Item Personality Inventory (FIPI) and Myers-Briggs Type Indicator (MBTI) [16], MovieLens [12], MEPS [14], CelebA [15], UTKFace [10], PPB [1].
2301.12233
A Comprehensive Investigation of Metals in the Circumgalactic Medium of Nearby Dwarf Galaxies
Dwarf galaxies are found to have lost most of their metals via feedback processes; however, there still lacks consistent assessment on the retention rate of metals in their circumgalactic medium (CGM). Here we investigate the metal content in the CGM of 45 isolated dwarf galaxies with $M_*=10^{6.5-9.5}~M_\odot$ ($M_{\rm 200m}=10^{10.0-11.5}~M_\odot$) using {\it HST}/COS. While H I (Ly$\alpha$) is ubiquitously detected ($89\%$) within the CGM, we find low detection rates ($\approx5\%-22\%$) in C II, C IV, Si II, Si III, and Si IV, largely consistent with literature values. Assuming these ions form in the cool ($T\approx10^4$ K) CGM with photoionization equilibrium, the observed H I and metal column density profiles can be best explained by an empirical model with low gas density and high volume filling factor. For a typical galaxy with $M_{\rm 200m}=10^{10.9}~M_\odot$ (median of the sample), our model predicts a cool gas mass of $M_{\rm CGM,cool}\sim10^{8.4}~M_\odot$, corresponding to $\sim2\%$ of the galaxy's baryonic budget. Assuming a metallicity of $0.3Z_\odot$, we estimate that the dwarf galaxy's cool CGM likely harbors $\sim10\%$ of the metals ever produced, with the rest either in more ionized states in the CGM or transported to the intergalactic medium. We further examine the EAGLE simulation and show that H I and low ions may arise from a dense cool medium, while C IV arises from a diffuse warmer medium. Our work provides the community with a uniform dataset on dwarf galaxies' CGM that combines our recent observations, additional archival data and literature compilation, which can be used to test various theoretical models of dwarf galaxies.
Yong Zheng, Yakov Faerman, Benjamin D. Oppenheimer, Mary E. Putman, Kristen B. W. McQuinn, Evan N. Kirby, Joseph N. Burchett, O. Grace Telford, Jessica K. Werk, Doyeon A. Kim
2023-01-28T15:48:43Z
http://arxiv.org/abs/2301.12233v2
# A Comprehensive Investigation of Metals in the Circumgalactic Medium of Nearby Dwarf Galaxies ###### Abstract Dwarf galaxies are found to have lost most of their metals via feedback processes; however, a consistent assessment of the retention rate of metals in their circumgalactic medium (CGM) is still lacking. Here we investigate the metal content in the CGM of 49 isolated dwarf galaxies with \(M_{*}=10^{6.5-9.5}\ M_{\odot}\) (\(M_{200\rm m}=10^{10.0-11.5}\ M_{\odot}\)) using _HST_/COS spectroscopy. While H i (Ly\(\alpha\)) is ubiquitously detected (89%) within the CGM, we find low detection rates (\(\approx 5-21\%\)) in C ii, C iv, Si ii, Si iii, and Si iv, largely consistent with literature values. Assuming these ions form in the cool (\(T\approx 10^{4}\) K) CGM with photoionization equilibrium, the observed H i and metal column density profiles can be best explained by an empirical model with low gas density and high volume filling factor. For a typical galaxy with \(M_{200\rm m}=10^{10.9}\ M_{\odot}\) (median of the sample), our model predicts a cool gas mass of \(M_{\rm CGM,cool}\sim 10^{8.4}\ M_{\odot}\), corresponding to \(\sim 2\%\) of the galaxy's baryonic budget. Assuming a metallicity of \(0.3Z_{\odot}\), we estimate that the dwarf galaxy's cool CGM only harbors \(\sim 10\%\) of the metals ever produced, with the rest either in warmer phases yet to be detected, or transported to the intergalactic medium. We further examine the EAGLE simulation and show that H i and low ions may arise from a dense cool medium, while C iv arises from a diffuse warmer medium. Our work provides the community with a uniform dataset on dwarf galaxies' CGM that combines our recent observations, additional archival data, and a literature compilation, which can be used to test various theoretical models of dwarf galaxies. Circumgalactic medium (1879); Dwarf galaxies (416); Metal line absorbers (1032) Yong Zheng, Yakov Faerman, Benjamin D. Oppenheimer, Mary E. Putman, Kristen B. W. McQuinn, Evan N. Kirby, Joseph N. Burchett, O. Grace Telford, Jessica K. Werk, Doyeon A. Kim ## 1 Introduction The circumgalactic medium (CGM) is a large gaseous envelope surrounding a galaxy. It contains the imprints of outflows from feedback processes within a galaxy's disk, and is an important potential source of new star formation fuel for the galaxy. Numerous observations of the halos of Milky-Way (MW) mass galaxies have found that the CGM contains a significant amount of baryons and metals (Putman et al., 2012; Tumlinson et al., 2017; Peroux and Howk, 2020). On the other hand, studies on the CGM of dwarf galaxies (with stellar masses of \(M_{*}\lesssim 10^{9.5}\ \mathrm{M_{\odot}}\)) have so far been limited to a few works (e.g., Bordoloi et al., 2014; Burchett et al., 2016; Johnson et al., 2017; Zheng et al., 2020; Qu and Bregman, 2022, see below for more detail). A systematic investigation remains to be conducted. Dwarf galaxies have relatively shallow potential wells, and indeed the stellar mass-stellar metallicity relation for galaxies shows that low-mass galaxies do not retain metals as well as their higher-mass counterparts (e.g., Gallazzi et al., 2005; Kirby et al., 2011, 2013).
The gas phase abundance in relation to stellar mass also shows this trend of decreasing metal retention in the interstellar medium (ISM) with decreasing mass (e.g., Tremonti et al., 2004; Lee et al., 2006; Andrews and Martini, 2013; McQuinn et al., 2015). These results suggest that most of the metals produced throughout a dwarf galaxy's star formation history now reside beyond the central regions of the galaxy: either in the galaxy's CGM or in the intergalactic medium (IGM). Indeed, hydrodynamic simulations of dwarf galaxies have demonstrated the efficiency of stellar feedback in redistributing baryons and metals into the CGM and IGM (e.g., Shen et al., 2014; Muratov et al., 2017; Christensen et al., 2016, 2018; Wheeler et al., 2019; Agertz et al., 2020; Rey et al., 2020; Mina et al., 2021; Andersson et al., 2022). For example, Christensen et al. (2018) find that only \(\sim\)20-40% of the metals (by mass) are retained in the ISM of dwarf galaxies with \(M_{*}\lesssim\)10\({}^{9}\) M\({}_{\odot}\), while \(\sim\)10-55% resides in the CGM. Meanwhile, recent simulations of 8 dwarf galaxies by Mina et al. (2021) suggest that the metals in the simulated CGM are likely to be too diffuse to be easily detected. For a range of ions (H i, Si ii, C iv, O vi), they find low column densities of CGM gas as a function of impact parameter, with values typically lower than the published dwarf galaxy CGM literature (see below). Recent years have seen emerging efforts to observationally search for metals in the CGM of dwarf galaxies at \(z\lesssim 0.3\) (see references in Table 1). The COS-Dwarfs program studies C iv and H i absorption in the CGM of 43 galaxies with \(M_{*}=\)10\({}^{8-9.9}\) M\({}_{\odot}\) at \(z\lesssim 0.1\)(Bordoloi et al., 2014, 2018, hereafter B14+B18). They detect C iv out to \(\sim 0.5\) virial radius at a sensitivity limit of 50-100 mA as measured in C iv 1548A equivalent width (EW). A power-law fit to the observed data shows that C iv's EW drops quickly as a function of impact parameter (\(b\)). In a sample of 195 galaxy-QSO pairs at \(z<0.176\), Liang & Chen (2014, hereafter LC14) find low detection rates in C iv as well as in other ions (Si ii, Si iii, C ii, and C iv) while reporting ubiquitous H i detections in Ly\(\alpha\) 1215A (see also Wilde et al., 2021 for an extensive study on CGM H i absorbers over \(M_{*}\sim 10^{7-11}\) M\({}_{\odot}\)). However, note that the majority of LC14's galaxy-QSO pairs are not focused on dwarf galaxies, and most pairs are probed with spectra at low signal-to-noise ratio (SNR). Johnson et al. (2017, hereafter J17) examine 18 star-forming field dwarf galaxies with \(M_{*}\approx\)10\({}^{7.9-9.2}\) M\({}_{\odot}\) and study absorption in H i, Si ii, Si iii, Si iv, C iv, and O vi. Their work echoes B14+B18's and LC14's results that the detection rates of Si ii, Si iii, Si iv and C iv are very low and drop with increasing \(b\). However, they report a 50% detection of O vi in these field dwarf galaxies within the virial radii, suggesting that the dwarf galaxies' CGM may be dominated by gas with high ionization states. Similarly, Tchernyshyov et al. (2022) also find a high detection rate of O vi in their sample of over 100 dwarf galaxies and the O vi column densities increase with host galaxies' stellar masses. In the Local Group, multiple attempts to find metals in dwarf galaxies' CGM have yielded mixed results (Richter et al., 2017; Zheng et al., 2019, 2020; Qu & Bregman, 2022). Richter et al. 
(2017) find no detections of metals in 19 nearby dwarfs, most likely due to the fact that the galaxies probed are mainly spheroidal type and thus contain little gas, and the sightlines are at large impact parameters (\(b\gtrsim 0.5R_{\rm 200m}\)). Zheng et al. (2020) (hereafter Z20) observe 6 QSOs at 0.05-0.5 virial radii from the dwarf galaxy IC1613 and find significant detections toward most sightlines (see also a tentative detection in WLM in Zheng et al., 2019). Recently, Qu & Bregman (2022, hereafter QB22) examine the CGM of 3 dwarf galaxies (Sextans A, Sextans B, and NGC 3109) in loose associations, but only detect one C iv absorber toward Sextans A at \(b=21\) kpc (0.2 virial radius). QB22 explore analytical CGM models as established in Qu & Bregman (2018, 20), and find that a multi-temperature CGM model with photoionization, cooling and feedback can best explain the non-detections of C iv. The mixed results discussed above present an ambiguous picture of whether dwarf galaxies retain a significant amount of metals in their CGM. Furthermore, as shown in Figure 1 and Table 1, it is not straightforward to directly compare various studies due to the different mass ranges, impact parameters, and QSO spectral quality used in the existing literature, let alone different methods to compute galaxy and absorber properties (e.g., stellar mass, ion column density, virial radius). To mitigate these issues, in this work we conduct a comprehensive analysis of the metal content in the CGM of dwarf galaxies with \(M_{*}=10^{6.5-9.5}\) M\({}_{\odot}\) using data from our recent HST/COS observations, additional archival data, and a thorough compilation of relevant literature values from B14+B18, LC14, J17, Z20, and QB222 (see Table 1) with consistent quality control. The choice of imposing a mass threshold at \(M_{*}=10^{9.5}\) M\({}_{\odot}\) is to include massive dwarfs similar to the Large Magellanic Cloud (\(M_{*}=\)10\({}^{9.2}\) M\({}_{\odot}\); McConnachie, 2012) while excluding higher-mass galaxies such as the dwarf spiral M33 (\(M_{*}=\)10\({}^{9.5-9.8}\) M\({}_{\odot}\); Corbelli, 2003). We note that our final sample does not include either the LMC or M33 because they are not sufficiently isolated (see SS2.1). In this work, we define a galaxy's virial radius, \(R_{\rm 200m}\), as the radius within which the average density is 200 times the mean matter density of the Universe at \(z=0\), following the definition used by the COS-Dwarfs survey (B14). we adopt \(\Omega_{\rm m}=0.308\), \(\Omega_{\rm b}=0.0487\), and \(H_{0}=67.8\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Planck Collaboration et al., 2016), and assume a Kroupa (2001) initial mass function (IMF) for relevant quantities, unless otherwise specified. This paper is organized as follows. SS2 describes the _HST_/COS data, relevant spectral analyses, and dwarf sample selection. SS3 shows the results of ion radial column density profiles. Then, in SS4 and SS5, we look into the gas properties of the CGM of dwarf galaxies from theoretical perspectives. Lastly, in SS6, we compare the CGM ion mass estimates from this work with previous observational values reported in the literature. We conclude in SS7. ## 2 Data Sample & Measurements In the following, we refer to our data sample as the Full Sample (see Table 1), which comprises 60 dwarf-QSO pairs that include: (i) 26 new pairs either from our recent _HST_ programs (#HST-GO-15156, PI Zheng; #HST-GO-15227, PI Burchett; #HST-GO-16301, PI Putman) or a thorough search of the Barbara A. 
Mikulski Archive for Space Telescopes (MAST) for available QSOs (SS2.1), and (ii) 34 additional pairs compiled from existing literature ( SS2.2). The Full Sample is highlighted as filled symbols in Figure 1 and tabulated in Table 2. Overall in this work we probe a unique parameter space of low stellar mass (\(M_{*}=10^{6.5-9.5}\)) and small impact parameter (\(b/R_{\rm 200m}\)=0.05-1.0). ### 26 Pairs from New Observations or HST Archive In our recent programs (#15156, #15227, #16301), we observed a total of 20 QSOs in the vicinity of 11 nearby dwarf galaxies using _HST_/COS G130M and G160M gratings. To supplement the sample, we conducted a thorough archival search in MAST for additional QSO sightlines that were publicly available as of 2022 March 31. We looked for QSO sightlines around isolated dwarf galaxies within 8 Mpc from the Sun as cataloged in Karachentsev et al. (2013, hereafter K13). A dwarf galaxy is deemed "isolated" with the following criteria: (1) it does not have neighboring galaxies within a distance of \(\delta d_{\rm neigh}\)=100 kpc; (2) its systemic velocity is more than \(|\delta v|=150\) km s\({}^{-1}\) from other galaxies; (3) it is not a satellite of either the MW or M31, meaning that it is farther from the MW or M31 than the virial radius of the corresponding galaxy; and (4) there is detection of H i gas (via H i 21cm) in the galaxy, suggesting that the galaxy may still have an intact CGM. Setting \(\delta d_{\rm neigh}\)=100 kpc is to ensure that the inner CGM of two dwarf galaxies do not overlap, given that the median \(R_{\rm 200m}\) for dwarf galaxies in our sample is 136 kpc (see SS2.3). We set a velocity threshold of \(|\delta v|=150\) km s\({}^{-1}\) because it is nearly twice the escape velocity allowed by a \(M_{*}\sim\)10\({}^{8}\) M\({}_{\odot}\) galaxy at 0.5\(R_{\rm 200m}\) and mitigates contamination from other dwarf galaxy halos in velocity space. Note that a dwarf galaxy meeting these criteria may still reside in a loose association, as is the case for Sextans A and B (see QB22). Overall, Criteria (1)-(4) ensure minimal ambiguity of absorber origins in both position and velocity space when there is detection near a dwarf galaxy. 
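As a concrete illustration of how Criteria (1)-(3) can be applied to a galaxy catalog, below is a minimal Python sketch; the array layout, the adopted MW/M31 virial radii, and the search volume for the velocity cut are illustrative assumptions rather than the exact procedure applied to the K13 catalog, and Criterion (4) would be an additional cut on cataloged H i 21cm detections.

```python
import numpy as np

def is_isolated(i, pos_kpc, v_kms, d_mw_kpc, d_m31_kpc,
                rvir_mw=250.0, rvir_m31=300.0,
                d_neigh=100.0, dv_neigh=150.0, d_search=1000.0):
    """Apply isolation Criteria (1)-(3) to galaxy i of a catalog.

    pos_kpc    : (N, 3) Cartesian positions in kpc (assumed catalog layout)
    v_kms      : (N,) systemic velocities in km/s
    d_mw_kpc   : (N,) distances of each galaxy from the Milky Way
    d_m31_kpc  : (N,) distances of each galaxy from M31
    rvir_mw, rvir_m31 : assumed virial radii of the MW and M31 (kpc)
    d_search   : radius within which the velocity cut is applied
                 (an assumption; the text does not specify a search volume)
    """
    others = np.arange(len(v_kms)) != i
    sep = np.linalg.norm(pos_kpc - pos_kpc[i], axis=1)
    dv = np.abs(v_kms - v_kms[i])

    # Criterion (1): no neighboring galaxy within 100 kpc
    crit1 = not np.any(others & (sep < d_neigh))
    # Criterion (2): systemic velocity differs by >150 km/s from nearby galaxies
    crit2 = not np.any(others & (sep < d_search) & (dv < dv_neigh))
    # Criterion (3): not a satellite of the MW or M31
    crit3 = (d_mw_kpc[i] > rvir_mw) and (d_m31_kpc[i] > rvir_m31)
    return bool(crit1 and crit2 and crit3)
```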
Our initial search following the criteria above results in 248 potential UV sightlines near 56 low-mass isolated \begin{table} \begin{tabular}{c c c c l l l} \hline \multicolumn{1}{c}{References} & \(z_{\rm gal}\) & \(M_{*}({\rm M}_{\odot})\) & \(b/R_{\rm 200m}\) & Ions & Selection Criteria & Pairs Adopted \\ & (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline \hline New Observations & \multirow{2}{*}{\(\sim\)0} & \multirow{2}{*}{\(10^{6.5-9.5}\)} & \multirow{2}{*}{0.08–1.0} & C ii, C iv, & No known galaxies within & \multirow{2}{*}{26} \\ or Archival Data & & & & Si ii, Si iii, Si iv & \(|\delta d|\)=100 kpc \& \(|\delta v|\)=150 km s\({}^{-1}\) & \\ \hline QB22 (NGC3109, & \multirow{2}{*}{\(\sim\)0} & \multirow{2}{*}{\(10^{7.4-8.3}\)} & \multirow{2}{*}{0.2–0.7} & C ii, C iv, O i, & \multirow{2}{*}{\(>\)1.3Mpc \& \(>\)300km s\({}^{-1}\) from MW} & \multirow{2}{*}{3 (6)} \\ Sextans A \& B) & & & Si ii, Si iii, Si iv & \(\sim\)2Mpc \& \(>\)600km s\({}^{-1}\) from M31 & \\ \hline Z20 (IC1613) & \multirow{2}{*}{\(\sim\)0} & \multirow{2}{*}{\(10^{8}\)} & \multirow{2}{*}{0.05–0.5} & C ii, C iv & On the outskirts of LG, no known & \multirow{2}{*}{3 (6)} \\ & & & & Si ii, Si iii, Si iv & galaxies within 400 kpc & \\ \hline \multirow{3}{*}{J17} & \multirow{2}{*}{0.09–0.3} & \multirow{2}{*}{\(10^{7.7-9.2}\)} & \multirow{2}{*}{0.1–1.7} & H i, C iv, O vi, & No \(L\geqslant 0.1L_{*}\) galaxies within & \multirow{2}{*}{11 (18)} \\ & & & & Si ii, Si iii, Si iv & \(|\delta d|\)=500 kpc \& \(|\delta v|\)=300 km s\({}^{-1}\) & \\ \hline B14+B18 & \(\leqslant\)0.1 & \(10^{8-9.9}\) & 0.06–1.1 & H i, C iv & No known galaxies within 300 kpc & 12 (43) \\ \hline LC14 & \(\leqslant\)0.176 & \(10^{5.2-11.1}\) & 0.2–6.0 & H i, C ii, C iv, & No known galaxies within & \multirow{2}{*}{5 (195)} \\ & & & Si ii, Si iii, Si iv & \(|\delta d|\)=500 kpc \& \(|\delta v|\)=500 km s\({}^{-1}\) & \\ \hline \hline \multicolumn{1}{c}{**Full Sample**} & \multirow{2}{*}{0.0–0.3} & \multirow{2}{*}{\(10^{6.5-9.5}\)} & H i, C ii, C iv & All the above combined, with & \multirow{2}{*}{60} \\ \multicolumn{1}{c}{(this work)} & & & Si ii, Si iii, Si iv & \(M_{*}\leqslant 10^{9.5}\)M\({}_{\odot}\), (ii) SNR\(\geqslant 8\), \& \(b\leqslant R_{\rm 200m}\) & \\ \hline \end{tabular} Note. – Col. (1): References: QB22 for Qu & Bregman (2022), Z20 for Zheng et al. (2020), J17 for Johnson et al. (2017), B14 for Bordoloi et al. (2014) (C iv), B18 for Bordoloi et al. (2018) (H i), and LC14 for Liang & Chen (2014). The last row summarizes the Full Sample we use in this work, which consists of pairs from recent observations, archival data, and those adopted from the literature. Col. (2): Galaxy redshift. Col. (3): Galaxy stellar mass range; when applicable, we have corrected the corresponding values to Kroupa (2001) IMF (see §2.2, §2.3). Col. (4): Impact parameters probed by QSO sightlines. Col. (5): List of ions included in each reference. Col. (6): Selection criteria of nearly isolated dwarf galaxies. Col. (7): Dwarf-QSO pairs adopted in this work that meet the following criteria: (i) \(M_{*}\leqslant 10^{9.5}\) M\({}_{\odot}\), (ii) SNR \(\geqslant 8\), and (iii) \(b\leqslant R_{\rm 200m}\). Numbers in parentheses indicate the total dwarf-QSO pairs included in each reference. See Figure 1 and §2 for more details. \end{table} Table 1: Compilation of Data Sample and Literature References galaxies. We further limit the data sample to QSOs with SNR\(\geqslant 8\) per resolution element (see below for definition of SNR). 
The choice of SNR\(\geqslant 8\) is to ensure sufficient data remained after the cut, while allowing for a consistent sensitivity floor among all data points included in this work. We also implement the same SNR cut in our literature compilation (see SS2.2). We emphasize that the consistency in sensitivity allows us to use censored data (e.g., non-detections with upper limits) to assess the metal contents in the dwarf galaxy halos with minimal bias (see SS3.2). We follow the same procedures as used in Zheng et al. (2019) and Z20 to process the _HST_/COS QSO spectra, including co-addition, continuum normalization, and upper-limit estimates of ion \(\log N\) and EW values based on the apparent optical depth (AOD) method (Savage & Sembach, 1996) for non-detections and Voigt profile fitting for detections. Below we briefly summarize the key aspects of the data reduction, but refer the reader to Zheng et al. (2019) and Z20 for more details. Co-addition & SNRFor archival sightlines included in the Hubble Spectroscopic Legacy Archive (HSLA; Peeples et al., 2017), we use HSLA's coadded spectra. For new observations or archival spectra with additional epochs of observations not included in the HSLA, we download and coadd the spectra from all epochs using an IDL package coadd_x1d.pro(Danforth et al., 2010). Z20 conduct a detailed comparison between HSLA and coadd_x1d.pro and show that these two methods yield consistent coadded spectra. After co-addition, the spectra are binned by 3 pixels to increase the SNR while remaining Nyquist-sampled with 2 pixels per resolution element (COS Data Handbook Soderblom, 2021). We adopt HSLA's definition of SNR, which is estimated by first calculating local SNRs averaged over 10A windows in absorption-line-free regions every 100A from 1150A Figure 1: Parameter space (\(\log M_{*}\) vs. \(b/R_{\rm 200m}\)) probed by this work and existing literature. The space enclosed within the vertical and horizontal dash lines (\(b/R_{\rm 200m}=0.05-1.0\) & \(\log{\rm M_{*}}\leq 9.5\)) indicates the unique parameter space explored in this work. The histograms on the top and the right show the distribution of \(b/R_{\rm 200m}\) and \(\log M_{*}\), respectively. Throughout this work, the same symbols are used to consistently represent data from different references: square for B14, diamond for J17, pentagon for LC14, thick X for QB22, star for Zheng et al. (2019), left triangle for Z20, and circle for new pairs added from recent observations or archival data. For illustration purpose, symbol colors may vary from figure to figure. to 1750A. Then the median SNR of the local SNRs is adopted as the SNR of the whole spectrum. Continuum Normalization, AOD Measurements & Voigt profile fits: We conduct continuum normalization for the coadded spectra using an open-source Python package Linetools(Prochaska et al., 2016), and focus on a set of lines typically observed in galaxies' CGM, including Si ii 1190/1193/1260/1526 A, Si iii 1206 A, Si iv 1393/1402 A, C ii 1334 A, and C iv 1548/1550 A. Unlike dwarf galaxies at slightly higher redshift (e.g., \(z\sim 0.1-0.3\); see SS2.2), H i Ly\(\alpha\) absorption is unavailable at \(z\sim 0\) due to strong contamination from the MW ISM's Ly\(\alpha\) absorption. So in this work the H i data points are from literature references with H i Ly\(\alpha\) measurements (see Table 1). For metal ions, we look for absorption features in QSO spectra within \(\pm 50\) km s\({}^{-1}\) of the systemic velocity of the host galaxy. 
For non-detections, as is the case for QSO 2MASXJ14292507+4518318 near DDO 190 (top panel in Figure 2), we calculate the \(3\sigma\) upper limit in column density \(\log N\) and equivalent width (EW) for the corresponding ions using the AOD method. When there is a detection, we examine the absorber's surrounding environment in greater detail to first look for potential contamination from interloping absorbers at higher \(z\) or from nearby galaxies that may pass the isolation criteria outlined above. We also examine H i 21cm emission data from HI4PI (H14PI Collaboration et al., 2016) and GALFA-H i (Peek et al., 2018) to look for potential contamination from nearby H i clouds at similar velocities. Only when all potential contaminating sources are ruled out do we consider the detected absorber associated with the corresponding dwarf galaxy. In the bottom panel of Figure 2, we show an example of ion absorbers toward QSO SDSSJ095915.65+050355.1 that are confirmed to reside in the CGM of Sextans B at \(b=8\) kpc. In this case, we perform Voigt profile fitting using the ALIS package1. Note that we run Voigt profile fits for all transition lines available, while in Figure 2 only the strongest lines are shown. Footnote 1: [https://github.com/rcooke-ast/ALIS](https://github.com/rcooke-ast/ALIS) In total, we find 26 new dwarf-QSO pairs with robust detections or upper limits from our recent _HST/COS_ observations and archival data. These pairs are shown in red filled circles in Figure 1 and listed in Table 2 with Reference of "New/Arx.". In Table 3, we tabulate the results of \(\log N\) measurements, where non-detections are quoted in \(3\sigma\) upper limits. Most of our measurements are non-detections. We do not include the EW values in Table 3 as they are not used in relevant analyses; but the data can be found on our github repository (yzhengfit/zheng_dwarfcgm_survey) for interested readers. All the _HST_/COS spectra used in this section (i.e., our new observations or additional archival search) can be found in MAST: 10.17909/ve0k-ps78. In addition to the 26 new pairs, we find a few other absorber detections that are unlikely to be associated with the CGM of our dwarf Figure 2: **Top**: Example ion spectra with non-detections for DDO190. Non-detections represent the majority of the sightlines in our data sample. We estimate the equivalent width (EW) and AOD column density over \(\pm 50\) km s\({}^{-1}\) (gray shades) from the systemic velocity of the host dwarf (vertical red lines) and indicate the \(3\sigma\) upper limits. We do not measure the absorption in C ii 1334 because it is most likely a contamination given that there is no corresponding detection in Si ii which is at similar ionization state. **Bottom**: Detections of metal absorbers in Sextans B. Voigt profile fits are included where there are significant (\(\geq 3\sigma\)) detections. We do not use C ii 1334 because it is blended with a CII* 1335 line from the MW. galaxies based on close inspection of their surrounding environments. We briefly discuss these absorbers and their potential associations in Appendix A, but do not include them in our following analysis. 
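To illustrate the AOD measurement described above, the following is a schematic Python implementation of the apparent-optical-depth integral of Savage & Sembach (1996) over the ±50 km s\({}^{-1}\) window, together with a simple 3\(\sigma\) column density limit propagated from the error array. It is a sketch only: the actual pipeline steps (co-addition, continuum fitting with Linetools, contamination checks, saturation handling, and Voigt profile fitting for detections) are not reproduced, and the variable names are illustrative.

```python
import numpy as np

def aod_logN(vel_kms, flux_norm, err_norm, f_osc, wave_A, vmin=-50.0, vmax=50.0):
    """Apparent-optical-depth column density over [vmin, vmax] (km/s).

    vel_kms   : velocity array relative to the dwarf's systemic velocity
    flux_norm : continuum-normalized flux
    err_norm  : 1-sigma uncertainty on the normalized flux
    f_osc     : oscillator strength of the transition
    wave_A    : rest wavelength in Angstrom
    Returns (logN, logN_3sigma_limit).
    """
    m = (vel_kms >= vmin) & (vel_kms <= vmax)
    v = vel_kms[m]
    fl = np.clip(flux_norm[m], 1e-3, None)          # avoid log of zero/negative pixels
    er = err_norm[m]

    tau = np.log(1.0 / fl)                          # apparent optical depth
    na_v = 3.768e14 * tau / (f_osc * wave_A)        # N_a(v), cm^-2 (km/s)^-1
    dv = np.gradient(v)
    N = np.sum(na_v * dv)

    # simple error propagation: sigma_tau ~ err / flux
    sig_na_v = 3.768e14 * (er / fl) / (f_osc * wave_A)
    sig_N = np.sqrt(np.sum((sig_na_v * dv) ** 2))
    return np.log10(max(N, 1.0)), np.log10(3.0 * sig_N)
```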
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline PID & Galaxy & \(\log M_{\star}\) & \(M_{\star,{\rm ref}}\) & \(\log M_{\rm 200m}\) & QSO & SNR & \(b\) & \(R_{\rm 200m}\) & Reference \\ & & (\(\log\rm M_{\odot}\)) & & (\(\log\rm M_{\odot}\)) & & & (kpc) & (kpc) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\ \hline 01 & KKH086 & 6.48 & Dale09 & 10.01 & SDSS-J135726.27+043541.4 & 18.9 & 36.0 & 67.8 & New/Arx. \\ 02 & GR8 & 6.67 & Dale09 & 10.08 & PGC-1440438 & 10.9 & 23.1 & 71.5 & New/Arx. \\ 03 & GR8 & 6.67 & Dale09 & 10.08 & SDSSJ130223.12+140609.0 & 9.4 & 32.9 & 71.6 & New/Arx. \\ 04 & DDO187 & 6.73 & Dale09 & 10.10 & SDSS-J141038.39+230447.1 & 10.3 & 46.9 & 72.6 & New/Arx. \\ 05 & UGCA292 & 6.88 & Dale09 & 10.16 & 2MASS-J12421031+3214268 & 8.2 & 60.9 & 76.1 & New/Arx. \\ 06 & UGC08833 & 6.95 & Dale09 & 10.20 & SDSSJ135341.03+361948.0 & 18.2 & 30.3 & 78.6 & New/Arx. \\ 07 & UGC08833 & 6.95 & Dale09 & 10.20 & CSO1022 & 11.0 & 32.3 & 78.5 & New/Arx. \\ 08 & PGC039646 & 7.11 & K13 & 10.28 & MS1217.0+0700 & 11.3 & 34.4 & 83.4 & New/Arx. \\ 09 & PGC039646 & 7.11 & K13 & 10.28 & PG1216+069 & 22.5 & 27.6 & 83.4 & New/Arx. \\ 10 & DDO181 & 7.32 & Dale09 & 10.39 & PG-1338+416 & 16.6 & 37.3 & 90.7 & New/Arx. \\ 11 & DDO099 & 7.32 & Dale09 & 10.39 & SDSSJ114646.00+371511.0 & 9.3 & 84.0 & 90.8 & New/Arx. \\ 12 & Sextans A & 7.44 & Dale09 & 10.46 & RXSJ09565-0452 & 17.4 & 91.1 & 95.7 & New/Arx. \\ 13 & UGC06541 & 7.51 & Dale09 & 10.49 & MRK1447 & 11.7 & 44.0 & 98.0 & New/Arx. \\ 14 & Sextans B & 7.56 & Dale09 & 10.52 & SDSS-J100535.24+013445.7 & 10.5 & 100.2 & 100.3 & New/Arx. \\ 15 & Sextans B & 7.56 & Dale09 & 10.52 & SDSSJ095915.65+050355.1 & 10.8 & 8.1 & 100.3 & New/Arx. \\ 16 & DDO190 & 7.65 & Dale09 & 10.57 & 2MASXJ14292507+4518318 & 11.6 & 56.6 & 104.2 & New/Arx. \\ 17 & DDO190 & 7.65 & Dale09 & 10.57 & QSO-B1411+4414 & 30.0 & 100.3 & 104.2 & New/Arx. \\ 18 & DDO190 & 7.65 & Dale09 & 10.57 & PG1415+451 & 12.9 & 70.8 & 104.2 & New/Arx. \\ 19 & UGC08638 & 7.69 & Dale09 & 10.59 & SDSS-J133833.06+251640.6 & 10.8 & 39.8 & 105.8 & New/Arx. \\ 20 & NGC4163 & 7.71 & Dale09 & 10.60 & SDSSJ121114.56+365739.5 & 9.7 & 40.9 & 106.8 & New/Arx. \\ 21 & UGC07485 & 7.89 & K13 & 10.69 & PG1222+216 & 19.1 & 35.0 & 114.3 & New/Arx. \\ 22 & NGC5477 & 8.08 & Dale09 & 10.79 & SDSSJ140732.25+550725.6 & 11.6 & 85.2 & 123.4 & New/Arx. \\ 23 & UGC07639 & 8.32 & Dale09 & 10.92 & SDSSJ123335.07+475800.4 & 10.2 & 94.9 & 136.4 & New/Arx. \\ 24 & NGC5408 & 8.34 & K13 & 10.93 & PKS1355-41 & 17.5 & 89.0 & 137.4 & New/Arx. \\ 25 & ESO269-058 & 8.62 & F10 & 11.08 & UVQSJ130808.98+455417.9 & 8.6 & 76.1 & 154.1 & New/Arx. \\ 26 & NGC4144 & 8.72 & Dale09 & 11.13 & PG-1206+459 & 20.0 & 64.3 & 160.3 & New/Arx. 
\\ 27 & Sextans A & 7.44 & Dale09 & 10.46 & MARK 1253 & 8.6 & 63.3 & 95.7 & QB22 \\ 28 & Sextans A & 7.44 & Dale09 & 10.46 & PG 1011-040 & 29.7 & 23.0 & 95.7 & QB22 \\ 29 & Sextans B & 7.56 & Dale09 & 10.52 & PG 1001+054 & 15.6 & 27.1 & 100.2 & QB22 \\ 30 & IC1613 & 8.00 & M12 & 10.75 & 2MASX J01022632-0039045 & 8.0 & 37.7 & 119.7 & Z20 \\ 31 & IC1613 & 8.00 & M12 & 10.75 & LBQS-0101+0009 & 7.5 & 22.9 & 119.7 & Z20 \\ 32 & IC1613 & 8.00 & M12 & 10.75 & LBQS-0100+0205 & 7.6 & 6.0 & 119.7 & Z20 \\ 33 & D9 & 7.73 & J17 & 10.61 & PG-1522+101 & 12.8 & 84.0 & 107.5 & J17 \\ 34 & D1 & 7.93 & J17 & 10.71 & PKS0637-752 & 24.8 & 16.0 & 116.1 & J17 \\ 35 & D2 & 8.13 & J17 & 10.82 & PKS0637-752 & 24.8 & 21.0 & 126.3 & J17 \\ 36 & D4 & 8.23 & J17 & 10.87 & PG1001+291 & 20.3 & 56.0 & 131.2 & J17 \\ 37 & D7 & 8.33 & J17 & 10.92 & PKS0405-123 & 45.3 & 72.0 & 136.4 & J17 \\ 38 & D8 & 8.53 & J17 & 11.03 & Q1545+210 & 10.2 & 79.0 & 148.4 & J17 \\ \hline \end{tabular} \end{table} Table 2: Properties of Dwarf-QSO Pairs \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline PID & Galaxy & \(\log M_{\star}\) & \(M_{\star,\rm ref}\) & \(\log M_{200\rm m}\) & QSO & SNR & \(b\) & \(R_{200\rm m}\) & Reference \\ & & (\(\log\rm M_{\odot}\)) & & (\(\log\rm M_{\odot}\)) & & & (kpc) & (kpc) & \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} Note: Col (1): PID = ID for each dwarf-QSO pair. Col (2): Dwarf galaxy name. Cols (3) & & (4): Dwarf galaxy stellar mass and the corresponding reference, with Dale09=3.6\(\mu\)m flux from Dale et al. (2009) and converted to \(M_{\star}\) with the adopted distance listed in Table 4; F10=Ks band mag from Fingerhut et al. (2010) and converted to \(M_{\star}\) with the adopted distance; K13\({}^{**}\)=Ks mag from Karachentsev et al. (2013) and converted to \(M_{\star}\) with adopted distances; M12=\(M_{\star}\) mass from McConnachie (2012), rescaled with adopted distance. Col (5): Dwarf galaxy halo mass based on the SMHIM relation from Munshi et al. (2021), see §2.3. Col (6): QSO Name. Col (7): QSO signal-to-noise ratio per resolution element. Col (8): QSO impact parameter. Col (9): Virial radius, defined as the radius within which the mean density is 200 times the matter matter density of the Universe at \(z=0\). Col (10): Reference from which we adopt the corresponding dwarf-QSO pair that meets our selection criteria (see §2). \end{table} Table 3: Column Density Measurements \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline PID & QSO & logN(H i) & logN(C ii) & logN(C iv) & logN(Si ii) & logN(Si iii) & logN(Si iv) & Reference \\ & & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline 08 & MS1217.0+0700 & - & \(\leq\)13.22 & - & \(\leq\)12.30 & \(\leq\)12.31 & \(\leq\)12.74 & New/Arx. \\ 09 & PG1216+069 & - & \(\leq\)13.13 & - & \(\leq\)12.08 & \(\leq\)12.09 & \(\leq\)12.43 & New/Arx. \\ 10 & PG-1338+416 & - & - & - & \(\leq\)12.21 & - & - & New/Arx. \\ 11 & SDSSJ114646.00+371511.0 & - & - & - & \(\leq\)12.44 & \(\leq\)12.38 & \(\leq\)12.79 & New/Arx. \\ 12 & RXSJ09565-0452 & - & - & - & \(\leq\)12.38 & \(\leq\)12.17 & \(\leq\)12.48 & New/Arx. \\ 13 & MRK1447 & - & - & \(\leq\)13.13 & \(\leq\)12.33 & \(\leq\)12.23 & \(\leq\)12.64 & New/Arx. \\ 14 & SDSSJ100535.24+013445.7 & - & - & \(\leq\)13.02 & \(\leq\)12.42 & \(\leq\)12.39 & \(\leq\)12.65 & New/Arx. 
\\ 15 & SDSSJ095915.65+050355.1 & - & - & 13.63\(\pm\)0.05 & \(\leq\)12.50 & 12.72\(\pm\)0.06 & 12.45\(\pm\)0.13 & New/Arx. \\ 16 & 2MASXJ14292507+4518318 & - & \(\leq\)13.21 & \(\leq\)12.30 & \(\leq\)12.22 & \(\leq\)12.71 & New/Arx. \\ 17 & QSO-B1411+4414 & - & \(\leq\)12.69 & \(\leq\)12.80 & \(\leq\)11.93 & \(\leq\)11.72 & \(\leq\)12.29 & New/Arx. \\ 18 & PG1415+451 & - & \(\leq\)13.13 & \(\leq\)13.00 & \(\leq\)12.33 & \(\leq\)12.22 & \(\leq\)12.74 & New/Arx. \\ 19 & SDSSJ-133833.06+251640.6 & - & - & \(\leq\)13.21 & \(\leq\)12.25 & \(\leq\)12.35 & \(\leq\)12.82 & New/Arx. \\ 20 & SDSSJ121114.56+365739.5 & - & - & \(\leq\)13.16 & \(\leq\)12.38 & \(\leq\)12.19 & \(\leq\)12.55 & New/Arx. \\ 21 & PG1222+216 & - & \(\leq\)13.13 & \(\leq\)12.78 & \(\leq\)12.20 & \(\leq\)12.16 & - & New/Arx. \\ 22 & SDSSJ140732.25+550725.6 & - & \(\leq\)13.25 & - & \(\leq\)12.21 & \(\leq\)12.16 & \(\leq\)12.65 & New/Arx. \\ 23 & SDSSJ123335.07+475800.4 & - & \(\leq\)13.28 & - & \(\leq\)12.28 & - & \(\leq\)12.84 & New/Arx. \\ 24 & PKS1355-41 & - & \(\leq\)13.17 & - & \(\leq\)12.14 & \(\leq\)12.16 & \(\leq\)12.47 & New/Arx. \\ 25 & UVQSJ130808.98-455417.9 & - & - & - & \(\leq\)12.42 & \(\leq\)12.51 & \(\leq\)12.76 & New/Arx. \\ 26 & PG-1206+459 & - & - & - & \(\leq\)12.10 & \(\leq\)11.95 & - & New/Arx. \\ 27 & MARK 1253 & - & - & \(\leq\)13.20 & \(\leq\)12.10 & \(\leq\)12.40 & \(\leq\)12.80 & QB22 \\ 28 & PG 1011-040 & - & - & 13.04\(\pm\)0.08 & \(\leq\)11.90 & \(\leq\)11.90 & \(\leq\)12.30 & QB22 \\ 29 & PG 1001+054 & - & - & \(\leq\)13.10 & \(\leq\)12.20 & \(\leq\)12.10 & \(\leq\)12.60 & QB22 \\ 30 & 2MASX J01022632-0039045 & - & 14.52\(\pm\)0.04 & 13.78\(\pm\)0.05 & 13.53\(\pm\)0.03 & 13.38\(\pm\)0.06 & 13.02\(\pm\)0.07 & Z20 \\ 31 & LBQS-0101+0009 & - & 14.21\(\pm\)0.05 & 13.64\(\pm\)0.07 & 13.19\(\pm\)0.06 & 13.30\(\pm\)0.05 & \(\leq\)12.79 & Z20 \\ 32 & LBQS-0100+0205 & - & \(\leq\)13.74 & 13.57\(\pm\)0.09 & \(\leq\)13.03 & 12.96\(\pm\)0.06 & 13.00\(\pm\)0.07 & Z20 \\ 33 & PG-1522+101 & 13.04\(\pm\)0.07 & - & \(\leq\)12.87 & \(\leq\)12.26 & \(\leq\)11.86 & \(\leq\)12.84 & J17 \\ 34 & PKS0637-752 & 15.70\(\pm\)0.40 & - & 13.73\(\pm\)0.04 & \(\leq\)12.26 & 13.14\(\pm\)0.03 & - & J17 \\ 35 & PKS0637-752 & 15.06\(\pm\)0.02 & - & - & \(\leq\)11.96 & 12.48\(\pm\)0.05 & \(\leq\)12.53 & J17 \\ 36 & PG1001+291 & 14.10\(\pm\)0.01 & - & \(\leq\)13.18 & \(\leq\)11.96 & - & \(\leq\)12.71 & J17 \\ 37 & PKS0405-123 & 13.94\(\pm\)0.01 & - & \(\leq\)12.57 & \(\leq\)11.96 & - & \(\leq\)12.53 & J17 \\ 38 & Q1545+210 & \(\leq\)12.44 & - & \(\leq\)13.05 & \(\leq\)12.26 & \(\leq\)11.86 & - & J17 \\ 39 & PKS0637-752 & 14.32\(\pm\)0.03 & - & - & \(\leq\)11.96 & - & \(\leq\)12.53 & J17 \\ 40 & HB89-0232-042 & 13.88\(\pm\)0.01 & - & \(\leq\)13.53 & \(\leq\)12.26 & \(\leq\)12.16 & \(\leq\)13.23 & J17 \\ 41 & PG-1522+101 & 13.63\(\pm\)0.02 & - & \(\leq\)13.42 & \(\leq\)11.96 & - & \(\leq\)12.53 & J17 \\ 42 & LBQS-1435-0134 & 12.74\(\pm\)0.14 & - & \(\leq\)12.87 & \(\leq\)11.96 & \(\leq\)11.86 & \(\leq\)12.53 & J17 \\ 43 & LBQS-1435-0134 & 14.00\(\pm\)0.01 & - & \(\leq\)13.42 & \(\leq\)11.96 & \(\leq\)11.86 & \( \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Galaxy} & RA & Dec & \(v_{\rm LSR}\) & \(d\) & \(d_{\rm ref}\) & \(\log{\rm M_{HI}}\) & \(M_{\rm HI,ref}\) & logSFR & SFR\({}_{\rm ref}\) & Reference \\ & (deg) & (deg) & (km s\({}^{-1}\)) & (Mpc) & & (\(\log{\rm M_{\odot}}\)) & & (log(\(\rm M_{\odot}\) yr\({}^{-1}\))) & & \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) & 
(9) & (10) & (11) \\ \hline KKH086 & 208.64 & 4.24 & 295.1 & 2.58 & D09 & 6.10 & H18 & -4.30 & L11 & New/Arx. \\ GR8 & 194.67 & 14.22 & 223.1 & 2.08 & D09 & 7.03 & H18 & -2.87 & L11 & New/Arx. \\ DDO187 & 213.99 & 23.06 & 171.0 & 2.21 & D09 & 7.13 & H18 & -3.22 & L11 & New/Arx. \\ UGCA292 & 189.67 & 32.77 & 314.8 & 3.85 & T16 & 7.49 & K13\({}^{\dagger}\) & -2.73 & L11 & New/Arx. \\ UGC08833 & 208.70 & 35.84 & 231.7 & 3.19 & T16 & 7.21 & H18 & -3.07 & L11 & New/Arx. \\ PGC039646 & 184.81 & 6.29 & 671.7 & 4.51 & K13\({}^{\ast}\) & 6.45 & H18 & -3.39 & K13\({}^{\dagger\dagger}\) & New/Arx. \\ DDO099 & 177.72 & 38.88 & 255.9 & 2.65 & T16 & 7.74 & B08 & -2.49 & L11 & New/Arx. \\ DDO181 & 204.97 & 40.74 & 224.2 & 3.14 & D09 & 7.37 & K13\({}^{\dagger}\) & -2.65 & L11 & New/Arx. \\ Sextans A & 152.75 & -4.69 & 317.3 & 1.44 & T16 & 7.95 & H12 & -2.12 & L11 & New/Arx. \\ UGC06541 & 173.37 & 49.24 & 254.2 & 4.23 & T16 & 7.04 & K13\({}^{\dagger}\) & -2.31 & L11 & New/Arx. \\ Sextans B & 150.00 & 5.33 & 294.0 & 1.43 & T16 & 7.57 & H18 & -2.58 & L11 & New/Arx. \\ DDO190 & 216.18 & 44.53 & 162.0 & 2.84 & J09 & 7.65 & SI02 & -2.43 & L11 & New/Arx. \\ UGC08638 & 204.83 & 24.78 & 285.4 & 4.29 & T16 & 7.27 & H18 & -2.46 & L11 & New/Arx. \\ NGC4163 & 183.04 & 36.17 & 167.7 & 2.88 & D09 & 7.22 & H18 & -2.64 & L11 & New/Arx. \\ UGC07485 & 186.09 & 21.16 & 963.9 & 7.94 & K13\({}^{\ast}\) & 7.12 & H18 & -2.72 & K13\({}^{\dagger\dagger}\) & New/Arx. \\ NGC5477 & 211.39 & 54.46 & 323.1 & 6.76 & T16 & 8.60 & K13\({}^{\dagger}\) & -1.71 & L11 & New/Arx. \\ UGC07639 & 187.47 & 47.53 & 392.5 & 7.14 & R05 & 7.62 & K13\({}^{\dagger}\) & -2.17 & L11 & New/Arx. \\ NGC5408 & 210.84 & -41.38 & 502.9 & 5.32 & T16 & 8.48 & K13\({}^{\dagger}\) & -1.05 & K08 & New/Arx. \\ ESO269-058 & 197.64 & -46.99 & 397.7 & 3.75 & T16 & 7.37 & K13\({}^{\dagger}\) & -2.60 & L11 & New/Arx. \\ NGC4144 & 182.50 & 46.46 & 271.5 & 4.61 & J09 & 8.36 & K13\({}^{\dagger}\) & -1.67 & L11 & New/Arx. \\ IC1613 & 16.20 & 2.13 & -236.4 & 0.76 & T16 & 7.66 & H18 & -2.23 & L11 & Z20 \\ D9 & 231.10 & 9.98 & z=0.139 & - & - & - & - & - & J17 \\ D1 & 98.94 & -75.27 & z=0.123 & - & - & - & - & - & J17 \\ \hline \end{tabular} Note: Col (1): PID = dwarf-QSO pair ID as used in Table 2. Col (2): QSO name. Cols (3)–(8): ion column densities (\(\log N\)). For non-detections, 3\(\sigma\) upper limits are indicated. For those non-detection values from the corresponding references where 1\(\sigma\) or 2\(\sigma\) are provided, we have converted their values to 3\(\sigma\) instead to be consistent with the rest of the measurements. Additionally, in cases where only EW values are provided in the corresponding references, we have calculated \(\log N\) from EW assuming optically thin for the lines of interest based on equation 3 in Savage & Sembach (1996). See §2.2. Col (9): Reference for each set of measurements. 
\end{table} Table 4: Properties of Dwarf Galaxies \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ PID} & QSO & logN(H i) & logN(C ii) & logN(C iv) & logN(Si ii) & logN(Si iii) & logN(Si iv) & Reference \\ & & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & (cm\({}^{-2}\)) & \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline 52 & SDSSJ135712.61+17044.1 & \(\leq\)12.74 & - & \(\leq\)13.11 & - & - & - & B14+B18 \\ 53 & SDSSJ134206.56+05023.8 & \(>\)14.65\(\pm\)0.02 & - & \(\leq\)13.53 & - & - & - & B14+B18 \\ 54 & SDSSJ133053.27+311930.5 & \(>\)14.65\(\pm\)0.02 & - & 14.27\(\pm\)0.03 & - & - & - & B14+B18 \\ 55 & PG1049-005 & \(>\)14.43\(\pm\)0.03 & - & \(\leq\)13.16 & - & - & - & B14+B18 \\ 56 & 3C273 & 13.86\(\pm\)0.01 & \(\leq\)12.57 & \(\leq\)12.48 & \(\leq\)11.56 & \(\leq\)11.70 & \(\leq\)11.71 & LC14 \\ 57 & PG1121+422 & 13.97\(\pm\)0.01 & \(\leq\)13.05 & \(\leq\)12.94 & - & - & \(\leq\)12.44 & LC14 \\ 58 & SBS1122+594 & 14.26\(\pm\)0.01 & 13.72\(\pm\)0.05 & 14.16\(\pm\)0.01 & - & 13.14\(\pm\)0.02 & 13.27\(\pm\)0.03 & LC14 \\ 59 & PG1211+143 & 14.20\(\pm\)0.00 & \(\leq\)12.87 & \(\leq\)12.75 & \(\leq\)11.56 & \(\leq\)11.63 & \(\leq\)12.23 & LC14 \\ 60 & PG0003+158 & - & \(\leq\)12.92 & \(\leq\)12.95 & \(\leq\)11.91 & \(\leq\)11.70 & - & LC14 \\ \hline \end{tabular} Note: Col (1): PID = dwarf-QSO pair ID as used in Table 2. Col (2): QSO name. Cols (3)–( ### 34 Additional Pairs Compiled from Literature To further enlarge our data sample, we adopt additional _HST_/COS measurements from the literature, including a total of 34 pairs from [22, 20, 17, 18, 19, 20], collectively. The information regarding each work has been summarized in Table 1 and Section 1, the adopted pairs are shown in Figures 1 and 3, and relevant information is tabulated in Table 2. Literature Compilation Rules: to ensure comparable results, we only include CGM measurements whose QSO spectra have SNR\(\geq 8\). Because different works may define a spectrum's SNR differently (e.g., local SNR near a line of interest vs. averaged SNR over a wide wavelength range), we re-calculate the SNR for each QSO spectrum as described in SS2.1. Then, \(M_{*}\) from the literature is converted to Kroupa IMF from either Salpeter (as used in [18]) or Chabrier IMF (as used in [17] and [20]) following the analysis in [22]. Only dwarf galaxies with \(M_{*}\leq\)10\({}^{9.5}\) M\({}_{\odot}\) are selected. 
Lastly, we adopt QSO sightlines with \(b/R_{200\rm m}\leq 1\), with \(R_{200\rm m}\) and \(M_{\rm h}\) recalculated from the adopted \(M_{*}\) using the stellar mass-halo \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Galaxy & RA & Dec & \(v_{\rm LSR}\) & \(d\) & \(d_{\rm ref}\) & log M\({}_{\rm HI}\) & \(M_{\rm HI,ref}\) & logSFR & SFR\({}_{\rm ref}\) & Reference \\ & (deg) & (deg) & (km s\({}^{-1}\)) & (Mpc) & & (log M\({}_{\odot}\)) & & (log(M\({}_{\odot}\) yr\({}^{-1}\))) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline D2 & 98.94 & -75.27 & z=0.161 & - & - & - & - & - & - & J17 \\ D4 & 151.01 & 28.92 & z=0.138 & - & - & - & - & - & - & J17 \\ D7 & 61.96 & -12.20 & z=0.092 & - & - & - & - & - & - & J17 \\ D8 & 236.94 & 20.86 & z=0.095 & - & - & - & - & - & - & J17 \\ D5 & 98.93 & -75.27 & z=0.144 & - & - & - & - & - & - & J17 \\ D3 & 38.78 & -4.04 & z=0.296 & - & - & - & - & - & - & J17 \\ D12 & 231.10 & 9.99 & z=0.240 & - & - & - & - & - & - & J17 \\ D13 & 219.44 & -1.80 & z=0.116 & - & - & - & - & - & - & J17 \\ D6 & 219.46 & -1.78 & z=0.184 & - & - & - & - & - & - & J17 \\ 316\_200 & 164.90 & 14.74 & z=0.010 & - & - & - & - & -0.90 & - & B14+B18 \\ 124\_197 & 47.66 & -0.86 & z=0.026 & - & - & - & - & -1.40 & - & B14+B18 \\ 172\_157 & 142.30 & 46.70 & z=0.017 & - & - & - & - & -0.20 & - & B14+B18 \\ 87\_608 & 150.60 & 59.74 & z=0.011 & - & - & - & - & -1.00 & - & B14+B18 \\ 135\_580 & 147.00 & 9.97 & z=0.010 & - & - & - & - & -1.00 & - & B14+B18 \\ 257\_269 & 122.18 & 46.31 & z=0.024 & - & - & - & - & -0.60 & - & B14+B18 \\ 329\_403 & 28.82 & -8.85 & z=0.013 & - & - & - & - & -0.80 & - & B14+B18 \\ 322\_238 & 205.58 & 38.54 & z=0.012 & - & - & - & - & -1.80 & - & B14+B18 \\ 93\_248 & 209.37 & 17.07 & z=0.026 & - & - & - & -0.40 & - & B14+B18 \\ 210\_241 & 205.49 & 5.03 & z=0.025 & - & - & - & -0.50 & - & B14+B18 \\ 70\_57 & 202.74 & 31.33 & z=0.034 & - & - & - & -0.60 & - & B14+B18 \\ 316\_78 & 162.95 & -0.84 & z=0.039 & - & - & - & -0.30 & - & B14+B18 \\ SDSSJ122815.96+014944.1 & 187.07 & 1.83 & z=0.003 & - & - & - & - & - & LC14 \\ SDSSJ112418.74+420323.1 & 171.08 & 42.06 & z=0.025 & - & - & - & -1.17 & - & LC14 \\ SDSSJ112644.33+590926.0 & 171.68 & 59.16 & z=0.004 & - & - & - & -0.98 & - & LC14 \\ SDSSJ12413.94+140330.4 & 183.56 & 14.06 & z=0.064 & - & - & - & - & - & LC14 \\ SDSSJ000545.07+160853.3 & 1.44 & 16.15 & z=0.037 & - & - & - & -0.76 & - & LC14 \\ \hline \end{tabular} Note: Cols (1)–(4): Galaxy names, RA, Dec, \(v_{\rm LSR}\) or z of dwarf galaxies. Cols (5-6): distances and the corresponding references when available: T16=Tully et al. (2016), D09=Dalcanton et al. (2009), J09=Jacobs et al. (2009), R05=Rekola et al. (2005), K13\({}^{*}\)=Karachentsev et al. (2013)’s distance estimated based on the Tully-Fisher relation. Cols (7)–(8): galaxy H i mass in the ISM and the corresponding reference when available: B08=H i flux from Begum et al. (2008) and converted to log M\({}_{\rm HI}\) with adopted distance; H18=H i flux from Haynes et al. (2019) (ALFALFAI) and converted to log M\({}_{\rm HI}\) with adopted distance; H12=Hunter et al. (2012) and mass rescaled with adopted distances; K13\({}^{\dagger}\)=H i flux from Karachentsev et al. (2013) and converted to log M\({}_{\rm HI}\) with adopted distance; SI02=H i flux from Stil & Israel (2002) and converted to log M\({}_{\rm HI}\) with adopted distance. 
Cols (9)–(14): star formation rate and the corresponding reference when available: L11=FUV mag (Galactic extinction corrected) from Lee et al. (2011), converted to SFR with adopted distance and assuming Kroupa (2001) IMF; K08=H\(\alpha\) luminosity from Kennicutt et al. (2008) (no FUV mag available), and converted to SFR assuming Kroupa (2001) IMF; K13\({}^{\dagger}\)=FUV mag from Karachentsev et al. (2013), corrected for Galactic extinction using their Eq 8, and converted to SFR assuming Kroupa (2001) IMF. See more detail in §2 and Appendix 2.3. \end{table} Table 4: (continued) mass relation shown in SS2.3. Below we include additional details on the specific treatment to each literature sample beyond what is already shown in Table 1 and Section 1. Note that we do not include the dwarf galaxy sample from Richter et al. (2017) as most of the dwarfs are spheroidal type and thus do not contain gas. We also do not consider the dwarf galaxies from Burchett et al. (2016), most of which are in group/cluster environments; the one isolated dwarf galaxy in their sample had been included in the B14+B18 sample. For the 12 dwarf-QSO pairs from the COS-Dwarfs survey (B14), we obtain the C iv \(\log N\) measurements from B14 and H i Ly\(\alpha\)\(\log N\) values from B18. For each ion measurement, we convert the original \(2\sigma\) upper limits to \(3\sigma\) values to be consistent with the rest of our data sample. For LC14's sample, as most of their galaxy-QSO pairs are either with high \(M_{*}\), large \(b\), and/or low SNR (see Figure 1), we find only 5 pairs that meet our literature compilation rules. Since only EW values are given in LC14, we convert EW to \(\log N\) assuming the absorbers are optically thin, which is reasonable given that most absorbers are either weak or non-detections. For non-detections, we convert their quoted \(2\sigma\) upper limits to \(3\sigma\) for consistency. We adopt 11 pairs from J17. We convert their \(2\sigma\) non-detection EW values to \(3\sigma\) and then convert the EW to \(\log N\) assuming optically thin. For detections in galaxies D1 and D2 in their sample, we adopt J17's Voigt profile fit results. In cases where multiple absorbers are reported along a given sightline, we calculate the total \(\log N\) of all available components in each ion. Within the Local Group, we include Z20's measurements of 3 QSOs at 0.05-0.5\(R_{\rm 200m}\) from IC 1613, which is on the outskirts of the LG with no known galaxies within 400 kpc. We exclude the rest of the 3 QSOs in Z20's sample at \(\sim 0.6-0.7R_{\rm 200m}\) to avoid potential contamination from the Magellanic Stream (MS) in the foreground that may skew the column density profiles. We also do not include Zheng et al. (2019)'s tentative detection in WLM where potential contamination from the MS may affect the result here too. We further include 3 QSO measurements (out of 6) from QB22 at 0.2-0.7\(R_{\rm 200m}\) from Sextans A, Sextans B, and NGC 3109. The values for the non-detections toward these QSOs are recalculated over \(\pm 50\) km s\({}^{-1}\) velocity intervals (Qu; priv. comm.). To summarize, our literature compilation yields 34 high-quality dwarf-QSO pairs with \(M_{*}\leqslant\)10\({}^{9.5}\) M\({}_{\odot}\), \(b/R_{\rm 200m}\leqslant 1.0\), and SNR\(\geq 8\). These 34 literature pairs, in combination with the 26 new pairs described in SS2.1, form the Full Sample of 60 dwarf-QSO pairs in this work. 
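For reference, the optically thin conversion from equivalent width to column density used for these literature limits (equation 3 in Savage & Sembach 1996) can be written as a short helper; this is a minimal sketch, and the C iv oscillator strength and the 100 mA limit in the example are adopted illustrative values rather than numbers taken from the compiled tables.

```python
import numpy as np

def logN_thin(ew_mA, wave_A, f_osc):
    """Column density (cm^-2) from EW in the optically thin limit:
    N = 1.13e20 * EW[Angstrom] / (f * lambda[Angstrom]^2)."""
    return np.log10(1.13e20 * (ew_mA / 1e3) / (f_osc * wave_A ** 2))

# A 3-sigma EW limit of 100 mA for C IV 1548 (f ~ 0.190, adopted atomic data)
print(logN_thin(100.0, 1548.2, 0.190))   # ~13.4, within the logN ~ 12.5-13.5 limits quoted in Section 3
```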
### Determination of Galaxy Properties

There are 49 unique dwarf galaxies among the 60 dwarf-QSO pairs in our Full Sample. The galaxies' properties are tabulated in Tables 2 and 4. Here we summarize the range and median galaxy property values of these 49 dwarf galaxies, which have a stellar mass range of \(M_{*}=10^{6.5-9.5}\) M\({}_{\odot}\) with a median value of \(\langle M_{*}\rangle=10^{8.3}\) M\({}_{\odot}\), a halo mass range of \(M_{\rm 200m}=10^{10.0-11.5}\) M\({}_{\odot}\) with a median value of \(\langle M_{\rm 200m}\rangle=10^{10.9}\) M\({}_{\odot}\), an H i gas mass range of \(M_{\rm HI}=10^{6.1-8.6}\) M\({}_{\odot}\) with a median value of \(\langle M_{\rm HI}\rangle=10^{7.4}\) M\({}_{\odot}\), a SFR range of \(\dot{M}_{*}=10^{-4.3}\) to \(10^{-0.2}\) M\({}_{\odot}\) yr\({}^{-1}\) with a median value of \(\langle\dot{M}_{*}\rangle=10^{-1.96}\) M\({}_{\odot}\) yr\({}^{-1}\), and a virial radius range of \(R_{\rm 200m}=68-213\) kpc with a median value of \(\langle R_{\rm 200m}\rangle=136\) kpc. Below we describe the approach to obtain \(M_{*}\) and \(M_{\rm 200m}\), which are the most important properties in this work, and defer the discussion of other properties to Appendix B. For dwarf galaxies from LC14, B14, or J17, we adopt the quoted \(M_{*}\) from the corresponding works and convert the values to a Kroupa (2001) IMF (see §2.2). For a majority of galaxies within the Local Volume, we calculate their \(M_{*}\) from _Spitzer_ \(3.6\mu\)m fluxes from the Local Volume Survey (Dale et al., 2009). Since the \(3.6\mu\)m band mostly traces infrared light from old stellar populations, it is less sensitive to internal extinction in a galaxy's ISM than tracers of young stellar populations. Therefore, the \(M_{*}\) value derived from the \(3.6\mu\)m flux best represents most of the mass in a galaxy. Instead of directly using \(M_{*}\) values from the Local Volume Survey catalog (Cook et al., 2014), we re-calculate \(M_{*}\) based on the galaxies' \(3.6\mu\)m fluxes and the distances we adopt in this work for consistency. We assume a mass-to-light ratio of \(M_{*}/L_{\rm 3.6\mu m}=0.5\) M\({}_{\odot}/\)L\({}_{\odot}\) (Cook et al., 2014). When _Spitzer_ photometry is unavailable, we adopt \(M_{*}\) from McConnachie (2012), with values rescaled based on our newly adopted distances. If a galaxy is not included in either Cook et al. (2014) or McConnachie (2012), nor can we find an appropriate value in the literature, we compute \(M_{*}\) from the galaxy's \(K_{\rm s}\)-band magnitude from the 2MASS sky survey (Jarrett et al., 2003; Karachentsev et al., 2013). For the \(K_{\rm s}\) band, we assume a mass-to-light ratio of 0.6 (McGaugh & Schombert, 2014). To derive a galaxy's halo mass from \(M_{*}\), one needs to assume a stellar mass-halo mass (SMHM) relation. The SMHM relation at low mass is found to be stochastic (Rey et al., 2019; Sales et al., 2022; McQuinn et al., 2022), and the scatter in \(M_{*}\) increases with decreasing halo mass (Garrison-Kimmel et al., 2017; Munshi et al., 2021). Among the literature works that extend the SMHM relation to as low as \(M_{*}\sim 10^{6-7}\) M\({}_{\odot}\), where a power-law fit of \(M_{*}\propto M_{\rm h}^{\alpha}\) is often assumed, it has been shown that the power law becomes steeper toward lower masses, with \(\alpha\) ranging from 1.4 to 3.5 (e.g., Garrison-Kimmel et al., 2014, 2017; Read et al., 2017; Jethwa et al., 2018; Nadler et al., 2020; Munshi et al., 2021). In this work we adopt a broken power-law relation from Munshi et al.
(2021): \[\log_{10}M_{*}=\log_{10}M_{*,0}+\alpha\log_{10}\left(\frac{M_{200\rm c}}{M_{200\rm c,0}}\right) \tag{1}\] where \(M_{*,0}=10^{6.9}\) M\({}_{\odot}\), and \(M_{200\rm c}\) is the halo mass enclosed within a virial radius inside which the average halo density is 200 times the critical density of the Universe at \(z=0\). The power-law index is \(\alpha=2.8\) when \(M_{200\rm c}\leq M_{200\rm c,0}\) and \(\alpha=1.9\) when \(M_{200\rm c}>M_{200\rm c,0}\), with \(M_{200\rm c,0}=10^{10}\) M\({}_{\odot}\). Because in this work we adopt a "200m" definition for our virial radius (i.e., 200 times the mean matter density instead of critical density), we convert the \(M_{200\rm c}\) values derived from Eq. 1 to \(M_{200\rm m}\) using \(\log_{10}M_{200\rm m}=\log_{10}M_{200\rm c}+0.15\) dex. The conversions are estimated by comparing the \(M_{200\rm c}\) and \(M_{200\rm m}\) values of a suite of simulated dwarf galaxies from an EAGLE high-resolution simulation volume (Oppenheimer et al., 2018) that we elaborate on in §5.
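To make the halo-mass assignment concrete, the sketch below inverts Eq. 1 and applies the +0.15 dex conversion for a given stellar mass. The helper names, the round cosmological parameters used for \(R_{\rm 200m}\), and the printed example are illustrative assumptions rather than the exact values or code used in our analysis.

```python
import numpy as np

def mstar_to_logm200m(log_mstar):
    """Invert the broken power-law SMHM relation (Eq. 1):
    pivot M*_0 = 10^6.9 Msun at M200c_0 = 10^10 Msun, with alpha = 2.8 below
    the pivot and alpha = 1.9 above it, then shift to the '200m' definition."""
    alpha = 2.8 if log_mstar <= 6.9 else 1.9
    log_m200c = 10.0 + (log_mstar - 6.9) / alpha
    return log_m200c + 0.15                      # M200c -> M200m offset quoted above

def r200m_kpc(log_m200m, omega_m=0.3, h=0.7):
    """Radius enclosing 200x the mean matter density (assumed cosmology)."""
    rho_crit = 2.775e11 * h**2                   # Msun / Mpc^3
    rho_m = omega_m * rho_crit
    r_mpc = (3.0 * 10**log_m200m / (800.0 * np.pi * rho_m)) ** (1.0 / 3.0)
    return 1.0e3 * r_mpc

# Median stellar mass of the Full Sample, log10(M*/Msun) = 8.3
log_m200m = mstar_to_logm200m(8.3)
print(log_m200m, r200m_kpc(log_m200m))           # ~10.9 and ~130-140 kpc
```

For the median \(\langle M_{*}\rangle=10^{8.3}\) M\({}_{\odot}\), this recovers \(M_{\rm 200m}\approx 10^{10.9}\) M\({}_{\odot}\), consistent with the median halo mass quoted above; the exact \(R_{\rm 200m}\) value depends on the adopted cosmological parameters.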
## 3 Result: Ion Distributions in the CGM

### Ion Covering Fractions \(C_{\rm f}\)

In Figure 3, we show the distribution of QSO sightlines with respect to host dwarf galaxies in our Full Sample. Those sightlines with detections of C iv are highlighted with magenta edges. Then, in Figure 4, we show the relation of \(\log N\) vs. \(b/R_{200\rm m}\) for H i (via Ly\(\alpha\) absorption), C ii, C iv, Si ii, Si iii, and Si iv. Except for H i, which is ubiquitously detected in the halos of low-mass dwarf galaxies, the rest of the ions typically show non-detections unless the sightlines are at small impact parameters (i.e., \(b\lesssim\)0.5\(R_{200\rm m}\)), consistent with findings from B14+B18, LC14, and J17. Similar to the COS-Dwarfs survey (B14), our typical detection threshold for ion absorbers is EW\(\approx\)50-100 mÅ at 3\(\sigma\). At EW\(\geq\)100 mÅ, we find detection rates (or covering fractions \(C_{\rm f}\)) of 89% (24/27) for H i, 18% (3/17) for C ii, 21% (9/42) for C iv, 5% (2/44) for Si ii, 18% (7/39) for Si iii, and 10% (4/41) for Si iv in our Full Sample. Our measurements show that the metals as probed by Si ii, Si iii, Si iv, C ii, and C iv in the outer CGM of dwarf galaxies are too diffuse to be detected with _HST_/COS at a column density limit of \(\log N_{\rm ion}\sim 12.5-13.5\) at 3\(\sigma\). When compared to literature values, our C iv detection rate is lower than what is found in the COS-Dwarfs survey (B14), which reports \(C_{\rm f}\)(C iv)\(\approx\)40% at EW\(\geq\)100 mÅ. Similarly, LC14 find \(C_{\rm f}\approx\)30-50% for C iv with EW\(\geq\)100 mÅ at \(b<0.7R_{200\rm m}\). When taking into account the difference in the data samples (see Figure 1), we find that the higher C iv detection rates by B14 and LC14 are likely because of the inclusion of higher-mass galaxies with \(M_{*}\geq 10^{9.5}\) M\({}_{\odot}\), which contribute to the majority of their detections. Our C iv detection rate is similar to that found by Burchett et al. (2016), who show that low-mass galaxies (\(M_{*}\leq 10^{9.5}\) M\({}_{\odot}\)) in their sample show very little detection of metal absorbers, with a covering fraction of \(\sim 9^{+12}_{-6}\)% (for 11 galaxies). Figure 3: **Left:** \(M_{*}\) vs. \(b/R_{200\rm m}\) for the 60 dwarf-QSO pairs in our Full Sample as described in §2 and Table 1. The data points here correspond to the filled symbols shown in Figure 1. The dotted lines indicate the median impact parameter (\(\langle b/R_{200\rm m}\rangle=0.45\)) and median stellar mass (\(\langle M_{*}\rangle=\)10\({}^{8.3}\) M\({}_{\odot}\)) in the Full Sample. **Middle:** Relative projected positions of QSOs with respect to host dwarf galaxies, normalized by the corresponding virial radii. **Right:** Same as the middle panel, but in kpc. In each panel, we use magenta-outlined symbols to highlight the detections of C iv, while the rest are for non-detections. Overall, at EW\(\geq\) 100 mÅ (3\(\sigma\)), we find low detection rates of 21% (9/42) in C iv, 18% (3/17) in C ii, 10% (4/41) in Si iv, 18% (7/39) in Si iii, and 5% (2/44) in Si ii, all of which occur within 0.5\(R_{200\rm m}\). Meanwhile, H i (via Ly\(\alpha\) absorption) is ubiquitously observed throughout the CGM with a detection rate of 89% (24/27). See §3.1 for more detail. For other absorbers, LC14 estimate covering fractions of nearly 100% for H i and generally \(\sim 40-60\)% for Si ii, Si iii, Si iv, and C ii at \(b/R_{\rm 200m}\sim\)(0.25-0.6), as evaluated at a detection threshold of EW\(\geq\)50 mÅ at \(2\sigma\). However, in the outer halo (\(b/R_{\rm 200m}\gtrsim\)0.6-1.1), their ions' covering fractions drop below 14% except for H i (96%) and C iv (38%). Their high detection rates at \(b/R_{\rm 200m}\sim\)(0.25-0.6) are likely caused by (1) a less restrictive detection threshold when evaluating the detections (i.e., EW\(\geq\)50 mÅ at \(2\sigma\)), and (2) contributions from absorbers detected at small impact parameters in galaxies with higher masses (\(M_{*}>10^{9.5}\) M\({}_{\odot}\)). Lastly, among the 18 star-forming dwarf galaxies in J17's sample, the covering fractions within \(R_{\rm 200m}\) at EW\(>\)100 mÅ (\(2\sigma\) threshold) are \(<\)17% for Si ii, 10% for Si iii, \(<\)19% for Si iv, 23% for C iv, and 94% for H i, respectively. Generally we find good consistency in covering fractions between ours and J17's given that both samples cover a similar low-mass range (Figure 1). In all, our measurements show that in the CGM of dwarf galaxies with \(M_{*}=\)10\({}^{6.5-9.5}\) M\({}_{\odot}\), H i (via the Ly\(\alpha\) 1215 Å line) is ubiquitously detected (89%) at a column density level of \(\log N_{\rm HI}\approx 13-16\). On the other hand, ions such as Si ii, Si iii, Si iv, C ii, and C iv are typically found with low detection rates of \(C_{\rm f}\approx 5-21\)%, and the detections of metal absorbers only occur in the inner CGM (\(b\lesssim 0.5R_{\rm 200m}\)). On the outskirts of dwarf galaxies' CGM (\(b\gtrsim 0.5R_{\rm 200m}\)), the column densities of these ions are too low to be detected at _HST_/COS's sensitivity of \(\log N(3\sigma)\sim 12.5-13.5\) at SNR\(\geq 8\).

### Ion Column Density Profiles: \(\log N\) vs. \(b/R_{\rm 200m}\)

To parameterize the relation of \(\log N\) vs. \(b/R_{\rm 200m}\), which will be used for further understanding the ion Figure 4: Ion column densities vs. \(b/R_{\rm 200m}\) for H i, C ii, C iv, Si ii, Si iii, and Si iv. Filled symbols indicate detections, filled symbols with upward arrows for saturations (lower limits), and open symbols with downward arrows for non-detections (\(3\sigma\) upper limits). Note that the y-axis scale varies from panel to panel. The data points are color-coded into 3 halo mass bins: \(M_{\rm 200m}=10^{10-10.5}\) M\({}_{\odot}\) (blue), \(M_{\rm 200m}=10^{10.5-11}\) M\({}_{\odot}\) (orange), and \(M_{\rm 200m}=10^{11-11.5}\) M\({}_{\odot}\) (green).
We fit the \(\log N\)-\(b/R_{\rm 200m}\) relation of each ion as a power law based on a Bayesian linear regression algorithm that accounts for the upper/lower limits as censored data (see §3.2). The gray curves show 100 random draws from the posterior distributions, and the black solid curve in each panel indicates the 50th percentile solution. The title of each panel shows the 50th percentile solution, and the errors in the coefficients indicate the 50th–16th and 84th–50th percentile differences. Overall, we find that ion column densities decrease with \(b/R_{\rm 200m}\); except for H i, which is ubiquitously detected in the CGM, other ions are only detected within \(\sim\)0.5\(R_{\rm 200m}\). distribution in dwarf galaxies' CGM (see §4), we fit the data points in Figure 4 assuming a power-law relation: \[N=N_{0}(b/R_{\rm 200m})^{k}\quad, \tag{2}\] where \(N\) is the column density profile for the ion of interest, including H i, C ii, C iv, Si ii, Si iii, and Si iv. \(N_{0}\) is an ion's column density evaluated at \(b=R_{\rm 200m}\), and \(k\) is the power-law index. We adopt a power-law form because it has been widely used in previous CGM studies (e.g. Thom et al., 2012; Werk et al., 2013; Bordoloi et al., 2014; Keeney et al., 2017), and it is consistent with theoretical ion radial profiles predicted from simulations (see Figure 8 and §5). Additionally, it provides the simplest form to fit for a dataset in log-log space with a minimal number of parameters, offering a clean way to parameterize a dataset where most of the information is hidden in non-detections (see below). In log-log space, Eq. 2 turns into a linear form: \(\log N=\log N_{0}+kx\), where \(x\equiv\log(b/R_{\rm 200m})\). We adopt a censored regression algorithm\({}^{2}\) implemented with PyMC3 (Salvatier et al., 2016) that treats the non-detection upper limits and saturation lower limits appropriately. The dataset is split into three groups: detections, non-detections (\(3\sigma\) upper limits), and saturations (lower limits). We set up priors for \(k\) and \(\log N_{0}\) and likelihood functions for each group following the instructions in the aforementioned PyMC3 algorithm. Specifically, the likelihood function for the detection group is calculated as the joint probability of the observed data given the set of parameters (\(k\), \(\log N_{0}\)), while the likelihood functions for censored data with upper or lower limits are computed by integrating the probability to the measured limits. For clarity and readability, we defer the details of our algorithm setup to Appendix C and discuss the results of our censored regression below. Footnote 2: See the Python notebook "Bayesian regression with truncated or censored data" by Benjamin T. Vincent. In Figure 4, we show the 50th percentile solution of our PyMC3 runs as a solid black curve, along with 100 random draws from the posterior distributions. From the H i panel, it is straightforward to see that when most data points are detections, our power-law parameterization and PyMC3 solutions predict well not only the \(\log N\) vs. \(b/R_{\rm 200m}\) trend but also the scatter in the data points. This means that the priors and the likelihood functions we set have been able to sufficiently capture the information in the data.
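A minimal sketch of this kind of censored regression is shown below, assuming Gaussian intrinsic scatter about the linear model in log-log space; the toy data arrays, prior widths, and sampler settings are placeholders for illustration, and the exact likelihood setup we use is the one detailed in Appendix C.

```python
import numpy as np
import pymc3 as pm

# Toy placeholder data in log-log space: x = log10(b/R200m), y = log10(N_ion)
x_det, y_det = np.array([-1.0, -0.7, -0.5]), np.array([15.2, 14.6, 14.1])  # detections
x_up,  y_up  = np.array([-0.3, -0.1]),       np.array([13.4, 13.2])        # 3-sigma upper limits
x_lo,  y_lo  = np.array([-1.2]),             np.array([16.0])              # saturated lower limits

with pm.Model() as model:
    k       = pm.Normal("k", mu=0.0, sigma=3.0)        # power-law index
    logN0   = pm.Normal("logN0", mu=14.0, sigma=3.0)    # column density at b = R200m
    scatter = pm.HalfNormal("scatter", sigma=1.0)        # intrinsic scatter in log N

    # Detections: ordinary Gaussian likelihood about the linear model
    pm.Normal("det", mu=logN0 + k * x_det, sigma=scatter, observed=y_det)

    # Upper limits: integrate the probability below the limit (log CDF)
    pm.Potential("upper",
                 pm.Normal.dist(mu=logN0 + k * x_up, sigma=scatter).logcdf(y_up).sum())

    # Lower limits: integrate above the limit, using P(Y > y) = P(-Y < -y)
    pm.Potential("lower",
                 pm.Normal.dist(mu=-(logN0 + k * x_lo), sigma=scatter).logcdf(-y_lo).sum())

    trace = pm.sample(2000, tune=2000, target_accept=0.9)
```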
In cases where a majority of the data points are non-detections, as is the case for C ii, C iv, Si ii, Si iii, and Si iv, we find that the PyMC3 curves often occupy the space below the \(3\sigma\) upper limit values, especially at \(b/R_{\rm 200m}>0.5\) where there are no detections. This is reasonable because the \(3\sigma\) upper-limit data points indicate the thresholds below which the actual values would occur with 99.7% probability if observed with sufficient sensitivity. In other words, given the assumption of a power-law relation, the PyMC3 curves predict the trend of \(\log N\) at \(b/R_{\rm 200m}>0.5\), where the metal content of the dwarf galaxies' CGM is mostly unavailable to observers at the current sensitivity of _HST_/COS. The upper limit constraints from our data and PyMC3 analysis suggest that future UV spectrographs with an order of magnitude better sensitivity will most likely be needed to probe diffuse metals in the CGM of dwarf galaxies, especially at low masses (see §4 and §5 for theoretical predictions on CGM metal properties in dwarf galaxies). When compared to literature values, we find that our power-law fit to H i shows a slope (\(k=-2.5\pm 0.5\)) similar to that of the H i distributions in the CGM of MW-mass galaxies (\(k=-2.7\pm 0.3\); Keeney et al., 2017). For other ions, when available, we find steeper slopes in our fits than typically reported. For example, the COS-Dwarfs survey (B14) characterized their EW(C iv) vs. \(b\) distribution with a \(k=-1\) power law, while our algorithm finds a steeper slope of \(k=-1.5\pm 0.3\) when fitting for the EW-\(b/R_{\rm 200m}\) relation (not shown here). The discrepancy is most likely because we take into account the non-detections of C iv as censored data in our fits, while B14's fit may be biased toward detected values in the inner CGM. Another example is the \(\log N_{\rm SiIII}\) vs. \(b\) fit from the COS-Halos survey (Werk et al., 2013), where a slope of \(k=-1.11\pm 0.29\) was found for detected absorbers, while our Si iii fits in Figure 4 show a steeper slope of \(k=-1.9\pm 0.3\). Similarly, the different treatment of non-detections likely contributes to the discrepancy here, although we note that the COS-Halos survey targets MW-mass galaxies, which may harbor a CGM with ion properties different from lower-mass galaxies. Because of the relatively small sample size, we do not conduct detailed analyses on how each ion's \(\log N\) vs. \(b\) profile may depend on galaxy masses or star formation rates. As shown in Figure 4, when we split the Full Sample into 3 halo mass bins, there is little difference in the profiles. The lack of difference here is most likely caused by the small number of data points in each bin, especially in the lowest mass bin. As shown in §5, when a large sample of simulated dwarf galaxies (207) from the EAGLE simulation is considered, the H i and low-to-intermediate ion column densities are found to increase with galaxy masses. Similarly, for higher ions like O vi, Tchernyshyov et al. (2022) show that the O vi column densities increase as a function of \(M_{*}\) at a given impact parameter in the inner CGM when a larger sample of dwarf galaxies (\(\sim\)100-150) is considered. Therefore, a larger sample of high-quality sightlines in the CGM of dwarf galaxies, especially at small impact parameters, is needed for further investigation of the mass dependence of the ion column density profiles.
## 4 An empirical model for dwarf galaxies' CGM In SS3, we present power-law fits to the observed column density profiles, which parameterize the ions' projected distributions and can be compared to different CGM studies. These fits are performed for each ion individually, and avoid assumptions about the physical conditions in the CGM. Building upon the power-law fits, in this section, we aim to construct a physically motivated empirical model that relates the observed ion absorption to the underlying spatial distribution and ionization states of the gas in dwarf galaxies' CGM. Given that the galaxies in our Full Sample span a wide mass range from \(M_{*}=10^{6.5}\) M\({}_{\odot}\) to \(10^{9.5}\) M\({}_{\odot}\), as a first approximation, we construct an empirical model for a typical dwarf galaxy with \(M_{\rm 200m}=10^{10.9}\) M\({}_{\odot}\) (\(M_{*}=10^{8.3}\) M\({}_{\odot}\), \(R_{\rm 200m}=136\) kpc), which are the median values among the Full Sample. The model can be applied in future work to different halo masses or individual galaxies. This section is structured as follows. We first describe the model setup and parameters in SS4.1, and use the observed H i\(\log N\) profile to constrain the parameters in SS4.2. Then, in SS4.3, we show how the H i-constrained model parameters predict metal ion column densities. In SS4.4, we discuss the implications of the empirical model for the estimated gas and metal masses in the CGM, and for the baryon and metal budget of dwarf galaxies. ### Model Setup The observed ion distributions in the dwarf galaxies' CGM are likely due to a delicate balance between the underlying density distribution, volume filling factor, and ionization structures in different gas phases. For example, the ion column densities shown in Figure 4 have different profiles as functions of the impact parameter, as evidenced by the different fitted slopes, suggesting variation in the gas ionization state with radius. In the following, we describe the assumptions and parameters of our empirical model, addressing the gas properties. The measurements reported in this study are of H i and low to intermediate metal ions, and we assume they trace the cool, photoionized phase of the CGM at \(T\approx 10^{4}\) K. We set the gas spatial distribution in our model through two functions: (1) the hydrogen number density \(n_{\rm H}\), and (2) the volume filling fraction, defined as the local volume fraction occupied by the cool gas clouds, \(f_{\rm V}\equiv dV_{\rm cool}/dV<1\), where \(dV_{\rm cool}\) and \(dV=4\pi r^{2}dr\) are the cool gas and total volume of a shell at a given radius, respectively. We assume spherical symmetry and model both \(n_{\rm H}\) and \(f_{\rm V}\) as power-law functions of the radial distance from the halo center: \[n_{\rm H}(r)=n_{\rm H,0}\left(\frac{r}{r_{0}}\right)^{\alpha}\ \,\ \ f_{\rm V}(r)=f_{\rm V,0}\left( \frac{r}{r_{0}}\right)^{\beta}\ \ \, \tag{3}\] where \(\alpha,\beta<0\), and \(n_{\rm H,0}\) and \(f_{\rm V,0}\) are the hydrogen number density and volume filling fraction evaluated at \(r=r_{0}\), respectively. Here \(r_{0}\) is a reference radius that is set at \(r_{0}=b_{\rm min}\approx 6\) kpc, which is the minimum impact parameter among the dwarf-QSO pairs in the Full Sample. We note that throughout this work the notation \(r\) is referred to as a gas cloud's radial distance from its host galaxy in 3D, while \(b\) indicates the corresponding 2D projected radial distance (i.e., impact parameter) as viewed along a line of sight. 
We assume the gas temperature to be constant with radius, and adopt \(T=10^{4}\) K, which is set by heating and cooling equilibrium with the metagalactic radiation field (Haardt & Madau, 2012; Werk et al., 2016). Given the local gas density and temperature, we calculate the gas ionization state using Cloudy 17.00 (Ferland et al., 2017). Overall, the model setup in Eq. 3 gives a total of four free parameters (\(n_{\rm H,0}\), \(f_{\rm V,0}\), \(\alpha\), \(\beta\)). By integrating the \(n_{\rm H}(r)\) and \(f_{\rm V}(r)\) profiles to \(R_{\rm 200m}\), we can write down the total cool CGM mass as \[M_{\rm CGM,cool}\approx\frac{4\pi m_{\rm p}}{(3+\alpha+\beta)X_{\rm H}}n_{\rm H,0}f_{\rm V,0}r_{0}^{3}\left(\frac{R_{\rm 200m}}{r_{0}}\right)^{3+\alpha+\beta}\, \tag{4}\] where we used \(X_{\rm H}=0.74\) for the hydrogen mass fraction, accounting for the contribution of helium. In practice, we take \(M_{\rm CGM,cool}\) instead of \(n_{\rm H,0}\) as one of the four free parameters, which has broader implications for the baryon and metal masses in the CGM (see §4.4). As stated above, our model is anchored at a typical halo mass of \(M_{\rm 200m}=10^{10.9}\) M\({}_{\odot}\), which corresponds to a cosmological baryonic mass budget of \(M_{\rm bar}=M_{\rm 200m}\Omega_{b}/\Omega_{m}\approx 10^{10.1}\) M\({}_{\odot}\).

### Constraining the Model with Observed H i Profile

We first use the observed H i column densities to constrain the empirical model. We focus on H i because it is ubiquitously detected throughout the halos (see Figure 4), resulting in a well-constrained column density profile. Furthermore, the H i distribution does not depend on assumptions regarding gas metallicity. Based on Eq. 3, the H i column density through the halo projected at an impact parameter \(b\) is calculated as \[N_{\rm HI}(b)=2\int_{r=b}^{R_{\rm 200m}}f_{V}(r)n_{\rm H}(r)f_{\rm HI}(n_{\rm H})ds\ \ \, \tag{5}\] where \(s=\sqrt{r^{2}-b^{2}}\) is the path length along the line of sight within a galaxy's halo, and \(f_{\rm HI}(n_{\rm H})\) is the H i ion fraction, which is a function of the gas density \(n_{\rm H}(r)\). In our model, we vary the parameters such that the model \(N_{\rm HI}\) profile, given by Eq. 5, matches the 50th-percentile power-law fit of H i from §3.2 and Figure 4. In the left panel of Figure 5, we show how the hydrogen volume density \(n_{\rm H}(r)\) at \(r=0.1R_{\rm 200m}\) changes due to different combinations of \(M_{\rm CGM,cool}\) and \(\alpha\). In particular, we highlight two models: one with high \(M_{\rm CGM,cool}\) but low \(\alpha\) (steep density profile, hereafter Model A), and another with low \(M_{\rm CGM,cool}\) and high \(\alpha\) (flatter density profile, hereafter Model B). Both models reproduce the observed H i column density profile (left panel, Fig. 6). To demonstrate how the model parameters translate to gas distributions, we plot the radial profiles of \(n_{\rm H}(r)\), \(n_{\rm HI}(r)\), and \(f_{\rm V}(r)\) for Models A (black curves) and B (purple curves) in the middle panel of Figure 5. Model A has a steep \(n_{\rm H}\) profile (black dashed curve), leading to low densities at large radii (\(n_{\rm H}\sim 10^{-5}\) cm\({}^{-3}\)). These are compensated by the high volume filling factors (black dotted curve, \(f_{\rm V}\sim 0.1-1\)), giving a large \(M_{\rm CGM}\). The low \(n_{\rm H}\) densities lead to high ionization and low neutral H i fractions (black solid curve), and combined with the high \(f_{\rm V}\) they produce the observed H i column densities.
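As a concrete illustration of how Eq. 5 is evaluated, the sketch below integrates the power-law profiles of Eq. 3 along a sightline. The neutral-fraction function is a crude placeholder standing in for the Cloudy-based lookup, and the profile parameters are illustrative values, not the fitted Model A or B parameters.

```python
import numpy as np
from scipy.integrate import quad

KPC_CM = 3.086e21   # cm per kpc

def logN_HI(b_kpc, n_H0, f_V0, alpha, beta, r0_kpc=6.0, R200m_kpc=136.0):
    """Evaluate Eq. 5 numerically: N_HI(b) = 2 * int f_V(r) n_H(r) f_HI(n_H) ds,
    with s = sqrt(r^2 - b^2), i.e. ds = r dr / sqrt(r^2 - b^2)."""
    def f_HI(n_H):
        # Placeholder for a Cloudy neutral-fraction table at T = 1e4 K (illustrative only)
        return np.clip(3e-3 * (n_H / 1e-3), 1e-7, 1.0)

    def integrand(r):
        n_H = n_H0 * (r / r0_kpc) ** alpha
        f_V = min(f_V0 * (r / r0_kpc) ** beta, 1.0)
        jac = r / np.sqrt(r * r - b_kpc * b_kpc)        # ds/dr along the sightline
        return f_V * n_H * f_HI(n_H) * jac

    val, _ = quad(integrand, b_kpc * 1.0001, R200m_kpc, limit=200)
    return np.log10(2.0 * val * KPC_CM)                 # path length converted from kpc to cm

# Illustrative (not fitted) parameters: n_H = 1e-3 cm^-3 and f_V = 0.5 at r0 = 6 kpc
print(logN_HI(30.0, n_H0=1e-3, f_V0=0.5, alpha=-1.5, beta=-0.5))
```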
Model B, in contrast, has higher gas densities (purple dashed curve) but much lower volume filling factors (purple dotted curve), leading to low \(M_{\rm CGM}\). The high gas densities result in higher H i fractions (purple solid curve) in order to produce the observed H i. In the right panel we plot the radial profiles of C ii (solid curves) and C iv (dashed curves) volume densities assuming a constant metallicity of \(Z^{\prime}=0.3~{}Z_{\odot}\) (see detail in SS4.3), which provide another view on the gas ionization state. The C ii densities in Model B (purple solid curve) are significantly higher, due to the model's higher gas densities (\(n_{\rm H}\)) and lower ionization (high \(n_{\rm HI}\)). The C iv densities are more similar between the two models. However, as we will see in SS4.3, Model A predicts much higher C iv column densities thanks to its high volume filling factor \(f_{V}\) and a larger fraction of carbon in higher ionization state. ### Predicted Metal Column Densities We now address the predictions of Models A and B for the column densities of low to intermediate ions, and compare them to the observed values. At an impact parameter \(b\), the line of sight metal column density is \[N_{\rm X,i}(b)=2\int_{r=b}^{R_{\rm 200m}}f_{V}(r)n_{\rm H}(r)Z^{\prime}a_{X}f_ {\rm X,i}(n_{\rm H})ds\quad, \tag{6}\] where \(a_{X}\) is the elemental abundance of element \(X\) relative to hydrogen, and \(f_{\rm X,i}(n)\) the ionization fraction of ion X\({}_{i}\) as a function of the gas density \(n_{\rm H}(r)\). In this study we assume a constant metallicity (\(Z^{\prime}\)) with radius, and the metal column density scales linearly with \(Z^{\prime}\). As suggested by the metal ion volume densities in the right panel of Figure 5, even with the same presumed metallicity and predicted H i column densities (left panel of Figure 6), the different gas ionization states in Models A and B yield different metal column density profiles, which we discuss below. The middle and right panels in Figure 6 show the predicted C ii and C iv column densities assuming Figure 5: Empirical model gas distributions. **Left**: \(M_{\rm CGM,cool}\) vs. \(\alpha\) parameter space, with the contours showing the hydrogen volume densities \(n_{\rm H}\) evaluated at \(r=0.1R_{\rm 200m}\) (see Eq. 3). Two representative models (A and B) are highlighted as a black circle and a purple square, respectively, both of which reproduce the observed H i column density profile (left panel, Figure 6). The shaded gray area indicates \(f_{\rm V,0}>1\) which is prohibited in our model. **Middle and Right**: Radial distributions of gas and ion volume densities and volume filling factor for Model A (thick black lines) and Model B (thin purple lines). The middle panel shows the H i density (\(n_{\rm HI}\), solid lines), total hydrogen density (\(n_{\rm H}\), dashed), and volume filling factor (\(f_{\rm V}\), dotted). The right panel shows the C ii (solid line) and C iv gas density (dashed). The gas ionization state is set by the local volume density. Model A, with lower gas densities, has higher ratios of \(n_{\rm CIV}/n_{\rm CII}\) than Model B. \(Z^{\prime}=0.3\) Z\({}_{\odot}\), respectively. The choice of \(Z^{\prime}=0.3\) Z\({}_{\odot}\) is to anchor the subsequently derived CGM mass value at a similar metallicity as commonly assumed or estimated in the CGM literature (e.g. Prochaska et al., 2017). 
As we will show in SS5, it is also consistent with the metallicity derived for \(T\sim 10^{4}\) K gas in the CGM of simulated dwarf galaxies from the EAGLE simulation. The middle panel shows that both models predict very low column densities in C ii, especially at large impact parameters, consistent with the non-detections we see in the data. In the right panel, we show that Model A with low CGM densities \(n_{\rm H}\) and high volume filling factors \(f_{\rm V}\) produces C iv column densities that are consistent with the detected values in the inner CGM, and the model suggests low C iv column densities at \(b\gtrsim 0.5R_{\rm 200m}\), also consistent with the upper limits that we observe. In contrast, Model B with high \(n_{\rm H}\) densities but low \(f_{\rm V}\) values predicts C iv column densities that are significantly below the measured columns at \(b\lesssim 0.5R_{\rm 200m}\), but are consistent with the upper limits at larger projected distances. While not shown here, our models also predict Si ii, Si iii, and Si iv column density profiles with values lower than the observed upper limits. Note that in our empirical models, ion column densities scale linearly with metallicity, so assuming a higher or lower metallicity than \(Z^{\prime}=0.3\)Z\({}_{\odot}\) would result in changes in predicted column densities accordingly. In all, while the two models produce the same H i column densities, Model A with low densities (\(n_{\rm H}\)) and high volume filling factors (\(f_{\rm V}\)) leads to a cool CGM that is more ionized than that of Model B. These two model setups demonstrate the flexibility of the empirical model in predicting different scenarios for CGM ionization states, which can be applied to galaxies with different halo masses in future work. ### Inferences on CGM Baryon and Metal Masses As we have seen in the previous section, Model A (low gas density, high volume filling factor) is more consistent with the measured C iv columns at small impact parameters, whereas Model B under-predicts them. Thus, in this section we focus on Model A and discuss its properties in more detail. We infer the total masses of H i and low-to-intermediate ions in dwarf galaxies' CGM based Model A, and tabulate them in Table 5. Note that the masses estimated here are for the cool phase of the CGM at \(T\approx 10^{4}\) K, and the masses are integrated over a spherical halo volume from \(0.1R_{\rm 200m}\) to \(R_{\rm 200m}\) for a typical dwarf galaxy with \(M_{\rm 200m}=10^{10.9}\) M\({}_{\odot}\). The gas density and volume filling profiles in Model A are given by \[\begin{split} n_{\rm H}(r)&=8.9\times 10^{-5} \times(r/0.1R_{\rm 200m})^{-1.35}\\ f_{\rm V}(r)&=0.33\times(r/0.1R_{\rm 200m})^{-0.60} \quad.\end{split} \tag{7}\] The total hydrogen mass in Model A is \(M_{\rm CGM,H}\approx 10^{8.3}\) M\({}_{\odot}\), which corresponds to a total cool CGM mass of \(M_{\rm CGM,cool}=M_{\rm CGM,H}/X_{\rm H}\approx 10^{8.4}\) M\({}_{\odot}\). When compared to the total baryon budget of a typical dwarf galaxy with \(M_{\rm 200m}=10^{10.9}\) M\({}_{\odot}\), this corresponds to \(f_{\rm CGM,cool}=M_{\rm CGM,cool}/(M_{\rm 200m}\Omega_{b}/\Omega_{m})\sim 2\%\). The low Figure 6: Predicted H i, C ii and C iv column densities from Model A (thick black line) and Model B (thin purple line), in comparison with observed values. In our model, the cool CGM is photoionized by the metagalactic radiation (Haardt & Madau, 2012; Werk et al., 2016) and the gas temperature is set at \(T=10^{4}\) K (§4.1). 
The H i profiles of Models A and B (left panel) are identical by construction to match the observed values (§4.2). The C ii and C iv column densities are estimated at \(Z^{\prime}=0.3\)Z\({}_{\odot}\); we note that Si ii, Si iii, and Si iv are not shown here, but display similar radial profiles (§4.3). Model A has low gas density but high volume filling factor, while Model B is constructed with high gas density but low volume filling factor (Figure 5). In general, the low gas density (\(n_{\rm H}\sim 10^{-4}\) cm\({}^{-3}\)) in Model A leads to more C iv (C ii) being formed (removed) by photoionization, resulting in higher C iv column densities that are more consistent with observed values, especially in the inner CGM. fraction indicates that the total gas mass in the cool phase CGM only accounts for \(\sim\)2% of the total galactic baryonic mass. Lastly, as shown in Table 5, we find the total H i mass in the typical dwarf galaxy's CGM to be \(M_{\rm HI}=10^{4.7}\) M\({}_{\odot}\), which means the hydrogen ionization fraction is \(f_{\rm HI,ModelA}=M_{\rm CGM,HI}/M_{\rm CGM,H}\sim 3\times 10^{-4}\). When considering the metals, the total carbon and silicon masses in the CGM as predicted by Model A are \(M_{\rm CGM,C}\approx 10^{5.3}\) M\({}_{\odot}\) and \(M_{\rm CGM,Si}\approx 10^{4.8}\) M\({}_{\odot}\). Using global chemical yields inferred from the EAGLE simulation (see SS5), we estimate that the total amount of carbon (silicon) yielded from star formation (i.e., Type II Supernova, AGB stars) is \(y_{\rm C}\sim 1.0\%\) (\(y_{\rm Si}\sim 0.32\%\)) of the present day stellar mass3. Therefore, the total amount of carbon (silicon), including those still locked in stars, in the ISM, CGM and beyond is \(M_{\rm tot,C}=M_{*}\times y_{\rm C}\approx 10^{6.3}\) M\({}_{\odot}\) (\(M_{\rm tot,Si}=M_{*}\times y_{\rm Si}\approx 10^{5.8}\) M\({}_{\odot}\)). This indicates that only \(\sim\)10% (\(=M_{\rm CGM,C}/M_{\rm tot,C}\)) of the total amount of carbon still resides in the CGM in the cool \(T=10^{4}\) K phase. And a similar CGM mass fraction is found for silicon. Our estimates on the total metal mass fraction in the CGM is consistent with simulations of dwarf galaxies, which generally find that the CGM for dwarfs at \(M_{*}\sim 10^{7-9.5}\) M\({}_{\odot}\) retain \(\sim\)10-40% of the metals generated by stars (Muratov et al., 2017; Christensen et al., 2018). Footnote 3: We do not calculate the stellar yields from dwarf galaxies that have lower metallicities and later star formation histories than the typical galaxy; therefore this represents only a rough calculation. When considering that less than 5-10% of metals are retained in the ISM and stars of dwarf galaxies (Kirby et al., 2011, 2013; McQuinn et al., 2015; Zheng et al., 2019), our CGM mass estimate suggests that dwarf galaxies at present day only retain a total of \(\sim\)15-20% of metals in their stars, ISM, and CGM (cool phase); the remaining \(\sim\)80-85% either has been transported to the IGM or is in a much warmer CGM phase that is yet to be detected. ## 5 Examining a Volume Limited Sample of Dwarfs from the EAGLE Simulation The empirical modeling as described in SS4 provides a straightforward way to parameterize the gas density distributions and ionization states in dwarf galaxies' CGM. 
In this section we use a volume-limited sample of dwarf galaxies from the EAGLE Simulation Project (Schaye et al., 2015) to briefly explore how the inclusion of feedback processes may impact the CGM (and, subsequently, the variation of gas temperature as the galaxy evolves). Modern cosmological hydrodynamical simulations tune their feedback prescriptions to reproduce an array of observed properties, mainly regarding galaxies as opposed to cosmic gas reservoirs (i.e., CGM or IGM). The EAGLE Simulation Project tunes its stellar and supermassive black hole feedback prescriptions to reproduce the observed galactic stellar mass function as well as several other galaxy properties (Crain et al., 2015). At the masses of dwarf galaxies, the main tuned constraint is the slope of the galactic stellar mass function, which has an observed slope of \(dn/dM_{*}\sim M_{*}^{-1.4}\) (Baldry et al., 2012) at \(M_{*}=10^{8-9}\) M\({}_{\odot}\). Given that the slope of the dark matter halo mass function in \(\Lambda\)CDM (\(\Lambda\) Cold Dark Matter) cosmology is proportional to \(M_{\rm h}^{-2}\), this implies a steady decline in the efficiency of gaseous baryons being converted to stars going down the mass scale. In fact, even before the establishment of CDM as the baseline cosmology, observations of declining dwarf galaxy surface brightnesses and metallicities toward lower masses (e.g., Skillman et al., 1989; Tremonti et al., 2004; Lee et al., 2006; Andrews and Martini, 2013; Kirby et al., 2013) motivated theoretical arguments that dwarf galaxies are different from their more massive counterparts and that gas loss via stellar-driven superwinds may be required (e.g., Dekel and Silk, 1986; Sales et al., 2022). Our estimates of the baryonic and metal contents of dwarf galaxies' CGM (§4.4) also support the idea that outflows driven by star formation activities are efficient at transporting gas mass and metal mass out of dwarf galaxies and into the CGM and beyond. In fact, our \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Ion} & Model A & EAGLE & B14 & J17 & T22 \\ & (\(\log{\rm M}_{\odot}\)) & (\(\log{\rm M}_{\odot}\)) & (\(\log{\rm M}_{\odot}\)) & (\(\log{\rm M}_{\odot}\)) & (\(\log{\rm M}_{\odot}\)) \\ \hline H i & 4.7 & (5.1, 6.3, 7.7) & \(-\) & \(-\) & \(-\) \\ Si ii & 1.3 & (1.1, 3.4, 5.2) & \(-\) & \(\lesssim 4\) & \(-\) \\ Si iii & 2.5 & (1.9, 3.6, 4.9) & \(-\) & 4.4 & \(-\) \\ Si iv & 3.1 & (1.7, 3.0, 4.2) & \(-\) & \(\lesssim 4.4\) & \(-\) \\ C ii & 2.6 & (1.9, 4.0, 5.7) & \(-\) & \(-\) & \(-\) \\ C iv & 4.5 & (3.2, 4.2, 5.0) & 5.3 & 4.8 & \(-\) \\ O vi & 5.1 & (4.1, 4.8, 5.5) & \(-\) & 5.8 & 5.7 \\ \hline \end{tabular} Note: Total ion masses in logarithmic values. For Model A (§4.4), the ion masses are estimated for the cool phase at \(T\approx 10^{4}\) K. For the EAGLE simulation (§5), the ion masses are for all phases (cool-warm, Fig. 7), and the three values are for the three halo mass bins, \(M_{\rm 200m}=10^{10.0-10.5}\), \(10^{10.5-11.0}\), \(10^{11.0-11.5}\) M\({}_{\odot}\), respectively. We also include mass estimates from B14 (Bordoloi et al., 2014), J17 (Johnson et al., 2017), and T22 (Tchernyshyov et al., 2022) when available; their masses have been rescaled to our adopted median \(\langle R_{\rm 200m}\rangle\) value (see §6). All masses are based on a cylindrical geometry to be consistent with observational values derived from ion column densities projected along given lines of sight and integrated over a galaxy’s surface area (i.e., \(M\propto N_{\rm ion}R^{2}\)). See Fig.
10 for a comparison among these values. \end{table} Table 5: CGM Ion Masses within \(\langle R_{\rm 200m}\rangle\) mass estimates find that the cool phase of the CGM (\(T\approx 10^{4}\) K) surrounding dwarf galaxies only contains \(\sim 10\%\) of the metals that were ever produced. To aid in the physical interpretation of these CGM reservoirs, we use an EAGLE high-resolution (12.5 Mpc)\({}^{3}\) simulation volume that follows non-equilibrium ionization and cooling in diffuse gas. The simulation, introduced in SS2.2 of Oppenheimer et al. (2018), follows 376\({}^{3}\) fluid and dark matter particles with a gas mass resolution of \(2.2\times 10^{5}\) M\({}_{\odot}\). The non-equilibrium module (Richings et al., 2014) tracks 136 ionization states across 11 elements, but is found not to deviate significantly from equilibrium assumptions when assuming a constant UV background (Oppenheimer et al., 2018). As an example, for silicon, the simulation self-consistently follows all 15 ion states, despite us observing only 3 states available in UV (Si ii, Si iii, & Si iv). Since most of our observed dwarf galaxies are in relatively isolated environments without massive nearby halos (see SS2), we select from the simulation dwarf galaxies that are centrals of their halos, and that have no contaminating galaxies within impact parameters of 150 kpc with \(M_{*}\) of greater than 10% of the targeted galaxy's. The selected galaxy halos are projected along three Cartesian axes (\(x\), \(y\), \(z\)), discarding any axis with contaminating galaxies in their projected CGM. This selection is also effective at discarding dwarf galaxies in denser environments outside the virial radii of more massive galaxies. A total of 207 simulated galaxies and 445 projections are included in the following analysis. Dividing the sample into the same three halo mass bins as shown in Figure 4, we examine 126 halos with \(M_{\rm 200m}=10^{10.0-10.5}\) M\({}_{\odot}\), 53 halos with \(10^{10.5-11.0}\) M\({}_{\odot}\), and 28 halos with \(10^{11.0-11.5}\) M\({}_{\odot}\). We first examine the phase diagrams (\(n_{\rm H}\) vs. \(T\)) of gas particles and metals in the CGM of the selected EAGLE dwarf galaxies. While the \(n_{\rm H}\) vs. \(T\) distributions of individual halos vary from low to high halo masses, collectively we find that over the mass range of dwarf galaxies that we investigate, the CGM gas and metals predominately reside at a temperature of \(T\sim 10^{4-5}\) K, close to the virial temperature at the corresponding halo mass. In Figure 7,we show the averaged phase diagrams of gas (left) and metals (right) for a sample of 53 EAGLE dwarf galaxies with \(M_{\rm 200m}=10^{10.5-11.0}\) M\({}_{\odot}\), encompassing the median halo mass (\(\langle M_{\rm 200m}\rangle=10^{10.9}\) M\({}_{\odot}\)) of our observational sample (i.e., the Full Sample). Overall both the gas and metals in the CGM show bimodal distributions with a warm "diffuse" phase at \(T\sim 10^{4.5-4.8}\) K and gas densities peaking at \(n_{\rm H}\sim 10^{-5}\) cm\({}^{-3}\), and a cool "condensed" phase with \(T\sim 10^{4.0}\) K and \(n_{\rm H}\sim 10^{-3}-10^{-1}\) cm\({}^{-3}\). The CGM mass in the warm diffuse phase is more than 7 times higher than that in the cool condensed phase; however the metal mass is more Figure 7: Phase diagrams (\(n_{\rm H}\) vs. \(T\)) for gas (left panel) and metals (right panel) in the CGM of dwarf galaxies from the EAGLE simulation (see §5). 
The values in the phase diagrams are averaged over the CGM of 53 dwarf galaxies with halo masses of \(M_{\rm 200m}=10^{10.5-11.0}\) M\({}_{\odot}\), encompassing the median halo mass of the dwarf galaxies in our observational sample. The color bar in the left panel indicates the masses of gas particles, while the color bar in the right panel shows the corresponding metal masses. The inset blue histograms on the x and y axes show the marginalized mass-weighted density \(n_{\rm H}\) and temperature distributions, respectively. Overall, we find bimodal distributions in both the gas and metals in dwarf galaxies’ CGM. equitably distributed between the two phases. When compared to the empirical model (§4), the cool condensed phase in EAGLE is similar in density to Model B, while the warm diffuse phase is more similar to that of Model A with \(n_{\rm H}\sim 10^{-5}\) cm\({}^{-3}\). However, when we consider the gas metallicity, the cool condensed phase is found to have \(Z\sim 0.2\ Z_{\odot}\), while the warm diffuse phase is more metal-poor with \(Z\sim 0.04\ Z_{\odot}\). This difference highlights the fact that while the simulation produces a multiphase CGM, the empirical model presented here is constructed for a cool phase with \(T\approx 10^{4}\) K. In a follow-up study, we will extend the empirical model to describe the multiphase CGM of dwarf galaxies. We then examine the projected column densities of H i and low-to-intermediate ions as a function of impact parameter for the selected dwarf galaxies over the same three halo mass bins in Figure 8. For each halo mass bin we show as solid lines the 50th percentile \(\log N_{\rm ion}\) value at a given \(b\) from all available dwarf halos, while the shaded regions encompass the 16th-84th percentile ranges. Here we focus on the column density profiles of H i, C ii, and C iv, but note that Si ii, Si iii, and Si iv exhibit similar radial profiles. We also note that while we are selecting based on halo masses, the mean stellar masses in the three halo mass bins are \(10^{6.8}\), \(10^{8.0}\), and \(10^{8.9}\) M\({}_{\odot}\), which is in agreement with the SMHM relation from Munshi et al. (2021) that we adopt in §2.3. Overall, the left panel in Figure 8 shows that the EAGLE simulation reproduces well both the profile shape and the magnitudes of the H i column densities. Although in observations we do not find obvious \(\log N_{\rm HI}\) variations among the 3 halo mass bins at a given \(b\), the simulations show that lower mass halos tend to have lower \(\log N_{\rm HI}\), especially at small impact parameters. This is not surprising given the smaller baryonic mass reservoirs available to lower mass halos. We also note that even though the simulation matches the overall distribution of the observed H i, it appears unable to fully reproduce the scatter in \(\log N_{\rm HI}\). Specifically, when considering the lower and upper limits, the \(\log N_{\rm HI}\) dispersion in the high (green) mass bin is about a factor of 2 higher than the simulated values at \(b/R_{200\rm m}\gtrsim 0.5\). The lack of mass dependence in the observed data points and the larger scatters are likely due to the small sample size, while in the simulation we combine column density profiles over a large sample of dwarf galaxy halos. In the middle and right panels of Figure 8, we further compare the simulated C ii and C iv column density profiles to their observed counterparts, respectively.
For both C ii and C iv, the simulation slightly underestimates the ion column densities inside \(\sim 0.5R_{200\rm m}\). At \(b\gtrsim 0.5R_{200\rm m}\), the simulation predicts column density values either consistent with or lower than the upper limits. Similar trends can be seen when examining Si ii, Si iii, and Si iv. Overall, the simulation agrees with the observations (as well as the empirical model in §4) that except in the innermost impact parameter (\(\lesssim 0.5R_{200\rm m}\)) or in the halos of higher mass dwarf galaxies (e.g., \(M_{200\rm m}=10^{11-11.5}\) M\({}_{\odot}\)), there exist almost no detectable metal absorbers in the CGM of dwarf galaxies over the mass range we probe. Another insight from the simulation is the total gas mass that we can directly probe via UV absorption lines at \(z\sim 0-0.3\), which we show in Figure 9. Figure 8: Comparison between observed ion column density profiles and predicted curves from the EAGLE simulation. Data symbols are the same as in Figure 4. The solid lines in each panel indicate the median values while the patches enclose the 16th-84th percentile distribution. Both observations and simulations are color-coded into 3 halo mass bins with \(M_{200\rm m}=10^{10-10.5}\), \(10^{10.5-11}\), \(10^{11-11.5}\) M\({}_{\odot}\). Overall, the EAGLE simulation predicts H i column densities consistent with the observed values, while for other low-to-intermediate ions (C ii, C iv, Si ii, Si iii, Si iv) the simulation predicts almost no detectable metal ion absorbers in the CGM of dwarf galaxies except in the innermost impact parameter. See §5 for more detail. For each element, we calculate the total ion mass in the CGM available through a set of common UV transition lines, and compare that with the total gas mass of the element. For example, for hydrogen, the observable gas mass in UV (\(M_{\rm H,UV}\approx 10^{6.3}\) M\({}_{\odot}\)) is mainly detected through H i Ly\(\alpha\) absorption when the line is redshifted into the far UV range at appropriate redshifts. This indicates an ionization correction of \(f_{\rm HI}\sim M_{\rm H,UV}/M_{\rm H}\sim 6\times 10^{-4}\), similar to the hydrogen ionization fraction we derive from Model A (\(\sim 3\times 10^{-4}\); §4.4). Furthermore, Figure 9 implies the existence of a significant metal reservoir that is mostly missed by the ions we survey in UV. For example, for the EAGLE dwarf galaxies with \(M_{\rm 200m}=10^{10.5-11.0}\) M\({}_{\odot}\), the total mass probed by UV-observable silicon ions (i.e., Si ii, Si iii, and Si iv combined) accounts for \(\sim\)8% of the total silicon mass, with the remaining mass in higher ionization states. This indicates that the EAGLE galaxies' CGM has a warmer and more diffuse component containing higher ionization silicon species not observed in the UV. For more massive, \(\sim 10^{12}\) M\({}_{\odot}\) halos, Oppenheimer et al. (2018, their fig. 8) find that while the low silicon ions dominate the inner CGM silicon content, high ions that are not detected in the UV overwhelmingly dominate the volume beyond 50 kpc. Similarly, carbon has fewer ions, allowing C ii and C iv to trace \(\sim\)6% of this element, but a significant gap remains, with C iii being unobservable in our survey. The single ion O vi available in the far UV traces oxygen with \(\sim\)6% of the total mass arising from the primarily photo-ionized O vi in the dwarf regime as discussed by Oppenheimer et al. (2016).
From the simulation perspective, the higher detection rates of O vi absorbers as observed by J17 and Tchernyshyov et al. (2022) are due to a smoother distribution of this ion arising from a more diffuse phase, combined with the higher abundance of oxygen relative to other metals. In contrast to the empirical Models A and B, ions in the CGM of the EAGLE dwarf galaxies arise from multiple phases (see Figure 7). C iv originates from the warm diffuse gas with densities primarily below \(n_{\rm H}=10^{-4}\) cm\({}^{-3}\), while C ii comes from the cool condensed gas with densities above \(n_{\rm H}=10^{-3}\) cm\({}^{-3}\), as do Si ii and H i. The simulated C iv appears to reflect the lower density characteristics of Model A, but H i, C ii, and Si ii would arise from the separate condensed version of Model B that may have higher density in the inner CGM with a steeper radial decline for the volume filling factor. We will explore further the comparisons between empirical models and simulations in future work.

## 6 Comparison of Dwarf CGM Mass Among Various Sources

In the previous sections, we construct an empirical model (§4) and examine a suite of simulated halos from the EAGLE simulation (§5) to understand the physical properties of gas and metals in the CGM of dwarf galaxies. Both sections provide CGM gas and metal mass estimates. In the following, we compare these mass estimates with observational constraints from the literature to provide a comprehensive picture of the baryon and metal budgets of dwarf galaxies. In Figure 10, we compare the total ion masses within \(\langle R_{\rm 200m}\rangle\) between this work and previous observational estimates by B14, J17, and Tchernyshyov et al. (2022). While in this work we only study H i and low-to-intermediate ions (i.e., C ii, C iv, Si ii, Si iii, and Si iv), for completeness we also include the O vi mass estimates for dwarf galaxies of similar masses from J17 and Tchernyshyov et al. (2022). For their low-mass sample of \(M_{*}=10^{8-9.5}\) M\({}_{\odot}\), B14 find a total carbon mass of \(\gtrsim 0.4\times 10^{6}\) M\({}_{\odot}\) within an impact parameter of 110 kpc, assuming an ionization fraction of \(f_{\rm CIV}=0.3\) for C iv (see their table 2). To make a consistent comparison, we convert their carbon mass back to C iv mass as \(M_{\rm CIV}=M_{\rm C}\times f_{\rm CIV}\times(\langle R_{\rm 200m}\rangle/110~{\rm kpc})^{2}\sim 10^{5.3}\) M\({}_{\odot}\), where the scaling factor \((\langle R_{\rm 200m}\rangle/110~{\rm kpc})^{2}\) is used to scale the mass value to our median virial radius. When compared to the estimated C iv mass of \(\approx 10^{4.5}\) M\({}_{\odot}\) from Model A (or \(10^{4.2}\) from EAGLE's middle halo mass bin), we find that B14's value is roughly a factor of 6 (12 for EAGLE) higher. This is likely because B14's estimate Figure 9: Element mass observable in UV at \(z\sim 0-0.3\) (hatched region) compared to the element total mass integrated from 10 kpc to \(R_{\rm 200m}\), estimated for EAGLE dwarf galaxies with \(M_{\rm 200m}=10^{10.5-11.0}\) M\({}_{\odot}\). For hydrogen, “observable in UV” is defined as gas in the form of H i (Ly\(\alpha\)). For carbon, it means C ii and C iv; for silicon, it means Si ii, Si iii, and Si iv; and for oxygen, it means O vi. We find that at the median halo mass of our sample, the ions that are observable in UV only trace a small fraction of the corresponding metal mass in the dwarf galaxies’ CGM. is based on their detected column densities, which are skewed toward higher mass estimates.
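For reference, the rescaling just described amounts to the following short calculation, restated here using only the values quoted in the text (the factor-of-6 comparison to Model A follows directly):

```python
import numpy as np

M_C   = 0.4e6      # Msun: B14's carbon mass within 110 kpc (quoted lower bound)
f_CIV = 0.3        # C IV ionization fraction assumed by B14
R_med = 136.0      # kpc: median R200m of the Full Sample

M_CIV = M_C * f_CIV * (R_med / 110.0) ** 2
print(np.log10(M_CIV))     # ~5.3, i.e. M_CIV ~ 10^5.3 Msun
print(M_CIV / 10**4.5)     # ~6x the C IV mass estimated from Model A
```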
J17's ion mass values are based on their absorber detections from their galaxies D1 and D2, which indicate a total of \(\approx\)10\({}^{4}\) M\({}_{\odot}\) in Si iii, 3\(\times\)10\({}^{4}\) M\({}_{\odot}\) for C iv, 3\(\times\)10\({}^{5}\) M\({}_{\odot}\) for O vi, \(<\)10\({}^{4}\) M\({}_{\odot}\) for Si iv, and \(<\)5\(\times\)10\({}^{3}\) M\({}_{\odot}\) for Si ii within a virial radius of 90 kpc. After scaling their ion mass values from the assumed virial radius of 90 kpc to our median \(\langle R_{\rm 200m}\rangle\), we find that their estimated ion masses are generally higher than those from Model A and EAGLE. J17's higher mass estimates are likely because they only consider galaxies with detected absorbers. For completeness, here we also include O vi mass measurements from Tchernyshyov et al. (2022) that examines the relation between CGM O vi column density and host galaxies' stellar mass, star formation rate, as well as impact parameters of the absorbers from the galaxies. We adopt O vi halo mass from their lowest mass bin (\(M_{*}=10^{7.8-8.5}\) M\({}_{\odot}\)) that covers our median stellar mass range. After scaling the mass from \(R_{\rm 200c}\) to \(R_{\rm 200m}\), we find a total O vi mass of \(\sim 10^{5.7}\) M\({}_{\odot}\). Figure 10 shows that the mass estimates of high ions (i.e., C iv, O vi) from previous observations, the empirical Model A, and the EAGLE simulation are generally in agreement with each other within a factor of a few. For weakly ionized species such as C ii, Si ii, Si iii, and Si iv, Model A and the EAGLE simulation's low and median halo mass bins both predict much lower values with ion column densities too low to be detected observationally (Figures 6 and 8). It is worth noting that the EAGLE simulation predicts a range of CGM ion masses when different halo mass bins are considered. This indicates that the ion masses (and the corresponding ionization states) in the CGM of dwarf galaxies may vary with their halo masses. However, we note that the comparison in Figure 10 should be interpreted with caution for the following reasons. First, the CGM mass values from B14 and J17 are likely to be overestimated since only detected absorbers are used in their calculations. Then, we reflect on the assumptions made on the CGM of dwarf galaxies when constructing the empirical model in SS4. We note that the model focuses on the cool CGM, and does not address gas in warmer phases. Meanwhile, the EAGLE simulation shows that the warm gas phase dominates the CGM baryonic mass (Figure 7), and has a significant contribution to the column densities of ions that are highly ionized (e.g., O vi). However, the relatively low resolution of the EAGLE simulation limits its interpretive power because it has been realized in recent years that a higher resolution in the CGM leads to the production of more gas clouds in cool phases and small sizes (van de Voort et al., 2019; Peeples et al., 2019; Hummels et al., 2019; Suresh et al., 2019). In a follow-up paper, we will further develop the empirical model to consider dwarf galaxies' CGM with temperature profiles regulated by the extragalactic UV background as well as feedback from star-formation activities in the galaxies. ## 7 Conclusion We investigate the baryonic and metal content in the CGM of a sample of 49 low-mass, isolated, and gas-rich dwarf galaxies within 8 Mpc of the Sun and at \(z<0.3\) using _HST_/COS. 
Our sample includes 60 dwarf-QSO pairs that cover a unique parameter space of \(b/R_{\rm 200m}=0.05-1.0\) and \(M_{*}=10^{6.5-9.5}\) M\({}_{\odot}\) that has rarely been explored in previous work (Figure 1, Table 1, & SS2). The median properties of the dwarf galaxies in our Full Sample are \(\langle M_{*}\rangle=10^{8.3}\) M\({}_{\odot}\), \(\langle M_{\rm 200m}\rangle=10^{10.9}\) M\({}_{\odot}\), \(\langle R_{\rm 200m}\rangle=136\) kpc, \(\langle M_{\rm HI}\rangle=10^{7.4}\) M\({}_{\odot}\), and \(\langle\rm SFR\rangle=10^{-1.96}\) M\({}_{\odot}\) yr\({}^{-1}\). The main findings of this work are summarized as follows. Figure 10: CGM ion mass vs. ionization potential (with logarithmic scaling) from Model A (red circle; §4.4), the EAGLE simulation (teal hexagon; §5), B14’s COS-Dwarfs (gray square), J17 (gray diamond), and Tchernyshyov et al. (2022, gray cross). The corresponding mass values can be found in Table 5. For the EAGLE simulation values, the lower and upper bounds show the ion masses for the low (\(M_{\rm 200m}=10^{10.0-10.5}\) M\({}_{\odot}\)) and high (\(M_{\rm 200m}=10^{11.0-11.5}\) M\({}_{\odot}\)) halo mass bins, respectively; while the hexagon symbols indicate the values for the middle halo mass bin (\(M_{\rm 200m}=10^{10.5-11.0}\) M\({}_{\odot}\)). The range of ion masses from EAGLE suggest that the CGM gas and metal content in dwarf galaxies may vary with halo masses, which cannot be inferred directly from observations or empirical models given the limitation in sample sizes and absorber detection rates. See more detail in §6. At a sensitivity of EW\(\geq\)100 mA at \(3\sigma\), we find ubiquitous detections of H i (via Ly\(\alpha\) 1215A line) in the CGM of dwarf galaxies with a detection rate of 89% (24/27) (see SS3.1). The H i gas is typically detected at a column density of \(\log N_{\rm HI}=13-16\). On the other hand, metal ions are generally detected at much lower rates, with 18% (3/17) in C ii, 21% (9/42) in C iv, 5% (2/44) in Si ii, 18% (7/39) in Si iii, and 10 % (4/41) in Si iv. All the metal detections occur within \(\sim 0.5R_{\rm 200m}\), largely consistent with existing literature values. We note that the low ion detection rates occur despite the high-quality QSO sightlines used in this work, some of which reaching SNR \(\sim 20-100\) (see Table 2). This suggests that for dwarf galaxies at the low mass range probed in this study (\(M_{*}=10^{6.5-9.5}\) M\({}_{\odot}\)), the metals in the galaxies' CGM, especially those at \(b/R_{\rm 200m}\gtrsim 0.5\), may be too diffuse to be detected by _HST_/COS. We construct an empirical model for the cool phase of the dwarf galaxies' CGM (T\(\approx\)10\({}^{4}\) K; SS4), parametrizing the gas density and volume filling fraction as power-law functions of the radius, and assuming photoionization equilibrium. For the median halo mass \(\langle M_{\rm 200m}\rangle=10^{10.9}\) M\({}_{\odot}\) in our sample, we present two parameter combinations (Models A and B) that match the observed H i column density profile with different CGM masses and gas densities (Figures 5 and 6). Assuming a metallicity of \(Z^{\prime}=0.3\ Z_{\odot}\), Model A is more consistent with the measured metal columns, and has a cool gas mass of \(M_{\rm CGM,cool}\sim 10^{8.4}\) M\({}_{\odot}\), which accounts for \(\sim 2\%\) of the baryon budget of the median halo mass. When considering metals in the cool CGM, we find a total of \(\sim 10^{5.3}\) M\({}_{\odot}\) in carbon and \(\sim 10^{4.8}\) M\({}_{\odot}\) in silicon. 
This corresponds to \(\sim\)10% of the metals ever been produced throughout the dwarf galaxy's star formation history. We further examine a volume-limited sample of dwarf galaxies in the EAGLE simulation to understand how dwarf galaxies' CGM may be impacted when considering feedback processes (SS5). In general, we find that the selected EAGLE dwarf galaxies are able to reproduce the observed H i and ion column density profiles. When considering the mass distribution in different phases, we find the EAGLE dwarf galaxies' CGM at preferably warmer temperature (\(T\sim 10^{4.5-4.8}\) K; Figure 7) and that only \(\sim 6-8\%\) of the element masses (e.g., silicon, carbon, and oxygen) can be observed in UV at \(z\sim 0-0.3\). The remaining element masses are mainly in high ionization states that are often not available in UV (Figure 9). However, this conclusion is tempered by the low resolution of the EAGLE simulation, which might artificially suppress the production of cool gas. Lastly, we compare the ion mass estimates from the empirical Model A and the EAGLE simulation to observational values from the literature in Figure 10. In general, we find good agreement in masses for high ions C iv and O vi (within a factor of a few). On the other hand, Model A and the EAGLE simulation predict much lower mass values for Si ii, Si iii, Si iv, and C ii, which are too diffuse to be easily detected with _HST_/COS. Much work remains to be done to fully understand the CGM of dwarf galaxies from both observational and theoretical perspectives. Overall, our analyses presented in this paper suggest that: (1) dwarf galaxies' CGM only harbors \(\sim 10\%\) of the metals in the cool \(T\approx 10^{4}\) K phase, with the rest either in warmer phases yet to be detected or have been lost to the IGM; (2) at the current sensitivity of _HST_/COS which is the prime instrument for UV absorption line studies at \(z\sim 0\), only the inner CGM of dwarf galaxies can be well probed; (3) a larger dwarf galaxy sample size, especially at \(M_{*}<10^{8}\) M\({}_{\odot}\), is needed to better illustrate how CGM properties scale with host galaxy properties; and (4) more sophisticated empirical models as well as dwarf galaxy simulations with higher resolution are necessary to better understand the physical processes governing the CGM of dwarf galaxies. Y.Z. thanks Dan Weisz for his advice and mentorship, and thanks her office mate, Alessandro Savino, and many staff at UC Berkeley and Miller Institute for their support during the completion of this manuscript. Y.Z. thanks Zhijie Qu for providing updated measurements on their QSO sightlines adopted in this work, thanks Rongmon Bordoloi for sharing H i Ly\(\alpha\) measurements, and thanks Kirill Tchernyshyov and Nathan Sandford for discussions on treatment of censored data using PyMC3. This research has made use of NASA's Astrophysics Data System. This work is based on observations made with the NASA/ESA Hubble Space Telescope (program ID: #16301, #15156, and #15227). Support for HST-GO-16301, HST-GO-15156, and HST-GO-15227 was provided by NASA through a grant from the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. E.N.K. acknowledges support from the National Science Foundation under Grant No. AST-2233781. Y.F. acknowledges support from NASA award 19- ATP19-0023. J.K.W. acknowledges support from NSF-CAREER 2044303. 
_Facilities:_ Hubble Space Telescope/Cosmic Origins Spectrograph, Mikulski Archive for Space Telescopes (MAST)

_Software:_ Astropy (The Astropy Collaboration et al., 2018), Numpy (Harris et al., 2020), Matplotlib (Hunter, 2007), CLOUDY (Ferland et al., 2017), IDL, PyMC3 (Salvatier et al., 2016)

Data Availability: _HST_/COS spectra can be found on HSLA and in MAST: 10.17909/ve0k-ps78. Measurements and galaxy properties used in this work (as well as the broader literature search) and relevant codes can be found at: yzhenggit/zheng_dwarfcgm_survey.
2301.04451
Heterogeneous Tri-stream Clustering Network
Contrastive deep clustering has recently gained significant attention with its ability of joint contrastive learning and clustering via deep neural networks. Despite the rapid progress, previous works mostly require both positive and negative sample pairs for contrastive clustering, which rely on a relative large batch-size. Moreover, they typically adopt a two-stream architecture with two augmented views, which overlook the possibility and potential benefits of multi-stream architectures (especially with heterogeneous or hybrid networks). In light of this, this paper presents a new end-to-end deep clustering approach termed Heterogeneous Tri-stream Clustering Network (HTCN). The tri-stream architecture in HTCN consists of three main components, including two weight-sharing online networks and a target network, where the parameters of the target network are the exponential moving average of that of the online networks. Notably, the two online networks are trained by simultaneously (i) predicting the instance representations of the target network and (ii) enforcing the consistency between the cluster representations of the target network and that of the two online networks. Experimental results on four challenging image datasets demonstrate the superiority of HTCN over the state-of-the-art deep clustering approaches. The code is available at https://github.com/dengxiaozhi/HTCN.
Xiaozhi Deng, Dong Huang, Chang-Dong Wang
2023-01-11T13:15:54Z
http://arxiv.org/abs/2301.04451v1
# Heterogeneous Tri-stream Clustering Network ###### Abstract Contrastive deep clustering has recently gained significant attention with its ability of joint contrastive learning and clustering via deep neural networks. Despite the rapid progress, previous works mostly require both positive and negative sample pairs for contrastive clustering, which rely on a relative large batch-size. Moreover, they typically adopt a two-stream architecture with two augmented views, which overlook the possibility and potential benefits of multi-stream architectures (especially with heterogeneous or hybrid networks). In light of this, this paper presents a new end-to-end deep clustering approach termed Heterogeneous Tri-stream Clustering Network (HTCN). The tri-stream architecture in HTCN consists of three main components, including two weight-sharing online networks and a target network, where the parameters of the target network are the exponential moving average of that of the online networks. Notably, the two online networks are trained by simultaneously (i) predicting the instance representations of the target network and (ii) enforcing the consistency between the cluster representations of the target network and that of the two online networks. Experimental results on four challenging image datasets demonstrate the superiority of HTCN over the state-of-the-art deep clustering approaches. The code is available at [https://github.com/dengxiaozhi/HTCN](https://github.com/dengxiaozhi/HTCN). Data clustering, Image clustering, Deep clustering, Deep neural network, Contrastive learning ## 1 Introduction Data clustering is the process of grouping data samples into multiple clusters in an unsupervised manner, which is a fundamental task in a variety of applications [1; 2; 3]. The traditional clustering algorithms typically focus on some low-level information and lack the representation learning ability, which may lead to sub-optimal performance when dealing with some complex high-dimensional data like images. In recent years, the deep learning has gained tremendous progress [4; 5; 6], which has also been exploited for tackling the clustering task, giving rise to the rapid development of the deep clustering algorithms [7; 8; 9; 10; 11]. For example, Xie et al. [7] presented a deep clustering method called Deep Embedded Clustering (DEC), which simultaneously learns representations and cluster assignments with an objective loss based on Kullback-Leibler (KL) divergence. Guo et al. [8] extended DEC by incorporating the reconstruction loss (via autoencoder) to preserve local structures. Ji et al. [10] sought to learn invariant information of data by maximizing the mutual information between paired samples. More recently, the contrastive learning has emerged as a promising technique for exploiting sample-wise (or augmentation-wise) contrastiveness to improve the deep clustering performance. Van Gansbeke et al. [12] presented the Semantic Clustering by Adopting Nearest neighbors (SCAN) method, which first adopts contrastive learning to learn discriminant features and then performs semantic clustering with the \(K\)-nearest neighbors exploited. Dang et al. [13] matched local-level and global-level nearest neighbors to further improve clustering performance. Li et al. [14] presented the Contrastive Clustering (CC) method to perform feature learning and clustering with simultaneous instance-level and cluster-level contrastive learning. 
Despite significant success, these contrastive deep clustering methods [12; 13; 14] are mostly faced with two limitations. On the one hand, they typically requires both positive sample pairs and negative sample pairs during their contrastive learning process, which rely on a relatively large batch-size (for sufficient negative pairs) and may bring in a heavier computational burden. On the other hand, these prior works generally adopt a two-stream architecture (with two weight-sharing augmented views), which neglect the possibility of going beyond the two-stream architecture to utilize three or even more streams of networks (with heterogeneous or hybrid structures). Recently Grill et al. [15] presented the Bootstrap Your Own Latent (BYOL) method, which adopts an asymmetric two-stream architecture (with an online network and a target network) and conducts the contrastive learning without negative pairs, where the online network is trained by predicting the feature representations of the target network. Though the requirement for negative sample pairs is remedied, BYOL still complies with the two-stream architecture and also lacks the ability of directly learning the clustering structure. It remains a challenging problem how to incorporate contrastive learning into multiple streams of heterogeneous networks while alleviating the dependence on negative sample pairs for strengthened deep clustering performance. In light of this, this paper presents a novel deep clustering approach termed Heterogeneous Tri-stream Clustering Network (HTCN), which leverages three streams of heterogeneous networks for simultaneous cluster-level and instance-level contrastive learning without requiring negative sample pairs (as illustrated in Fig. 1). Inspired by BYOL [15], we design a novel tri-stream architecture with three augmented views, corresponding to two online networks and a target network, respectively. Note that the online network and the target network are heterogeneous, which differ from each other in the network structure and the updating mechanism. The two online networks share the same parameters, while the parameters of the target network are the exponential moving average of that of the online networks. Here, the exponential moving average is a type of moving average that places a greater weight and significance on the most recent data samples [15]. Each online network is associated with an instance predictor and a cluster predictor, which produce the instance-level representations and the cluster-level representations, respectively. Different from the online networks, the target network utilizes a cluster predictor to generate the cluster-level representations while producing the instance-level representations by the projector directly. The incorporation of an instance predictor in the online networks is meant to prevent the potential collapse where the networks produce the same feature representations for most samples. Then we train the two online networks by (i) predicting the target network's representation of the same image via the mean squared error (MSE) loss (for the instance-level contrastive learning) and (ii) enforcing the consistency between the predicted cluster distributions of the two online networks and that of the target network via the information noise contrastive estimation (InfoNCE) [16] loss (for the cluster-level contrastive learning). 
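The exponential moving average update mentioned above can be written in a few lines. A minimal PyTorch-style sketch follows; the momentum value 0.99 is illustrative and not taken from the paper:

```python
import torch

@torch.no_grad()
def ema_update(target_net, online_net, alpha=0.99):
    # target parameters <- alpha * target + (1 - alpha) * online
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(alpha).add_((1.0 - alpha) * o)
```

Because the target network receives no gradients, this update is the only mechanism by which its weights change during training.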
Experiments conducted on four image datasets demonstrate the superiority of our approach over the state-of-the-art deep clustering approaches. For clarity, the contributions of this work are summarized below. * A heterogeneous tri-stream architecture is designed, where two online networks and a target network are jointly leveraged for instance-level and cluster-level contrastive learning. * A novel deep clustering approach termed HTCN is proposed, which utilizes three augmented views for contrastive learning without requiring negative sample pairs. * Experimental results on four image datasets confirm the advantegeous clustering performance of our HTCN approach over the state-of-the-art deep clustering approaches. The rest of the paper is organized as follows. The related works on deep clustering and self-supervised learning are reviewed in Section 2. The proposed HTCN framework is described in Section 3. The experiments are reported in Section 4. Finally, Section 5 concludes the paper. ## 2 Related Work In this section, we will introduce the related works on deep clustering and self-supervised learning. ### Deep Clustering Traditional clustering methods such as \(K\)-means [17] and spectral clustering (SC) [18] have achieved promising results in handling low-dimensional data, but they may result in sub-optimal performance when faced with high-dimensional data (e.g., images and videos) due to the lack of the representation learning ability. To address this, the deep learning based clustering methods, referred to as the deep clustering methods, have recently achieved significant success, which leverage the power of feature learning of deep neural networks for the clustering task [7; 8; 9; 10; 12; 13; 14; 19; 20; 21; 22; 23; 24; 25; 26; 27]. Previous deep clustering methods can be divided into two main categories, namely, the one-stage methods and the two-stage methods. The goal of the one-stage approach is to perform feature representation learning and clustering assignment simultaneously. Xie et al. [7] proposed a Deep Embedding Clustering (DEC) method, which jointly optimizes feature learning and clustering with a KL-divergence loss. Caron et al. [21] iteratively clustered the learned features with \(K\)-means and regarded the cluster assignments as supervisory signals to optimize the network. Li et al.[14] presented a Contrastive Clustering (CC) method that performs contrastive learning at instance-level and cluster-level for deep clustering. Besides the one-stage methods, some researchers have also made considerable efforts to the two-stage clustering methods. Van Gansbeke et al.[12] proposed a two-stage clustering method called Semantic Clustering by Adopting Nearest neighbors (SCAN), which first learns the semantic features via contrastive learning and then utilizes the features for clustering in the next stage. To extend SCAN, Dang et al. [13] designed a Nearest Neighbor Matching (NNM) method, which selects both local and global nearest neighbors to optimize the network, where the neighbors are forced to be close to each other. ### Self-supervised Learning Self-supervised learning has recently emerged as a powerful technique with the ability to learn representation from raw data without human supervision, in which the contrastive learning methods [28; 29; 30; 31] have been a representative and promising category. 
The goal of contrastive learning is to minimize the distance between positive sample pairs while maximizing the distance between negative sample pairs in a self-supervised manner, where positive pairs and negative pairs are defined through data augmentations. In particular, some researchers maintained a memory bank [28; 29] that contains large amounts of representations of negative samples to achieve high performance. However, these methods that utilize memory banks to store and update representations may be computationally expensive. To address the problems with memory banks, He et al. [30] proposed a Momentum Contrast (MoCo) method that trains an encoder by the momentum update mechanism maintaining a long queue of negative examples. Following the MoCo method, Chen et al. [31] proposed a Simple framework for Contrastive LeaRning (SimCLR) method which carefully designs the strategy of data augmentation and a non-linear transformation head. In addition, the clustering based methods [32; 33] adopt a clustering approach to group similar features together, which address the issue that every sample is considered as a discrete class in previous works. More recently, some self-supervised learning methods that only rely on positive pairs and directly predict the output of one augmented view from another augmented view [15; 34; 35] have been developed, among which a representative method is the BYOL method [15]. The BYOL method [15] adopts an asymmetric two-stream architecture, which, however, lacks the ability to learn the clustering structure directly and also overlooks the opportunities and potential benefits of going beyond the two-stream architecture to three or more streams of networks (even with heterogeneous or hybrid structures) to further enhance the contrastive learning and clustering performance. ## 3 Proposed Framework ### Framework Overview This paper presents a heterogeneous tri-stream network architecture termed HTCN for contrastive deep clustering (as illustrated in Fig. 1), which goes beyond the traditional two-stream architecture to explore the constrastive network in a multi-stream manner. Also, HTCN doesn't require negative sample pairs, which makes it more resilient to different batch-size. Specifically, HTCN consists of three main components, including two online networks and a target network. The online networks and the target network are respectively parameterized by different sets of weights, where the parameters of the target network are an exponential moving average of that of the online networks. Given a batch of \(N\) images, we perform three types of augmentations on each image, denoted as \(x_{i}\) with \(i\in[1,N]\), to generate \(3\cdot N\) augmented (or distorted) images, denoted as \(\{x_{1}^{a},\ldots,x_{N}^{b},x_{1}^{b},\ldots,x_{N}^{b},x_{1}^{c},\ldots,x_{N }^{c}\}\). The backbones (i.e., \(f_{\theta}\) and \(f_{\xi}\)) and projectors (i.e., \(g_{\theta}\) and \(g_{\xi}\)) are adopted to extract features from the distorted images via \(z_{i}^{a}=g_{\theta}(f_{\theta}(x_{i}^{a}))\), \(z_{i}^{b}=g_{\xi}(f_{\xi}(x_{i}^{b}))\) and \(z_{i}^{c}=g_{\theta}(f_{\theta}(x_{i}^{c}))\). Then the instance predictors transform \(z_{i}^{a}\) and \(z_{i}^{c}\) to \(y_{i}^{a}\) and \(y_{i}^{c}\), respectively, while the cluster predictors transform \(z_{i}^{a}\), \(z_{i}^{b}\) and \(z_{i}^{c}\) to \(\tilde{q}_{i}^{a}\), \(\tilde{q}_{i}^{b}\) and \(\tilde{q}_{i}^{c}\), respectively. 
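To make the three streams concrete, the following is a minimal PyTorch-style sketch of one online network and of the three forward passes just described. The ResNet-34 backbone and the 256-dimensional MLP heads follow the implementation details given later in the paper, while the hidden layer sizes and helper names are illustrative assumptions; for simplicity the target is a deep copy of the online network whose instance-predictor output is simply ignored, since the paper's target network has no instance predictor.

```python
import copy
import torch.nn as nn
import torchvision

class OnlineNetwork(nn.Module):
    """Backbone f, projector g, instance predictor and cluster predictor (cf. Fig. 1)."""
    def __init__(self, dim=256, n_clusters=10):
        super().__init__()
        resnet = torchvision.models.resnet34(weights=None)
        resnet.fc = nn.Identity()                      # 512-d features from ResNet-34
        self.f = resnet
        self.g = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, dim))
        self.h_inst = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, dim))
        self.h_clu = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(),
                                   nn.Linear(512, n_clusters), nn.Softmax(dim=1))

    def forward(self, x):
        z = self.g(self.f(x))                          # projection z
        return z, self.h_inst(z), self.h_clu(z)        # (z, y, soft cluster assignment)

online = OnlineNetwork()                               # shared by views a and c
target = copy.deepcopy(online)                         # view b; updated only by EMA
for p in target.parameters():
    p.requires_grad_(False)

# For a batch x and an augmentation pipeline `augment` (a placeholder here):
# z_a, y_a, q_a = online(augment(x));  z_c, y_c, q_c = online(augment(x))
# z_b, _,  q_b = target(augment(x))    # the target's instance-predictor output is unused
```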
Note that, similar to the asymmetric architecture of BYOL, the target network is not associated with an instance predictor, and the representations generated by its projector are used to guide the instance-level learning of the two online networks. The row space of the feature matrix learned by the projector or the instance predictor is expressed as the instance-level representations, while the column space of the feature matrix learned by the cluster predictor is expressed as the cluster-level representations. The instance-level representations are utilized to enforce the instance-level contrastive learning with an MSE loss optimized, while the cluster-level representations are utilized to enforce the cluster-level contrastive learning with an InfoNCE loss optimized. Finally, the instance-level and cluster-level contrastive losses are simultaneously utilized to optimize the tri-stream network.

Figure 1: Illustration of the proposed HTCN framework. The tri-stream network consists of two weight-sharing online networks and a target network, where the parameters of the target network are an exponential moving average of those of the online networks. Instance predictors and cluster predictors are incorporated in the three networks, after which the MSE loss and the InfoNCE loss are utilized for instance-level contrastive learning and cluster-level contrastive learning, respectively. The network architecture can be trained in an end-to-end manner, where the final clustering is obtained via the cluster predictor of the target network.

### Instance-level Contrastiveness Our HTCN approach simultaneously performs feature learning and clustering without requiring negative sample pairs. In instance-level contrastive learning, we aim to train the two online networks by predicting the instance representations of the target network. Specifically, let \(y_{i}^{a}\) and \(z_{i}^{b}\) be the instance representations of \(x_{i}\) in the first online network and the target network, respectively. The instance-level contrastive loss between them is defined as \[\mathcal{L}_{a,b,i}=\|\overline{y_{i}^{a}}-\overline{z_{i}^{b}}\|_{2}^{2}=2-2\cdot\frac{\langle y_{i}^{a},z_{i}^{b}\rangle}{\|y_{i}^{a}\|_{2}\cdot\|z_{i}^{b}\|_{2}}, \tag{1}\] where \(\overline{y_{i}^{a}}\) and \(\overline{z_{i}^{b}}\) are the normalized representations. Thus the loss between the first and second views can be expressed as \[\mathcal{L}_{a,b}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{a,b,i}. \tag{2}\] Similar to BYOL [15], the exchange of the online and target views is performed during each training step. Also, we utilize another online network to predict the representations produced by the target network, whose loss is defined as \[\mathcal{L}_{b,c,i}=\|\overline{y_{i}^{c}}-\overline{z_{i}^{b}}\|_{2}^{2}=2-2\cdot\frac{\langle y_{i}^{c},z_{i}^{b}\rangle}{\|y_{i}^{c}\|_{2}\cdot\|z_{i}^{b}\|_{2}},\ i\in[1,N], \tag{3}\] \[\mathcal{L}_{b,c}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{b,c,i}. \tag{4}\] Therefore, the instance-level contrastive loss among the three streams of networks is defined as \[\mathcal{L}_{instance}=\mathcal{L}_{a,b}+\mathcal{L}_{b,c}. \tag{5}\] ### Cluster-level Contrastiveness The cluster predictor maps the representations produced by the projector to \(M\)-dimensional probability vectors, where \(M\) is the number of clusters. These probability vectors, whose \(i\)-th element denotes how likely the image belongs to the \(i\)-th cluster, can be interpreted as the soft label.
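Before the cluster-level loss is constructed from these soft labels, the instance-level objective of Eqs. (1)-(5) above can be written compactly. A minimal PyTorch-style sketch, with function and variable names of our own choosing:

```python
import torch.nn.functional as F

def instance_loss(y_online, z_target):
    # Eqs. (1)-(2)/(4): squared distance of L2-normalised vectors, i.e. 2 - 2*cos(y, z),
    # averaged over the batch.
    y = F.normalize(y_online, dim=1)
    z = F.normalize(z_target, dim=1)
    return (2.0 - 2.0 * (y * z).sum(dim=1)).mean()

# Eq. (5): both online streams (views a and c) predict the target projection z_b,
# which carries no gradient:
# loss_instance = instance_loss(y_a, z_b.detach()) + instance_loss(y_c, z_b.detach())
```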
Let \(q^{a},q^{b},q^{c}\in\mathbb{R}^{N\times M}\) be the feature matrices produced by the cluster predictors of the three networks, respectively. Each column of the feature matrix denotes an \(N\)-dimensional cluster representation, denoted as \(q_{i}^{k}\), while the each row denotes a \(M\)-dimensional probability vector, denoted as \(\tilde{q}_{i}^{k}\) (for \(k\in\{a,b,c\}\)). For a cluster representation \(q_{i}^{a}\), we regard \(q_{i}^{a}\) and \(q_{i}^{b}\) as a positive cluster pair, and the other \(2\cdot M-2\) pairs (in the first and second views) as the negative cluster pairs. The pair-wise similarity is defined as \[s(q_{i}^{a},q_{j}^{b})=\frac{\langle q_{i}^{a},q_{j}^{b}\rangle}{\|q_{i}^{a} \|\|q_{j}^{b}\|},\ \ i,j\in[1,M] \tag{6}\] Then the InfoNCE loss for \(q_{i}^{a}\) is computed by \[\ell_{i}^{a}=-\log\frac{\exp(s(q_{i}^{a},q_{j}^{b})/\tau)}{\sum_{j=1}^{M}[\exp (s(q_{i}^{a},q_{j}^{a})/\tau)+\exp(s(q_{i}^{a},q_{j}^{b})/\tau)]}, \tag{7}\] where \(\tau\) is the temperature parameter. After traversing all cluster representations, the cluster-level contrastive loss between the first and second augmented views can be obtained as \[\hat{\mathcal{L}}_{a,b}=\frac{1}{2M}\sum_{i=1}^{M}(\ell_{i}^{a}+\ell_{i}^{b})-H(Q), \tag{8}\] where \(H(Q)\) is the entropy of the cluster-assignment probability, which helps to avoid a degenerate solution that most images fall into the same cluster and is computed as \[H(Q)= -\sum_{i=1}^{M}[P(q_{i}^{a})\log P(q_{i}^{a})+P(q_{i}^{b})\log P(q_{ i}^{b})], \tag{9}\] \[P(q_{i}^{k})= \sum_{j=1}^{N}\frac{q_{ji}^{k}}{\|q\|_{1}},\ \ k\in\{a,b\} \tag{10}\] For each batch of images, a view pair is formed between each online network and the target network, leading to a total of two view pairs for the cluster-level contrastive learning. Therefore, the cluster-level contrastive loss can be defined as \[\mathcal{L}_{cluster}=\hat{\mathcal{L}}_{a,b}+\hat{\mathcal{L}}_{b,c}. \tag{11}\] ### Overall Loss Function The tri-stream network of HTCN is trained by simultaneously considering the instance-level contrastiveness and the cluster-level contrastiveness. The overall loss function is defined as \[\mathcal{L}=\mathcal{L}_{instance}+\mathcal{L}_{cluster}. \tag{12}\] At each training step, we optimize the overall loss function w.r.t. the online networks' parameters \(\theta\) only, but not the target network's parameters \(\xi\). The parameters of the target is updated as an exponential moving average of that of the online networks. That is \[\theta\leftarrow\text{optimizer}(\theta,\nabla_{\theta}\mathcal{L},\eta), \tag{13}\] \[\xi\leftarrow\alpha\xi+(1-\alpha)\theta. \tag{14}\] where \(\eta\) is the learning rate and \(\alpha\) is the momentum coefficient. After the training, we only keep the target network to perform clustering, which can be obtained in the cluster predictor. ### Implementation Details In HTCN, we use the ResNet34 [36] as the backbone. The projectors and the instance predictors have the same network structure, each of which is a multi-layer perceptron (MLP) with 256-dimensional output units. Each of the cluster predictors is a two-layer MLP, whose output dimension is equal to the desired number of clusters. Three augmented (or distorted) views are generated by applying a family of transformations to each input image. Five types of augmentations are utilized, including ResizedCrop, HorizontalFlip, ColorJitter, Grayscale and GaussianBlur [14]. 
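A possible torchvision version of this augmentation family is sketched below; the crop size and the per-transformation probabilities are typical values from the contrastive-learning literature and should be read as illustrative assumptions rather than the exact settings of the paper:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=0.5),
    transforms.ToTensor(),
])
# Calling `augment` three times on the same image yields the three views x^a, x^b, x^c.
```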
As each transformation has a probability of being adopted, the distortions of the three streams can thus be randomly decided. During optimization, we use the Adam optimizer and train the model for 1000 epochs. The learning rate is set to 0.0003 and the batch size is set to 128. ## 4 Experiments ### Datasets and Evaluation Metrics The experiments are conducted on four widely-used image datasets, namely, CIFAR-100 [37], ImageNet-10 [38], ImageNet-Dogs [38], and Tiny-ImageNet [39]. The statistics of these benchmark datasets are given in Table 1, and some sample images of these datasets are illustrated in Fig. 2. To compare the clustering results of different clustering methods, three evaluation metrics are adopted, including normalized mutual information (NMI) [40], clustering accuracy (ACC) [41], and adjusted rand index (ARI) [42]. \begin{table} \begin{tabular}{c c c} \hline \hline Dataset & \#Images & \#Classes \\ \hline CIFAR-100 & 60,000 & 20 \\ ImageNet-10 & 13,000 & 10 \\ ImageNet-Dogs & 19,500 & 15 \\ Tiny-ImageNet & 100,000 & 200 \\ \hline \hline \end{tabular} \end{table} Table 1: Description of the benchmark image datasets. Figure 2: Some examples of the four image datasets. ### Comparison with State-of-the-Art In this section, we compare the proposed method against four non-deep clustering methods, namely, \(K\)-means [17], Spectral Clustering (SC) [18], Agglomerative Clustering (AC) [43], and Nonnegative Matrix Factorization (NMF) [44], and thirteen deep clustering methods, namely, Auto-Encoder (AE) [45], Denoising Auto-Encoder (DAE) [46], Deep Convolutional Generative Adversarial Networks (DCGAN) [47], DeConvolutional Neural Networks (DeCNN) [48], Variational Auto-Encoder (VAE) [49], Joint Unsupervised LEarning (JULE) [22], Deep Embedded Clustering (DEC) [7], Deep Adaptive Clustering (DAC) [38], Deep Comprehensive Correlation Mining (DCCM) [50], Gaussian ATtention Network for image Clustering (GATC) [51], PartItion Confidence mAximization (PICA) [19], Deep Robust Clustering (DRC) [26], and Contrastive Clustering (CC) [14]. As shown in Tables 2, 3 and 4, our HTCN method achieves the best scores on all four benchmark datasets w.r.t. NMI, ACC, and ARI. Notably, on the ImageNet-Dogs dataset, our HTCN method obtains NMI(%), ACC(%), and ARI(%) scores of 49.4, 49.3, and 35.2, respectively, which significantly outperforms the second best method (i.e., CC), which obtains scores of 44.5, 42.9, and 27.4.
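For reference, the three metrics reported in the following tables can be computed with standard tools: NMI and ARI come directly from scikit-learn, while ACC uses the usual Hungarian matching between predicted cluster ids and ground-truth classes. A minimal sketch with toy labels:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Best one-to-one mapping between cluster ids and class ids (Hungarian algorithm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_pred.max(), y_true.max())) + 1
    count = np.zeros((k, k), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        count[p, t] += 1
    rows, cols = linear_sum_assignment(count.max() - count)   # maximise matched samples
    return count[rows, cols].sum() / y_pred.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])   # same partition, different cluster ids
print(normalized_mutual_info_score(y_true, y_pred),
      adjusted_rand_score(y_true, y_pred),
      clustering_accuracy(y_true, y_pred))   # 1.0 1.0 1.0
```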
The experimental results in Table 2, \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & CIFAR-100 & ImageNet-10 & ImageNet-Dogs & Tiny-ImageNet \\ \hline \(K\)-means [17] & 8.4 & 11.9 & 5.5 & 6.5 \\ \hline SC [18] & 9.0 & 15.1 & 3.8 & 6.3 \\ \hline AC [43] & 9.8 & 13.8 & 3.7 & 6.9 \\ \hline NMF [44] & 7.9 & 13.2 & 4.4 & 7.2 \\ \hline AE [45] & 10.0 & 21.0 & 10.4 & 13.1 \\ \hline DAE [46] & 11.1 & 20.6 & 10.4 & 12.7 \\ \hline DCGAN [47] & 12.0 & 22.5 & 12.1 & 13.5 \\ \hline DeCNN [48] & 9.2 & 18.6 & 9.8 & 11.1 \\ \hline VAE [49] & 10.8 & 19.3 & 10.7 & 11.3 \\ \hline JULE [22] & 10.3 & 17.5 & 5.4 & 10.2 \\ \hline DEC [7] & 13.6 & 28.2 & 12.2 & 11.5 \\ \hline DAC [38] & 18.5 & 39.4 & 21.9 & 19.0 \\ \hline DCCM [50] & 28.5 & 60.8 & 32.1 & 22.4 \\ \hline GATC [51] & 28.5 & 59.4 & 28.1 & - \\ \hline PICA [19] & 31.0 & 80.2 & 35.2 & 27.7 \\ \hline DRC [26] & 35.6 & 83.0 & 38.4 & 32.1 \\ \hline CC [14] & 43.1 & 85.9 & 44.5 & 34.0 \\ \hline **HTCN** & **46.5** & **87.5** & **49.4** & **35.6** \\ \hline \end{tabular} \end{table} Table 2: The NMI(%) scores by different clustering methods (The best score in each column is in **bold**). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & CIFAR-100 & ImageNet-10 & ImageNet-Dogs & Tiny-ImageNet \\ \hline \(K\)-means [17] & 2.8 & 5.7 & 2.0 & 0.5 \\ \hline SC [18] & 2.2 & 7.6 & 1.3 & 0.4 \\ \hline AC [43] & 3.4 & 6.7 & 2.1 & 0.5 \\ \hline NMF [44] & 2.6 & 6.5 & 1.6 & 0.5 \\ \hline AE [45] & 4.8 & 15.2 & 7.3 & 0.7 \\ \hline DAE [46] & 4.6 & 13.8 & 7.8 & 0.7 \\ \hline DCGAN [47] & 4.5 & 15.7 & 7.8 & 0.7 \\ \hline DeCNN [48] & 3.8 & 14.2 & 7.3 & 0.6 \\ \hline VAE [49] & 4.0 & 16.8 & 7.9 & 0.6 \\ \hline JULE [22] & 3.3 & 13.8 & 2.8 & 0.6 \\ \hline DEC [7] & 5.0 & 20.3 & 7.9 & 0.7 \\ \hline DAC [38] & 8.8 & 30.2 & 11.1 & 1.7 \\ \hline DCCM [50] & 17.3 & 55.5 & 18.2 & 3.8 \\ \hline GATC [51] & 17.3 & 55.2 & 16.3 & - \\ \hline PICA [19] & 17.1 & 76.1 & 20.1 & 4.0 \\ \hline DRC [26] & 20.8 & 79.8 & 23.3 & 5.6 \\ \hline CC [14] & 26.6 & 82.2 & 27.4 & 7.1 \\ \hline **HTCN** & **30.5** & **83.9** & **35.2** & **7.6** \\ \hline \end{tabular} \end{table} Table 4: The ARI(%) scores by different clustering methods (The best score in each column is in **bold**). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & CIFAR-100 & ImageNet-10 & ImageNet-Dogs & Tiny-ImageNet \\ \hline \(K\)-means [17] & 13.0 & 24.1 & 10.5 & 2.5 \\ \hline SC [18] & 13.6 & 27.4 & 11.1 & 2.2 \\ \hline AC [43] & 13.8 & 24.2 & 13.9 & 2.7 \\ \hline NMF [44] & 11.8 & 23.0 & 11.8 & 2.9 \\ \hline AE [45] & 16.5 & 31.7 & 18.5 & 4.1 \\ \hline DAE [46] & 15.1 & 30.4 & 19.0 & 3.9 \\ \hline DCGAN [47] & 15.3 & 34.6 & 17.4 & 4.1 \\ \hline DeCNN [48] & 13.3 & 31.3 & 17.5 & 3.5 \\ \hline VAE [49] & 15.2 & 33.4 & 17.9 & 3.6 \\ \hline JULE [22] & 13.7 & 30.0 & 13.8 & 3.3 \\ \hline DAC [38] & 23.8 & 52.7 & 27.5 & 6.6 \\ \hline DCCM [50] & 32.7 & 71.0 & 38.3 & 10.8 \\ \hline GATC [51] & 32.7 & 73.9 & 32.2 & - \\ \hline PICA [19] & 33.7 & 87.0 & 35.2 & 9.8 \\ \hline DRC [26] & 36.7 & 88.4 & 38.9 & 13.9 \\ \hline CC [14] & 42.9 & 89.3 & 42.9 & 14.0 \\ \hline **HTCN** & **47.2** & **90.5** & **49.3** & **16.0** \\ \hline \end{tabular} \end{table} Table 3: The ACC(%) scores by different clustering methods (The best score in each column is in **bold**). 3 and 4 confirm the advantageous clustering performance of HTCN over the baseline methods. 
### Influence of the Tri-stream Architecture In the proposed framework, we present a tri-stream architecture which consists of two online networks and a target network. In this section, we test the influence of the three streams of networks. As shown in Table 5, using an online network and a target network leads to better clustering results than using two online networks, while using three streams of networks outperforms both variants of using two streams, which shows the benefits of the heterogeneous tri-stream architecture. ### Influence of Two Types of Contrastive losses In the section, we test the influence of the two types of contrastive losses, i.e., the instance-level contrastive loss and the cluster-level contrastive loss. As shown in Table 6, training with both types of losses can lead to better clustering performance than training with only one of them, which confirm the joint contribution of the instance-level and cluster-level losses in the self-supervised training. ### Influence of the Asymmetric Settings Two symmetry-breaking mechanisms are enforced between the online and target networks [15]. First, an instance predictor is incorporated in each online network, which does not exist in the target network. Second, the so-called stop-gradient is incorporated in the target network, which indicates that this network is not updated using backpropagation. We test the influence of the \begin{table} \begin{tabular}{l c c c} \hline Loss function & NMI & ACC & ARI \\ \hline With instance and cluster losses & 46.5 & 47.2 & 30.5 \\ \hline With only instance loss & 43.3 & 35.6 & 14.6 \\ With only cluster loss & 38.4 & 36.4 & 22.7 \\ \hline \end{tabular} \end{table} Table 6: The clustering performance of HTCN using different loss functions. \begin{table} \begin{tabular}{l c c c} \hline Architecture & NMI & ACC & ARI \\ \hline Tri-stream architecture & 46.5 & 47.2 & 30.5 \\ \hline Dual-stream (Online+Target) & 42.2 & 42.5 & 26.8 \\ Dual-stream (Online+Online) & 39.9 & 40.2 & 24.4 \\ \hline \end{tabular} \end{table} Table 5: The clustering performance of HTCN using different combinations of network architectures. asymmetric settings by removing one of the instance predictor and the stop-gradient. As shown in Table 7, training with both asymmetric settings leads to better performance than training with only one of them. ### Convergence Analysis In this section, we test the convergence of the proposed HTCN method as the number of epochs increases. As shown in Fig. 3, the clustering scores (w.r.t. NMI) of the proposed HTCN method rapidly increase during the first 200 epochs on the benchmark datasets. When going beyond 200 epochs, the increase of epochs still benefits the clustering performance consistently. In this paper, the number of epochs is set to 1000 on all benchmark datasets. ## 5 Conclusion and Future Work The paper develops a new deep clustering approach termed HTCN, which breaks through the conventional two-stream contrastive architecture to explore the rich possibilities in heterogeneous multi-stream contrastive learning and clustering. In HTCN, the two weight-sharing online networks are trained by predicting the instance representations of the target network and enforcing the consistency between the cluster representations of the target and online networks. Thus the tri-stream network architecture can be optimized in an end-to-end manner via simultaneous instance-level and cluster-level contrastive learning. 
Experimental results on four challenging image datasets have shown the superior performance of our HTCN approach over the state-of-the-art deep \begin{table} \begin{tabular}{l c c c} \hline \hline Asymmetric settings & NMI & ACC & ARI \\ \hline HTCN & 46.5 & 47.2 & 30.5 \\ \hline No predictor & 40.9 & 41.2 & 25.0 \\ No stop-gradient & 39.2 & 39.3 & 23.7 \\ \hline \hline \end{tabular} \end{table} Table 7: The NMI(%), ACC(%), and ARI(%) by HTCN removing different asymmetric settings. Figure 3: Illustration of the convergence of HTCN (w.r.t. its NMI performance) on the four benchmark datasets. clustering approaches. In this paper, we mainly focus on the deep clustering task for images. In the future work, a possible direction is to extend the proposed framework to the deep clustering tasks for more complex data types, such as time series data and document data. ## Declarations * **Funding.** This work was supported by the NSFC (61976097 & 61876193) and the Natural Science Foundation of Guangdong Province (2021A1515012203). * **Conflict of interest.** The authors declare that they have no conflict of interest. * **Ethical approval.** This article does not contain any studies with human participants or animals performed by any of the authors. * **Consent to participate.** Informed consent to participate was obtained from all individual participants included in the study. * **Consent for publication.** Informed consent for publication was obtained from all individual participants included in the study. * **Availability of data and materials.** All datasets used in this paper are publicly-available datasets. * **Code availability.** The code is available at [https://github.com/dengxiaozhi/HTCN](https://github.com/dengxiaozhi/HTCN). * **Authors' contributions.** XD: Conceptualization, Methodology, Writing-Original Draft. DH: Conceptualization, Writing-Review & Editing. CDW: Optimization, Writing-Review & Editing.
2303.05136
A New Heuristic for Rectilinear Crossing Minimization
A new heuristic for rectilinear crossing minimization is proposed. It is based on the idea of iteratively repositioning nodes after a first initial graph drawing. The new position of a node is computed by casting rays from the node towards graph edges. Each ray receives a mark and the one with the best mark determines the new position. The heuristic has interesting performances when compared to the best competitors which can be found in classical graph drawing libraries like OGDF.
François Doré, Enrico Formenti
2023-03-09T09:44:12Z
http://arxiv.org/abs/2303.05136v2
# A New Heuristic for Rectilinear Crossing Minimization ###### Abstract A new heuristic for rectilinear crossing minimization is proposed. It is based on the idea of iteratively repositioning nodes after a first initial graph drawing. The new position of a node is computed by casting _rays_ from the node towards graph edges. Each ray receives a mark and the one with the best mark determines the new position. The heuristic has interesting performances when compared to the best competitors which can be found in classical graph drawing libraries like _OGDF_1[1]. Footnote 1: Open Graph Drawing Framework ([https://ogdf.uos.de](https://ogdf.uos.de)) _Email address:_ [email protected] (Francois Dore) Graph Drawing, Rectilinear Crossing Minimization, Algorithmic Geometry ## 1 Introduction Graph drawing is a living research domain with an impressive number of publications over the years. It is difficult to say when the domain was born and who were the very first pioneers. However, for the questions concerning our paper, one can surely cite the seminal paper of Tutte [2]. In his paper, Tutte proposed an algorithm where all vertices were consecutively placed at the barycenter of the positions of their neighbours, which mimics spring forces. Afterwards, many other algorithms, called force-based models, took over the concept to draw graphs in a _nice_ (and accessorily fast) manner. An extensive compilation of force-directed algorithms can be found in a paper of Kobourov [3]. In these kind of algorithms, the idea is that a graph is assimilated to a sort of particle system in which particles are identical and electrically charged. The nodes of the graph play the role of particles and since all particles are identically charged, they tend to repel each other by Coulomb's law. However, the repulsion motion is contrasted by attracting forces modelled by linear springs between two particles that share an edge. Conventionally, the drawing is _nice_ when the system is at the equilibrium. Nodes are drawn at the position reached by the corresponding particles and straight lines are drawn between nodes connected by an edge in the original graph. Over the years also the aesthetic criteria for graph drawing have evolved. Currently, several criteria are commonly accepted as characterising [4; 5] a _nice_ graph drawings, such as the angular resolution, the distribution of vertices in the plane, or the number of edge crossings. This paper focuses on the last property. Indeed, we aim at finding a drawing which minimizes the number of crossings when the edges are drawn as straight lines. We call this problem the _rectilinear crossing minimization_ problem (RCM problem for short). RCM is a known computationally difficult problem. Indeed, solving RCM for a generic graph is complete for the existential theory of the reals and hence its complexity (in the classical setting) is somewhere between NP and PSPACE[6]. As a consequence, it is a natural research direction to look for heuristics proposing trade-offs between exact solutions and computational time. Some strategies have been developped to iteratively move vertices to other positions in \(\mathbb{R}^{2}\). Radermacher et al. [7; 8] proposed a way, given a rectilinear drawing, to find for any vertex \(v\) its optimal position, keeping all the other vertices fixed. To the best of our knowledge, this algorithm provides the best trade-off between precision and time. Similarly to the Radermacher et al. 
approach, the main strategy of our algorithm consists in improving an already existing drawing of a graph, step by step, moving a vertex to another position, potentially decreasing the number of crossings. However, contrary to the computation of the optimal place which is rather costly, our goal is to find satisfying positions with cheaper mechanisms. To do so, the idea is to cast rays from a vertex. These rays can either reflect on the edges, or go through them, according to a wisely chosen score function. After several reflections or traversals, the ray ends up in a place which defines a positions where the vertex can be evaluated to move in or not. The insurance of obtaining a satisfactory position comes from a theorem of the dynamical systems which states that, in a rectangular billiard, a trajectory with an irrational angle is dense in the space. For our case, it gives the intuition that our rays can find the optimal position since they can go through any open subset of the space. This algorithm has shown interesting results compared to the best rectilinear drawing algorithms of _OGDF_, the reference library for this domain. It can also easily be tuned with various parameters to privilege either the quality of the results or the computation time. The paper is structured as follows. In the next section, all basic definitions and concepts are introduced. Section 3 introduces and explains the new heuristics, while Section 4 discusses the heuristic parametrization. Convergence and complexity are discussed in Section 5. Finally, experimental results are shown in Section 6 asserting the relevance of the parameters of the algorithm and comparing its performances with its main (available) competitor. ## 2 Definitions A graph \(\mathbf{G}\) is a structure \(\langle V,E\rangle\) where \(V\) is the (finite) set of _vertices_ and \(E\subseteq V\times V\) is the set of _edges_. For any \(v\in V\), \(E_{v}\) is the maximal subset of \(E\) such that if \((a,b)\in E_{v}\), then either \(a=v\) or \(b=v\). An _embedding_\(\Pi\) of \(\mathbf{G}\) in a surface \(\Sigma\) is a representation of \(\mathbf{G}\) in which \(V\) are points in \(\Sigma\) and edges are simple curves over \(\Sigma\) (homeomorphic to \([0,1]\)). Moreover, the representation must be such that (1) endpoints of a curve associated with an edge must coincide with the endpoints of the edge; (2) no curve representing an edge contains more than two vertices; (3) no two curves (representing edges) intersect at a common interior point. A _straight-line drawing_\(\Gamma\) of a graph \(\mathbf{G}\) is an embedding of \(\mathbf{G}\) into \(\mathbb{R}^{2}\) in which condition (3) is relaxed and edges are not associated with generic curves but with straight-line segments. Hence, to describe a straight-line drawing of \(\mathbf{G}\) one just needs to provide a bijective map from \(V\) to \(\mathbb{R}^{2}\). A graph is _planar_ if it admits an embedding in \(\mathbb{R}^{2}\). The Fary's theorem [9] states that for any planar graph there exists a straight-line drawing without crossing edges. On the other hand, if \(\mathbf{G}\) is not planar, then any straight-line drawing will have some crossing edges. Given two edges \((a,b)\) and \((c,d)\) in \(E\) and a drawing \(\Gamma\) of a graph \(\mathbf{G}\), we denote the fact that they cross each other by \((a,b)\times(c,d)\) without explicit reference to the dependency on \(\Gamma\) when the embedding is clear from the context. 
Therefore the _crossing number_\(cr(e)\) of an edge \(e\in E\) is given by \(|\{e^{\prime}\in E\backslash\left\{e\right\}\mid e\times e^{\prime}\}|\). Intuitively, the crossing number of a vertex is \(cr(v)=\sum_{e\in E_{v}}cr(e)\) and that of a drawing \(\Gamma\) is \(cr(\Gamma)=\frac{1}{2}\cdot\sum_{e\in E}cr(e)\). We also define the function \(\xi(v)\), called the _energy_ of \(v\), as the sum of the squared norms of the Hooke's law forces applied to the endpoints of \((a,b)\in E_{v}\). Recall that this law models spring forces and can be seen as the delta between the actual length of the edge and a desired theoretical one. Similarly to the crossings function, \(\xi(e)\) is the energy of one edge \(e\) and \(\xi(\Gamma)\) is the sum of the energies of all the edges in \(\Gamma\). The _faces_ of an embedding of a graph \(G\) on a surface are the regions that remain when the points representing the vertices and edges of \(G\) are removed from the surface. Remark that this kind of definition makes sense only for graphs for which we have found an embedding. Hence, in the sequel, we prefer the notion of _facet_ which, in a sense, describes the'real' visual faces. Following [10], given a straight-line drawing \(\Gamma\) of a graph \(G\), a _planarization_\(\Gamma^{\prime}\) of \(\Gamma\) can be obtained by replacing consecutively each pair of crossing edges by four new edges attached to an also new false vertex at the position of the old intersection point. The faces of this \(\Gamma^{\prime}\) do not overlap and are called the facets of \(\Gamma\). The _bounding box_\(\mathscr{B}\) of a graph \(G\) is the minimum (_w.r.t._ surface) rectangle, aligned on the \(x\) and \(y\) axes, which contains all of the vertices of \(G\) (considering that vertices have a null radius). We define also \(\mathscr{B}_{\varepsilon}\) as the _expanded bounding box_ of \(G\) with a margin \(\varepsilon\). We can visualize it as a rectangle with the same centroid and the same orientation as \(\mathscr{B}\) but with a width (resp., a height) of length \(w+2\varepsilon\) (resp., \(h+2\varepsilon\)) where \(w\) (resp., \(h\)) is the width (resp., the height) of \(\mathscr{B}\). Figure 1: The facets (a) and the bounding box (b) of a straight-line drawing \(\Gamma\) of \(G\). Finally, we call _ray_ a polyline, expressed as a sequence of points \(p_{0},p_{1},\ldots,p_{r}\) with \(r>0\) and a half-line whose initial point is \(p_{r}\) and its direction vector \(\overrightarrow{d}\). In the sequel, we will say that _a ray intersects an edge_ of a graph \(\mathbf{G}\) if the half-line of the ray intersects it, but not the polyline. The length of the sequence will be called the size of the ray. ## 3 The new heuristic We propose a new heuristic for rectilinear graph drawing called "Ray-based Rectilinear Graph Drawing" (RRGD). RRGD takes in input a finite graph \(\mathbf{G}=\langle V,E\rangle\) and calls Init(G) to get an initial drawing \(\Gamma_{0}\). The possible intial drawings are completely independant of the algorithm, meaning that the result is not sensible to \(\Gamma_{0}\), and will be discussed in Section 6.2. A drawing \(\Gamma\) is represented by a list of pairs \((\mathtt{v},\mathtt{p_{v}})\) where \(\mathtt{v}\) is the name of the node and \(\mathtt{p_{v}}=(\mathtt{v}_{x},\mathtt{v}_{y})\) are its coordinates. The Init function also sets up a bounding box \(\mathscr{B}_{\varepsilon}\) with four dummy vertices, along with four dummy edges. 
The latter are fixed and will not be moved during the whole execution of the algorithm. The box \(\mathscr{B}_{\varepsilon}\) will help to keep the other vertices in a reasonable frame. Also, the box is set with a margin \(\varepsilon>0\) to give to the real vertices more degrees of freedom to move to the other side of \(\Gamma\) by going around the whole graph using the gap between vertices and the dummy edges. After the initialization, the algorithms enters its main loop which keeps running as long as the Move function finds a better position for at least one vertex; otherwise it stops and a \(\Gamma\) is returned. We will call \(\Gamma_{n}\) the drawing of \(\mathbf{G}\) produced after \(n\) iterations of the main loop. Figure 2: An example of a ray. ### Algorithm explanation Sorting verticesAt each run of the main loop, the vertices of \(\mathbf{G}\) are sorted in descending order according to their position in \(\Gamma\) to treat problematic vertices first. To do so, we define the order \(\leq_{\Gamma}\) as follows: for any pair of vertices \(u,v\in V\), \(u\leq_{\Gamma}v\) if \(cr(u)<cr(v)\) or, in case \(cr(u)=cr(v)\), \(\xi(u)\leq\xi(v)\). The consideration about the vertices energy allows to favor edges of homogeneous sizes. Although the repartition of edge lengths is not the most valued criterion to qualify a drawing as _pleasingly_ looking [5], it is nevertheless often considered when talking about graph drawing. Moreover, this metric is implicitly used in all force-based models since it represents the spring length. ``` 1:\(\Gamma=\texttt{Init}(\texttt{G})\) 2:\(update\gets true\) 3:while\(update\)do 4:\(update\gets false\) 5:\(\Gamma\leftarrow\texttt{Sort}(\Gamma)\) 6:for\((\texttt{v},(\texttt{v}_{x},\texttt{v}_{y}))\in\Gamma\)do 7:\((\texttt{v}\prime_{x},\texttt{v}\prime_{y})\leftarrow\texttt{Move}(\texttt{v},\Gamma)\) 8:if\((\texttt{v}_{x},\texttt{v}_{y})\neq(\texttt{v}\prime_{x},\texttt{v}\prime_{y})\)then 9:\(\Gamma(\texttt{v})\leftarrow(\texttt{v}\prime_{x},\texttt{v}\prime_{y})\) 10:\(update\gets true\) 11:break 12:endif 13:endfor 14:endwhile 15:return\(\Gamma\) ``` **Algorithm 1** RRGD The inner loop and the Move functionThe inner loop spans through the pairs \((\texttt{v},\texttt{p}_{\texttt{v}})\) of \(\Gamma\) calling Move to check if \(\Gamma\) can be improved. Given a vertex \(v\) and a drawing \(\Gamma\), Move builds a list \(L\) of candidate positions by calling CastRay\(R\) times with different angles and returns the minimum (according to \(\leq_{\Gamma}\)) of \(L\). Ray casting.For one initial position \(\mathtt{p}_{\mathtt{v}}\) and one angle \(\theta\), the algorithm consider a half-line matching these parameters. It then computes the intersections points with the real edges of \(\mathbf{G}\) and also with the four dummy ones representing \(\mathscr{B}_{\varepsilon}\). These intersection points are then sorted according to their distance to \(\mathtt{p}_{\mathtt{v}}\) (with the closer ones first) and be processed in this order for the next step. Figure 3.1 shows three rays being cast from one node and intersecting the edges of the graph and its bounding box. Crossing or reflecting.When a ray hits an edge \(e\), it can either pass through or reflect on it according to the _opacity_ of \(e\). This quantity measures the average decrease (or increase) of the crossing number of a node whenever it is moved before or beyond an intersection point. 
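A small sketch of the geometric step just described, gathering the intersection points of one ray with the edges and ordering them by distance, is given below; it is our own illustration of what a routine such as intersectionHalflineSegment might compute, not the authors' implementation. Whether the ray then crosses or reflects at the closest hit is governed by the opacity of the hit edge, detailed next.

```python
import math

def ray_segment_intersection(o, theta, a, b, eps=1e-9):
    """Intersection of the half-line from o with angle theta and the segment [a, b],
    or None if they do not meet."""
    dx, dy = math.cos(theta), math.sin(theta)
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = dx * ey - dy * ex
    if abs(denom) < eps:                       # half-line parallel to the segment
        return None
    ax, ay = a[0] - o[0], a[1] - o[1]
    t = (ax * ey - ay * ex) / denom            # distance along the half-line
    s = (ax * dy - ay * dx) / denom            # position along the segment, in [0, 1]
    if t > eps and 0.0 <= s <= 1.0:
        return (o[0] + t * dx, o[1] + t * dy)
    return None

def sorted_hits(o, theta, edges, pos):
    """All intersection points of the ray with the given edges, closest first."""
    hits = []
    for (u, w) in edges:
        q = ray_segment_intersection(o, theta, pos[u], pos[w])
        if q is not None:
            hits.append((math.hypot(q[0] - o[0], q[1] - o[1]), (u, w), q))
    return sorted(hits, key=lambda h: h[0])
```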
Its value depends on three parameters: the vertex \(v\) which we try to move, the edge \(e\) that the current ray has crossed, and an evaluation position \(p\) of \(v\). We define \(p^{\prime}\) and \(p^{\prime\prime}\) as the points on the ray \(r\) at a distance \(\varepsilon\) from the intersection point of \(r\) and Figure 3: Rays \(r_{0},r_{1}\) and \(r_{2}\) casted from vertex \(v\) and their respective intersection points. \(e\), with \(p\prime\) being the closest to the last added point of the sequence of points of \(r\) (see Figure 4). To compute the opacity, we need first to assign to each edge \(e_{i}\in E_{v}\) a weight \(w_{e_{i}}\) as follows: \[w_{e_{i}}=\begin{cases}\begin{array}{rl}0,&\text{if $e_{i}$ shares a vertex with $e$,}\\ -1,&\text{if $e_{i}$ does not share a vertex with $e$ and $e_{i}$ crosses $e$,}\\ 1,&\text{if $e_{i}$ does not share a vertex with $e$ and does not cross $e$.}\end{array}\end{cases}\] Then, the opacity is the average of the weights, ignoring null values: \[\texttt{Opacity}(v,e,p)=\begin{cases}\sum_{e_{i}\in E_{v}}w_{e_{i}} \\ \frac{|E_{v}^{*}|}{|E_{v}^{*}|},&\text{if $\,|E_{v}^{*}|>0$}\\ 1,&\text{otherwise}\end{cases}\] where \(E_{v}^{*}\) is the subset of \(E_{v}\) for which each edge \(e_{i}\) has \(w_{e_{i}}\neq 0\). Finally, remark that in the special case in which \(e\) is one of the four sides of \(\mathscr{B}_{\varepsilon}\), the opacity is not computed, since the ray reflects on it by default. At the end of the opacity calculation, two "actions" can be taken: Figure 4: An example of weight attribution for the computation of opacity. When trying to move \(v\) from \(p\prime\) to \(p\prime\prime\), all the edges will have opposite weights in \(p\prime\prime\) (this actually holds whatever the edge configuration except for the positive edges, which can remain positive in \(p\prime\prime\) if a neighbor of \(v\) is co-linear with the endpoints of \(e\)). Reflection.: The point \(p\) is added to the sequence of the ray and the direction vector \(\overrightarrow{d}\) is updated as follows. Let \(\theta_{d}\) and \(\theta_{e}\) be respectively the angles of \(\overrightarrow{d}\) and \(e\), then the we construct a new unit vector \(\overrightarrow{d}\) with an angle \(\theta=2*\theta_{e}-\theta_{d}\). Note that since \(\theta\) is taken modulo \(2\pi\), the direction of the edge \(e\) does not matter. Crossing.: The point \(p\) is also added to the sequence of the ray (even though it can be collinear to the two previous points in that sequence). However, the unit vector \(\overrightarrow{d}\) remains unchanged. In this case, there is no need to recompute the intersections points of the ray with the edges, we can only take the next one in the sorted list and recompute its opacity with its associate edge. Note that \(p\) is also added to the sequence of the ray even though it will be co-linear with its predecessor and its successor in the sequence. For now, we only consider that the ray reflects if there are more edge with a 1 than with a \(-1\). We keep doing this routine \(n_{r}\) times to let it visit through a non-negligible portion of \(\Gamma\). Pseudocode.: The ray is casted at position \(\Gamma(v)\) according to an angle \(\theta\) and built point by point. At each run of the main loop (line 4) a new point it is added. Let \(p\) be the current point of the ray that have just been built. 
The next one is chosen among the intersection points \(p_{i}\) which are given by the half-line exiting from \(p\) with angle \(\theta\) and one of the edges (via calls to the function intersectionHalflineSegment at line 7). Lines 6-11 arrange such candidates on a heap (which uses the Euclidean distance between \(p\) and \(p_{i}\) as key for comparison between points). Remark that after the execution of the lines 6-11, the heap \(H\) is always non empty because of the bounding box \(\mathscr{B}_{\varepsilon}\). Hence the top of the heap can be safely popped (line 13) into \(p^{\prime}\) and \(p^{\prime}\) is added to the ray. At this point (lines 16-21) the algorithm decides if the ray is going to be reflected at \(p^{\prime}\) or it pass through the edge to which \(p^{\prime}\) belongs to. If the heap \(H\) is empty then it means that \(p^{\prime}\) is a point on one of the edges of \(\mathscr{B}_{\varepsilon}\) and hence the ray must reflect. This is also the case if the opacity of the edge (computed by the Opacity function at line 16) is less or equal than zero. Finally, the procedure returns the mid-point of the last segment of the ray. Remark the point returned is chosen in this way to place the node relatively close to the centers of the facets. This also allows to ensure a higher angle resolution in most cases at a minimum cost. Position evaluation.: After repeating the _"cross/reflect"_ routine at most \(n_{r}\) times, we end up with a sequence of length \(n_{r}+1\). We assign to each ray \(r_{i}\) a final point \(p_{r_{i}}\) as the midpoint of the last segment of the constructed polyline. Since \(n_{r}>0\), it is always possible to find one. Then, we compare all the \(p_{r_{i}}\) according to the same two criteria that we used in the "Sorting vertices" section, except that this time, the lowest values are preferred. We eventually move \(v\) only if the best position among the \(p_{r_{i}}\) is better than the initial position of \(v\). If not, we apply the movement function to the next vertex in the list established at the begining of the pass. Note that with the emphasis of the crossing number first for the comparison, a vertex with no crossing but with very long edges will always be preferred. This is not an issue is the sense that there is a good chance that it is precisely these long edges that allow it to have no crossings (see Figure 6 in the appendix). This type of behavior seems to appear quite often. ## 4 Parametrization and refinement ### Consideration of the opacity When a ray cast from a vertex \(v\) is about to cross an edge \(e\), then we compute the opacity of the edge given \(v\), \(e\) and an evaluation position \(p\) placed on the ray at a distance \(\varepsilon\) form the intersection point with \(e\). This opacity will act like a score function to decide if the ray crosses \(e\) or not. We define two ways to take into consideration the result of \(\mathtt{Opacity}(v,e,p)\). #### 4.1.1 Deterministic reflections The first way to consider the opacity, as explained before, is to simply look at its sign. If it is negative, it basically means that more than half of the relevant edges, those which does not share a vertex with \(e\), cross \(e\). In this case, with a negative opacity, we simply take the short-term best outcome and let the ray cross since it decreases the number of crossings. Thus, the ray reflects on \(e\) if \(\mathtt{Opacity}(v,e,p)>0\). 
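The weight assignment and the deterministic rule just described can be summarized in a short sketch: the opacity is the average of the ±1 weights over the edges incident to \(v\) (drawn as if \(v\) were at \(p\)) that do not share an endpoint with \(e\), and it defaults to 1 when no such edge exists. The data structures and function names below are our own, not the authors':

```python
import math

def segments_intersect(p1, p2, p3, p4):
    """Proper crossing test for segments [p1, p2] and [p3, p4] via orientations."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def opacity(v, e, p, adj, pos):
    """Average of the non-null weights of the edges incident to v, with v moved to p."""
    a, b = e
    weights = []
    for u in adj[v]:                 # incident edge e_i = (v, u), drawn as [p, pos[u]]
        if u == a or u == b:         # e_i shares a vertex with e: weight 0, ignored
            continue
        crosses = segments_intersect(p, pos[u], pos[a], pos[b])
        weights.append(-1 if crosses else 1)
    return sum(weights) / len(weights) if weights else 1.0

def reflected_angle(theta_ray, theta_edge):
    """Direction after a reflection on an edge of angle theta_edge (theta = 2*theta_e - theta_d)."""
    return (2.0 * theta_edge - theta_ray) % (2.0 * math.pi)

# Deterministic rule of Section 4.1.1: the ray reflects on e when opacity(v, e, p, adj, pos) > 0.
```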
For the case where the opacity is null (_i.e._ when there are no relevant edges or when as many edges cross as do not), both actions can be defended. We chose to make the ray reflect in this case, to limit the number of candidate positions for a vertex already in a satisfactory position and hence speed up the algorithm a bit.

Figure 5: The node \(v\) in \(\Gamma_{i}\) is moved to another position in \(\Gamma_{i+1}\) after a ray has been cast from it (having been reflected 4 times and having crossed an edge 3 times). The ending position of the ray is shown as the black dot in \(\Gamma_{i}\).

#### 4.1.2 Randomized reflections

In this configuration, the computed opacity acts like a probability of crossing or not. It does not directly determine the behavior of the ray but only biases it. For each intersection of a ray with an edge, we draw a random variable \(\chi\) uniformly distributed on the interval \([-1-\varepsilon,1+\varepsilon]\). The ray reflects if \(\chi<\texttt{Opacity}(v,e,p)\): the higher the opacity, the lower the chance of crossing \(e\). Note that thanks to the \(\varepsilon\) in the interval, both outcomes are always possible, even if all the edges have the same weight and \(|\texttt{Opacity}(v,e,p)|=1\). This gives the algorithm a chance to avoid local optima and find global ones.

### 4.2 Energy delta and Prohibition window

We now introduce two parameters that each, in its own way, offers a balance between the computation time and the quality of the final drawing. Firstly, the parameter \(\Delta\xi\) is the amount of energy decrease necessary for a vertex \(v\) to move to another location \(p\) if \(p\) does not improve the crossing number. This threshold is necessary to avoid cases where the algorithm enters a loop, in which the successive applications of the node movement function starting from a drawing \(\Gamma\) eventually lead back to \(\Gamma\) itself. Secondly, the parameter \(\Upsilon\) defines, for each vertex \(v\) that has been moved, the number of iterations required before it can be moved again. The purpose of this parameter is mainly to speed up the algorithm. If the algorithm only moves the worst rated vertex (according to \(\leq_{\Gamma}\)) continuously, we can quickly fall into a case where, even after moving it, it stays the worst and the algorithm does not stop moving it bit by bit, thus neglecting all the other vertices. This happens, for instance, if the vertex in question is trapped in a good facet (_i.e._ its rays reflect on every edge because crossing them would not decrease the crossing number) and the movement function only diminishes its energy. Theoretically, a large enough \(\Delta\xi\) could handle this case and avoid those specific micro-optimisations, which are not as relevant as other moves, but this mechanism still helps the algorithm converge faster.

Figure 6: An example of the need for long edges, where we want to find the minimum number of crossings by moving only the vertex \(v\). Obviously, some better drawing of the graph above is possible, but here we place ourselves in the case where we can only move \(v\) to another location and the rest of the graph stays put. Hence, the position \(v^{\prime}\) is near-optimal, in terms of crossing number and even for edge length.

### 4.3 Accessing facets

The aim of this subsection is to find the number of reflections, or rays, needed on average to encounter a certain proportion \(q\in[0,1]\) of the facets of the graph.
Figure 7: Two possible oscillating positions \(p_{1}\) and \(p_{2}\) for a vertex \(v\) if \(\Delta\xi=0\) and \(n_{r}=1\). The vertex \(v\), initially in position \(p_{1}\), has the vertex \(u\) as its only neighbour; if the ray has the same slope as the segment \(\overline{p_{1}p_{2}}\) and hits the edge \((u,w_{2})\), the algorithm can evaluate the position \(p_{2}\) as a valid new location. Once in \(p_{2}\), the same behavior can happen to evaluate \(p_{1}\) as its new position.

Since we are interested in an applicable bound at any step of the algorithm, we will consider some of the worst cases in terms of graph configuration. Let us begin with the following lemma:

**Lemma 4.1**.: Given \(\mathbf{G}=\langle V,E\rangle\) a graph and \(\Gamma\) a drawing of \(\mathbf{G}\), a random semi-line intersects on average \(\frac{|E|}{6}\) edges.

Proof.: Consider three points \(A\), \(B\) and \(R\) chosen at random according to the uniform distribution on \([0,1]^{2}\). Among all the possible rays originating from \(R\), the proportion of those which intersect the segment \(AB\) (_i.e._ those passing into the triangle \(ABR\)) equals \(\frac{\widehat{ARB}}{2\pi}\). Furthermore, since for any three points the sum of their three angles equals \(\pi\), it is easy to convince oneself that the average angle formed by three points is \(\frac{\pi}{3}\). This gives us the probability of \(\frac{1}{6}\) that a random ray intersects a random segment. Note that this probability actually holds for drawings on any convex surface. Considering that the vertices of \(\mathbf{G}\) are uniformly distributed on \(\mathscr{B}_{\varepsilon}\), for any edge \(e\in E\), a random ray has a probability of \(\frac{1}{6}\) to cross it. Hence, it intersects on average \(\frac{|E|}{6}\) edges.

Note that the assumption made here on the uniform distribution of the vertices is a rather strong one. In practice, as the algorithm progresses, the edges tend to be much shorter than if the vertices were placed randomly on \(\mathscr{B}_{\varepsilon}\). However, this reasoning gives us a bound valid at the start of the algorithm, which we will use throughout. This allows us to prove the following:

**Theorem 4.1**.: Given a graph \(\mathbf{G}=\langle V,E\rangle\) with \(|V|\) and \(|E|\) sufficiently large, and a drawing \(\Gamma\) of \(\mathbf{G}\), an upper bound \(P\) for the probability that \(R\) random rays cross a given facet equals: \[P=1-\left(1-\frac{|E|}{36(|V|+cr(\Gamma)-2)}\right)^{3R}\]

Proof.: Having the probability for a ray to cross an edge, one needs an estimate of the number of facets in a generic graph. Considering that we know the number of crossings of our actual \(\Gamma\), we can introduce an approximation for the number of facets. As explained before, the number of facets of a drawing \(\Gamma\) is the number of faces of its planarized version \(\Gamma^{\prime}\). Given a graph \(\mathbf{G}=\langle V,E\rangle\) and a drawing \(\Gamma\) of \(\mathbf{G}\) with, by definition, \(cr(\Gamma)\) crossings, with no more than two edges intersecting at the same point, let \(\mathbf{G}^{\prime}=\langle V^{\prime},E^{\prime}\rangle\) be the graph obtained from the planarization \(\Gamma^{\prime}\) of \(\Gamma\). We then must have \(|V^{\prime}|=|V|+cr(\Gamma)\) and \(|E^{\prime}|=|E|+2\cdot cr(\Gamma)\). From the definition of the planarization, the new number of vertices is self-evident.
For the number of edges, we can see that each crossing between 2 edges leads to 4 new edges in \(\mathbf{G}^{\prime}\). Now to count the number of faces of \(\Gamma^{\prime}\), we can, as we said, consider the worst case, namely, if all the faces are triangular. In this specific case, we can express the number of edges according to the number of faces \(|E|=\frac{3F}{2}\), with F the number of faces. Moreover, thanks to Euler's formula for planar graphs, we know that \(|V|-|E|+F=2\), which naturally leads to \(F=2|V|-4\) If we consider again that these facets are all triangular, meaning that all of the \(2(|V|+cr(\Gamma))-4\) facets of \(\mathbf{G}\) have 3 associate edges, the edges must have in average \(F=\frac{6(|V|+cr(\Gamma)-2)}{|E|}\) joint facets. To have the probability \(Q\) that one specific facet is traversed by a random ray, we multiply the probability that, among the 3 possible attached edges, \(k\) are crossed, by the probability that, for at least one of them, the ray enters the good facet after having crossed it. This give us the following probability \(Q\): \[Q=\sum_{k=0}^{3}\binom{3}{k}\left(\frac{1}{6}\right)^{k}\left(\frac{5}{6} \right)^{3-k}\left(1-\left(1-\frac{1}{F}\right)^{k}\right)\] We consider \(E\) and \(V\) sufficiently large to have independent events and apply _Bernoulli trials_. We can then rearrange the terms to have: \[Q =\sum_{k=0}^{3}\binom{3}{k}\left(\frac{1}{6}\right)^{k}\left( \frac{5}{6}\right)^{3-k}-\sum_{k=0}^{3}\binom{3}{k}\left(\frac{1}{6}\right)^{k }\left(\frac{5}{6}\right)^{3-k}\left(1-\frac{1}{F}\right)^{k}\] \[=\left(\frac{1}{6}+\frac{5}{6}\right)^{3}-\left(\frac{1}{6}\left( 1-\frac{1}{F}\right)+\frac{5}{6}\right)^{3}\] \[=1-\left(1-\frac{1}{6F}\right)^{3}\] This ending result can be interpreted as not hitting the good edge nor entering the good facet three times. To finally have the probability \(P\) for one facet to be hit by at least one of \(R\) rays, we can apply the same process and have \(P=1-(1-Q)^{R}\), which gives us the probability stated in the theorem. If we want that the proportion \(q\) of facets that are encountered, then we need to find the \(R\) such that \(P\geq q\). To have a real idea about the number of rays, for all the graphs of our database on which we applied a random layout, reaching a \(q\leq 0.1\) would require around 10 rays. ## 5 Convergence and Complexity This section gives some rather important properties of the algorithm, namely its convergence and its complexity. First of all, we will give some lemmas helping to prove the convergence theorem. **Lemma 5.1**.: Given a graph \(\mathbf{G}\) and an initial drawing \(\Gamma_{0}\), \(\exists\xi_{MAX}\) such as \(\forall n\in\mathbb{N}\), \(\xi(\Gamma_{n})\leq\xi_{MAX}\). Proof.: Since the frame inside which the vertices can move is fixed at the beginning of the algorithm and never expand during the next steps, the maximum length of an edge \(e\) is bounded by the diagonal of this frame. As a result, the energy of one edge \(e\), since it simply depends quadratically on its length, is also bounded. The maximum energy is either if both of its endpoints are in opposite corners or if they are at the same place. In addition, since the number of edges is also bounded, the total energy of \(\Gamma\) is bounded by a hypothetical value \(\xi_{MAX}\). **Lemma 5.2**.: If a vertex \(v\) is moved to a place which minimizes its local energy, the global energy of \(\Gamma\), i.e. \(\xi(\Gamma)\), can only decrease. 
Proof.: The local energy of \(v\) is defined by \(\xi(v)=\sum_{e\in E_{v}}\xi(e)\). When we move the vertex \(v\) to another location, the energy of the edges not attached to \(v\), \(\overline{\xi}(v)=\sum_{e\in E\setminus E_{v}}\xi(e)\), is unchanged. Since \(\xi(\Gamma)=\xi(v)+\overline{\xi}(v)\), if \(\xi(v)\) is decreased by the repositioning of \(v\), \(\xi(\Gamma)\) can only decrease too.

**Lemma 5.3**.: With \(\Delta\xi>0\), when we move only one vertex \(v\), there exists a number of steps \(n_{s}\) after which either \(v\) is blocked in a local minimum or a position which decreases the number of crossings of \(v\) is found.

Proof.: Since the energy of \(\Gamma\) is bounded and the use of \(\Delta\xi\) makes it decrease by fixed quantized steps, the number of steps before reaching an energy of \(0\) must be finite. We can even determine that its value equals \(n_{s}=\left\lceil\frac{\xi_{MAX}}{\Delta\xi}\right\rceil\).

**Lemma 5.4**.: Given a graph \(\mathbf{G}\) and an initial drawing \(\Gamma_{0}\), \(\forall n\in\mathbb{N}\), \(cr(\Gamma_{n})\geq cr(\Gamma_{n+1})\).

Proof.: The position returned by the \(\mathtt{move}(\mathtt{v},\Gamma)\) function cannot have a worse crossing number than the initial position \(p\) of \(v\): since the crossing number is the first comparison criterion, if every potential position obtained by the rays has a higher crossing number, \(p\) itself is returned. Thus, after each iteration of the main loop in the RRGD function, the crossing number has decreased or remained stationary.

We can now propose the following convergence theorem.

**Theorem 5.1**.: Given a graph \(\boldsymbol{G}\), and an initial drawing \(\Gamma_{0}=\Gamma\), the algorithm always converges and stops.

Proof.: For each application of the global movement function, one of these outcomes can occur: 1. either \(cr(\Gamma)\) remains the same and \(\xi(\Gamma)\) decreases, 2. or \(cr(\Gamma)\) decreases and \(\xi(\Gamma)\) is set to another value, still bounded by \(\xi_{MAX}\). First, case 1 can only occur a finite number of times before we are forced to enter case 2 or to stop the algorithm completely, since the decrease of the energy is discretized by the energy delta \(\Delta\xi\), as shown by Lemmas 5.3 and 5.2. Second, the number of times we enter case 2 is also finite: not only can the crossing number not increase, by Lemma 5.4, but once it reaches the theoretical crossing number of \(\boldsymbol{G}\) it cannot go below it (obviously, we will almost always stop before that). Moreover, since \(\xi(\Gamma)\) is bounded by Lemma 5.1, we also cannot enter an infinite loop of case 1.

**Proposition 5.1**.: Given a graph \(\boldsymbol{G}=\langle V,E\rangle\) and one drawing \(\Gamma_{n}\), \(\Gamma_{n+1}\) can be computed in \(O(|E|^{2}Rn_{r})\), with \(R\) the number of rays cast for each vertex and \(n_{r}\) their lengths.

Proof.: To move one node \(v\), we cast \(R\) rays, each of them reflecting or crossing \(n_{r}\) times. Since the computation of the intersections between a ray and the edges is in \(O(|E|)\) and the computation of the opacity depends on the degree of \(v\) (which we denote \(k\)), the complexity of the move function is in \(O(k|E|Rn_{r})\). Calling this for at most every node of \(\boldsymbol{G}\) leads to a complexity of \(O(|E|^{2}Rn_{r})\) to go from \(\Gamma_{n}\) to \(\Gamma_{n+1}\).
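The acceptance rule underlying this convergence argument can be summarized in a few lines; the sketch below is ours, with the edge energy taken as the squared Euclidean length, in line with the quadratic dependence invoked in Lemma 5.1:

```python
def edge_energy(p, q):
    """Energy of an edge drawn between points p and q (squared Euclidean length)."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def accept_move(old_crossings, new_crossings, old_energy, new_energy, delta_xi):
    """A candidate position is kept if it lowers the crossing number, or keeps it
    unchanged while lowering the local energy by at least delta_xi."""
    if new_crossings < old_crossings:
        return True
    return new_crossings == old_crossings and old_energy - new_energy >= delta_xi
```

Because every accepted move either strictly decreases \(cr(\Gamma)\) or decreases \(\xi(\Gamma)\) by at least \(\Delta\xi\), the two finite budgets used in the proof of Theorem 5.1 apply directly.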
## 6 Performances and experiments

### Testing process

In order to assess the performance of our algorithm, we tested it on instances of graphs present in the dataset of the graph drawing community 2 and also on random 3-connected graphs constructed by starting from \(K_{4}\) and iteratively inserting new edges between the middle of two previous ones. The final experiments (Figure 10) have been done on 500 graphs of each of the main classes in this dataset, mainly _NORTH_, _ROME_ and _DAG_. We compared our results with the best Rectilinear Crossing Minimization algorithm known to us, _i.e._ StressMinimization, implemented in _OGDF_. Note that StressMinimization has been chosen over SpringEmbedderKK since it produces on average fewer crossings. However, before comparing head-to-head with StressMinimization, we first investigate the empirical influence of our parameters on the results. Note that all the results shown are within the _three-sigma limit_ (_i.e._ values further from the mean by more than three times the standard deviation are omitted).

Footnote 2: Graph drawing benchmarks from graphdrawing.org can be found at [http://www.graphdrawing.org/data.html](http://www.graphdrawing.org/data.html)

Note that we ran all the experiments on an Intel Core i7-10850H processor running at 2.71GHz, with 16GB of RAM. The algorithm has been implemented in Python 3.7.10 and run with version 7.3.5 of PyPy.

### Drawing initialization

Our algorithm needs an existing drawing to start from. In the default behavior, the x and y coordinates of the vertices are initialized randomly in preset intervals corresponding respectively to the width and the height of the visualization window. We tried other simple initial configurations such as the circle layout (where vertices are trivially placed uniformly on the boundary of a circle according to their indices). We also considered a quick pass with a custom force-directed layout (this pass is just a few iterations implementing only Hooke's and Coulomb's laws, nothing comparable to the StressMinimization algorithm mentioned above). However, since the goal of the initial layout is only to shorten the total computation time of the algorithm, we must ensure that the initialization is not too "powerful" for our algorithm, meaning that the final result does not depend on it. An overview of the influence of these layouts is shown in Figure 8. In addition, to make sure of this, we performed statistical tests (_i.e._ a _two-sample t-test_ with a threshold of 1%) on the hypotheses that the average crossing number is the same for every starting layout. None of these hypotheses was rejected.

### Energy delta

The parameter \(\Delta\xi\) provides a trade-off between the computation time and the refinement of the solution. The smaller it is, the more carefully the algorithm moves the vertices. On the other hand, with a bigger \(\Delta\xi\), the algorithm does not bother to move vertices bit by bit and can move on to other vertices whose displacement is more impactful. The aim here is to find a good balance between these two aspects.

### Prohibition window

The prohibition window \(\Upsilon\) has a similar role to \(\Delta\xi\) in the behavior of the algorithm, but in a more direct way. The smaller it is, the more the algorithm spends time on specific vertices. With a large \(\Upsilon\), the vertices move more frequently, which allows a stable state to be found much more quickly.
Figure 8: Plots showing, for the three classes of graphs and for the different initialization layouts, the resulting number of crossings (left) and the computation time (right).

Note that an excessively large \(\Upsilon\) badly affects the process, diminishing the number of vertices that can be evaluated in an iteration. Eventually, a \(\Upsilon\geq|V|\) allows the vertices to move only once, which obviously gives rather bad results. These considerations can be observed empirically. Figure 9 shows the inverse proportionality between the number of crossings and the time needed. Note that in this figure, the size of the prohibition window is expressed in proportion to the total node number.

Figure 9: Plots showing the number of crossings (top) and the execution time (bottom) according to the energy delta (left) and the length of the prohibition window expressed as a ratio of the node number of the graphs (right).

### Crossing behavior

The influence of the opacity mode is more subtle than that of the energy delta and the prohibition window. Deterministic reflections tend to consolidate satisfying configurations of \(\Gamma\). On the other hand, randomized ones allow, as we said, accepting a locally worse choice in order to find a better one afterwards. This results in fewer edge crossings overall, as we see in Figure 10. In the latter, we compared the crossings obtained from our two versions of opacity behavior with the ones obtained from the StressMinimization algorithm. Following Section 4.3, we chose \(R=10\) to enter on average a tenth of the facets of the graph under consideration. In addition, we also chose \(n_{r}=10\), to potentially enter all the facets (considering that the rays do not reflect too early and that two rays do not enter the same facet). The two versions have significantly better results than the existing algorithm in _OGDF_. Moreover, the number of crossings from the randomized approach is, as expected, lower than from the deterministic one. These results come at a certain cost: the computation time. Figure 10 shows this execution time for our two versions. The time is expressed as a function of the number of edges of the graph, since this is the main parameter in the complexity of our algorithm. Again, the results match our expectations: the deterministic version converges faster than the randomized one. This is simply due to the fact that considering the opacity only as a bias allows the algorithm to go beyond some local minima, which pushes back the convergence of the system in exchange for better solutions. Note that the figure also shows that the total convergence time is clearly polynomial. Finally, when the parameters \(R\) and \(n_{r}\) are reduced, the resulting crossings do not deteriorate that much. However, the computation time, which strongly depends on these values, decreases sharply, down to around one second for the bigger graphs, while the resulting crossings remain below those of StressMinimization.

### Sources

For testing purposes, full sources are available upon request.

## 7 Conclusion

In this paper we proposed a new heuristic for the _Rectilinear Crossing Minimization Problem_ based on the basic principle of iteratively moving vertices in the plane. The main novelty is the moving mechanism, which is based on the idea of casting rays against the edges of the graph. The new algorithm has a competitive complexity compared to the other vertex-moving algorithms. We also discussed various ways to tune the algorithm's parameters and how to trade precision for time.
We studied some geometrical properties connected to our algorithm in order to avoid the behaviors that can lead to edge cases. Benchmarks show that the proposed algorithm produces fewer crossings than the best competitor (to the best of our knowledge) whose implementation is available. Several improvements of the algorithm are possible: the first one being the parallelization of the code to speed up the execution. A second option would be to introduce better data structures to improve the complexity of finding the intersections between the edges and a ray. Currently, even though it is not fully detailed in the paper, we already optimized this step by sorting the edges by their left endpoint and applying a dichotomic search on them, but a cleverer algorithmic trick could possibly perform better. Finally, it would be interesting to compare our algorithm with other recent ones based on the same principle. Unfortunately, we would have to reimplement one from its paper, since no algorithm of this kind is available in _OGDF_, and no code is available elsewhere.

Figure 10: Plots showing, for the four classes of graphs and for the different algorithms (deterministic and randomized, referring to how the opacity is considered, with \(R=10\) and \(n_{r}=10\) for each of these, and StressMinimization), the resulting number of crossings (left) and the computation time according to the number of edges of the considered graph (right).

Figure 11: Plots showing, for the four classes of graphs and for different values of \(R\) and \(n_{r}\), the resulting number of crossings (left) and the computation time according to the number of edges of the considered graph (right), again compared with StressMinimization.
2306.00690
Quantum many-body scars in spin-1 Kitaev chain with uniaxial single-ion anisotropy
To establish a solid-state-based framework for the coexistence of quantum many-body scars and quantum criticality, we investigate the spin-1 Kitaev chain with uniaxial single-ion anisotropy (SIA). In the subspace with uniform $\mathbb{Z}_2$ gauge fields, this model can be exactly mapped to the spin-1/2 effective detuned PXP Hamiltonian, where the SIA plays a role of the static detuning term. The quench dynamics starting from the product states is symmetric between positive and negative values of the SIA, while a quantum phase transition from the Kitaev spin liquid to the dimer phase only occurs at the critical point with a negative $D_c$, implying the spontaneous breaking of the translational symmetry. We find that the coherent oscillations of quantum fidelity and certain local observables are sustained against small SIA perturbations in a quantum quench from special initial states. While the oscillation amplitudes of these observables decay with time as the SIA strength is increased, the system completely thermalizes upon approaching the critical point. In contrast, the initial polarized state, which shows an absence of revivals of quantum fidelity, will exhibit long revivals for $D<D_c$. Finally, we investigate the evolution of phase boundaries of the Kitaev spin liquid and dimer phase by introducing Heisenberg interactions, which spoil the $\mathbb{Z}_2$ gauge fields. A complete phase diagram is given by the infinite time-evolving block decimation method and the ground state properties of each phase are accurately captured by various spin correlations. Our work opens the door to understanding exotic connections between many-body scars and quantum criticality in systems with higher spins.
Wen-Yi Zhang, Ya-Nan Wang, Dongchang Liu, Jie Ren, Jia Li, Ning Wu, Andrzej M. Oleś, Wen-Long You
2023-06-01T14:00:59Z
http://arxiv.org/abs/2306.00690v2
# Quantum many-body scars in spin-1 Kitaev chain with uniaxial single-ion anisotropy ###### Abstract To establish a solid-state-based framework for the coexistence of quantum many-body scars and quantum criticality, we investigate the spin-1 Kitaev chain with uniaxial single-ion anisotropy (SIA). In the subspace with uniform \(\mathbb{Z}_{2}\) gauge fields, this model can be exactly mapped to the spin-1/2 effective detuned PXP Hamiltonian, where the SIA plays a role of the static detuning term. The quench dynamics starting from the product states is symmetric between positive and negative values of the SIA, while a quantum phase transition from the Kitaev spin liquid to the dimer phase only occurs at the critical point with a negative \(D_{c}\), implying the spontaneous breaking of the translational symmetry. We find that the coherent oscillations of quantum fidelity and certain local observables are sustained against small SIA perturbations in a quantum quench from special initial states. While the oscillation amplitudes of these observables decay with time as the SIA strength is increased, the system completely thermalizes upon approaching the critical point. In contrast, the initial polarized state, which shows an absence of revivals of quantum fidelity, will exhibit long revivals for \(D<D_{c}\). Finally, we investigate the evolution of phase boundaries of the Kitaev spin liquid and dimer phase by introducing Heisenberg interactions, which spoil the \(\mathbb{Z}_{2}\) gauge fields. A complete phase diagram is given by the infinite time-evolving block decimation method and the ground state properties of each phase are accurately captured by various spin correlations. Our work opens the door to understanding exotic connections between many-body scars and quantum criticality in systems with higher spins. ## I Introduction In the past decade, there has been significant progress in understanding out-of-equilibrium dynamics of isolated quantum systems [1]. The eigenstate thermalization hypothesis (ETH) [2; 3; 4; 5; 6; 7] has been regarded as a cornerstone of contemporary statistical mechanics, which states that in a thermalizing system, the expectation value of a generic local observable in individual eigenstates should be equivalent to its microcanonical average. Despite the significant success of ETH in explaining thermalization of chaotic systems, instances of ergodicity breaking are continually being discovered. The integrable systems [8; 9; 10; 11; 12; 13] and many-body localization [14; 15; 16; 17; 18; 19; 20] are the most noteworthy exceptions. The strong ergodicity breaking phenomena in counter-examples, where most of the eigenstates violate the ETH, can be ascribed to the presence of conserved quantities [21]. In an integrable system, the number of conserved quantities is equal to the number of degrees of freedom [22]. On the other hand, many-body localization occurring in systems where disorder and interactions prevent the system from thermalizing can be also described by the emergence of an extensive set of quasi-local integrals of motions [23]. Recently, a Rydberg-atom quantum simulator [24] revealed the emergence of a new type of ETH-violating eigenstates in certain nonintegrable quantum many-body systems, dubbed quantum many-body scar (QMBS) states [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. 
Some specific low-entanglement states in a many-body quantum system are exceptional in that they violate the ETH and can retain quantum coherence for long times, even when the system is chaotic and thermalizing [36]. To be specific, the number of QMBSs is exponentially smaller than the Hilbert space dimension. The discovery of QMBSs has opened up a new paradigm for studying unusual nonequilibrium phenomena including many-body revivals and nonthermal stationary states [37; 38]. Soon the scarred states have been observed in a variety of physical systems, including _inter alia_, interacting spin chains [39; 40], cold atom systems [41; 42; 43], superconducting qubits [44; 45], etc. In parallel with exciting experimental advances, theoretical studies have shown that QMBSs are not related to the usual symmetries [46]. Known systems that host QMBS states also include the Affleck-Kennedy-Lieb-Tasaki (AKLT) model [47; 48], the spin-1 XY model [49], and the generalized Fermi-Hubbard model [50]. The associated weak ergodicity breaking not only challenges the validity of ETH but also poses a different scenario of nonthermal dynamics. Later it was pointed out theoretically that the Rydberg experiment can be described by the one-dimensional (1D) chain of spin-1/2 degrees of freedom [51; 52; 53; 54; 55; 56], where the spin-up state \(|1\rangle\) corresponds to a Rydberg atom occupying an excited state and the spin-down state \(|0\rangle\) denotes an atom in the ground state. Such a spin-1/2 spin chain, known as the PXP Hamiltonian and resulting from the first-order Schrieffer-Wolff transformation applied to a tilted Ising chain, is described by \[\hat{H}_{\rm PXP}=\sum_{i=1}^{N}P_{i-1}X_{i}P_{i+1}, \tag{1}\] where \(N\) is the number of sites, \(X=|0\rangle\langle 1|+|1\rangle\langle 0|\) and \(P=|0\rangle\langle 0|\) is the projector onto the ground state, ensuring that the nearby atoms are not simultaneously in the excited state. Such Rydberg blockade induced kinetic constraint is responsible for the atypical dynamics of QMBS states. When the system is initialized at time \(t=0\) in the product state \(|\psi(0)\rangle\equiv|\mathbb{Z}_{k}\rangle\left(k=1,2,3,4\right)\), namely, \[|\mathbb{Z}_{1}\rangle = |0000\cdots 00\rangle,\ \ \ \ |\mathbb{Z}_{2}\rangle=|010101\cdots 01\rangle,\] \[|\mathbb{Z}_{3}\rangle = |001001\cdots 001\rangle,|\mathbb{Z}_{4}\rangle=|00010001\cdots 0 001\rangle, \tag{2}\] the system then follows the evolution governed by the PXP Hamiltonian, \(|\psi(t)\rangle=\exp(-i\hat{H}_{\rm PXP}t)|\psi(0)\rangle\). It was noted that the quantum quench from either \(|\mathbb{Z}_{2}\rangle\) or \(|\mathbb{Z}_{3}\rangle\) exhibits periodic revivals in the quantum fidelity \[F(t)=|\langle\psi(0)|\psi(t)\rangle|^{2}, \tag{3}\] while \(|\mathbb{Z}_{1}\rangle\) or \(|\mathbb{Z}_{4}\rangle\) thermalize under time evolution. The observed oscillations and apparent nonergodic dynamics are due to the existence of equal spacing of the QMBS eigenstates [57]. Considering the experimental realization and the important role of the emergence of QMBS in the PXP model, the intensive study of the PXP model has been the subject of a separate thread of investigation of much current interest [58; 59; 60; 61; 62]. In fact, this effective model has a long history dating at least as far back as an effective Hamiltonian for the tilted Bose-Hubbard model [63]. 
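To make the quench protocol and the fidelity revivals concrete, the following minimal exact-diagonalization sketch (ours, not taken from any of the cited works; NumPy is assumed) builds the PXP Hamiltonian of Eq. (1) in the blockaded subspace and evaluates \(F(t)\) for the \(|\mathbb{Z}_{2}\rangle\) initial state:

```python
import numpy as np
from itertools import product

def pxp_fidelity(N=12, times=np.linspace(0.0, 30.0, 601)):
    """F(t) = |<Z2| exp(-i H_PXP t) |Z2>|^2 for the periodic PXP chain of Eq. (1)."""
    # Constrained (Rydberg-blockaded) basis: no two adjacent sites excited.
    basis = [b for b in product((0, 1), repeat=N)
             if all(not (b[i] and b[(i + 1) % N]) for i in range(N))]
    index = {b: k for k, b in enumerate(basis)}
    dim = len(basis)

    # P_{i-1} X_i P_{i+1}: site i is flipped only when both neighbours are in |0>.
    H = np.zeros((dim, dim))
    for k, b in enumerate(basis):
        for i in range(N):
            if b[(i - 1) % N] == 0 and b[(i + 1) % N] == 0:
                flipped = list(b)
                flipped[i] ^= 1
                H[index[tuple(flipped)], k] += 1.0

    psi0 = np.zeros(dim)
    psi0[index[tuple(i % 2 for i in range(N))]] = 1.0   # |Z2> = |0101...01>

    # Spectral decomposition gives <Z2|psi(t)> for all times at once.
    evals, evecs = np.linalg.eigh(H)
    weights = np.abs(evecs.T @ psi0) ** 2
    overlaps = weights @ np.exp(-1j * np.outer(evals, times))
    return times, np.abs(overlaps) ** 2
```

For \(N=12\) the constrained dimension is only 322, and the returned fidelity shows the near-periodic revivals characteristic of the \(|\mathbb{Z}_{2}\rangle\) quench, whereas the same construction started from \(|\mathbb{Z}_{1}\rangle\) does not.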
The PXP model has been studied in various other contexts including Fibonacci anyon chains [64; 65; 66], Ising models on dimer ladders [67; 68], U(1) lattice gauge theory in its quantum link [69; 70; 71; 72] dipole-conserving Hamiltonians [73], the quantum Hall effect on a thin torus at filling \(\nu\)=1/3 [74], etc. Meanwhile, the PXP model was extended to Floquet Hamiltonians [75], higher spins [76], and higher dimensions [77]. The PXP model can be deduced from the biaxial Ising model with both transverse and longitudinal fields at zero detuning [78] and the Bose-Hubbard model at resonance [79]. It was also claimed that there is an intimate relation between QMBS and quantum criticality [80] or quantum integrability [81]. Remarkably, the 1D PXP chain is shown to be embedded in the spin-1 Kitaev model [82; 83], highlighting a solid-state-based realization of the PXP model. The celebrated Kitaev model is renowned as a prototype model of quantum spin liquid (QSL), which hosts massive long-range entanglement and fractional quasiparticles from localized spins described by bosonic/fermionic spinons and \(\mathbb{Z}_{2}\) gauge fields [84]. Solid-state material realizations of the bond-dependent Kitaev interactions with \(S\)=1/2 local moments have vitalized the research in QSLs [85; 86], where strong spin-orbit coupling in a strongly correlated Mott insulator plays an essential role. This poses \(4d\) and \(5d\) transition-metal compounds are proposed to be candidate materials, such as triangular lattice \(\rm YbMgGaO_{4}\)[87], \(\rm 1T\)-\(\rm TaSe_{2}\)[88] and \(\rm NaYbS_{2}\)[89], kagome lattice \(\rm ZnCu_{3}(OH)_{6}Cl_{2}\)[90] and \(\rm Na_{4}Ir_{3}O_{8}\)[91], honeycomb lattice \(\alpha\)-\(\rm RuCl_{3}\)[92], \(\rm H_{3}LiIr_{2}O_{6}\)[93], \(\rm Cu_{2}IrO_{3}\)[94], \(\rm RuBr_{3}\)[95] and \(\rm BaCo_{2}(AsO_{4})_{2}\)[96], pyrochlore lattice Ce\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\)[97] and \(\rm Ba_{3}Yb_{2}Zn_{5}O_{11}\)[98]. After the ground-breaking proposal for realizing the higher-spin analogs of the Kitaev interactions [99], a number of materials with strong Hund's coupling among two electrons in \(e_{g}\)-orbitals of transition metal ions and strong spin-orbit coupling of anions have emerged as potential candidates for the \(S=1\) Kitaev model. Recently the importance of studying the higher-spin Kitaev physics has attracted a lot of attention. Both experimental and numerical analyses have been indispensablely carried out to explore the higher-spin Kitaev physics, such as \(S=1\)[100; 101; 102; 103; 104; 105], \(S=3/2\)[106; 107; 108], and even \(S=2\) systems [109]. It is noteworthy that non-Kitaev interactions widely exist in candidate materials, which is a chief obstacle of keeping the system away from the pure Kitaev limit. The ferromagnetic Heisenberg interactions are generated from superexchange paths together with Kitaev interactions, in parallel with the antiferromagnetic Heisenberg term from direct-exchange paths. For Mott insulators with two or more atoms per site, the direct on-site interactions can give rise to a nonlinear term \(\propto D\sum_{j}(S_{j}^{z})^{2}\) for \(S\geq 1\), where \(D\) is the so-called uniaxial single-ion anisotropy (SIA) constant. Recently, theoretical [110; 111; 112] and experimental [113] studies on the Kitaev model with additional SIA have attracted increasing attention. 
In this work, we will show that the static detuning in the PXP model, which describes the static frequency difference between the ground and excited states, can be mimicked by the additional SIA in the spin-1 Kitaev model, which normally stems from zero-field splitting due to a crystal-field anisotropy. Upon varying the strength of the SIA, a corresponding second-order phase transition will occur with a translational symmetry breaking. A comprehensive study of the phase diagram has been conducted by incorporating significant Heisenberg interactions. In the numerical calculation, we employ the exact diagonalization (ED) method, the time-evolving a matrix product state (MPS) with matrix product operators (MPOs) [114] based on ITensor [115] and the infinite time-evolving block decimation (iTEBD) algorithm [116]. The remainder of this paper is organized as follows: In Sec. II, we present the spin-1 Kitaev model with SIA (the KD model), and deduce the effective spin-1/2 detuned PXP model in the ground-state manifold. The quantum many-body scars in the spin-1 KD model are studied in detail. In Sec. III, we investigate the quantum criticality in the KD model, and find the characteristics of the dimer phase. Under the cooperative effects of the single-ion anisotropy and Heisenberg interactions (the KHD model), we reveal the rich quantum phase diagram of KHD model in Sec. IV. The summary and conclusion are given in Sec. V. Spin-1 Kitaev chain with uniaxial single-ion anisotropy In this work, we consider a spin-1 Hamiltonian composed of the Kitaev interaction and SIA, given by \[\hat{H}_{\rm KD}\!=\!\sum_{j=1}^{N/2}\left(K_{2j-1}S_{2j-1}^{x}S_{2j}^{x}\!+\!K _{2j}S_{2j}^{y}S_{2j+1}^{y}\right)\!+\!\sum_{j=1}^{N}\!D_{j}(S_{j}^{z})^{2}, \tag{4}\] where \(K_{j}\) parameterizes the strength of the bond-dependent Kitaev exchange coupling between two neighbouring sites \(\langle j,j+1\rangle\), and \(D_{j}\) denotes the amplitude of the SIA at the \(j\)-th site. \(S_{j}^{a}\) (\(a=x,y,z\)) is the \(a\)-component of the spin-1 operator at the \(j\)-th site among total \(N\) sites, obeying the SU(2) algebra, i.e., \([S_{i}^{a},S_{j}^{b}]=i\delta_{ij}\epsilon_{abc}S_{j}^{c}\), with the antisymmetric tensor \(\epsilon_{abc}\) and \((\mathbf{S}_{j})^{2}=S(S+1)=2\). We will work with a special spin-1 representation, i.e., \[|x\rangle = \frac{1}{\sqrt{2}}(|-1\rangle\!-\!|1\rangle),\quad|y\rangle\!=\! \frac{i}{\sqrt{2}}(|-1\rangle\!+\!|1\rangle),\quad|z\rangle\!=\!|0\rangle, \tag{5}\] where \(|m\rangle\) is the eigenstate of the spin operator \(S^{z}\) with eigenvalues \(m=\!-1\), \(0\), \(1\). In such a representation, we have \(S_{bc}^{a}=-i\epsilon_{abc}\) and \(\{S^{x},S^{y},S^{z}\}\) are given by \[\left(\begin{array}{ccc}0&0&0\\ 0&0&-i\\ 0&i&0\end{array}\right),\left(\begin{array}{ccc}0&0&i\\ 0&0&0\\ -i&0&0\end{array}\right),\left(\begin{array}{ccc}0&-i&0\\ i&0&0\\ 0&0&0\end{array}\right). \tag{6}\] The corresponding site parity matrices are defined as \(\Sigma_{j}^{a}\equiv e^{i\pi S_{j}^{a}}\!=1-2(S_{j}^{a})^{2}\) and become diagonal, \[\left(\begin{array}{ccc}1&0&0\\ 0&-1&0\\ 0&0&-1\end{array}\right),\left(\begin{array}{ccc}-1&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right),\left(\begin{array}{ccc}-1&0&0\\ 0&-1&0\\ 0&0&1\end{array}\right). \tag{7}\] It has been revealed that different Ising interactions on odd and even bonds in Eq. 
(4) can be rewritten into a similar form through a unitary transformation on the even sites [117, 83]: \[U=\prod_{j}\exp(i\pi S_{2j}^{x})\exp\left(i\frac{\pi}{2}S_{2j}^{z}\right), \tag{8}\] which gives \(US_{2j}^{x}U^{\dagger}=S_{2j}^{y}\), \(US_{2j}^{y}U^{\dagger}=S_{2j}^{x}\), and \(US_{2j}^{z}U^{\dagger}=-S_{2j}^{z}\), as well as \(U|x\rangle=|y\rangle\), \(U|y\rangle=|x\rangle\), \(U|z\rangle=-|z\rangle\). Note that the order of rotations about \(x\)- and \(z\)- axes in Eq. (8) is essential as they do not commute. After the unitary transformation, the Kitaev exchange couplings in Eq. (4) take a translation-invariant form \[\tilde{H}_{\rm K}=\sum_{j=1}^{N}K_{j}S_{j}^{x}S_{j+1}^{y}. \tag{9}\] It is easy to see that the SIA term remains in its original form and the Hamiltonian (4) can be rewritten \[\tilde{H}_{\rm KD}=\sum_{j=1}^{N}K_{j}S_{j}^{x}S_{j+1}^{y}+D_{j}(S_{j}^{z})^{2}. \tag{10}\] Note that the sign of the Kitaev interactions is still under debate with conflicting results from theoretical and experimental studies [118, 119]. Hereafter the uniform couplings with \(K_{j}=1\) and \(D_{j}=D\) (\(\forall j\)) are assumed unless otherwise specified. Under the rotation (8), the local bond parity operators are defined by \[\hat{W}_{j}=\Sigma_{j}^{y}\,\Sigma_{j+1}^{x}. \tag{11}\] One can readily find that \(\tilde{W}_{j}\) is invariant by inspecting \([\hat{W}_{j},\tilde{H}_{\rm KD}]=0\). As the eigenvalues of \(\Sigma_{j}^{a}\) in Eq. (7) are \(\pm 1\), the eigenvalues of \(\hat{W}_{j}\) are related to \(\mathbb{Z}_{2}\)-valued invariants, i.e., \(w_{j}=\pm 1\). It is straightforward to deduce from Eq. (7) that for a pair of nearest neighbor sites \(\langle j,j\!+\!1\rangle\), total \(3\!\times\!3=9\) allowed states can be distinguished into the \(w_{j}=1\) sector spanned by \(|xy\rangle\), \(|xz\rangle\), \(|yx\rangle\), \(|zy\rangle\), \(|zz\rangle\) and the \(w_{j}=-1\) sector spanned by \(|xx\rangle\), \(|yy\rangle\), \(|yz\rangle\), \(|zx\rangle\). Hence, the whole Hilbert space \(\mathcal{H}\) can be decomposed into \(2^{N}\) dynamically disconnected Krylov subspaces of unequal sizes characterized by \(\vec{w}=\{w_{1},w_{n},\cdots,w_{N}\}\) as \[\mathcal{H}=\bigoplus_{n=1}^{2^{N}}\mathcal{K}_{n}. \tag{12}\] The Krylov subspace \(\mathcal{K}_{n}\) is spanned by \[\{\mathcal{K}_{n}\}\equiv{\rm Span}\{|\psi_{n}\rangle,\tilde{H}_{\rm KD}|\psi_ {n}\rangle,\tilde{H}_{\rm KD}^{2}|\psi_{n}\rangle,\cdots\}, \tag{13}\] where \(|\psi_{n}\rangle\) is the so-called root state, which is a product state having explicit \(\mathbb{Z}_{2}\) symmetries. We have identified the ground state of spin-1 Kitaev chain lies within the flux-free sector, i.e., \(\vec{w}=\{1,1,\cdots,1\}\)[82]. In such a constrained Hilbert space, there is one-to-one mapping between base configurations \(\{\tilde{\mathcal{K}}_{S=1}\}\) of Eq. (9) within the flux-free sector and the configurations \(\{\mathcal{K}_{S=1/2}\}\) of Eq. (1) with nearest neighbor exclusion. The rule for constructing the mapping is simple. 
The one-to-one mapping between the 5 allowed two-site configurations for a pair of nearest neighbor sites \(\langle j,j\!+\!1\rangle\) and spin-\(1/2\) degree of freedom for the bond center \(j+1/2\) is given by [120] \[|\cdots zz\cdots\rangle_{j,j+1} \leftrightarrow|\cdots\downarrow\downarrow\downarrow\cdots\rangle_{j- \frac{1}{2},j+\frac{1}{2},j+\frac{3}{2}},\] \[|\cdots yx\cdots\rangle_{j,j+1} \leftrightarrow|\cdots\downarrow\uparrow\downarrow\cdots\rangle_{j- \frac{1}{2},j+\frac{1}{2},j+\frac{3}{2}},\] \[|\cdots zy\cdots\cdots\rangle_{j,j+1} \leftrightarrow|\cdots\downarrow\downarrow\uparrow\cdots\rangle_{j- \frac{1}{2},j+\frac{1}{2},j+\frac{3}{2}},\] \[|\cdots xz\cdots\rangle_{j,j+1} \leftrightarrow|\cdots\uparrow\downarrow\downarrow\cdots\rangle_{j- \frac{1}{2},j+\frac{1}{2},j+\frac{3}{2}},\] \[|\cdots xy\cdots\rangle_{j,j+1} \leftrightarrow|\cdots\uparrow\downarrow\uparrow\cdots\rangle_{j- \frac{1}{2},j+\frac{1}{2},j+\frac{3}{2}}. \tag{14}\] It is worthy noting that the prime lattice of the spin-1 Kitaev chain is defined on the sites \(\{j\}\), while the dual lattice of spin-1/2 PXP model lives on the linking bonds at sites \(\{j+1/2\}\). This mapping from sites to bonds includes links to the two surrounding sites and vice verse, which becomes subtle for open boundary conditions. As an example, the four product states given by Eq. (2) can be mapped to the following states in \(\{\tilde{\mathcal{K}}_{S=1}\}\): \[|\widetilde{\mathbb{Z}}_{1}\rangle = |zzzz\cdots zz\rangle,\qquad|\widetilde{\mathbb{Z}}_{2}\rangle=| xyxyxyxy\cdots xy\rangle,\] \[|\widetilde{\mathbb{Z}}_{3}\rangle = |yxxyxz\cdots yxz\rangle,|\widetilde{\mathbb{Z}}_{4}\rangle=|yxzzyxzzz \cdots yxzz\rangle. \tag{15}\] The simplest root configuration in \(\{\tilde{\mathcal{K}}_{S=1}\}\) is the product state \(|\widetilde{\mathcal{Z}}_{1}\rangle\), which is the ground state in the \(D\to\infty\) limit, and the Hilbert space of this sector can be constructed by successively applying the Hamiltonian on this root state, i.e., \[\{\tilde{\mathcal{K}}_{S=1}\}\equiv\mathrm{Span}\{|\widetilde{ \mathcal{Z}}_{1}\rangle,\tilde{H}_{\mathrm{K}}|\widetilde{\mathcal{Z}}_{1} \rangle,\tilde{H}_{\mathrm{K}}^{2}|\widetilde{\mathcal{Z}}_{1}\rangle,\cdots\}. \tag{16}\] The corresponding dimension \(d\) of the flux-free sector is proven to be a Lucas number [83], i.e., \(d=F_{N-1}+F_{N+1}\), where \(F_{\ell}\) is the \(\ell\)th Fibonacci number. More precisely, \(d=g^{N}+g^{-N}\) with \(g=(1+\sqrt{5})/2\) being the golden ratio. This exponentially large subspace belongs to the largest Krylov subspace among the exponential number of Krylov subspaces, implying strong fragmentation of the Hilbert space. The graphical representation of the constrained Hilbert space in the \(\vec{w}{=}\{1,1,\cdots,1\}\) subspace is schematically shown in Fig. 1 for \(N=6\). The vertices in the 18-dimensional hypercube is uniquely labeled by the connected configurations (16), which have been arranged by the action of the Kitaev Hamiltonian \(\tilde{H}_{\mathrm{K}}\) on the product state \(|\cdots zzz\cdots\rangle\). The process of bond converting \(|\cdots zz\cdots\rangle_{j,j+1}\leftrightarrow|\cdots yx\cdots\rangle_{j,j+1}\) under the action of \(\tilde{H}_{K}\) corresponds to the spin flip \(|\cdots 0\cdots\rangle_{j+1/2}\leftrightarrow|\cdots 1\cdots\rangle_{j+1/2}\) in \(\{\mathcal{K}_{S=1/2}\}\). 
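As an illustration of this site-to-bond dictionary of Eq. (14), here is a minimal sketch (ours; the convention that the dual spin-1/2 sits on the bond to the right of each site is an assumption about labeling) converting a flux-free spin-1 configuration into its spin-1/2 counterpart:

```python
def kitaev_to_pxp(config):
    """Map a flux-free spin-1 configuration (a string over 'x', 'y', 'z' on a
    periodic chain) to the dual spin-1/2 string of Eq. (14): the bond (j, j+1)
    carries a 1 exactly when the pair reads 'yx', and a 0 otherwise."""
    N = len(config)
    return ''.join('1' if (config[j], config[(j + 1) % N]) == ('y', 'x') else '0'
                   for j in range(N))

# The root states of Eq. (15) reproduce Eq. (2), up to a cyclic relabeling of the dual lattice:
print(kitaev_to_pxp('zzzzzz'))   # 000000 -> |Z1>
print(kitaev_to_pxp('xyxyxy'))   # 010101 -> |Z2>
print(kitaev_to_pxp('yxzyxz'))   # 100100 -> |Z3>, shifted by one bond
```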
In this regard, the spin-1 Kitaev chain with periodic boundary conditions can be exactly mapped to a single qubit-flip model represented by the effective spin-1/2 PXP model in Eq. (1). Remarkably, we find the ground state remains in the flux-free sector even in the presence of the SIA. The action of the SIA term on the active bases yields \[D\big[(S_{j}^{z})^{2}+(S_{j+1}^{z})^{2}\big]|\cdots yx\cdots\rangle_{j,j+1}=2D|\cdots yx\cdots\rangle_{j,j+1},\] \[D\big[(S_{j}^{z})^{2}+(S_{j+1}^{z})^{2}\big]|\cdots zz\cdots\rangle_{j,j+1}=0, \tag{17}\] which results in an effective detuning term on the spin-1/2 degrees of freedom, such that the effective Hamiltonian can be mapped to the spin-1/2 detuned PXP model, \[\hat{H}_{\mathrm{dPXP}}=\sum_{i=1}^{N}P_{i-1}X_{i}P_{i+1}+2D\sum_{i=1}^{N}P_{i-1}n_{i}P_{i+1}, \tag{18}\] where \(n=1-P=|1\rangle\langle 1|\). Note that in both Eq. (1) and Eq. (18), \(i\) labels the bonds between sites, while the index \(j\) labels the sites in Eq. (10). The detailed derivation of Eq. (18) can be found in Appendix A. The detuning term is commonly encountered in practical experiments: the static detuning (also called chemical potential [121]) of the driving laser from the excited state can be finely tuned on cold-atom platforms. It has been noted that a quantum quench from the initial states \(|\widetilde{\mathbb{Z}}_{2}\rangle\) or \(|\widetilde{\mathbb{Z}}_{3}\rangle\) results in coherent oscillations, indicating the existence of ETH-violating QMBSs. In our ED simulation [122], the time-evolution operator \(\exp(-i\hat{H}t)\), governed by either \(\tilde{H}_{\rm KD}\) as defined in Eq. (10) or \(\hat{H}_{\rm dPXP}\) as given in Eq. (18), is discretized using time steps of \(dt=0.01\), and the time-evolved state \(|\psi(t)\rangle\) is subsequently computed using the fourth-order Runge-Kutta method within the corresponding constrained Hilbert space. Figure 2 demonstrates these oscillations in the dynamics of the quantum fidelity for \(D=0.1\). The periodic revivals for the spin-1 KD model (10) starting from the \(|\widetilde{\mathbb{Z}}_{2}\rangle\), \(|\widetilde{\mathbb{Z}}_{3}\rangle\) initial states completely coincide with the ones observed for the spin-1/2 detuned PXP model starting from the corresponding \(|\mathbb{Z}_{2}\rangle\), \(|\mathbb{Z}_{3}\rangle\) initial states. Recent studies have signified an intimate relation between QMBS and quantum criticality [80; 81]. As \(D\) is tuned to \(D_{c}\approx-0.655\), the ground state of the detuned PXP model undergoes an Ising phase transition associated with a spontaneous breaking of \(\mathbb{Z}_{2}\)-symmetry [123; 124; 125; 126].

Figure 2: Characteristic quantum features of the spin-1 KD model and the spin-1/2 detuned PXP model with \(D=0.1\). Quantum fidelity \(F(t)\) for \(\tilde{H}_{\rm KD}\) in Eq. (10) (\(\hat{H}_{\rm dPXP}\) in Eq. (18)) starting from the initial states: (a) \(|\widetilde{\mathbb{Z}}_{2}\rangle\) (\(|\mathbb{Z}_{2}\rangle\)) with \(N=18\), and (b) \(|\widetilde{\mathbb{Z}}_{3}\rangle\) (\(|\mathbb{Z}_{3}\rangle\)) with \(N=18\).

Figure 1: The Hilbert space graph of the Kitaev Hamiltonian in Eq. (9) within the \(\vec{w}=\{1,1,1,1,1,1\}\) subspace for \(N=6\) sites with periodic boundary conditions. The nodes of the graph \(|m\rangle\) (\(m=0,1,2,\ldots,17\)) label the allowed product states, and the edges connect product state configurations that differ by an excitation \(|\cdots zz\cdots\rangle\leftrightarrow|\cdots yx\cdots\rangle\) due to the action of the Hamiltonian.
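Within the blockaded subspace an excited site automatically has both neighbours in \(|0\rangle\), so the projected detuning of Eq. (18) is purely diagonal and simply counts excitations; reusing the `basis` and `H` arrays of the earlier PXP sketch, the extension is a short illustration (again ours, not the authors' code):

```python
def add_detuning(H, basis, D):
    """Add the diagonal detuning of Eq. (18): within the constrained basis,
    P_{i-1} n_i P_{i+1} reduces to n_i, so each configuration acquires the
    diagonal energy 2*D*(number of excited sites)."""
    for k, b in enumerate(basis):
        H[k, k] += 2.0 * D * sum(b)
    return H
```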
The non-thermalizing dynamics can also be captured by measuring the expectation values of certain local observables [127], e.g., \[\left\langle\hat{O}\right\rangle=\frac{1}{2}\left\langle\left[\left(S_{1}^{+}\right)^{2}+\left(S_{1}^{-}\right)^{2}\right]\right\rangle. \tag{19}\] Under the dual transformation (14), the correlator \(\left\langle\hat{O}\right\rangle\) of the KD model in Eq. (10) is found to be equivalent to the density imbalance, \(\langle n_{2}\rangle-\langle n_{1}\rangle\), an observable corresponding to the staggered magnetization in the detuned PXP model in Eq. (18). Performing a quantum quench from an initial state \(|\widetilde{\mathbb{Z}}_{2}\rangle\) leads to nearly perfect coherent dynamics. The coherent oscillations persist for long times for \(D=0\), as shown in Fig. 3. Note that the values of \(F(t)\) and \(\langle\hat{O}\rangle\) are independent of the sign of \(D\) when the system starts from the product states \(|\widetilde{\mathbb{Z}}_{k}\rangle\) (see details in Appendix B). As exhibited in Fig. 3(a), these oscillations are found to be remarkably robust to small SIA perturbations, while moderate perturbations make the oscillations damp sharply until \(D\) reaches a threshold value. One carefully observes from Fig. 3(b) that the oscillations remain strong for deviations up to \(D\approx\pm D_{c}\), and there is barely any oscillation at \(D=D_{c}\), upon which thermalization completely sets in. Suppose that the envelope of \(\langle\hat{O}\rangle\) can be described by exponentially decaying oscillations, \(\langle\hat{O}(t)\rangle=Ae^{-t/\tau}\cos\omega t\) over time \(t\), with the fitting parameters \(A\), \(\tau\), and \(\omega\). We observe that the inverse lifetime approximately follows \(\tau^{-1}\sim D^{2}\) at small \(D\), reminiscent of Fermi's golden rule [128]. Additionally, it is worth noting that the decay rate of oscillations \(\tau^{-1}\) at \(D=0\) remains small but finite, suggesting that the \(|\widetilde{\mathbb{Z}}_{2}\rangle\) initial state only approximates the near-perfect scar states of the standard PXP model. The fact that the quantum critical point \(D_{c}\) is negative and there is no quantum phase transition for positive \(D\), combined with the significantly different ground states as \(D\) tends to infinity, undermines the viewpoint that quantum many-body scars and quantum criticality are directly bridged.

Figure 3: (a) The contour map of the time evolution of \(\langle\hat{O}\rangle\) defined in (19) of \(\tilde{H}_{\rm KD}\) (10) in a system of \(N=28\) spins prepared in \(|\widetilde{\mathbb{Z}}_{2}\rangle\), obtained by ED. (b) The time evolution of \(\langle\hat{O}\rangle\) for different values of \(D\). The curves correspond to \(D=0.0\), \(-0.1\), \(-0.2\), \(-0.3\), \(-0.4\), \(-0.5\), \(-0.655\) (from bottom to top at \(t=5\)). The dashed lines are fits capturing the amplitude decay. The inset shows the inverse lifetime of the \(\langle\hat{O}(t)\rangle\) envelope with increasing \(D^{2}\). We extract the decay time by fitting the data to the scaling ansatz \(\langle\hat{O}(t)\rangle=Ae^{-t/\tau}\cos\omega t\) (see main text).

Figure 4: The dynamic evolution of the spin-1/2 detuned PXP model starting from the initial state \(|\mathbb{Z}_{1}\rangle\): (a) The quantum fidelity \(F(t)\) with respect to different \(D\) for \(N\!=\!28\); (b) Finite-size scaling of \(F(t)\) for the first peak in panel (a).
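The decay-time extraction quoted above can be sketched with a standard nonlinear fit; the data below are synthetic placeholders standing in for the simulated \(\langle\hat{O}(t)\rangle\) samples (this is an illustration of the fitting ansatz only, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, A, tau, omega):
    """Scaling ansatz <O(t)> = A * exp(-t/tau) * cos(omega * t)."""
    return A * np.exp(-t / tau) * np.cos(omega * t)

# Synthetic stand-in for the simulated <O(t)> samples (placeholder, not actual results).
rng = np.random.default_rng(0)
t_samples = np.linspace(0.0, 40.0, 400)
O_samples = damped_cosine(t_samples, 1.0, 15.0, 1.3) + 0.01 * rng.normal(size=t_samples.size)

popt, _ = curve_fit(damped_cosine, t_samples, O_samples, p0=(1.0, 10.0, 1.0))
A_fit, tau_fit, omega_fit = popt
inverse_lifetime = 1.0 / tau_fit   # compared with the ~D^2 trend discussed in the text
```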
For \(D=0\), the quantum fidelity revivals do not occur for the initial state \(|\mathbb{Z}_{1}\rangle\). As \(D\) decreases from zero, surprisingly, there will be a slight revival in fidelity for the same initial state. As \(D\) continues to decrease, the oscillation becomes more clearly visible with smaller periods, as observed in Fig. 4(a). When the value of \(D\) is smaller than the critical value \(D_{c}\), the revivals become more pronounced. Finite-size scaling in Fig. 4(b) reveals that the first peak will disappear for large \(N\) before the value of \(D\) exceeds \(D_{c}\). When \(D>D_{c}\), the intercepts of the finite-size scaling curves become negative, which is an unphysical artifact and indicates that the linear fit is no longer applicable. In contrast, the first peak will always have a finite value for \(D<D_{c}\) in the thermodynamic limit, which may be related to asymptotic scars [129]. Figure 5(a) shows the quantum fidelity of \(\hat{H}_{\rm dPXP}\) with different values of \(D\) for \(N=28\) using the initial \(|\mathbb{Z}_{2}\rangle\) state. Remarkably, when \(D\) decreases from zero, persistent oscillations first decrease when \(D>D_{c}\), then damp in the critical regime \(D\approx D_{c}\), and finally revive beyond the critical point for \(D<D_{c}\). Figure 5(b) shows the overlaps between the \(|\mathbb{Z}_{2}\rangle\) state and the time-evolved state starting from \(|\mathbb{Z}_{2}\rangle\) and \(|\mathbb{Z}_{2}^{\prime}\rangle\), where \(|\mathbb{Z}_{2}^{\prime}\rangle\equiv|101010\cdots 10\rangle\) is obtained by translating \(|\mathbb{Z}_{2}\rangle\) by one lattice spacing. The peaks of the oscillations of \(|\langle\mathbb{Z}_{2}|\exp(-i\hat{H}t)|\mathbb{Z}_{2}\rangle|^{2}\) and \(|\langle\mathbb{Z}_{2}|\exp(-i\hat{H}t)|\mathbb{Z}_{2}^{\prime}\rangle|^{2}\) are separated by half a period. We also show the fidelity between the \(|\mathbb{Z}_{2}\rangle\) state and the ground state \(|\psi_{0}\rangle\) at different values of \(D\), as shown in the inset of Fig. 5(b). We observe that as \(D\) approaches negative infinity, the fidelity between the ground state and \(|\mathbb{Z}_{2}\rangle\) (\(|\mathbb{Z}_{2}^{\prime}\rangle\)) gradually approaches \(1/2\). We remark that due to the Hilbert space constraint, at most half of the atoms could be in the spin-up states. In fact, in the limit of \(D\rightarrow-\infty\), the ground state becomes an antiferromagnetic phase in the zero-momentum sector, i.e., \(|\psi_{0}(D=-\infty)\rangle=(|\mathbb{Z}_{2}\rangle+|\mathbb{Z}_{2}^{\prime}\rangle)/\sqrt{2}\). In contrast, Fig. 5(c) demonstrates a complete absence of revivals for \(D<D_{c}\) in the case of an initial state of \(|\mathbb{Z}_{3}\rangle\), indicating that the approximate QMBS states vanish. We next investigate the dynamics of bipartite entanglement entropies in both the KD model and the detuned PXP model. We choose the region \(A\) to be one half of the chain, and compare the dynamics of the half-chain entanglement entropy \(\mathcal{S}\) in a quantum quench from different initial states for both the spin-1 KD model and the spin-1/2 detuned PXP model with periodic boundary conditions. In our numerical calculation for the \(S=1\) KD model [cf., Figs. 6(a-b)], we utilize the time-evolving MPS approach with MPOs [115], where the bond dimension is set as \(\chi=500\) and the time step is \(dt=0.025\).
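For exact-diagonalization data, the half-chain von Neumann entropy follows from a Schmidt decomposition of the state vector; a minimal sketch for a chain of \(N\) two-level sites (ours, complementary to the MPS calculation; a constrained-basis vector would first have to be embedded back into the full \(2^{N}\)-dimensional product basis):

```python
import numpy as np

def half_chain_entropy(psi, N):
    """Von Neumann entropy S = -sum_k p_k ln p_k of the left half of a chain of
    N two-level sites, for a normalized state vector psi of length 2**N."""
    matrix = np.reshape(psi, (2 ** (N // 2), 2 ** (N - N // 2)))
    schmidt = np.linalg.svd(matrix, compute_uv=False)
    p = schmidt ** 2
    p = p[p > 1e-12]               # drop numerically vanishing Schmidt weights
    return float(-np.sum(p * np.log(p)))
```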
Figure 5: Dynamics of quantum fidelity for the detuned PXP model: (a) Starting from initial state \(|\mathbb{Z}_{2}\rangle\) for \(N=28\) sites; (b) The overlaps between the product state \(|\mathbb{Z}_{2}\rangle\) and the time-evolved state starting from \(|\mathbb{Z}_{2}\rangle\) (solid) and \(|\mathbb{Z}_{2}^{\prime}\rangle\) (dashed) with \(D=-0.1\) for \(N=28\) sites. The inset shows the overlap of the prequench ground state with \(|\mathbb{Z}_{2}\rangle\) and \(|\mathbb{Z}_{2}^{\prime}\rangle\). (c) Starting from initial \(|\mathbb{Z}_{3}\rangle\) state for \(N=24\) sites.

The bipartite entanglement of the evolved state starting from the initial state \(|\widetilde{\mathbb{Z}}_{2}\rangle\) is shown in Fig. 6(a). When \(D\) is small and negative, \(\mathcal{S}\) increases slowly over time while exhibiting coherent oscillations, featuring the many-body revivals. As \(D\) becomes more negative, the temporal growth rate of the bipartite entanglement increases, and the coherent oscillation becomes weak. When \(D\) approaches \(D_{c}\), the entanglement \(\mathcal{S}\) increases almost linearly with time until saturation, and the coherent oscillations disappear, implying that the system quickly thermalizes. When \(D\) is smaller than \(D_{c}\), the linear growth rate of the entanglement decreases, with a smaller saturation value. We find that the growth of entanglement entropy starting from the initial state \(|\widetilde{\mathbb{Z}}_{3}\rangle\) exhibits a similar trend, as shown in Fig. 6(b). For comparison, we show the evolution of the bipartite von Neumann entanglement entropy of the spin-1/2 detuned PXP model when the system is initially prepared in the state \(|\mathbb{Z}_{2}\rangle\) (\(|\mathbb{Z}_{3}\rangle\)) in Fig. 6(c) [6(d)]. One observes that \(\mathcal{S}\) grows gradually for small negative \(D\), while it undergoes an extremely fast growth until saturation at \(D\approx D_{c}\). When \(D<D_{c}\), the growth of entanglement entropy slows down again. Although the two Hamiltonians and their corresponding initial states are unitarily equivalent under the local transformation (14), the entanglement evolution displays noticeable differences. Notably, the coherent oscillations for \(S=1/2\) become considerably weaker compared to those for \(S=1\) when \(0\geq D>D_{c}\). The bipartite entanglement entropy heavily depends on the choice of representations and bipartition methods. This can be perceived by an analytical example presented in Appendix A of Ref. [83].

## III Quantum phase transition of spin-1 Kitaev chain with uniaxial single-ion anisotropy

In the previous section, we discovered a close relationship between QMBS states and quantum criticality. Accordingly, we proceed to investigate the quantum phase transition of the spin-1 KD model given by Eq. (4). We adopt the iTEBD algorithm with a bond dimension of \(\chi=120\) [122]. In our calculations, we set the imaginary-time step to \(10^{-5.5}\) to ensure a truncation error smaller than \(10^{-8}\). The advantage of using iTEBD is its capability to treat infinite-size systems directly, providing numerical evidence for the emergence of a symmetry-breaking phase. According to the core spirit of the Landau-Ginzburg-Wilson paradigm, the quantum phase transition of a many-body system can be described by a well-defined order parameter.
We calculate the two-point correlations between the \(i\)-th and \(j\)-th sites, \[C^{ab}(i,j)=\left\langle S_{i}^{a}\exp\left(i\theta\sum_{l=i+1}^{j-1}S_{l}^{a}\right)S_{j}^{b}\right\rangle,\quad a,b=x,y,z, \tag{20}\] which can detect different symmetry-breaking phases. Equation (20) reduces to two-point correlations for \(\theta=0\), while it becomes the den Nijs-Rommelse string order parameter for \(\theta=\pi\) [130; 131]. Note that there is no phase accumulated for two nearest-neighboring sites, and a general angle \(\theta\) could capture the hidden topological orders [132]. The Hamiltonian in Eq. (4) is invariant under a joint operation that combines a \(\pi/2\)-rotation about the \(z\)-axis and a single-site translation, which implies that on a finite-size system \[C^{xx}(1,2)=C^{yy}(2,3),\quad C^{yy}(1,2)=C^{xx}(2,3),\] \[C^{zz}(1,2)=C^{zz}(2,3). \tag{21}\] The joint symmetry can be expressed in the rotated Hamiltonian (10) as \(\tilde{C}^{xy}(1,2)=\tilde{C}^{xy}(2,3)\), \(\tilde{C}^{yx}(1,2)=\tilde{C}^{yx}(2,3)\), and \(\tilde{C}^{zz}(1,2)=\tilde{C}^{zz}(2,3)\). In the zero-field limit, the ground state is a gapped Kitaev spin liquid (KSL), which is stable against nonzero perturbations [82]. Upon applying the uniaxial single-ion anisotropy \(D\), the ground state remains in the flux-free sector, i.e., \(\tilde{w}=\{1,1,\cdots,1\}\). At a large positive \(D\), the spins are confined to \(|z\rangle\) (i.e., \(\langle S_{j}^{z}\rangle=0\)), while for a large negative \(D\), the ground states are restricted to \(|x\rangle\) or \(|y\rangle\) (\(\langle S_{j}^{z}\rangle=\pm 1\)). Surprisingly, unlike the KSL phase, a notable difference between \(C^{xx}(1,2)\) and \(C^{yy}(2,3)\), or equivalently, \(\tilde{C}^{xy}(1,2)\) and \(\tilde{C}^{xy}(2,3)\), implies the spontaneous breaking of the translational symmetry, signaling that the system hosts the dimer order. The dimer phase is characterized by an alternation of nearest-neighbor spin-spin correlations, i.e., by the difference of \(\langle\mathbf{S}_{i}\cdot\mathbf{S}_{j}\rangle\) between the odd bonds and even bonds. A finite dimer order parameter is defined by \[O_{D}=|\langle\mathbf{S}_{2j-1}\cdot\mathbf{S}_{2j}\rangle-\langle\mathbf{S}_{2j}\cdot\mathbf{S}_{2j+1}\rangle|. \tag{22}\] To be more specific, we can also examine the \(x\), \(y\), and \(z\) components of the dimer order parameter, such as \[O_{D}^{x}=|\langle S_{2j-1}^{x}S_{2j}^{x}\rangle-\langle S_{2j}^{y}S_{2j+1}^{y}\rangle|,\] \[O_{D}^{y}=|\langle S_{2j-1}^{y}S_{2j}^{y}\rangle-\langle S_{2j}^{x}S_{2j+1}^{x}\rangle|,\] \[O_{D}^{z}=|\langle S_{2j-1}^{z}S_{2j}^{z}\rangle-\langle S_{2j}^{z}S_{2j+1}^{z}\rangle|. \tag{23}\] Note that the dimer order arises from the Kitaev interactions (4), leading to the characterization of the \(x\) and \(y\) components as the differences between distinct types of Ising interactions on odd and even bonds.

Figure 7: The three components of dimer order parameter \(O_{D}\) as a function of \(D\). (a) From the dimer phase to the KSL phase at \(J=0\). (b) From the FM\({}_{z}\) phase to the dimer phase to the AF\({}_{z}\) phase at \(D=-2\). Here we use the iTEBD method and the bond dimension is set as \(\chi=120\).

Figure 7(a) illustrates that the \(x\) component of the dimer order parameter increases smoothly from zero to a finite value as the parameter \(D\) is decreased and crosses the critical value \(D_{c}=-0.655\), indicating that a second-order transition occurs at \(D_{c}\).
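Given any converged many-body state vector, Eqs. (20)-(23) can be evaluated directly by embedding single-site spin-1 operators in the full Hilbert space. The snippet below is a small dense-matrix illustration of that bookkeeping, not the iTEBD calculation used for Fig. 7; the chain length and the random test state are placeholders for an actual ground state.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Spin-1 operators (hbar = 1)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
S = {"x": Sx, "y": Sy, "z": Sz}

N = 6  # small open chain, 3**6 = 729 basis states (illustration only)

def site_op(op, site):
    """Embed a single-site operator at `site` (0-based) into the N-site space."""
    mats = [np.eye(3, dtype=complex)] * N
    mats[site] = op
    return reduce(np.kron, mats)

def string_corr(psi, a, b, i, j, theta):
    """C^{ab}(i,j) of Eq. (20): <S_i^a exp(i*theta*sum_{l=i+1}^{j-1} S_l^a) S_j^b>."""
    op = site_op(S[a], i) @ site_op(S[b], j)
    for l in range(i + 1, j):
        op = op @ site_op(expm(1j * theta * S[a]), l)   # string of phase factors
    return np.vdot(psi, op @ psi)

def dimer_components(psi):
    """x, y, z dimer components of Eq. (23), evaluated on the first two bonds (j = 1)."""
    ox = abs(string_corr(psi, "x", "x", 0, 1, 0.0) - string_corr(psi, "y", "y", 1, 2, 0.0))
    oy = abs(string_corr(psi, "y", "y", 0, 1, 0.0) - string_corr(psi, "x", "x", 1, 2, 0.0))
    oz = abs(string_corr(psi, "z", "z", 0, 1, 0.0) - string_corr(psi, "z", "z", 1, 2, 0.0))
    return ox, oy, oz

rng = np.random.default_rng(0)
psi = rng.normal(size=3 ** N) + 1j * rng.normal(size=3 ** N)
psi /= np.linalg.norm(psi)   # placeholder state; substitute the actual ground state in practice
print("two-point  C^zz(1,4), theta=0 :", string_corr(psi, "z", "z", 0, 3, 0.0))
print("string     C^zz(1,4), theta=pi:", string_corr(psi, "z", "z", 0, 3, np.pi))
print("dimer components (O_D^x, O_D^y, O_D^z):", dimer_components(psi))
```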
The presence of nonvanishing dimer correlations for \(D>D_{c}\) can be attributed to the limitations imposed by the finite bond dimension. The dimer orders are associated with the spontaneous breaking of translational symmetry of \(O_{D}^{x}\) in an infinite system. The emergence of the dimer ordering is distinct from the general mechanism for the formation of dimerized phases, which is typically induced by inherent bond alternation and the resulting breaking of translational symmetry. The ground state is two-fold degenerate for \(D<D_{c}\) in the thermodynamic limit, which is in contrast to the gapped ground state for \(D>D_{c}\). ## IV Effect of Heisenberg interactions It has been recognized the scarred states display anomalous stability in the Kitaev phase in the vicinity of \(D=0\)[83]. Quantum spin liquids are widely believed to be crucially driven by the Kitaev interactions in spin-orbit-coupled materials. While Kitaev interactions are highly anisotropic, the isotropic Heisenberg interaction, ubiquitous in real materials, can also play an essential role in the emergence of exotic phenomena in many-body systems. The relevance of the Kitaev phase in a broader regime becomes paramount for understanding scar stability and its potential applications in solid-state systems. To address this point, we investigate the evolution of the phase boundaries of the Kitaev spin liquid and the dimer phase by introducing Heisenberg interactions that disrupt the \(\mathbb{Z}_{2}\) gauge fields, as given by \[\hat{H}_{\rm J} = J\sum_{j=1}^{N}\mathbf{S}_{j}\cdot\mathbf{S}_{j+1}. \tag{24}\] When the parameters \(\{D,J\}\) vary, the competitions of various correlations trigger miscellaneous phase transitions. Figure 8 depicts the phase diagram for the Kitaev-Heisenberg chain with uniaxial single-ion anisotropy (KHD model). The phase diagram is much richer than expected. Seven distinct phases are identified, including the KSL phase, dimer phase (\(D\)), the spin nematic phase with a left-left-right-right pattern (LLRR), Haldane phase, \(x\)-component ferromagnetic (FM\({}_{x}\)) phase, \(z\)-component ferromagnetic (FM\({}_{z}\)) phase and \(z\)-component antiferromagnetic (AF\({}_{z}\)) phase. The joint symmetry (21) is preserved in the whole KSL phase for the infinite system. It has been reported that on the line of \(D=0\) (the horizontal dashed line in Fig. 8), the ground state of the Kitaev-Heisenberg model undergoes the FM\({}_{z}\) phase, the LLRR phase, the KSL and the Haldane phase with increasing \(J\). The successive second-order quantum phase transitions occur at \(J_{c}=-0.6\), \(-0.08\), and \(0.08\), respectively [82]. For \(J=0\) and \(D=0\), the pure Kitaev chain hosts only two nearest neighboring antiferromagnetic orders \(\langle S_{2j-1}^{x}S_{2j}^{x}\rangle\) and \(\langle S_{2j}^{y}S_{2j+1}^{y}\rangle\) while other correlations vanish, similar to the spin-1/2 Kitaev honeycomb model [133]. Away from the Kitaev limit, the two-spin correlation functions are found to decay exponentially and the short correlation length \(\xi\) will extend to a few sites, as shown in Fig. 9(a). Note that the ground-state properties of integer spin chains are in stark contrast to those of half-odd integer spin. In comparison, the ground state of spin-1/2 Kitaev chain is \(2^{N/2-1}\)-fold degenerate [134], and the macroscopic degeneracy makes the ground state vulnerable. 
As such, an infinitesimal Heisenberg coupling is sufficient to lift the ground-state degeneracy and generate magnetic long-range order [135; 136]. In contrast, the spin-1 chain supports a gapped KSL ground state, which can sustain a finite Heisenberg coupling. It is remarkable that the KSL phase becomes more robust against the Heisenberg interactions for large positive \(D\). One can further observe that the size of the KSL phase enlarges with increasing positive \(D\) and becomes narrower for negative \(D\).

Figure 8: Quantum phase diagram of the spin-1 Kitaev-Heisenberg model with uniaxial single-ion anisotropy calculated by the iTEBD method with bond dimension \(\chi=120\). The quantum phase transition from the dimer phase (D) to the KSL phase occurs at \(D_{c}=-0.655\) for \(J=0\) (vertical dashed line). At \(D=0\) (horizontal dashed line), the KSL is stable in the range of \(|J|<0.08\).

However, it is found that the three components of the dimer order parameter are all mismatched in a fairly small region of the parameter space except for \(J=0\), as exhibited in Fig. 7(b). A hallmark of the Haldane phase is the non-local string order parameter, which was first introduced by den Nijs and Rommelse [130] and later refined by Tasaki [131]. Its limiting value reveals the hidden \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry breaking \[O_{S}^{\mathrm{z}}(i,j)\equiv-\lim_{|i-j|\rightarrow\infty}C^{aa}(i,j). \tag{25}\] This order parameter serves as a distinct feature of the Haldane phase. Figure 10(a) illustrates two-site correlations between sites \(1\) and \(50\). One observes that \(C^{z}(1,50)\) is finite in the FM\({}_{z}\) phase, while the string order parameter \(O_{S}^{\mathrm{z}}(1,50)\) is nonvanishing in two regions, i.e., \(-0.22\lesssim J\lesssim-0.04\) and \(J\gtrsim 0.03\). To distinguish the two phases, we plot the spin-spin correlations between site 1 and site \(1+r\) for typical parameters in Fig. 9. One can observe in Fig. 9(b) that both \(C^{x}(1,1+r)\) and \(C^{y}(1,1+r)\) alternate between two successive positive and negative values as the distance of two sites \(r\) increases, indicating the onset of the spin nematic ordering [137], while in Fig. 9(c) \(C^{x}(1,1+r)\) and \(C^{y}(1,1+r)\) decay exponentially with respect to \(r\), manifesting the existence of the Haldane phase. Furthermore, Fig. 9(d) depicts the correlations for \(J=-2\), \(D=0.1\), in which the \(z\)-component correlation \(C^{z}(1,j)\) dominates with a value close to \(1\), implying the FM\({}_{z}\) ground state, while in Fig. 9(e) the dominant correlation \(C^{x}(1,j)\) characterizes the FM\({}_{x}\) phase for \(J=-2\), \(D=1.2\). Upon increasing \(D\) at \(J=-2\), the transition from FM\({}_{z}\) to FM\({}_{x}\) takes place at \(D_{c}=1\), see Fig. 10(b). Unlike the FM\({}_{z}\) phase, the joint symmetry (21) is broken in the FM\({}_{x}\) phase. By further increasing the value of \(D\), the ground state evolves from the FM\({}_{x}\) state into the KSL phase, in which the joint symmetry is restored again. In contrast, the two-site correlation functions in Fig. 9(f) exhibit a distinct behavior. Specifically, \(C^{z}(1,j)\) shows a periodic oscillation between values close to \(-1\) and 1, while \(C^{x}(1,j)\) and \(C^{y}(1,j)\) nearly vanish.
These observations provide strong evidence that the system is in the AF\({}_{z}\) phase.

Figure 10: (a) Two-point correlation \(C^{z}(i,j)\) (20) with \(\theta=0\) and the string order parameter \(O_{S}^{\mathrm{z}}(i,j)\) (25) between sites \(1\) and \(50\) for \(D=-0.8\) and \(J\) from \(-1\) to \(1\). (b) Two-point spin-spin correlations \(C^{\alpha}(i,j)\) (20) with \(\theta=0\) at \(J=-2\). Here we use the iTEBD method and the bond dimension is set as \(\chi=120\).

Figure 9: The correlation between site \(1\) and site \(j\) with \(\theta=0\) for increasing distance \(r\equiv|j-1|\) for representative points in: (a) KSL phase with \(J=-0.2\), \(D=2\); (b) LLRR phase with \(J=-0.15\), \(D=-0.8\); (c) Haldane phase with \(J=1\), \(D=-0.8\); (d) FM\({}_{z}\) phase \(J=-2\), \(D=0.1\); (e) FM\({}_{x}\) phase \(J=-2\), \(D=1.2\); and (f) AF\({}_{z}\) phase \(J=1\), \(D=-2\). Here we use the iTEBD method and the bond dimension is set as \(\chi=120\).

## V Summary and Conclusions To summarize, we have explored the physics arising from the cooperative effect of uniaxial single-ion anisotropy (SIA) and Heisenberg interactions in the spin-1 Kitaev chain. We studied quantum many-body scar (QMBS) states and quantum phase transitions in the spin-1 Kitaev chain with SIA. We find that the local \(\mathbb{Z}_{2}\) gauge fields, a hallmark of the Kitaev model, are still conserved in the spin-1 Kitaev chain with SIA (KD model). In this case, the Hilbert space is fragmented into \(2^{N}\) unequal subspaces characterized by \(\vec{w}=\{w_{1},w_{2},\cdots,w_{N}\}\). Among an exponential number of Krylov subspaces, it has been recognized that the uniform sector with local \(\mathbb{Z}_{2}\) gauge fields, i.e., \(\vec{w}=\{1,1,\cdots,1\}\), is the largest Krylov subspace, in which a local transformation maps the spin-1 KD model onto the detuned PXP model with spin-1/2 degrees of freedom, and the SIA acts as a static detuning. The dual transformation suggests a solid-state-based realization of the PXP model based on a Mott insulator with strong spin-orbit and Hund's couplings. Given that the ground state becomes the twofold-degenerate \(|\mathbb{Z}_{2}\rangle\)-type antiferromagnetic state in the limit of \(D\rightarrow-\infty\) due to the Hilbert space constraint, a continuous transition in the detuned PXP model occurs at \(D_{c}=-0.655\). This quantum phase transition corresponds to the emergence of the dimer phase induced by the spontaneous breaking of translational symmetry in the flux-free sector, which can be described by the dimer order parameter (22). We find the most prominent coherent oscillations of quantum fidelity in the quantum quench from initial states \(|\mathbb{Z}_{2}\rangle\) and \(|\mathbb{Z}_{3}\rangle\), a characteristic of the embedded prototypical PXP model for \(D=0\). We demonstrate that these fidelity revivals are robust against small SIA perturbations. The non-thermalizing dynamics can also be reflected by measuring the expectation values of certain local observables, which will vanish for \(D<D_{c}\). Finally, we provide a complete phase diagram for the spin-1 KD model by describing the interplay between Kitaev interactions, Heisenberg interactions and SIA. In particular, we underline the evolution of the Kitaev phase in a broader regime, therefore showing the relevance for the scar stability and possible solid-state applications.
Seven phases are identified by The numerical methods through the corresponding spin-spin correlations, including the Kitaev spin liquid, dimer phase, LLRR phase, Haldane phase, FM\({}_{x}\) phase, FM\({}_{z}\) phase and AF\({}_{z}\) phase. Our study on the higher-spin Kitaev chain will likely help to identify candidate materials for Kitaev spin liquid. ###### Acknowledgements. The authors appreciate very insightful discussions with Hosho Katsura, Gaoyong Sun and Zhi-Xiang Sun. We acknowledges Ming Xue for bringing Ref. [138] to our attention. This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12174194, Postgraduate Research & Practice Innovation Program of Jiangsu Province, under Grant No. KYCX23_0347, Opening Fund of the Key Laboratory of Aerospace Information Materials and Physics (Nanjing University of Aeronautics and Astronautics), MIT, Top-notch Academic Programs Project of Jiangsu Higher Education Institutions (TAPP), and stable supports for basic institute research under Grant No. 190101. A.M.O. kindly acknowledges Narodowe Centrum Nauki (NCN, Poland) Project No. 2021/43/B/ST3/02166 and is grateful for support via the Alexander von Humboldt Foundation Fellowship (Humboldt-Forschungspreis). Appendix A Mapping the Kitaev model with single-ion anisotropy within flux-free sector to the detuned PXP model For convenience, we use the rotated Hamiltonian Eq. (10) and set \(K_{j}=K\). The local two-spin Hamiltonian is given by \[\tilde{H}_{j,j+1}=KS_{j}^{x}S_{j+1}^{y}+D[(S_{j}^{z})^{2}+(S_{j+1}^{z})^{2}]. \tag{11}\] For the 5 states satisfying \(w_{j}=1\), we have \[\tilde{H}_{j,j+1}|xy\rangle = 2D|xy\rangle,\quad\tilde{H}_{j,j+1}|xz\rangle=D|xz\rangle,\] \[\tilde{H}_{j,j+1}|zy\rangle = D|zy\rangle,\quad\tilde{H}_{j,j+1}|zz\rangle=K|yx\rangle,\] \[\tilde{H}_{j,j+1}|yx\rangle = K|zz\rangle+2D|yx\rangle. \tag{12}\] Accordingly, the Hamiltonian can be written in the matrix form as \[\tilde{H}_{j,j+1}=\left(\begin{array}{ccccc}2D&0&0&0&0\\ 0&D&0&0&0\\ 0&0&D&0&0\\ 0&0&0&2D&K\\ 0&0&0&K&0\end{array}\right), \tag{13}\] which yields 5 energy eigenvalues \(2D\),\(D\),\(D\),\(D\pm\sqrt{D^{2}+K^{2}}\). Hence, within the lowest-state manifold residing in the \(w_{j}=1\) sector that is spanned by \(\{|zz\rangle,|yx\rangle\}\), Eq. (10) can be written for an effective model of spin-1/2 degrees of freedom, which can be simplified as \[\hat{H}_{\rm KD,eff}=X_{i}+2Dn_{i}. \tag{14}\] It is noted that the Hilbert space constraint is imposed by the projector onto the low-energy subspace spanned by configurations with no adjacent excited states, which is written as \[\mathcal{P}=\prod_{j}(\mathbb{1}-n_{j}n_{j+1}). \tag{15}\] The so-called detuned PXP model (18) can be derived via the Schrieffer-Wolff transformation in the limit of strong interactions (small \(\epsilon\)) of the following Hamiltonian \[\hat{H}=\hat{H}_{0}+\epsilon\hat{H}_{1}. \tag{16}\] The leading part of Eq. (16) \(\hat{H}_{0}=\sum_{j=1}^{N}n_{j}n_{j+1}\) vanishes in this subspace, we must consider the first non-trivial order that is given by \(H_{SW}=\epsilon\mathcal{P}H_{1}\mathcal{P}\). If \(\hat{H}_{1}\) describes the transverse field term, \[\hat{H}_{1}=\sum_{j=1}^{N}X_{j}, \tag{100}\] where \(X_{j}\) is defined in Eq. 
(1), we have \[\mathcal{P}H_{1}\mathcal{P}=\prod_{i=1}^{N}(1-n_{i}n_{i+1})\sum_{ j=1}^{N}X_{j}\prod_{k=1}^{N}(1-n_{k}n_{k+1})\] \[=\sum_{j}\ (1-n_{j-1}n_{j})(1-n_{j}n_{j+1})X_{j}\] \[\qquad\times(1-n_{j-1}n_{j})(1-n_{j}n_{j+1})\] \[=\sum_{j}(X_{j}-n_{j-1}X_{j}-X_{j}n_{j+1}+n_{j-1}X_{j}n_{j+1})\] \[=\sum_{j}(1-n_{j-1})X_{j}(1-n_{j+1})\] \[=\sum_{j}P_{j-1}X_{j}P_{j+1}. \tag{101}\] Then we consider the detuned term, \[\hat{H}_{1}=\sum_{j=1}^{N}n_{j}. \tag{102}\] In this case, we have \[\mathcal{P}H_{1}\mathcal{P}=\prod_{i}(1-n_{i}n_{i+1})\sum_{j}n_{j} \prod_{k}(1-n_{k}n_{k+1})\] \[=\sum_{j}(1-n_{j-1}n_{j})(1-n_{j}n_{j+1})n_{j}(1-n_{j-1}n_{j})(1- n_{j}n_{j+1})\] \[=\sum_{j}(n_{j}-n_{j-1}n_{j}-n_{j}n_{j+1}+n_{j-1}n_{j}n_{j+1})\] \[=\sum_{j}(1-n_{j-1})n_{j}(1-n_{j+1})\] \[=\sum_{j}P_{j-1}n_{j}P_{j+1}. \tag{103}\] Note that a similar form can be derived even when \(\hat{H}_{1}\) is a non-Hermitian matrix, given by \[\hat{H}_{1}=\sum_{j=1}^{N}iY_{j}, \tag{104}\] we also have \[\mathcal{P}H_{1}\mathcal{P}=\prod_{i}(1-n_{i}n_{i+1})\sum_{j}(iY_ {j})\prod_{k}(1-n_{k}n_{k+1})\] \[=\sum_{j}(1-n_{j-1}n_{j})(1-n_{j}n_{j+1})(iY_{j})\] \[\qquad\times(1-n_{j-1}n_{j})(1-n_{j}n_{j+1})\] \[=\sum_{j}\left[(iY_{j})-n_{j-1}(iY_{j})-(iY_{j})n_{j+1}+n_{j-1}( iY_{j})n_{j+1}\right]\] \[=\sum_{j}(1-n_{j-1})(iY_{j})(1-n_{j+1})\] \[=\sum_{j}P_{j-1}(iY_{j})P_{j+1}. \tag{105}\] However, if \(\hat{H}_{1}=\sum_{j=1}^{N}Z_{j}\), the \(PZP\) cannot be derived. A crucial difference is that \(X_{j}P_{j}=n_{j}X_{j}\), \(Y_{j}P_{j}=n_{j}Y_{j}\), \(n_{j}^{2}=n_{j}\), while \(Z_{j}P_{j}\neq n_{j}Z_{j}\). ## Appendix B Dynamical evolution in a quench from initial product states Here we use the similar strategy that is introduced for the dynamics of the generalized Hubbard models [138]. Considering a Hamiltonian \(\hat{H}\) that can be separated into \(\hat{H}_{0}\) and \(\lambda\hat{V}\), where \(\lambda\) denotes the perturbation strength, an initial state \(|\psi(0)\rangle\) evolves into \(|\psi(t)\rangle=e^{-i\hat{H}t}|\psi(0)\rangle\) at any time \(t\). If we can find an antiunitary operator \(\hat{U}\) satisfies the following conditions: (i) \(\hat{U}\) anticommutes with \(\hat{H}_{0}\) and commutes with \(\hat{V}\), i.e., \[\{\hat{U},\hat{H}_{0}\}=0,\quad[\hat{U},\hat{V}]=0. \tag{106}\] (ii) The initial state \(|\psi(0)\rangle\) only acquires a global phase factor under \(\hat{U}\), i.e., \[\hat{U}^{-1}|\psi(0)\rangle=e^{i\chi}|\psi(0)\rangle. \tag{107}\] (iii) We consider a given Hermitian operator \(\hat{O}\) that is even or odd under symmetry operation by \(\hat{U}\), i.e., \[\hat{U}^{-1}\hat{O}\hat{U}=\pm\hat{O}, \tag{108}\] then we can conclude \[\langle\hat{O}\rangle_{+\lambda}=\pm\langle\hat{O}\rangle_{-\lambda}. \tag{109}\] Back to the KD model in Eq.(10), which can be rewritten as \(\tilde{H}=\tilde{H}_{\rm K}+\hat{H}_{\rm D}\). We then apply \(\hat{U}=\exp(i\pi S_{j}^{x})\), which will yield \(S_{j}^{x}\to S_{j}^{x},S_{j}^{y}\to-S_{j}^{y},S_{j}^{z}\to-S_{j}^{z}\). Considering the condition (i), one finds \[\hat{U}^{-1}e^{-i(\tilde{H}_{\rm K}+\tilde{H}_{\rm D})t}\hat{U}=e^{-i(\tilde{ H}_{\rm K}-\tilde{H}_{\rm D})t}. 
\tag{110}\] To this end, in the quantum quench starting from the initial states \(|\widetilde{\mathbb{Z}}_{k}\rangle\), e.g., \(|\widetilde{\mathbb{Z}}_{2}\rangle\), we have \[\langle\hat{O}\rangle_{+D}=\langle\widetilde{\mathbb{Z}}_{2}|e^{i(\widetilde{H}_{\mathrm{K}}+\widetilde{H}_{\mathrm{D}})t}\hat{O}e^{-i(\widetilde{H}_{\mathrm{K}}+\widetilde{H}_{\mathrm{D}})t}|\widetilde{\mathbb{Z}}_{2}\rangle \tag{101}\] \[=\langle\widetilde{\mathbb{Z}}_{2}|Ue^{i(\widetilde{H}_{\mathrm{K}}-\widetilde{H}_{\mathrm{D}})t}(U^{-1}\hat{O}U)e^{-i(\widetilde{H}_{\mathrm{K}}-\widetilde{H}_{\mathrm{D}})t}U^{-1}|\widetilde{\mathbb{Z}}_{2}\rangle\] \[=\langle\hat{O}\rangle_{-D},\] \[F(t)_{+D}=\langle\widetilde{\mathbb{Z}}_{2}|e^{-i(\widetilde{H}_{\mathrm{K}}+\widetilde{H}_{\mathrm{D}})t}|\widetilde{\mathbb{Z}}_{2}\rangle \tag{102}\] \[=\langle\widetilde{\mathbb{Z}}_{2}|Ue^{-i(\widetilde{H}_{\mathrm{K}}-\widetilde{H}_{\mathrm{D}})t}U^{-1}|\widetilde{\mathbb{Z}}_{2}\rangle=F(t)_{-D},\] where the following simple relations \[\hat{U}^{-1}|\widetilde{\mathbb{Z}}_{2}\rangle=|x(-y)\cdots x(-y)\rangle=(-1)^{N/2}|xy\cdots xy\rangle,\] \[\langle\widetilde{\mathbb{Z}}_{2}|\hat{U}=\langle x(-y)\cdots x(-y)|=(-1)^{N/2}\langle xy\cdots xy|,\] are used. Therefore, the values of \(\langle\hat{O}\rangle\) and \(F(t)\) are symmetric between positive and negative \(D\) in the quantum quench starting from the \(|\widetilde{\mathbb{Z}}_{2}\rangle\) state. It is straightforward to generalize the theorem to other initial product states \(|\widetilde{\mathbb{Z}}_{k}\rangle\) \((k\neq 2)\).
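As a sanity check on the projector algebra used in Appendix A, the short script below verifies numerically, for a small ring, that \(\mathcal{P}(\sum_j X_j)\mathcal{P}\) and \(\mathcal{P}(\sum_j n_j)\mathcal{P}\) coincide with \(\sum_j P_{j-1}X_jP_{j+1}\) and \(\sum_j P_{j-1}n_jP_{j+1}\) once both sides are restricted to the constrained subspace of configurations with no adjacent excitations, which is where the effective model is defined. This is an illustrative verification written for this note, not code from the paper.

```python
import numpy as np
from functools import reduce
from itertools import product

N = 8  # spin-1/2 ring; dense 2**N matrices are fine at this size

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
n = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto the local excited state |1>

def site(op, j):
    mats = [I2] * N
    mats[j % N] = op
    return reduce(np.kron, mats)

# Global constraint projector P = prod_j (1 - n_j n_{j+1})
P = np.eye(2 ** N)
for j in range(N):
    P = P @ (np.eye(2 ** N) - site(n, j) @ site(n, j + 1))

# Isometry Q whose columns span the constrained (no adjacent 1s) subspace
cols = []
for k, bits in enumerate(product((0, 1), repeat=N)):
    if all(not (bits[j] and bits[(j + 1) % N]) for j in range(N)):
        e = np.zeros(2 ** N)
        e[k] = 1.0
        cols.append(e)
Q = np.array(cols).T

for op, name in [(X, "X"), (n, "n")]:
    H1 = sum(site(op, j) for j in range(N))
    lhs = Q.T @ (P @ H1 @ P) @ Q
    rhs = Q.T @ sum(site(I2 - n, j - 1) @ site(op, j) @ site(I2 - n, j + 1)
                    for j in range(N)) @ Q
    print(f"P (sum_j {name}_j) P == sum_j P_(j-1) {name}_j P_(j+1) on the constrained subspace:",
          np.allclose(lhs, rhs))
```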
2308.07453
Sign Gradient Descent Algorithms for Kinetostatic Protein Folding
This paper proposes a sign gradient descent (SGD) algorithm for predicting the three-dimensional folded protein molecule structures under the kinetostatic compliance method (KCM). In the KCM framework, which can be used to simulate the range of motion of peptide-based nanorobots/nanomachines, protein molecules are modeled as a large number of rigid nano-linkages that form a kinematic mechanism under motion constraints imposed by chemical bonds while folding under the kinetostatic effect of nonlinear interatomic force fields. In a departure from the conventional successive kinetostatic fold compliance framework, the proposed SGD-based iterative algorithm in this paper results in convergence to the local minima of the free energy of protein molecules corresponding to their final folded conformations in a faster and more robust manner. KCM-based folding dynamics simulations of the backbone chains of protein molecules demonstrate the effectiveness of the proposed algorithm.
Alireza Mohammadi, Mohammad Al Janaideh
2023-08-14T20:48:08Z
http://arxiv.org/abs/2308.07453v1
# Sign Gradient Descent Algorithms for Kinetostatic Protein Folding ###### Abstract This paper proposes a sign gradient descent (SGD) algorithm for predicting the three-dimensional folded protein molecule structures under the kinetostatic compliance method (KCM). In the KCM framework, which can be used to simulate the range of motion of peptide-based nanorobots/nanomachines, protein molecules are modeled as a large number of rigid nano-linkages that form a kinematic mechanism under motion constraints imposed by chemical bonds while folding under the kinetostatic effect of nonlinear interatomic force fields. In a departure from the conventional successive kinetostatic fold compliance framework, the proposed SGD-based iterative algorithm in this paper results in convergence to the local minima of the free energy of protein molecules corresponding to their final folded conformations in a faster and more robust manner. KCM-based folding dynamics simulations of the backbone chains of protein molecules demonstrate the effectiveness of the proposed algorithm. ## I Introduction Numerical simulations that aim at providing a prediction of the three-dimensional structure of folded protein conformations and computing the transitions through which these molecules fold/unfold play an integral role in designing protein-based nanomachines/nanorobots. Indeed, such numerical simulations can estimate the range of motion of these peptide-based mechanisms. For instance, the design of parallel nanorobots, which consist of graphite platforms interconnected together via serially linked protein-based bio-springs, requires numerical simulations for finding the motion pattern of the linear protein actuators within the nano-mechanism (see, e.g., [1, 2]). One class of algorithms for predicting the final folded structures of protein molecules is afforded by the so-called knowledge-based approaches. Rooted in pattern recognition and machine learning, these algorithms predict three-dimensional structures of folded protein conformations by considering the linear amino acid sequence of a given protein molecule and utilizing massive datasets of already available folds [3]. The family of knowledge-based solutions, to which the Google AlphaFold [4] belongs, cannot capture the protein-nucleic acid interactions and address the computation of folding pathways, namely, the transient conformations [5] through which the protein molecule attains its folded conformation. Indeed, AI-based methods have been able to find the most likely folded conformations without considering the stability and kinetics of the folding process. The counterpart to knowledge-based approaches, namely, the family of physics-based methods, relies on using physical first-principles to numerically compute the folding pathways and predict the final three-dimensional folded structure of protein molecules [8, 9, 10]. To increase the accuracy of folding pathway computations, numerous optimization techniques such as optimal control-based [11, 12] and homotopy-based [13] algorithms can also be augmented with these numerical methods. While physics-based approaches provide reliable information about the transient conformations during the folding process, they are computationally burdensome. The promising framework of kinetostatic compliance method (KCM), pioneered by Kazerounian, Ilies, and collaborators, models protein molecules as a large number of rigid nano-linkages that form a kinematic mechanism under motion constraints imposed by chemical bonds [6, 7, 14, 15]. 
In this framework, which addresses the high computational load of all-atom molecular dynamics approaches, the dihedral angles, which determine the molecule three-dimensional structure, change under the nonlinear effect of interatomic force fields resulting in protein conformational changes until convergence to a minimum energy state. A schematic of protein folding/unfolding against the free energy landscape of the molecule is depicted in Figure 1. Since its advent, the KCM framework has been successfully utilized for investigation of the role of hydrogen bond formation in protein kinematic mobility problem [16] and design of peptide-based nanomachines [17, 18, 19]. For instance, Mundrane _et al._[17] have utilized the KCM framework for simulating the range of motion for closed-loop cyclic 7-R peptides that are subject to external electric field pertur Fig. 1: Protein folding/unfolding against the free energy landscape of the molecule. In the KCM framework for protein folding [6, 7], the molecule dihedral angles vary under the nonlinear effect of interatomic force fields resulting in protein conformational changes until convergence to a minimum energy state. bations. Moreover, it has been demonstrated that entropy-loss constraints during folding can be encoded in the KCM framework by using a proper nonlinear optimization-based control scheme [20]. Moreover, the KCM framework can be used for systematic investigation of the reverse process of folding, namely, protein unfolding [21]. Despite the KCM computational advantages for numerical simulation of protein folding dynamics and its utilization in design of peptide-based nanorobots/nanomachines, this framework has exclusively relied on the so-called _successive kinetostatic fold compliance_ (see, e.g., [6, 7]), where the iterative conformational changes of the protein molecule are computed by taking steps along a special direction determined by heuristics. Furthermore, the convergence properties of the successive kinetostatic fold compliance has not been investigated in the literature due to its heuristic nature. In this paper, we examine the heuristics behind the conventional successive kinetostatic fold compliance for protein folding dynamics and arrive at a sign gradient descent (SGD) iterative algorithm as an alternative to the conventional approach. SGD algorithms, which were originally proposed in the context of training artificial neural networks (see, e.g., [22]), are a class of first-order methods that merely involve the sign of the gradient of the objective function (the free energy of the protein molecule in our case) while enjoying numerical stability and robust convergence properties [23, 24]. In robotics applications, SGD algorithms have been utilized in settings such as autonomous environmental monitoring where the sign of the change in gradient (not the magnitude of the change) plays a crucial role in planning the motion of the robot (see, e.g., [25]). **Contributions of the paper.** This paper contributes to the KCM-based protein folding framework by developing a family of SGD algorithms for numerical simulation of protein folding dynamics. This contribution is a departure from the established literature (see, e.g., [6, 7]) of the KCM folding framework where numerical simulations of the folding dynamics have exclusively relied on the heuristic successive kinetostatic fold compliance scheme. 
Moreover, by relying on the rich literature of SGD optimization (see, e.g., [24]), this paper provides formal conditions under which the proposed numerical SGD-based iterative algorithm for kinetostatic folding converges to folded protein conformations. Finally, the proposed SGD-based iterative algorithm in this paper results in convergence to the local minima of the free energy of protein molecules corresponding to their final folded conformations in a faster and more robust manner. The rest of the paper organization is as follows. First, in Section II, we provide an overview of the kinematics of protein molecules and the KCM framework for modeling the protein folding process. Thereafter, in Section III, we present the conventional KCM-based iteration and our SGD-based alternative to it. The numerical simulation results are presented in Section IV. Finally, the paper is concluded with future research directions and final remarks in Section V. **Notation.** We denote the set of all non-negative real and non-negative integer numbers by \(\mathbb{R}_{+}\) and \(\mathbb{Z}_{0+}\), respectively. Given a positive integer \(M\), a vector \(\mathbf{x}=[x_{1},\cdots,x_{M}]^{\top}\) in \(\mathbb{R}^{M}\), and a real constant \(p\geq 1\), we denote the \(p\)-norm of the vector by \(|\mathbf{x}|_{p}\). Furthermore, we let \(|\mathbf{x}|_{\infty}=\max\limits_{i}|x_{i}|\). We denote the sign function by \(\text{sgn}(\cdot)\), which is defined according to \(\text{sgn}(a)=1\) if \(a>0\), \(\text{sgn}(a)=0\) if \(a=0\), and \(\text{sgn}(a)=-1\) if \(a<0\). Given a vector-valued function \(\mathbf{f}(\mathbf{x})=[f_{1}(\mathbf{x}),\cdots,f_{M}(\mathbf{x})]^{\top}\) for some positive integer \(M\), we denote \(\text{sgn}(\mathbf{f}(\mathbf{x}))=\left[\text{sgn}(f_{1}(\mathbf{x})), \cdots,\text{sgn}(f_{M}(\mathbf{x}))\right]^{\top}\). ## II Kinetostatic Compliance-Based Protein Folding In this section, we present an overview of the KCM framework for modeling the _in vacuo_ folding dynamics of protein molecules. ### _Nano-linkage-based kinematic model of protein molecules_ Protein molecules are long molecular chains that consist of peptide planes with peptide chemical bonds joining them together. For brevity, we limit our presentation to the protein main chain. Indeed, the essential folding dynamics can be effectively explained by considering the motion of the protein backbone chain (see, e.g., [26]). As demonstrated in Figure 2, each peptide plane, which consists of six coplanar atoms, can be considered as a linkage in the protein kinematic mechanism [27]. Central carbon atoms, which are denoted by \(\text{C}_{\alpha}\) and commonly known as the alpha-Carbon atoms, act as hinges connecting peptide planes together. The peptide plane atoms are bonded together via covalent bonds (i.e., the red line segments in Figure 2). **Remark 1**: _The assumption of coplanarity of the atoms \(\text{C}_{\alpha}\), CO, NH, and \(\text{C}_{\alpha}\), which form each of the peptide planes (see Figure 2), is based on the results from high resolution X-ray crystallographic experiments (see, e.g., [28]). This coplanarity assumption has been the basis of various robotics-inspired approaches in the literature that model protein molecules as robotic mechanisms with hyper degrees-of-freedom (see, e.g., [7, 20, 21, 27])._ Each alpha-carbon atom is bonded to four other chemical components including the three atoms C, N, and H, and a variable side chain shown with SR. 
The first alpha-Carbon of the protein chain structure is bonded to N-terminus, which is an amino group, as well as one peptide plane. Similarly, the last \(\text{C}_{\alpha}\) atom is chemically bonded to the C-terminus, which is a carboxyl group, as well as one other peptide plane. The backbone conformation of the protein molecule kinematic structure consisting of the subchain \(-\text{N}-\text{C}_{\alpha}-\text{C}_{-}\), is described by a collection of bond lengths and a set of pairs of dihedral angles, namely, the angles representing the rotations around the covalent bonds \(\text{C}_{\alpha}-\text{C}\) and \(\text{N}-\text{C}_{\alpha}\) (see Figure 2). Accordingly, \[\boldsymbol{\theta}:=\left[\theta_{1},\cdots,\theta_{2N}\right]^{\top}, \tag{1}\] is the configuration vector of the kinematic structure of a given protein backbone chain with \(N-1\) peptide planes. **Remark 2**: _In the biochemistry literature, 'conformation' is the standard word for describing the geometry of the protein molecule kinematic structure. In the robotics literature, on the other hand, the terminology 'configuration' is frequently used to describe the kinematic structures of robots. In this paper, unless otherwise stated, we use the two words 'conformation' and 'configuration' interchangeably._ Each of the dihedral angles in the conformation vector \(\boldsymbol{\theta}\) in (1) correspond to one degree-of-freedom (DOF) of the protein molecule kinematic chain. Associated with each DOF, one may consider a unit vector denoted by \(\mathbf{u}_{j}\), \(1\leq j\leq 2N\). Each of these vectors are aligned with the rotation axis about which the kinematic chain can rotate. Therefore, as demonstrated in Figure 2, the vectors \(\mathbf{u}_{2i}\) and \(\mathbf{u}_{2i+1}\) represent the unit vectors along the \(\text{C}_{\alpha}-\text{C bond and }\text{N}-\text{C}_{\alpha}\) bond located within the \(i\)-th peptide plane, respectively. Finally, \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2N}\) are the unit vectors of the N- (the amino group) and C-termini (the carboxyl group), respectively. An additional collection of vectors, which are called the **body vectors**, are required to completely determine the spatial orientation of the rigid peptide nano-linkages in protein molecules. The body vectors, which are denoted by \(\mathbf{b}_{j}\), \(1\leq j\leq 2N\), complete the description of the relative position of the coplanar atoms that are located within each of the peptide planes. Specifically, the relative position of any two atoms is given by a linear sum of the form \(k_{1m}\mathbf{b}_{2i}+k_{2m}\mathbf{b}_{2i+1}\), in which the coefficients \(k_{1m}\) and \(k_{2m}\), \(1\leq m\leq 4\), are constant and the same across all peptide linkages (see, e.g., [19] for further details). The body vectors \(\mathbf{b}_{j}\) along with the unit vectors \(\mathbf{u}_{j}\) can be utilized to provide a complete description of the protein molecule conformation as a function of the dihedral angle vector \(\boldsymbol{\theta}\) consisting of the peptide dihedral angles. Indeed, after one designates the zero position configuration with \(\boldsymbol{\theta}=\mathbf{0}\), the matrix transformations \[\mathbf{u}_{j}(\boldsymbol{\theta})=\Xi(\boldsymbol{\theta},\mathbf{u}_{j}^{0} )\mathbf{u}_{j}^{0},\,\mathbf{b}_{j}(\boldsymbol{\theta})=\Xi(\boldsymbol{ \theta},\mathbf{u}_{j}^{0})\mathbf{b}_{j}^{0}, \tag{2}\] determines the kinematic structure of the protein molecule using the dihedral angle conformation vector \(\boldsymbol{\theta}\). 
In (2), the transformation matrix \(\Xi(\boldsymbol{\theta},\mathbf{u}_{j}^{0})\) is defined according to \[\Xi(\boldsymbol{\theta},\mathbf{u}_{j}^{0}):=\prod_{r=1}^{j}R(\theta_{r},\mathbf{u}_{r}^{0}). \tag{3}\] Furthermore, in (3), the rotation about the vector \(\mathbf{u}_{j}^{0}\) with angle \(\theta_{j}\) is given by the rotation matrix \(R(\theta_{j},\mathbf{u}_{j}^{0})\). After determining the body vectors \(\mathbf{b}_{j}(\boldsymbol{\theta})\) from (2) and fixing the N-terminus atom at the origin, the Cartesian coordinates of the \(k^{\text{th}}\)-peptide plane atoms are given by \[\mathbf{r}_{i}(\boldsymbol{\theta})=\sum_{j=1}^{i}\mathbf{b}_{j}(\boldsymbol{\theta}),\ \ 1\leq i\leq 2N-1, \tag{4}\] where the integers \(i=2k-1\) and \(i=2k\) represent the nitrogen atoms and the alpha-Carbon atoms, respectively.

Fig. 2: The protein molecule kinematic mechanism consisting of peptide planes and \(\text{C}_{\alpha}\) atom hinges. There also exists a hydrogen atom, which is not depicted in the figure, connected to each \(\text{C}_{\alpha}\) atom via a covalent bond.

### _KCM-based dynamics of protein folding_ The KCM approach for modeling the protein folding process pioneered by Kazerounian and collaborators is based on the well-established fact that the essential folding dynamics can be explained by neglecting the inertial forces (see, e.g., [6, 11, 29, 30]). Instead, the dihedral angles vary kinetostatically under the effect of the interatomic force fields. Consequently, the dihedral angles at each conformation of the protein molecule change in proportion to the effective torques acting on the peptide chain. Considering a peptide chain with \(N_{a}\) atoms and \(N-1\) peptide planes, where the dihedral angle vector is given by \(\boldsymbol{\theta}\) in (1), and denoting the Cartesian positions of any two atoms \(a_{i}\), \(a_{j}\) in the protein chain by \(r_{i}(\boldsymbol{\theta})\), \(r_{j}(\boldsymbol{\theta})\), their distance can be computed from \(d_{ij}(\boldsymbol{\theta}):=|r_{i}(\boldsymbol{\theta})-r_{j}(\boldsymbol{\theta})|\). The parameters associated with the respective electrostatic charges of the atoms in the protein molecule, the van der Waals radii of these atoms, the van der Waals distance between any two atoms, their dielectric constant, the depth of the potential well of any pair of atoms, and the weight factors for the electrostatic and van der Waals forces between any pair of two atoms can be found in [19] and the references therein. Under these considerations, the aggregated free energy of the protein molecule can be written as \[\mathcal{G}(\boldsymbol{\theta}):=\mathcal{G}^{\text{elec}}(\boldsymbol{\theta})+\mathcal{G}^{\text{vdw}}(\boldsymbol{\theta}), \tag{5}\] where \(\mathcal{G}^{\text{elec}}(\mathbf{\theta})\) and \(\mathcal{G}^{\text{vdw}}(\mathbf{\theta})\) are the protein molecule electrostatic potential energy and the van der Waals interatomic potential energy, respectively (see, e.g., [7] for the detailed expressions). The resultant forces of Coulombic and van der Waals nature exerted on each atom \(a_{i}\), \(1\leq i\leq N_{\text{a}}\), can be computed from \(F_{i}^{\text{elec}}(\mathbf{\theta})=-\nabla_{\mathbf{r}_{i}}\mathcal{G}^{\text{elec}}\) and \(F_{i}^{\text{vdw}}(\mathbf{\theta})=-\nabla_{\mathbf{r}_{i}}\mathcal{G}^{\text{vdw}}\), respectively. According to the KCM-based modeling framework [7], it is required to compute the resultant forces and torques acting on the peptide planes in the protein molecule.
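To make Eqs. (2)-(4) concrete, here is a minimal numerical sketch of the forward kinematics: axis-angle rotations are composed into the cumulative transformation \(\Xi\), applied to the zero-position unit and body vectors, and the body vectors are summed into atom coordinates. The zero-position vectors below are random placeholders rather than actual peptide-plane geometry, and the left-to-right ordering of the rotation product is an assumed convention; this is not the Protofold implementation.

```python
import numpy as np

def axis_angle_rotation(theta, u):
    """Rodrigues formula for the rotation matrix R(theta, u) about the unit vector u."""
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def chain_coordinates(theta, u0, b0):
    """Eqs. (2)-(4): rotate the zero-position vectors by the cumulative product of rotations,
    then accumulate the body vectors into Cartesian atom positions (N-terminus at origin)."""
    m = len(theta)
    Xi = np.eye(3)
    u = np.zeros((m, 3))
    b = np.zeros((m, 3))
    for j in range(m):
        Xi = Xi @ axis_angle_rotation(theta[j], u0[j])   # Xi(theta, u_j^0), Eq. (3)
        u[j] = Xi @ u0[j]                                # Eq. (2)
        b[j] = Xi @ b0[j]
    r = np.cumsum(b, axis=0)                             # r_i = sum_{j<=i} b_j, Eq. (4)
    return u, b, r

# Toy example with 2N = 8 dihedral angles and placeholder zero-position geometry
rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, size=8)
u0 = rng.normal(size=(8, 3))
u0 /= np.linalg.norm(u0, axis=1, keepdims=True)
b0 = rng.normal(size=(8, 3))
_, _, r = chain_coordinates(theta, u0, b0)
print("atom positions r_i(theta):\n", np.round(r, 3))
```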
Subsequently, the computed forces and torques are appended in the \(6N\)-dimensional vector \(\mathcal{F}(\mathbf{\theta})\), which is the generalized force vector directing the process of protein folding. In the next step, the generalized force vector \(\mathcal{F}(\mathbf{\theta})\) needs to be mapped to an equivalent torque vector, which is responsible for varying the dihedral angle vector of the protein molecule. Specifically, the vector \(\mathbf{\tau}(\mathbf{\theta})\in\mathbb{R}^{2N}\), which is along the gradient of the aggregated free energy \(\mathcal{G}(\mathbf{\theta})\), is given by \[\mathbf{\tau}(\mathbf{\theta})=\mathcal{J}^{\top}(\mathbf{\theta})\mathcal{F}(\mathbf{\theta }), \tag{6}\] where the matrix \(\mathcal{J}(\mathbf{\theta})\in\mathbb{R}^{6N\times 2N}\) is the molecule chain Jacobian at conformation \(\mathbf{\theta}\) (see [7] for further details). At each folded protein molecule conformation \(\mathbf{\theta}^{*}\), which corresponds to a local minimum of the aggregated free energy \(\mathcal{G}(\mathbf{\theta})\), the torque vector vanishes, namely, \(\mathbf{\tau}(\mathbf{\theta}^{*})=\mathbf{0}\). As described in the next section, although the torque vector \(\mathbf{\tau}(\mathbf{\theta})\) is along the steepest-descent direction of the free energy gradient in the conformation landscape, Kazerounian and collaborators [7, 27] have noticed that using a normalized version of the torque vector for iterative update of the protein molecule conformations would have an improved performance in terms of stability and convergence rate. ## III The Conventional KCM-based Iteration and its SGD-based Alternative In this section we first present the conventional KCM-based iteration for protein folding dynamics. Next, by closely examining the heuristics behind this numerical scheme, we propose an SGD-based successive kinetostatic fold compliance alternative and present its convergence properties. Given an unfolded protein molecule conformation \(\mathbf{\theta}_{0}\), the conventional successive kinetostatic fold compliance, which relates the joint torques to the changes in the dihedral angles, is given by the numerical iteration (see, e.g., [6, 7]) \[\mathbf{\theta}_{k+1}=\mathbf{\theta}_{k}+\kappa_{0}\,\frac{\mathbf{\tau}(\mathbf{\theta}_{k} )}{|\mathbf{\tau}(\mathbf{\theta}_{k})|_{\infty}},\,\,k\in\mathbb{Z}_{0+}, \tag{7}\] where the normalized torque vector \(\frac{\mathbf{\tau}(\mathbf{\theta}_{k})}{|\mathbf{\tau}(\mathbf{\theta}_{k})|_{\infty}}\) in (7) is responsible for varying the dihedral angle vector at each conformation \(\mathbf{\theta}_{k}\). Moreover, the positive constant \(\kappa_{0}\) is chosen small enough to avoid large variations in the dihedral angles and is tuned in a heuristic manner. The iterative steps in (7) are repeated until the aggregated free energy \(\mathcal{G}(\mathbf{\theta})\) of the molecule converges to a close vicinity of a free-energy-landscape local minimum where the norm of the torque vector, namely, \(|\mathbf{\tau}(\mathbf{\theta}_{k})|_{2}\), becomes less than a desired tolerance \(\tau_{\text{tol}}\) (i.e., the convergence criterion is met if \(|\mathbf{\tau}(\mathbf{\theta}_{k})|_{2}<\tau_{\text{tol}}\)). 
Despite the fact that the torque vector \(\mathbf{\tau}(\mathbf{\theta}_{k})\) in the conventional KCM-based iteration given by (7) is along the steepest-descent direction of the free energy gradient in the conformation landscape, Kazerounian and collaborators [7, 27] have noticed that using the normalized torque vector \(\frac{\mathbf{\tau}(\mathbf{\theta}_{k})}{|\mathbf{\tau}(\mathbf{\theta}_{k})|_{\infty}}:= \big{[}\frac{\tau_{1}(\mathbf{\theta}_{k})}{|\mathbf{\tau}(\mathbf{\theta}_{k})|_{\infty} },\cdots,\frac{\tau_{2N}(\mathbf{\theta}_{k})}{|\mathbf{\tau}(\mathbf{\theta}_{k})|_{ \infty}}\big{]}^{\top}\) for iterative update of the protein molecule conformations would outperform using \(\mathbf{\tau}(\mathbf{\theta}_{k})\) in terms of stability and convergence rate. Indeed, normalizing by the maximum joint torque \(|\mathbf{\tau}(\mathbf{\theta}_{k})|_{\infty}=\max_{i}|\tau_{i}(\mathbf{\theta}_{k})|\) throughout the entire chain, results in normalizing the torques according to \(\frac{\tau_{1}(\mathbf{\theta}_{k})}{|\mathbf{\tau}(\mathbf{\theta}_{k})|_{\infty}}\in[-1,1]\). _The aforementioned analysis leads to the important insight that the magnitude of the torque vector \(\mathbf{\tau}(\mathbf{\theta}_{k})\) does not play a role in the successive kinetostatic fold compliance algorithm given by (7)._ Using this insight, it is possible that one only considers the sign of the torque vector \(\mathbf{\tau}(\mathbf{\theta}_{k})\) as an alternative to the heuristic approach in the conventional successive kinetostatic fold compliance in (7). In particular, following the sign gradient descent optimization literature (see, e.g., [22, 24]), we propose the _SGD-based successive kinetostatic fold compliance algorithm_ \[\mathbf{\theta}_{k+1}=\mathbf{\theta}_{k}+\kappa_{k}\,\text{sgn}\big{(}\mathbf{\tau}(\mathbf{ \theta}_{k})\big{)},\,k\in\mathbb{Z}_{0+}, \tag{8}\] where \(\kappa_{k}\) is a step size that changes dynamically in every iteration and \(\text{sgn}\big{(}\mathbf{\tau}(\mathbf{\theta}_{k})\big{)}=\big{[}\text{sgn}(\tau_{1}( \mathbf{x})),\cdots,\text{sgn}(\tau_{2N}(\mathbf{x}))\big{]}^{\top}\). Furthermore, the step size \(\kappa_{k}\) is tuned according to a proper adaptive step size strategy \[\kappa_{k+1}=\mathcal{S}(\kappa_{k}),\,\,k\in\mathbb{Z}_{0+}, \tag{9}\] where the mapping \(\mathcal{S}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) should satisfy \(\mathcal{S}(\kappa_{k})<\kappa_{k}\) for every positive \(\kappa_{k}\). **Remark 3**: _Comparing the successive kinetostatic fold compliance schemes given by (7) and (8), it can be seen that none of them rely on the magnitude of the original torque vector \(\mathbf{\tau}(\mathbf{\theta}_{k})\). Indeed, the conventional kinetostatic fold compliance in (7) relies on the normalized torque vector \(\frac{\mathbf{\tau}(\mathbf{\theta}_{k})}{|\mathbf{\tau}(\mathbf{\theta}_{k})|_{\infty}}\), while the proposed SGD-based successive kinetostatic fold compliance scheme in (8) relies on the sign of the torque vector, i.e., \(\text{sgn}\big{(}\mathbf{\tau}(\mathbf{\theta}_{k})\big{)}\). 
Furthermore, while the step size \(\kappa_{0}\) in (7) is fixed, the step size \(\kappa_{k}\) in (8) varies in a dynamic manner._ To design the adaptive step size mapping \(\mathcal{S}(\cdot)\) in (9), one can use various established methods such as the following step size adaptation rule (see, e.g., [24]) \[\mathcal{S}(\kappa_{k})=\gamma_{0}\kappa_{k},\,k\in\mathbb{Z}_{0+}, \tag{10}\] where \(\gamma_{0}\in(0,1)\) is a positive constant, resulting in the step size sequence \(\big{\{}\kappa_{0}\cdot(\gamma_{0})^{k}\big{\}}_{k\in\mathbb{Z}_{0+}}\). It is remarked that in the special case of \(\gamma_{0}=0.5\), the step size adaptation rule in (10) is called the DICHO algorithm. Moulay _et al._[24] have provided conditions on adaptive step size strategies under which sign gradient descent algorithms converge. Considering an unfolded conformation \(\mathbf{\theta}_{0}\) of a protein molecule in the vicinity of a folded conformation \(\mathbf{\theta}^{*}\), one can utilize Theorem 1 in [24] to find conditions on the SGD-based successive fold compliance iteration in (8) with adaptive step size strategy given by (9) to guarantee asymptotic convergence to the folded conformation \(\mathbf{\theta}^{*}\). In particular, assuming that \(\mathbf{\theta}^{*}\) is an isolated local minimum of \(\mathcal{G}(\mathbf{\theta})\) and that \((\mathbf{\theta}^{*}-\mathbf{\theta}_{k})^{\top}\text{sgn}(\mathbf{\tau}(\mathbf{\theta}_{k}))>0\) in an open neighborhood of \(\mathbf{\theta}^{*}\), the conditions due to Moulay _et al._[24] in the context of SGD-based protein folding read as follows: 1. \(0<\kappa_{k}<2(\mathbf{\theta}^{*}-\mathbf{\theta}_{k})^{\top}\text{sgn}(\mathbf{\tau}(\mathbf{\theta}_{k}))\); 2. \(\kappa_{k}(\mathbf{\theta}^{*}-\mathbf{\theta}_{k})^{\top}\text{sgn}(\mathbf{\tau}(\mathbf{\theta}_{k}))\geq c\,\|\mathbf{\theta}^{*}-\mathbf{\theta}_{k}\|^{\alpha}\) for some positive \(\alpha\) and \(c\); and, 3. \(\lim\limits_{k\rightarrow\infty}\kappa_{k}=0\). ## IV Numerical Simulations In this section we present numerical simulation results for KCM-based protein folding dynamics to validate our proposed SGD-based successive kinetostatic fold compliance algorithm given by (8) and compare its performance against the conventional kinetostatic fold compliance algorithm given by (7). In our simulations, we considered a protein molecule backbone chain consisting of \(N-1=15\) peptide planes, which corresponds to having a \(2N=32\)-dimensional dihedral angle space (i.e., the conformation vector \(\mathbf{\theta}\) in (1) consists of \(32\) dihedral angles). Our implementation followed the guidelines of Protofold I [6, 19] on an Intel(r) Core(tm) i7-6770HQ CPU. To demonstrate the advantages of our proposed SGD-based folding algorithm, we purposefully chose a relatively large parameter \(\kappa_{0}\) for the conventional kinetostatic fold compliance algorithm given by (7). In particular, we set \(\kappa_{0}=0.01\). Furthermore, we set the initial step size for the SGD-based algorithm in (8) to be equal to \(\kappa_{0}=0.01\) (the same as the fixed step size in (7)). Moreover, we chose an adaptive step size strategy similar to (10) with \(\gamma_{0}=0.99\). Our initial protein molecule conformations in both tests were chosen to be the same pre-coiled backbone chain in the vicinity of an \(\alpha\)-helix conformation.
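For intuition on how the two update rules behave, the toy comparison below implements the conventional iteration (7) and the SGD-based iteration (8) with the geometric step-size rule (10) on a surrogate energy landscape. The quadratic-plus-sinusoidal torque model standing in for \(\mathcal{J}^{\top}(\boldsymbol{\theta})\mathcal{F}(\boldsymbol{\theta})\), the step-size values, and the termination tolerance are placeholders; this is not the Protofold I force field or implementation.

```python
import numpy as np

def torque(theta):
    """Surrogate for tau(theta) in Eq. (6); the true torque comes from J^T(theta) F(theta)."""
    A = np.diag(np.linspace(1.0, 4.0, theta.size))
    return -(A @ theta + 0.3 * np.sin(theta))   # negative gradient of a toy free energy

def conventional_kcm(theta0, kappa0=0.01, tol=1e-6, max_iter=5000):
    """Eq. (7): fixed step along the max-norm-normalized torque."""
    theta = theta0.copy()
    for _ in range(max_iter):
        tau = torque(theta)
        if np.linalg.norm(tau) < tol:
            break
        theta = theta + kappa0 * tau / np.max(np.abs(tau))
    return theta

def sgd_kcm(theta0, kappa0=0.01, gamma0=0.99, tol=1e-6, max_iter=5000):
    """Eqs. (8)-(10): step along sign(tau) with a geometrically shrinking step size."""
    theta, kappa = theta0.copy(), kappa0
    for _ in range(max_iter):
        tau = torque(theta)
        if np.linalg.norm(tau) < tol:
            break
        theta = theta + kappa * np.sign(tau)
        kappa = gamma0 * kappa                  # adaptive rule S(kappa) = gamma0 * kappa
    return theta

theta0 = np.full(32, 0.5)                       # 2N = 32 dihedral angles, as in Section IV
for name, theta_final in [("conventional (7)", conventional_kcm(theta0)),
                          ("SGD-based (8)", sgd_kcm(theta0))]:
    print(f"{name:18s} |tau|_2 at termination: {np.linalg.norm(torque(theta_final)):.2e}")
```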
Figure 3 depicts the free energy of the protein backbone peptide chain starting from the same initial conformations (also depicted in the same figure) under the conventional algorithm (blue curve) and the SGD-based algorithm (red curve). Furthermore, Figure 3 depicts the configurations of the protein molecule in the 30th and the 600th iterations under the conventional and SGD-based folding algorithms, respectively. As can be clearly seen from Figure 3, the free energy under the successive kinetostatic fold compliance algorithm has gone through oscillations and converged to a higher free energy level of the protein molecule. On the other hand, the free energy of the protein backbone chain under the SGD-based algorithm has not gone through the same oscillations and converged to a lower free energy level of the protein molecule in a faster manner. The observations in Figure 3 are in accordance with a well-known fact from the established KCM literature that the price to pay for convergence is to choose smaller step sizes with more iterations required for convergence and a consequent higher computational burden. To demonstrate this fact, we reduced the step size associated with the conventional algorithm from \(\kappa_{0}=0.01\) to \(\kappa_{0}=0.001\). The free energy level of the protein backbone chain is depicted in Figure 4. As can be seen from the figure, the free energy of the protein molecule under the conventional successive kinetostatic fold compliance algorithm with a smaller step size (\(\kappa_{0}=0.001\)) manages to converge to the same free energy level as its SGD-based counterpart, but only with a higher number of numerical iterations (1500 iterations in contrast with 600 iterations). ## V Concluding Remarks and Future Research Directions In a departure from the established kinetostatic fold compliance literature on numerically simulating the protein folding process, this paper proposed a sign gradient descent algorithm for predicting the three-dimensional folded protein molecule structures. The more stable and robust convergence properties of the proposed SGD-based algorithm make it suitable for accurate simulation of the range of motion of peptide-based nanorobots/nanomachines such as parallel nanomechanisms [1, 2] and closed-loop cyclic 7-R peptide-based mechanisms [17]. As future research directions, we envision that the proposed SGD-based successive kinetostatic fold compliance scheme can be utilized for efficient numerical investigation of the KCM-based protein folding dynamics under solvation effects and entropy-loss constraints. Furthermore, our proposed algorithm has the potential to lend itself to stochastic SGD extensions by relying on the emerging literature of stochastic sign descent methods (see, e.g., [31]). ## Acknowledgments This work is supported by the National Science Foundation (NSF) through the award number CMMI-2153744.
2305.09482
Your Identity is Your Behavior -- Continuous User Authentication based on Machine Learning and Touch Dynamics
The aim of this research paper is to look into the use of continuous authentication with mobile touch dynamics, using three different algorithms: Neural Network, Extreme Gradient Boosting, and Support Vector Machine. Mobile devices are constantly increasing in popularity in the world, today smartphone subscriptions have surpassed 6 billion. Mobile touch dynamics refer to the distinct patterns of how a user interacts with their mobile device, this includes factors such as touch pressure, swipe speed, and touch duration. Continuous authentication refers to the process of continuously verifying a user's identity while they are using a device, rather than just at the initial login. This research used a dataset of touch dynamics collected from 40 subjects using the LG V30+. The participants played four mobile games, PUBG, Diep.io, Slither, and Minecraft, for 10 minutes each game. The three algorithms were trained and tested on the extracted dataset, and their performance was evaluated based on metrics such as accuracy, precision, false negative rate, and false positive rate. The results of the research showed that all three algorithms were able to effectively classify users based on their individual touch dynamics, with accuracy ranging from 80% to 95%. The Neural Network algorithm performed the best, achieving the highest accuracy and precision scores, followed closely by XGBoost and SVC. The data shows that continuous authentication using mobile touch dynamics has the potential to be a useful method for enhancing security and reducing the risk of unauthorized access to personal devices. This research also notes the importance of choosing the correct algorithm for a given dataset and use case, as different algorithms may have varying levels of performance depending on the specific task.
Brendan Pelto, Mounika Vanamala, Rushit Dave
2023-04-24T13:45:25Z
http://arxiv.org/abs/2305.09482v1
Your Identity is Your Behavior - Continuous User Authentication based on Machine Learning and Touch Dynamics ###### Abstract The aim of this research paper is to look into the use of continuous authentication with mobile touch dynamics, using three different algorithms: Neural Network, Extreme Gradient Boosting, and Support Vector Machine. Mobile devices are constantly increasing in popularity in the world, today smartphone subscriptions have surpassed 6 billion. Mobile touch dynamics refer to the distinct patterns of how a user interacts with their mobile device, this includes factors such as touch pressure, swipe speed, and touch duration. Continuous authentication refers to the process of continuously verifying a user's identity while they are using a device, rather than just at the initial log. This research used a dataset of touch dynamics collected from 40 subjects using the LG V30+. The participants played four mobile games, PUBG, Diep.io, Slither, and Minecraft, for 10 minutes each game. The three algorithms were trained and tested on the extracted dataset, and their performance was evaluated based on metrics such as accuracy, precision, false negative rate, and false positive rate. The results of the research showed that all three algorithms were able to effectively classify users based on their individual touch dynamics, with accuracy ranging from 80% to 95%. The Neural Network algorithm performed the best, achieving the highest accuracy and precision scores, followed closely by XGBoost and SVC. The data shows that continuous authentication using mobile touch dynamics has the potential to be a useful method for enhancing security and reducing the risk of unauthorized access to personal devices. This research also notes the importance of choosing the correct algorithm for a given dataset and use case, as different algorithms may have varying levels of performance depending on the specific task. Continuous User Authentication, Touch Dynamics, Machine Learning, PUBG, Diep.io, Biometric Data, Neural Network, XGBoost, SVC, Gesture-based Touch Dynamics, Security ## I Introduction In the world today, the use of mobile devices has become a huge part of daily life. These devices have become important tools for modern-day communication, from accessing personal data to financial transactions. However, the increased use of these devices has also created an additional risk of unauthorized access and data theft. Many forms of authentication have been developed to counter this threat, such as passwords, PIN, and face identification. However, these ways are not guaranteed and can be accessed by hackers. This is why the need for continuous authentication has become more important than ever before. Studies have shown that touch dynamics, the analysis of touch patterns on mobile devices, can provide another layer of security for mobile authentication. We will focus on multi-finger touch dynamics and its potential ability for continuous mobile authentication. Specifically, we look at the use of two fingers as a way of improving the accuracy of touch-based models. We use machine learning algorithms such as a Neural Network, XGBoost, and SVC to achieve this. The way people connect with their mobile devices can allow for several features to be extracted from their touch data, this is generally known as touch dynamics. There are two sub-categories, one being keystroke-based and the other being gesture-based touch dynamics. 
Keystroke-based touch dynamics looks at individual taps made by users when using a mobile device, while gesture-based touch dynamics looks at the continuous motion of the touch data, often called a swipe or gesture. Both approaches use features extracted from the touch data, which can then be used to train algorithms to recognize distinct users. Features such as hold times can be extracted from keystroke touch dynamics, while features such as swipe direction, speed, and acceleration can be extracted from gesture touch dynamics. Our research contributes to the field of touch dynamics in the following ways: * Create and examine several binary classifiers (Neural Network, XGBoost, SVC) to investigate the potential of multi-finger touch dynamics * Provide a dataset of 40 users playing four games, PUBG, Diep.io, Slither, and Minecraft. [https://github.com/Bprb08/Touch-Dynamics-Research](https://github.com/Bprb08/Touch-Dynamics-Research) * Compare the accuracy of binary classification algorithms with multi-finger data to previous research involving single-finger gesture-based touch dynamics. ## II Background and Related Work Touch dynamics is a fast-growing area of study that looks at the distinct characteristics of touch, including pressure and duration of the interactions with mobile devices. Since there is a growing use of touch-enabled devices in daily life, the study of touch dynamics has gained a lot more attention. Research on touch dynamics has focused mainly on authentication and gesture recognition. Authentication refers to the process of identifying users based on their distinct touch patterns, while gesture recognition involves recognizing specific touch patterns, such as individual swipes or taps on the screen, and mapping them to different actions. In recent studies, machine learning algorithms have shown promising results in both authentication and gesture recognition. One study by DeRidder et al. in 2022 looked at the potential of using machine learning algorithms for authentication on mobile devices. They used k-Nearest Neighbors (kNN) and Random Forest (RF) algorithms to classify different users based on touch dynamics data collected from mobile devices and to distinguish an authentic user from an imposter. The study achieved high accuracy rates of 83.4% for kNN and 93.49% for RF in classifying different users, showing the potential of touch dynamics for authentication. Our study builds on this previous research, which used machine learning algorithms for authentication based on touch dynamics. This research aims to explore the field of mobile touch dynamics with sets of data points from each finger, using machine learning algorithms to analyze the touch dynamics data and achieve promising results in authentication. The field of touch dynamics is growing, with massive potential for authentication on touch devices. Other research has mainly focused on using machine learning algorithms to analyze touch dynamics data. Some of these studies can be seen in Table 1. This research looks to address the gap in multi-finger touch dynamics datasets and build on previous research to investigate the potential of touch dynamics for authentication on mobile devices. The use of kNN and SVM algorithms, as seen in DeRidder et al. in 2022, motivates the current study's use of machine learning algorithms for touch dynamics analysis.
\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Title & Method & Dataset & Results \\ \hline “Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication” (2015) by Frank et al. & Machine learning algorithms using both the pressure of each touch and gesture recognition & Dataset of 200 users who typed on a smartphone for 3 weeks & Utilized EER as the biometric performance metric and achieved an average of 3.3\%, very similar to our FPR scores \\ \hline “Continuous User Authentication Using Machine Learning and Multi-Finger Mobile Touch Dynamics” (2022) by DeRidder et al. & Machine learning using multiple features from raw touch and swipe data & Dataset of 25 users who played for 10 minutes & Achieved 93.49\% accuracy for an RF classifier as well as 83.4\% for a kNN model \\ \hline “Techniques for Continuing Touch-Based Authentication Modeling” (2022) by DeRidder et al. & Utilized several models, including a NN with nearly the same batch size and epochs & The actual size of the dataset is unknown; two sessions, each using all gestures & Averaged 15.20\% \\ \hline “Continuous User Authentication Using Mouse Dynamics, Machine Learning, and Minecraft” (2021) by Siddiqui et al. & RF classifier trained on user mouse dynamics & Mouse dynamics collected while users played Minecraft & Achieved a high average accuracy with the RF classifier \\ \hline “Applications of Mouse Dynamics for Continuous User Authentication” (2022) by Siddiqui et al. & Used kNN, SVM, and Neural Network, with each user's data used for all algorithms & The dataset consisted of 400 users & Achieved a peak accuracy of 92.48\% with the Neural Network \\ \hline “Applications of Recurrent Neural Network for Biometric Authentication \& Anomaly Detection” (2021) & Used a Recurrent Neural Network trained on user mouse and keyboard dynamics & The dataset used had 744 male and female users & Achieved an accuracy of 99.4\% with the use of the RNN \\ \hline \end{tabular} \end{table} Table 1: A comparison of similar research, showing the methods used, the size of the datasets, and the results. ## III Methodologies ### _Data Collection_ In our research, we collected data from 40 different users playing 4 games for 10 minutes each: PUBG, Diep.io, Slither, and Minecraft. All the games had the same settings to make sure that all users played in the same environment. In deciding which games to use, we needed games that require two fingers simultaneously; these games also involve a lot of finger movement, which leads to more data. The data was collected from an Android phone, the LG V30+, in a landscape position. In order to collect useful data, a Python script was written to connect to the Android phone through the Android debugging interface. An algorithm was also written to distinguish between fingers in the raw sensor data, and the data was saved in a text file.
Each line of text contained 8 fields: Timestamp, X, Y, Button Touch, Width Major, Pressure, Orientation, and Finger. The Timestamp is the time in seconds, and X and Y give the device's x and y coordinates. Button Touch tells whether a finger is currently pressed on the screen. Width Major shows the length of the touch shape on the screen. The Finger column determines which of the two fingers the data corresponds to; values of 0 or 1 were assigned to each finger to tell the inputs apart. Other research in touch dynamics has focused on areas around authentication. Some studies have collected touch dynamics data from users' interactions with touch devices and used machine learning algorithms to identify specific users based on their own touch patterns. The results showed that touch dynamics can be a useful way for continuous authentication on mobile devices. We also grouped the touch data, using a large enough number of touch events per group to ensure enough data differentiation for feature extraction, and we chose to group events as in previous research. We wanted to avoid using a timed approach to define gestures, as it created high variability, whereas a fixed number of events created more meaningful data extraction and training. The groupings were split so that 10 separate touch/sliding events created enough data for feature extraction and so that the model would have enough data to train from. This can be seen in Figure 1 below. ### _Data Cleaning_ To ensure the accuracy of our data, we took several steps to clean and preprocess the data collected from the 40 users. Any rows containing null values were deleted from the dataset. This made sure that there were no missing values in the data that could possibly affect our results. We also had to make sure that the different types of actions were consistent across all users and their devices. This was done to ensure that any changes in the data came only from the specific user and not from differences in device operation. Another issue that came up during data collection was multiple events happening simultaneously, which led to errors in our calculations. To correct this, we sorted the data from each finger into its own area, with data from finger 1 at the top and data from finger 2 at the bottom. This helped to prevent any mixing of events between fingers, which could have created errors in our feature extraction process. Then, to improve the accuracy of our results, we combined the separated finger data and shuffled it so that data from each finger was distributed across the set. This made sure that the classifiers were analyzing data from both fingers equally and that any change in the data did not come from a specific pattern in the data collection process. These data-cleaning steps were important in ensuring the accuracy of the data in our analysis. ### _Feature Extraction_ From the data for each user, we extracted 11 additional features from the initial fields. These features are an important part of distinguishing users since each one is specific to an individual [20]. The 11 features are: X Speed, Y Speed, Speed, X Acceleration, Y Acceleration, Acceleration, Jerk, Path Tangent, Angular Velocity, Touch Major, and Touch Minor. The definitions for each feature can be found in Table 2 below. These features in Table 2 are important for the binary classifiers to have enough differences in the data for each user.
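To make the feature-extraction step concrete, the snippet below is a minimal sketch of how the motion-derived features could be computed from the Timestamp, X, and Y fields of one 10-event window for a single finger (Touch Major and Touch Minor come directly from the sensor fields rather than being derived). It is illustrative only: the column names, the NumPy/pandas implementation, and the helper name `motion_features` are our assumptions, not the authors' actual script.

```python
import numpy as np
import pandas as pd

def motion_features(window: pd.DataFrame) -> dict:
    """Derive per-window motion features from raw touch events.

    `window` is assumed to hold one group of 10 events for a single finger,
    with columns 'timestamp', 'x', and 'y' (column names are assumed).
    """
    t = window["timestamp"].to_numpy(dtype=float)
    x = window["x"].to_numpy(dtype=float)
    y = window["y"].to_numpy(dtype=float)

    dt = np.diff(t)
    dt[dt == 0] = 1e-6                                   # guard against repeated timestamps
    vx, vy = np.diff(x) / dt, np.diff(y) / dt            # X Speed, Y Speed
    speed = np.hypot(vx, vy)                             # Speed
    ax, ay = np.diff(vx) / dt[1:], np.diff(vy) / dt[1:]  # X/Y Acceleration
    acc = np.diff(speed) / dt[1:]                        # Acceleration
    jerk = np.diff(acc) / dt[2:]                         # Jerk
    theta = np.arctan2(np.diff(y), np.diff(x))           # Path Tangent
    ang_vel = np.diff(theta) / dt[1:]                    # Angular Velocity

    feats = {}
    for name, arr in [("speed", speed), ("acc", acc),
                      ("jerk", jerk), ("ang_vel", ang_vel)]:
        feats.update({f"{name}_avg": arr.mean(), f"{name}_min": arr.min(),
                      f"{name}_max": arr.max(), f"{name}_std": arr.std()})
    return feats

# Example: apply to consecutive, non-overlapping windows of 10 events
# of one finger's event stream `df`:
# rows = [motion_features(df.iloc[i:i + 10]) for i in range(0, len(df) - 9, 10)]
```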
To capture even more differences, we compute, for each of these 11 features, the average, minimum, maximum, and standard deviation, which creates 44 features for the classifiers to train on. The features collected from this data are very important for determining the differences between users. Touch dynamics are highly distinctive to each user and can be used to identify specific individuals. Along with this, psychological effects are also reflected in the pressure of gestures, which can create patterns within the data. ### _Training and Testing_ In order to make sure the accuracy of our classifiers was consistent, we used a data-splitting approach that divided the total data generated by each user into training and testing sets, with 80% and 20% of the data, respectively. Our choice of 80% training data was informed by other research evidence, as this is said to produce highly accurate results [23]. To decrease the potential for bias in our dataset, we concatenated and shuffled the training and testing data from all users into a main text file, thereby removing any noticeable pattern for any of the users. We also equally distributed the data for authentic and imposter users. We used a binary classification scheme, assigning 0 or 1 to each row in the dataset to indicate whether it is authentic, with 0 representing authentic data and 1 representing imposter data. Lastly, we analyzed the accuracy of our classifiers by comparing the model's predictions to the true labels of the testing data. ## IV Results and Analysis Our research aims to evaluate the effectiveness of a recently created dataset by examining its ability to discern users using only the features extracted from their touch data on mobile devices. The dataset was obtained equally from all participants, and every classifier was trained and tested on each individual user. To determine the performance of each binary classifier, we used a range of evaluation criteria. This included accuracy, false positive rate (FPR), false negative rate (FNR), and F1 score. We calculated these metrics for each of the 40 individual users in our study. For all these criteria, obtaining the lowest possible FPR and FNR is desirable. In a real evaluation of a biometric system, a high FPR might allow unauthorized users to gain access to sensitive data, while a high FNR could prevent authentic users from getting access. Accuracy is also an important metric since it offers a good idea of the classifier's performance in authenticating a distinct user against their test data. The F1 score evaluates both the recall and precision of a model. ### _Diep.io and PUBG results_ Fig. 2: A portion of the user outcomes from the Neural Network and XGBoost approaches, as well as the overall mean results for every assessment metric. Abbreviations: Avg signifies average, while Stdv represents Standard Deviation. Complete values for all users have been excluded to save space, but they can be viewed at [https://github.com/Bprpb08/Touch-Dynamics-Research](https://github.com/Bprpb08/Touch-Dynamics-Research) The classifiers used in this study were Neural Network, XGBoost, and SVC. The Neural Network achieved an average accuracy of 90.04%, with an F1 score of 91.41%, an FNR of 2.69%, and an FPR of 3.60%. The XGBoost classifier obtained an average accuracy of 86.61%, with an F1 score of 87.36%, an FNR of 4.47%, and an FPR of 4.28%. The SVC classifier achieved an average accuracy of 78.65%, with an F1 score of 80.63%, an FNR of 6.42%, and an FPR of 4.56%.
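As a rough illustration of the protocol just described (80/20 split with shuffling, binary authentic/imposter labels, and the accuracy, FPR, FNR, and F1 metrics reported here), the following sketch evaluates a single stand-in classifier on placeholder data. A scikit-learn SVC is used for brevity; the paper's Neural Network and XGBoost models would be scored in the same way, and all array contents are invented placeholders rather than the collected dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, f1_score

# Placeholder feature matrix (n_windows, 44) and binary labels
# (0 = authentic user, 1 = imposter), following the labeling scheme above.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 44))
y = rng.integers(0, 2, size=400)

# 80/20 split with shuffling, mirroring the protocol in the text.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, shuffle=True, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
pred = clf.predict(scaler.transform(X_te))

cm = confusion_matrix(y_te, pred, labels=[0, 1])  # cm[i, j]: true label i, predicted j
accuracy = (cm[0, 0] + cm[1, 1]) / cm.sum()
fpr = cm[1, 0] / cm[1].sum()            # imposter windows accepted as authentic
fnr = cm[0, 1] / cm[0].sum()            # authentic windows rejected as imposters
f1 = f1_score(y_te, pred, pos_label=0)  # F1 for the authentic class
print(f"accuracy={accuracy:.3f}  FPR={fpr:.3f}  FNR={fnr:.3f}  F1={f1:.3f}")
```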
Decreasing the FPR and FNR is important for ensuring secure access to a system, as wrong assessments could lead to access by impostors. The FPR for all classifiers remained relatively low, with an average of 4.0% for XGBoost, 5.3% for SVC, and 3.3% for Neural Network. However, the FNR was higher in most instances, reaching 4.6% for XGBoost, 6.1% for SVC, and 2.6% for Neural Network. Our research looked at the performance of the Neural Network, XGBoost, and SVC classifiers and found that all three algorithms had relatively high accuracy rates, as seen in Figure 2 above. The SVC algorithm performed somewhat worse than the others, likely due to differences in algorithm functionality. However, we also evaluated FPR and FNR scores, which provide important information on the potential for inaccurate judgments. These results highlight the effectiveness of these algorithms for authentication tasks and provide important details for future research. ## V Limitations and Future Work Even though multi-finger authentication has many advantages over single-finger approaches, there are still limitations that could cause problems in its practical use. One potential drawback of this type of authentication is that it requires two fingers to be used continuously. It is very common for users of a mobile device to only use one finger, which would defeat the purpose of this specific type of authentication. Another difficulty is that the continuous two-finger input required to accurately predict users may not be possible for all users. If using two fingers on a device is not possible, this form of authentication becomes unusable. Also, without prior user data, it can be impossible to authenticate users correctly. One possible solution to this issue is to have users enroll their touch dynamics prior to authentication, similar to how facial recognition is set up on devices such as iPhones and Androids. For future work, it would be useful to create methods for detecting impostors without this prior user data, such as using machine learning algorithms to compare new touch dynamics data with existing user data. We also plan to continue further research into other types of machine learning classifiers and compare their use for multi-finger authentication. ## VI Conclusion This research paper investigated the potential of continuous authentication with mobile touch dynamics using three different machine learning algorithms: Neural Network, XGBoost, and SVC. The study used a dataset of touch dynamics collected from 40 unique participants playing four mobile games: PUBG, Diep.io, Slither, and Minecraft. The results show that all algorithms were able to classify users based on their touch dynamics, with accuracy ranging from 80% to 95%. The Neural Network algorithm performed the best, achieving the highest accuracy scores, followed by XGBoost and SVC. These results suggest that continuous authentication using mobile touch dynamics can be an effective method for improving security and lowering the risk of unauthorized access. The results of this study support the use of continuous authentication with mobile touch dynamics as an effective means of improving security on personal devices, and motivate further research in this area.
2306.10483
Study of Ge Doped Garnet Type Li$_7$La$_3$Zr$_2$O$_{12}$ as Solid Electrolyte for Li-ion Battery Application
Li$_{7-4x}$Ge$_x$La$_3$Zr$_2$O$_{12}$ has been synthesized using the conventional solid-state reaction method by substituting Germanium (Ge) at the Li site, which increases the Li-ion vacancies and leads to an increase in conductivity with $x$ varying from 0.05-0.20. The formation of cubic phase is confirmed by using XRD analysis. The surface morphology and elemental distribution have been studied using SEM characterization which gives the average particle size of the sample. The densities of the samples were calculated. For the confirmation of functional groups present within the sample, IR spectroscopy has been studied. The modulus and ac conductivity studies have also been studied. A complex impedance study has been done in the frequency range 20Hz-20MHz. Increase in ionic conductivity by one order has been observed in the sample with $x=0.10$. The minimum value of 0.56 eV activation energy is associated with the highest conductivity value of 7.23 x 10$^{-6}$ S/cm at room temperature. Thus increment in ionic conductivity at room temperature makes this material a promising solid electrolyte for future sustainable energy storage devices.
Muktai Aote, A. V. Deshpande
2023-06-18T05:47:46Z
http://arxiv.org/abs/2306.10483v1
# Study of Ge Doped Garnet Type Li\({}_{7}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) as Solid Electrolyte for Li-ion Battery Application ###### Abstract Li\({}_{7-4x}\)Ge\({}_{x}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) has been synthesized using the conventional solid-state reaction method by substituting Germanium (Ge) at the Li site, which increases the Li-ion vacancies and leads to an increase in conductivity, with x varying from 0.05 to 0.20. The formation of the cubic phase is confirmed by XRD analysis. The surface morphology and elemental distribution have been studied using SEM characterization, which gives the average particle size of the sample. The densities of the samples were calculated. For the confirmation of the functional groups present within the sample, IR spectroscopy has been studied. The modulus and ac conductivity have also been studied. A complex impedance study has been done in the frequency range 20 Hz - 20 MHz. An increase in ionic conductivity by one order of magnitude has been observed in the sample with x = 0.10. The minimum activation energy of 0.56 eV is associated with the highest conductivity value of 7.23 x 10\({}^{-6}\) S/cm at room temperature. Thus, the increment in ionic conductivity at room temperature makes this material a promising solid electrolyte for future sustainable energy storage devices. Li\({}_{7}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\), 0.10 Ge, Solid electrolyte, Ionic conductivity. ## 1 Introduction As time passes, the world is encountering an energy crisis due to the high burn rate of fossil fuels. These kinds of energy sources are diminishing, building pressure on society to meet the upcoming generation's energy needs. The world is enriched with renewable energy sources; thus, many countries are trying to set up renewable energy plants, which contributes to a reduction in the usage of carbon-emitting fuels and promotes the era of electric vehicles. But these initiatives need proper devices to store and convert energy. Some known energy devices are capacitors, fuel cells, batteries and solar cells. Ionic conduction is the main key factor in these devices [1,2]. Out of all these electrochemical devices, batteries are superior in energy density and duty cycle, with low self-discharge and lighter weight. Batteries are classified into two types, i.e. primary batteries, which are non-rechargeable, and secondary batteries, which are reusable after charging. The volumetric and gravimetric energy densities of secondary batteries are very high [3]. This makes the secondary battery a promising energy storage device [1][4, 5, 6]. The main components of a battery are its anode, cathode, and electrolyte, which is the path for the flow of charge carriers from anode to cathode and vice versa. Mostly, liquid electrolytes have been used in batteries due to their high conductivity. But along with that, many issues have arisen related to liquid electrolytes, such as leakage, which can lead to flammability, and dendrite growth, which hampers the conduction of ions [7, 8]. Thus, to overcome these situations, the focus has shifted towards developing solid electrolytes favoring ionic conduction. Many solid electrolytes have been studied, such as NASICON, LISICON, LIPON, Beta Alumina and Li\({}_{3}\)N type [9, 10, 11, 12]. But all these have chemical instability with lithium metal electrodes and low electrochemical potential windows.
Also, they are difficult to prepare in pure form, which affects their ionic conductivity. Hence, a solid electrolyte is required that fits all the criteria and overcomes these previously mentioned problems. Solid electrolytes are made with non-flammable and non-volatile components, which help in the transport of lithium ions and prevent electronic transmission, simultaneously eliminating safety risks [13, 14, 15]. Here Li\({}_{7}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\), commonly known as LLZO, the garnet-type solid electrolyte, comes into the picture. It has very good thermal and chemical stability with lithium electrodes, comparatively high conductivity, and a high potential window [16]. LLZO possesses two structures, tetragonal and cubic. But for the conduction of Li ions, the cubic phase is essential. The enhancement of the ionic conductivity of LLZO at room temperature has been reported earlier by adding various dopants such as Al, Ba, Fe, Ga, Ta, and Bi in place of Li and Zr [17, 18, 19, 20, 21, 22]. These dopants help in improving ionic conductivity by increasing Li-ion vacancies. Previously, the effect of the partial substitution of Ge for Zr has been studied [23]. Thus, in this work, the effect of substitution of supervalent Germanium (Ge) at the Li site has been studied to enhance ionic conductivity at a comparatively lower sintering temperature. ## 2 Experimental Section ### Sample Preparation The series Li\({}_{7-4\mathrm{x}}\)Ge\({}_{\mathrm{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) (x = 0.05, 0.10, 0.15, 0.20) with four compositions has been prepared using Li\({}_{2}\)CO\({}_{3}\) (Merck, \(>\)99.9%), La\({}_{2}\)O\({}_{3}\) (Sigma Aldrich, \(>\)99.99%), ZrO\({}_{2}\) (Sigma Aldrich, \(>\)99.9%) and GeO\({}_{2}\) (Sigma Aldrich, \(>\)99.99%) as starting chemicals by the conventional solid-state reaction method in an air atmosphere. The stoichiometric amounts of all the above chemicals were taken in an agate mortar and hand mixed thoroughly, followed by dry mixing for 2 h and wet mixing for 1 h using acetone as the dispersive agent. To compensate for the loss of lithium during the heating process, 10% excess lithium in the form of Li\({}_{2}\)CO\({}_{3}\) was added. The formed powder was then transferred into an alumina crucible and kept in a muffle furnace for calcination at 900\({}^{0}\)C for 8 h. After the calcination, the powder was cooled to room temperature and again crushed into fine powder. To form pellets of around 10 mm diameter and 1.5 mm thickness, the powder was uniaxially pressed under a pressure of 4 tons per cm\({}^{2}\) using a hydraulic press. The formed pellets were then kept in a bed of mother powder for sintering at 1050\({}^{0}\)C for 7.30 h in a muffle furnace. ### Characterizations For the phase identification, the pellet sintered at 1050\({}^{0}\)C was crushed and then examined by X-ray diffraction with a RIGAKU diffractometer using Cu-K\(\alpha\) radiation with a wavelength of 1.54 Å as the radiation source. The data were collected in the range of 10\({}^{0}\) - 80\({}^{0}\) with a step size of 0.02 degrees and a scan speed of 2\({}^{0}\)/min. The densities of the samples were determined by the Archimedes principle with toluene as the immersion medium using a K-15 Classic (K-Roy) instrument. Scanning electron microscopy (JSM-7600 F/JEOL) was done for the microstructural analysis and compositional study using an accelerating voltage of 15 kV.
The conductivity study and related modulus study have been carried out using a Novocontrol impedance analyzer in the frequency range 20 Hz - 20 MHz and a temperature range varying from room temperature to 150\({}^{0}\)C. Silver paste was coated on both faces of the sintered pellets, with a diameter of around 10 mm and a thickness of 1.3 mm, to maintain ohmic contact with the silver electrodes, which act as blocking electrodes for the AC and DC conductivity measurements. The DC polarization technique was carried out to calculate the ionic transport number using a KEITHLEY 6512 programmable electrometer. A constant voltage of 1 V was applied to the silver electrodes between which the pellet was placed, and the corresponding current through the sample was measured in the nanoampere range as a function of time. ## 3 Results and Discussion ### X-ray diffraction Fig.1(a) shows the powder X-ray diffraction patterns of the Li\({}_{7\text{-}4x}\)Ge\({}_{x}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) samples (x = 0.05-0.20), which were sintered at 1050\({}^{0}\)C for 7.30 hours in a muffle furnace. All the recorded peaks agree with the peaks from JCPDS file no. 45-0109. The peaks are marked with the corresponding (hkl) planes. This confirms that all the samples possess the cubic phase with space group Ia-3d, which is the primary requirement for developing highly conducting garnet-type Li\({}_{7}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). Among all the samples, the sample with x = 0.10 Ge shows accurate matching of more crystalline peaks with high intensity. This result is in agreement with the work by Huang [24]. It has been reported that a concentration of 1 wt% of Ge, which is nearly equal to 0.12 mol of Ge, helps to form and stabilize the cubic phase. This argument is well supported by the observed splitting of the (321) and (420) peaks for the samples with x = 0.05, 0.15, and 0.20 Ge. The splitting of these peaks may be attributed to the interference of the tetragonal phase (space group: I4\({}_{1}\)/acd) within the material. Along with the required phase, the La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) phase has also been observed around 27\({}^{0}\) (JCPDS file no. 17-0450). La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) pyrochlore commonly occurs as an impurity phase in the synthesis of Li\({}_{7}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). The occurrence of the lanthanum pyrochlore is due to the volatilization of lithium from the sample, which occurs during the sintering process [25]. Fig. 1(b) shows the shifting of the (211) peak towards higher values of 2\(\Theta\). This result may be due to the increase of Ge content in the sample. The substitution of Ge in place of Li may cause a decrease in the lattice constant, because the ionic radius of Ge\({}^{4+}\) is smaller than the ionic radius of Li\({}^{+}\), which results in a shrinking effect [26]. The average crystallite size for x = 0.10 Ge was calculated using the Debye-Scherrer formula, and it was found to be 34.59 nm. **Fig.1 (a)** X-ray diffraction patterns of Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) (x=0.05-0.20) and **(b)** Shifting of the (211) peak for x = 0.10 Ge **Table1.** Density and Relative density of Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\).
\begin{tabular}{|c|c|c|c|} \hline **x** & **Density (g/cm\({}^{3}\))** & **Relative density (\%)** & **Lattice constant (Å)** \\ \hline 0.05 & 4.23 & 83.10 & 12.9872 \\ \hline 0.10 & 4.4 & 86.44 & 12.9914 \\ \hline 0.15 & 4.11 & 80.74 & 12.9792 \\ \hline 0.20 & 4.09 & 80.35 & 12.9703 \\ \hline \end{tabular} **3.3 Morphological and EDS Studies** Fig.2 (a,b,c,d) shows the magnified images of typical surface micrographs of Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) with Ge varying from 0.05 to 0.20 mol. In Fig.2 (a), it can be observed that voids are present, along with irregularities in the particle size. Fig.2 (b) shows the sample with x = 0.10 Ge. It can be observed that, compared to all other samples, the sample with x = 0.10 Ge has a dense structure with a larger particle size. The grains are well connected with the neighboring grains, giving a compact nature to the sample. No pores can be seen. This compact arrangement of particles results in a high density of the pellet as compared to the other samples. The values of density are listed in Table 1. Fig.2 (c & d) clearly shows the decrease in density with the increase in Ge content. From the figure, it is clear that, as the amount of Ge increases in Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\), the grains grow in size, but the void spaces also increase, which increases the porosity of the sample and results in a lowering of the density values. These pores also obstruct the Li\({}^{+}\) ion conduction within the material. This may be due to the effect of viscous flow sintering that might have occurred in the sample [23]. The elemental mapping of Li\({}_{6.6}\)Ge\({}_{0.1}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) is shown in Fig.3, where grain boundaries can be observed. All the elements, viz. La, Zr, and Ge, which are constituents of the prepared samples, are uniformly distributed throughout the sample. Elemental mapping of Ge confirms the insertion of Ge into the lattice. This is also supported by the X-ray diffraction analysis, in which shifting of the peak occurs due to a change in the lattice constant. The EDX spectrum of the sample with 0.10 Ge is shown in Fig.4 (a). The average particle size was calculated from the histogram shown in Fig.4 (b), and it was found to be 3.28 \(\upmu\)m. **Fig.2** SEM images of Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) with x= **a)** 0.05 **b)** 0.10 **c)** 0.15 **d)** 0.20. **Fig.3 (a)** Magnified image of Li\({}_{6.6}\)Ge\({}_{0.1}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) **(b)** Elemental mapping of La, Zr and Ge for Li\({}_{6.6}\)Ge\({}_{0.1}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). **Fig.4 (a)** EDX spectrum and **(b)** Particle size distribution of Li\({}_{6.6}\)Ge\({}_{0.1}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). ### Conductivity Studies #### 3.4.1 Impedance Plots The frequency range of 20 Hz to 20 MHz was used to measure the electrical conductivity of Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) for all the values of x. Fig. 5(a) shows the Nyquist plots for the samples with Ge ranging from 0.05 to 0.20 at 25\({}^{0}\)C. Fig. 5(b) shows the fitted graph for 0.10 Ge substituted LLZO. The ionic conductivity is calculated by using the equation, \[\sigma=\frac{t}{RA} \tag{1}\] in which \(\sigma\) is the ionic conductivity, \(t\) is the thickness of the sample, \(A\) is the area of the electrode, and \(R\) is the resistance offered by the sample.
The resistance can be calculated from the intercept made by the semicircle on the real axis Z' in the high-frequency region. The appearance of a tail in the low-frequency region is because of the blocking nature of the Ag electrodes. It can be observed from Fig. 5 that the sample with x = 0.10 Ge has the minimum intercept on the real axis, offering minimum resistance as compared to the other samples. This indicates that substituting Ge\({}^{4+}\) at the Li\({}^{+}\) site lowers the resistance of LLZO. The sample with 0.10 Ge has a maximum conductivity of 7.23 x 10\({}^{-6}\) S/cm at room temperature, which is one order of magnitude higher than pure LLZO. This can be explained on the basis of the uniformity and increase in grain size, good contact with neighboring grains with no visible pores, high density, and the stabilized cubic phase. With a further increase in Ge content, the intercept on the real axis shifts towards the lower frequency region, giving higher resistance and thus a decrease in conductivity. This can be attributed to the formation of a non-conductive La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) phase. Thus, the high ionic conductivity of 7.23 x 10\({}^{-6}\) S/cm at 25\({}^{0}\)C for 0.10 Ge substituted LLZO makes this ceramic a prominent candidate for battery applications as a solid electrolyte. #### 3.4.2 Arrhenius Plots The Arrhenius plots for Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) with Ge varying from 0 to 0.20 mol are shown in Fig.6 (a). These data were collected in the temperature range from 50\({}^{0}\)C to 150\({}^{0}\)C. The graph plotted using the acquired data follows Arrhenius behavior. The Arrhenius equation is given as \[\sigma\left(T\right)=\text{ }\sigma_{0}\text{ exp }(-\frac{E_{a}}{K_{B}T}) \tag{2}\] where \(E_{a}\) is the activation energy, \(\sigma\) is the conductivity, \(\sigma_{0}\) is the pre-exponential factor, \(K_{B}\) is the Boltzmann constant, and \(T\) is the temperature in Kelvin. The activation energy for lithium-ion conductivity was calculated using this equation. From the figure, it can be observed that the minimum activation energy (E\({}_{\text{a}}\)) is obtained for x = 0.10 Ge, which also has the maximum ionic conductivity at room temperature. The value of the activation energy is found to be 0.56 eV for Li\({}_{6.6}\)Ge\({}_{0.1}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). The activation energy values with the corresponding ionic conductivities for all the samples are tabulated in Table 2. The minimum activation energy for the 0.10 Ge containing sample may be assigned to its high relative density. This makes the hopping pathways favorable for lithium-ion conduction. According to density functional theory, migration of Li ions is possible through Li point defects as well as Frenkel defects between two tetrahedral sites within the structure. The distribution of Li vacancies and interstitial lithium in the structure strongly affects the Li conductivity [27]. Based on charge neutrality, Ge\({}^{4+}\) should substitute Li\({}^{+}\) and create three vacancies. Thus the concentration of Li vacancies increases, which eventually leads to an increase in conductivity and minimizes the activation energy. This result is well supported by an earlier report in which Li\({}^{+}\) was replaced by Al\({}^{3+}\) [28]. Fig.6 (b) shows the variation of activation energy and ionic conductivity at 25\({}^{0}\)C with increasing Ge content in Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\).
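As a numerical illustration of eqs. (1) and (2), the short script below converts a fitted bulk resistance into an ionic conductivity for a pellet of the geometry used here, and extracts an activation energy from a set of conductivity values by a linear fit of ln(sigma) versus 1/T. All numerical inputs are placeholders chosen only to reproduce the order of magnitude of the reported values; they are not measured data.

```python
import numpy as np

k_B = 8.617333e-5  # Boltzmann constant in eV/K

def conductivity(R_ohm, thickness_cm, diameter_cm):
    """sigma = t / (R * A), eq. (1); electrode area taken as the pellet face."""
    area = np.pi * (diameter_cm / 2.0) ** 2
    return thickness_cm / (R_ohm * area)

def activation_energy(T_kelvin, sigma):
    """Linear fit of ln(sigma) = ln(sigma_0) - E_a/(k_B*T), eq. (2); E_a in eV."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(sigma), 1)
    return -slope * k_B

# Placeholder pellet: 1.3 mm thick, 10 mm diameter, fitted resistance 2.3e4 ohm.
print(f"sigma(25 C) ~ {conductivity(2.3e4, 0.13, 1.0):.2e} S/cm")

# Synthetic sigma(T) on a 50-150 C grid (in K), generated with E_a = 0.56 eV.
T = np.array([323.0, 343.0, 363.0, 383.0, 403.0, 423.0])
sig = 1e2 * np.exp(-0.56 / (k_B * T))
print(f"E_a ~ {activation_energy(T, sig):.2f} eV")
```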
**Fig.6 (a)** Ionic conductivity as a function of temperature for x = 0, 0.05, 0.10, 0.15 & 0.20 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). **(b)** Variation of conductivity at 25\({}^{0}\)C and activation energy with varying content of Ge in Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). **Table2**. Ionic conductivity and activation energy of Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). \begin{tabular}{|c|c|c|} \hline **x** & \(\sigma_{\rm bulk}\) **(S/cm) at 25\({}^{\bf 0}\)C** & **E\({}_{\bf a}\) (eV)** \\ \hline 0 & 1.43 x 10\({}^{-7}\) & 1.11 \\ \hline 0.05 & 4.85 x 10\({}^{-6}\) & 0.58 \\ \hline **0.10** & **7.23 x 10\({}^{-6}\)** & **0.56** \\ \hline 0.15 & 2.86 x 10\({}^{-6}\) & 0.61 \\ \hline 0.20 & 1.36 x 10\({}^{-6}\) & 0.98 \\ \hline \end{tabular} #### 3.4.3 DC Conductivity DC conductivity measurement plays an important role in assessing the extent of electronic conduction in the sample through the calculation of the ionic transport number [2]. Fig.7 shows the DC conductivity plots of Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) for the x = 0.05 and 0.10 Ge samples. The data were recorded using a Keithley electrometer for up to 300 minutes. For the calculation of the ionic transport number, the equation \[t_{i}=(\sigma_{total}-\sigma_{e})/\sigma_{total} \tag{3}\] is used, where \[\sigma_{total}=\sigma_{ions}+\sigma_{electrons} \tag{4}\] The transport number was found to be \(>\) 0.999 for the specimens with 0.05 and 0.10 Ge, thus confirming predominantly ionic conduction within the ceramics. **Fig.7** Variation of DC conductivity with time for the transport number measurement. ### FTIR Study FTIR spectroscopy is used to study the presence of functional groups within the material. Various peaks corresponding to the respective groups can be observed in the spectra. Fig.8 depicts the FTIR spectra of all the Ge-substituted LLZO samples. The peaks at 563 cm\({}^{-1}\) and 866 cm\({}^{-1}\) could be related to Zr-O, whereas the peak observed at 681 cm\({}^{-1}\) corresponds to the La-O group. Peaks at 1438 and 1504 cm\({}^{-1}\) result from the asymmetric stretching vibration mode v\({}_{\rm as}\) (C=O), which gives evidence of the presence of Li\({}_{2}\)CO\({}_{3}\) in the samples [29], [30]. The effect of moisture absorbed by the specimens from the surroundings can be observed from the broad peaks in the region of 3400-3600 cm\({}^{-1}\) related to OH groups [29]. ### AC conductivity and modulus study Two formalisms are used to interpret a material's electrical data: ac conductivity and electric modulus [31]. In this study, both are discussed. Fig. 9 (a) shows the variation of the ac conductivity of 0.10 Ge substituted LLZO with frequency. The graph comprises three distinct regions: (1) a low-frequency region due to polarization at the electrode-electrolyte interface, (2) a mid-frequency plateau region, which is ascribed to frequency-independent conductivity, and (3) a high-frequency region, which shows an increase in conductivity with frequency. In this high-frequency region, under the influence of an alternating electric field, the movement of ions in the structure is described by Jonscher's universal power law. According to this law, the real part of the ac conductivity can be expressed as \[\sigma(\omega)=\ \sigma_{dc}+A\omega^{n} \tag{5}\] where \(\sigma(\omega)\) is the total conductivity, \(\sigma_{dc}\) is the dc conductivity, and \(A\omega^{n}\) is the dispersive component of the ac conductivity, in which \(A\) is a constant and \(n\) is the frequency exponent.
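One possible way to extract sigma_dc, A, and the frequency exponent n of eq. (5) from a measured real-part conductivity spectrum is a nonlinear least-squares fit, sketched below on synthetic data; all parameter values are placeholders, not fitted experimental results.

```python
import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, A, n):
    """Jonscher's universal power law, eq. (5): sigma(omega) = sigma_dc + A*omega**n."""
    return sigma_dc + A * omega ** n

# Synthetic sigma'(omega) data standing in for a measured spectrum.
omega = np.logspace(2, 8, 60)                    # angular frequency (rad/s)
sigma = jonscher(omega, 7.2e-6, 1.0e-10, 0.55)
sigma *= 1 + 0.02 * np.random.default_rng(1).normal(size=omega.size)

(sigma_dc, A, n), _ = curve_fit(jonscher, omega, sigma, p0=[1e-6, 1e-10, 0.5])
print(f"sigma_dc = {sigma_dc:.2e} S/cm, n = {n:.2f}")
```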
The value of \(n\) is physically acceptable in the range \(0\leq n\leq 1\) and depends on the interaction of the ions. In Table 3, the values of \(n\) are listed for 0.10 Ge substituted LLZO at different temperatures. It can be observed that with increasing temperature the value of n increases, which can be explained on the basis of the quantum mechanical tunneling (QMT) model. Also, the increase in conductivity with temperature occurs due to the increased mobility of the ions. The temperature-dependent frequency exponent obtained in the quantum mechanical tunneling model framework assumes that the charge carriers form non-overlapping small polarons. In this model, the polaron hopping energy and the characteristic relaxation time are used to calculate it [32]. Fig.9 (b) shows the scaled temperature-dependent spectrum of the real part of the ac conductivity of 0.10 Ge. It can be observed from the graph that the conductivity spectra at different temperatures almost merge into a single master curve, which is an important feature of a temperature-independent relaxation process under the conductivity formalism. At higher frequencies, a deviation in the curves can be observed, which is possibly due to structural peculiarities in the specimen caused by different conduction pathways. In comparison, the variation in the superimposition of the conductivity spectra at lower frequencies reflects the compositional dependence of the material. It may be ascribed to the polarization effect due to the electrode-electrolyte interface [31], [32]. The electric modulus study is used to build the relation between conductivity and the relaxation of mobile ions in conducting solids. According to this, the complex electric modulus (\(M^{*}\)) is the reciprocal of the complex permittivity (\(\epsilon^{*}\)), and the equation is given by \[M^{*}=\ \frac{1}{\epsilon^{*}}=\frac{(\epsilon^{\prime}-j\epsilon^{\prime\prime})}{|\epsilon^{*}|^{2}}=\ M^{\prime}+\ jM^{\prime\prime} \tag{6}\] where \(M^{\prime}\), \(M^{\prime\prime}\), \(\epsilon^{\prime}\), and \(\epsilon^{\prime\prime}\) are the real and imaginary parts of the complex modulus and complex permittivity, respectively [31], [33]. Fig. 10 (a) shows the variation of \(M^{\prime\prime}\) with frequency for 0.10 Ge substituted LLZO at different temperatures. This represents the energy loss under an applied electric field. The peak in the spectrum also represents the conductivity relaxation [34]. From the figure, two regions can be identified, i.e. the frequency region below the \(M^{\prime\prime}\) peak and the frequency region above the \(M^{\prime\prime}\) peak. These regions show the ranges where the charge carriers are mobile over long and short distances, respectively. For further evaluation, the relaxation time is calculated using the frequency corresponding to \(M_{max}^{\prime\prime}\). The plots of \(M^{\prime\prime}/M_{max}^{\prime\prime}\) vs \(f/f_{max}\), known as modulus scaling, are shown in Fig. 10 (b). Here the overlapping of the curves can be observed for the different temperatures, which indicates temperature-independent conduction. Fig. 10 (c) represents the variation of \(M^{\prime\prime}/M_{max}^{\prime\prime}\) vs \(f/f_{max}\) for all the Ge substituted LLZO samples. In this figure, the non-overlapping curves indicate that the conduction phenomenon depends on the composition. This may be attributed to the insertion of Ge within the sample. The variation of the relaxation time with temperature can be seen in Fig. 10 (d). The nature of the graph follows the Arrhenius law, i.e.
\[\tau=\ \tau_{0}\exp\ (\frac{E_{a}}{k_{B}T}) \tag{7}\] where \(\tau_{0}\) is a pre-exponential factor, \(E_{a}\) is the activation energy, \(k_{B}\) is the Boltzmann constant, and T is the temperature in Kelvin. The values of the activation energies \(E_{a}(\tau)\) and \(E_{a}(\sigma)\) are found to be 0.52 eV and 0.56 eV, respectively. There is only a small difference between the two values. This small difference does not affect any properties and suggests that the same type of charge carrier is responsible for both the conduction and the relaxation processes. **Table3**. Frequency exponent values of 0.10 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). \begin{tabular}{|c|c|} \hline **Temperature (\({}^{0}\)C)** & **Frequency exponent (n)** \\ \hline 150 & 0.62 \\ \hline 130 & 0.59 \\ \hline 110 & 0.57 \\ \hline 90 & 0.52 \\ \hline 70 & 0.50 \\ \hline 50 & 0.47 \\ \hline \end{tabular} **Fig.10 (a)** Imaginary part of the electric modulus as a function of frequency and temperature for 0.10 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\), **(b)** Normalized modulus plots (M\({}^{\prime\prime}\)/M\({}^{\prime\prime}_{\text{max}}\)) vs log (f/f\({}_{\text{max}}\)) for 0.10 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) at different temperatures, **(c)** Normalized modulus plots (M\({}^{\prime\prime}\)/M\({}^{\prime\prime}_{\text{max}}\)) vs log (f/f\({}_{\text{max}}\)) for x = 0 - 0.20 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) at 25\({}^{0}\)C, **(d)** Variation of relaxation time with temperature for 0.10 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). ## 4 Conclusions The garnet type Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\) (x = 0.05-0.20) has been prepared by the conventional solid-state reaction method. The cubic phase has been confirmed by XRD characterization. The density measurement and SEM analysis confirmed the compactness and uniformity in grain size for the x = 0.10 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\). The elemental mapping of this sample reveals a uniform distribution of Ge in the specimen, with an average particle size of 3.28 \(\upmu\)m. There is an enhancement in ionic conductivity by one order of magnitude for 0.10 Ge at room temperature, with the minimum activation energy. This has been attributed to the substitution of the supervalent dopant for lithium, which increases the pathways for lithium-ion conduction. The FTIR study confirmed the existence of the functional groups in the samples. The ac conductivity and modulus studies confirmed that both the conduction and relaxation processes are due to the same kind of charge carrier and that the relaxation process is temperature independent. Thus, 0.10 Ge substituted Li\({}_{7\text{-}4\text{x}}\)Ge\({}_{\text{x}}\)La\({}_{3}\)Zr\({}_{2}\)O\({}_{12}\), having an ionic conductivity of 7.23 x 10\({}^{-6}\) S/cm at room temperature and an activation energy of 0.56 eV, is a prominent candidate for a solid electrolyte. ## Acknowledgments One of the authors would like to express sincere appreciation to VNIT, Nagpur, for providing a Ph.D. fellowship.
The authors appreciate the support of the DST FIST project number SR/FST/PSI/2017/5(C) for the XRD facility provided by the Department of Physics at VNIT, Nagpur.
2304.11682
Theoretical and numerical comparison of quantum- and classical embedding models for optical spectra
Quantum-mechanical (QM) and classical embedding models approximate a supermolecular quantum chemical calculation. This is particularly useful when the supermolecular calculation has a size that is out of reach for present QM models. Although QM and classical embedding methods share the same goal, they approach this goal from different starting points. In this study, we compare the polarizable embedding (PE) and frozen-density embedding (FDE) models. The former is a classical embedding model, whereas the latter is a density-based QM embedding model. Our comparison focuses on solvent effects on optical spectra of solutes. This is a typical scenario where super-system calculations including the solvent environment become prohibitively large. We formulate a common theoretical framework for PE and FDE models and systematically investigate how PE and FDE approximate solvent effects. Generally, differences are found to be small, except in cases where electron spill-out becomes problematic in the classical frameworks. In these cases, however, atomic pseudopotentials can reduce the electron-spill-out issue.
Marina Jansen, Peter Reinholdt, Erik D. Hedegård, Carolin König
2023-04-23T15:21:45Z
http://arxiv.org/abs/2304.11682v1
# Theoretical and numerical comparison of quantum- and classical embedding models for optical spectra ## Abstract Quantum-mechanical (QM) and classical embedding models approximate a supermolecular quantum-chemical calculation. This is particularly useful when the supermolecular calculation has a size that is out of reach for present QM models. Although QM and classical embedding methods share the same goal, they approach this goal from different starting points. In this study, we compare the polarizable embedding (PE) and frozen-density embedding (FDE) models. The former is a classical embedding model, whereas the latter is a density-based QM embedding model. Our comparison focuses on solvent effects on optical spectra of solutes. This is a typical scenario where super-system calculations including the solvent environment become prohibitively large. We formulate a common theoretical framework for PE and FDE models and systematically investigate how PE and FDE approximate solvent effects. Generally, differences are found to be small, except in cases where electron spill-out becomes problematic in the classical frameworks. In these cases, however, atomic pseudopotentials can reduce the electron-spill-out issue. ## 1 Introduction Quantum-mechanical (QM) methods are indispensable for the calculation of optical spectra, but their use often becomes computationally too demanding for large systems. Embedding schemes have been introduced to circumvent the full, super-system QM calculation by including large environments through an effective _embedding_ operator. The definition of an embedding model requires that the system is split into an active system and the remaining part ("the environment").[1; 2; 3; 4] Embedding approaches can be divided into two main classes: (i) QM-classical embedding approaches describe the active system by a QM method, whereas all interactions between the active system and environment (as well as the environment itself) are treated by a classical description. (ii) QM-QM embedding describes both active system and environment with QM methods (either on the same or different footings). In this case, the interaction between active system and the environment also contains QM contributions. For optical properties, the electrostatic interaction between the active system and the environment is often the dominating embedding contribution. In traditional QM-classical approaches, this contribution is modeled through (atomic) point charges in the environment.[4] The point-charge model is, however, insufficient in many cases.[1; 5; 6; 7; 8; 9] Therefore, a large number of more advanced embedding schemes have been developed over the years.[5; 7; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23] In this work, we employ an advanced QM-classical embedding model, namely the polarizable embedding (PE) model[20]. In this model, point charges in the environment are replaced by a multipole expansion. Additionally, PE incorporates the environment polarization through anisotropic electronic dipole-dipole polarizabilities. The parameters for the environment (_i.e._ multipoles and polarizabilities) are obtained from QM calculations on isolated fragments. If the environment is a solvent, these fragments are most naturally defined as solvent molecules. 
In the class of QM-QM embedding methods, the total system is expressed by means of fragments or _subsystems_[24; 25; 26; 27]: within density functional theory (DFT) this is known as subsystem DFT.[28] All subsystems are described by their electron densities, which are obtained by quantum-chemical calculations. The interaction of the environment subsystems with the active subsystem is then recovered through an embedding potential, which contains quantum-mechanical contributions. This potential depends on all other subsystems' electron densities.[28; 29] In practice, the environmental electron densities are commonly kept frozen, so that only the active subsystem's electron density is polarized. In this frozen-density embedding (FDE) approach[29], the effect of the environment's polarizability can be incorporated by self-consistently cycling through all subsystems in a so-called freeze-and-thaw scheme.[25; 27; 28; 29; 30] Both embedding classes hence share a common goal and have been employed to show that polarization effects originating from the environment can play a significant role in the accurate calculation of local optical properties.[31; 32; 33; 34; 8] Yet, direct numerical comparisons have been rare due to their different formulations and implementations.[35; 36] We recently developed a common theoretical framework[17] encompassing both fragmentation-based QM-QM and QM-classical embedding methods with a special focus on FDE and PE. This framework was employed to dissect how the two classes of embedding models describe the interactions between the active system and the environment. We here continue this comparison by quantifying how the theoretical differences manifest numerically for optical properties of two solvated systems, employing a supermolecular calculation as a reference. Our target systems (fig. 1) are two fluorescent dyes whose excited-state properties are known to be sensitive to solvent effects: The first target system is _para_-nitroaniline (_p_NA), which has been studied with several different embedding schemes [37; 38; 39; 7; 8; 40]. Yet, the performance of the QM-QM and QM-classical embedding schemes has never been compared in a combined study. The second test case is pentameric formyl thiophene acetic acid (pFTAA), a luminescent biomarker developed for fluorescence imaging of amyloid proteins.[41] The mechanism occurring with the chromophore embedded in the protein site is not fully understood yet, but it is known that its properties strongly depend on the solvent-solute interactions and the conformation of the molecule.[42; 43; 44; 45] Notably, pFTAA is an anionic system and therefore poses somewhat different challenges in the description of the solute-solvent interaction than _p_NA. Figure 1: Dyes considered in this work: (a) _para_-nitroaniline (b) pentameric formyl thiophene acetic acid This paper is organized as follows. We first briefly introduce the PE and FDE schemes in the common theoretical framework as derived in previous work[17] (sec. 2). In particular, we point out similarities and differences between these schemes. We describe the computational setup that was developed to enable us to compare the embedding methods on equal footing (sec. 2.1). In section 3, the computational details are given. Subsequently, the results are presented and discussed (sec. 4). Finally, in the last section (sec. 5), we conclude and summarize the study.
## 2 Theoretical Background In this section, we give a brief overview of the density-based QM-QM embedding and classical PE methods. For a more detailed derivation, reviews, and further extensions of the presented models we refer to Refs. [14; 15; 16; 17; 46; 47; 48; 18]. The total energy of the full system can be partitioned into the energies of the active subsystem A and the environment, plus their interaction, \[E_{\rm tot}[\rho_{\rm tot}]=E_{\rm A}[\rho_{\rm A}]+E_{\rm env}[\rho_{\rm env}]+E_{\rm int}[\rho_{\rm A},\rho_{\rm env}], \tag{1}\] where the interaction energy can further be divided into a classical Coulomb contribution and a non-classical, quantum-mechanical contribution, \[E_{\rm int}[\rho_{\rm A},\rho_{\rm env}]=E_{\rm int}^{\rm C}[\rho_{\rm A},\rho_{\rm env}]+E_{\rm int}^{\rm QM}[\rho_{\rm A},\rho_{\rm env}], \tag{2}\] where both contributions can be obtained analogously to eq. (1). The Coulomb part is included in both density-based and classical embedding schemes.
In the density-based QM-QM embedding schemes this part is expressed as \[E_{\rm int}^{\rm C}[\rho_{\rm A},\rho_{\rm env}]= \sum_{X\neq\rm A}\int\int\frac{\rho_{\rm A}({\bf r}_{\rm a})\rho_{X} ({\bf r}_{x})}{|{\bf r}_{\rm a}-{\bf r}_{x}|}{\rm d}{\bf r}_{\rm a}{\rm d}{\bf r }_{x}-\sum_{X\neq\rm A}\sum_{I\in\rm A}\int\frac{Z_{I}\rho_{X}({\bf r}_{x})}{|{ \bf R}_{I}-{\bf r}_{x}|}{\rm d}{\bf r}_{x}\] \[-\sum_{X\neq\rm A}\sum_{J\in X}\int\frac{Z_{J}\rho_{\rm A}({\bf r} _{\rm a})}{|{\bf R}_{J}-{\bf r}_{\rm a}|}{\rm d}{\bf r}_{\rm a}+\sum_{X\neq\rm A }\sum_{I\in\rm A}\sum_{J\in X}\frac{Z_{I}Z_{J}}{|{\bf R}_{I}-{\bf R}_{J}|}, \tag{3}\] where \({\bf R}_{I/J}\) denote nuclear coordinates, \({\bf r}_{a/x}\) denote electronic coordinates, and \(Z_{I/J}\) are nuclear charges. The interaction of the environment with the active subsystem is included _via_ Coulomb [and possibly quantum-mechanical (QM)] contributions through an effective embedding operator. The effective Hamiltonian for subsystem A can be expressed as \[\hat{H}_{\rm A}^{\rm eff}=\hat{H}_{\rm A}+\hat{v}_{\rm A}^{\rm emb}=\hat{H}_{ \rm A}+\int\hat{\rho}_{\rm A}({\bf r}_{\rm a})v_{\rm A}^{\rm emb}({\bf r}_{ \rm a}){\rm d}{\bf r}_{\rm a}, \tag{4}\] where the density operator \(\hat{\rho}_{\rm A}({\bf r}_{\rm a})=\sum_{i\in\rm A}\delta({\bf r}_{i}-{\bf r }_{\rm a})\) defines the connection between the Hamiltonian with electron-based coordinates and density-based expressions with real-space coordinates. We define the embedding operator via a real-space potential, \[v_{\rm A}^{\rm emb}({\bf r}_{\rm a})= \frac{\delta}{\delta\rho_{\rm A}}(E_{\rm tot}[\rho_{\rm tot}]-E_{ \rm A}[\rho_{\rm A}])\] \[= \frac{\delta E_{\rm tot}^{\rm C}[\rho_{\rm tot}]}{\delta\rho_{ \rm A}}-\frac{\delta E_{\rm A}^{\rm C}[\rho_{\rm A}]}{\delta\rho_{\rm A}}+ \frac{\delta E_{\rm tot}^{\rm QM}[\rho_{\rm tot}]}{\delta\rho_{\rm A}}-\frac{ \delta E_{\rm A}^{\rm QM}[\rho_{\rm A}]}{\delta\rho_{\rm A}}. \tag{5}\] In FDE, the environmental density \(\rho_{\rm env}\) is kept frozen so that the total embedding potential resulting from eq. (5) becomes \[v_{\rm A}^{\rm FDE}({\bf r}_{a})= v_{\rm A}^{\rm C(FDE)}[\rho_{\rm env}]({\bf r}_{\rm a})+v_{ \rm A}^{\rm nadd,kin}[\rho_{\rm A},\rho_{\rm env}]({\bf r}_{\rm a})+v_{\rm A}^ {\rm nadd,xc}[\rho_{\rm A},\rho_{\rm env}]({\bf r}_{\rm a}), \tag{6}\] with the Coulomb potential only depending on \(\rho_{A}\) and the (frozen) densities of the environment \[v_{\rm A}^{\rm C(FDE)}[\rho_{\rm env}]({\bf r}_{\rm a}) = \frac{\delta E_{\rm int}^{\rm C}[\rho_{\rm A},\rho_{\rm env}]}{ \delta\rho_{\rm A}} \tag{7}\] \[= -\sum_{X\neq{\rm A}}\sum_{J\in X}\frac{Z_{J}}{|{\bf r}_{\rm a}-{ \bf R}_{J}|}+\sum_{X\neq{\rm A}}\int\frac{\rho_{X}({\bf r}_{x})}{|{\bf r}_{\rm a }-{\bf r}_{x}|}{\rm d}{\bf r}_{x},\] where \(E_{\rm int}^{\rm C}[\rho_{\rm A},\rho_{\rm env}]\) is defined in eq. (3). The QM contributions from eq. (5) are comprised of kinetic and an exchange-correlation (xc) parts, represented by \(v_{\rm A}^{\rm nadd,kin}\) and \(v_{\rm A}^{\rm nadd,xc}\) in eq. (6). In practical calculations, these contributions are often approximated by orbital-free DFT methodologies[28, 29, 47], though for \(v_{\rm A}^{\rm nadd,kin}\) also orbital-dependent projection schemes have been reported.[49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61] The application of a fixed \(\rho_{\rm env}\) in FDE leads to several possible choices of frozen densities. 
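As an illustration of eq. (7), the electrostatic part of the embedding potential can be evaluated numerically once the frozen environment density is represented on a quadrature grid. The sketch below is purely illustrative and assumes that the environment density and quadrature weights are available as plain arrays; it is not tied to any of the program packages used later.

```python
import numpy as np

def fde_coulomb_potential(grid_a, env_nuclei, env_grid, env_density, env_weights):
    """Electrostatic part of the FDE embedding potential, cf. eq. (7).

    grid_a      : (N, 3) points r_a at which the potential is evaluated
    env_nuclei  : iterable of (Z_J, R_J) for all environment nuclei
    env_grid    : (M, 3) quadrature points r_x carrying the frozen density
    env_density : (M,)   rho_env(r_x)
    env_weights : (M,)   quadrature weights
    """
    v = np.zeros(len(grid_a))
    # nuclear attraction: -sum_J Z_J / |r_a - R_J|
    for z_j, r_j in env_nuclei:
        v -= z_j / np.linalg.norm(grid_a - r_j, axis=1)
    # electronic repulsion: + int rho_env(r_x) / |r_a - r_x| dr_x (numerical quadrature)
    for i, r_a in enumerate(grid_a):
        dist = np.linalg.norm(env_grid - r_a, axis=1)
        v[i] += np.sum(env_weights * env_density / np.maximum(dist, 1e-10))
    return v
```

In an actual FDE calculation, the non-additive kinetic and xc contributions of eq. (6) would be added on the same grid. The different choices for the frozen density entering such an expression are discussed next.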
The crudest approximation is to use the densities of the isolated fragments, which we denote by a superscript zero, \(\{\rho_{X}^{(0)}\}\) (and likewise we can also define \(\rho_{A}^{(0)}\)). Allowing the active subsystem A to relax by submitting \(\rho_{\rm A}\) to a self-consistent-field optimization in the frozen environment density, \(\rho_{\rm env}^{(0)}\), leads to a relaxed density \(\rho_{A}^{(1)}\). In terms of density-based embedding schemes, this approach directly refers to FDE.[29] It yields a relaxed energy for the active subsystem \(E_{\rm A}[\rho_{A}^{(1)}]\). The relaxation of \(\rho_{A}^{(0)}\) to \(\rho_{A}^{(1)}\) can be done for all fragments in a step-wise manner until self-consistency to obtain the relaxed densities \(\rho_{A}^{(2)}\) and \(\rho_{\rm env}^{(2)}\). This is denoted a freeze-and-thaw procedure[62]. Formally, the _mutual polarization_ of the densities in the ground state of the supersystem is recovered when performing a sufficient number of freeze-and-thaw cycles. In contrast to that, the PE methods approximate both static electrostatics and polarization solely based on frozen densities in the environment, \(\rho_{\rm env}^{(0)}=\sum_{X\neq A}\rho_{X}^{(0)}\). An expression for the total energy comparable to eq. (1) can then be obtained through Rayleigh-Schrödinger perturbation theory, \[E_{\rm tot}\approx E^{(0)}+E^{(1)}+E^{(2)}. \tag{8}\] By expressing the interaction between the subsystems as perturbations of the energy, the zeroth-order perturbation can be identified as the isolated subsystem energies. Thus, from eq. (1) we identify \(E^{(0)}=E_{\rm A}[\rho_{\rm A}^{(0)}]+E_{\rm env}[\rho_{\rm env}^{(0)}]\) and the interaction energy (\(E_{\rm int}[\rho_{\rm A}^{(0)},\rho_{\rm env}^{(0)}]\)) must therefore come through the higher-order energy corrections. Indeed, the first-order correction corresponds to eq. (3) with frozen densities, _i.e._, \(E_{\rm int}^{\rm C}[\rho_{\rm A}^{(0)},\rho_{\rm env}^{(0)}]\). The PE model further approximates \(E_{\rm int}^{\rm C}[\rho_{\rm A}^{(0)},\rho_{\rm env}^{(0)}]\) through a multipole expansion[63, 64, 65], _i.e._, \[E^{(1)}=E_{\rm int}^{\rm C}[\rho_{\rm A}^{(0)},\rho_{\rm env}^{(0)}]\approx E_{\rm int}^{\rm mult}[\rho_{\rm A}^{(0)},\rho_{\rm env}^{(0)}]. \tag{9}\] The multipole expansion employs individual atoms of the subsystems/fragments as expansion points (\(\{{\bf R}_{s}\}\) or in short _sites_, \(s\)). The multipole expansion can thus be written as \[E_{\rm int}^{\rm mult}[\rho_{\rm A}^{(0)},\{\rho_{X}^{(0)}\}]= \sum_{X\neq A}\sum_{s\in X}\left(-\int\rho_{\rm A}^{(0)}({\bf r}_{\rm a})T_{\rm sa}^{(0)}{\rm d}{\bf r}_{\rm a}+\sum_{I\in A}Z_{I}T_{sI}^{(0)}\right)q_{\rm s}[\rho_{X}^{(0)}]\] \[-\sum_{X\neq A}\sum_{s\in X}\left(-\int\rho_{\rm A}^{(0)}({\bf r}_{\rm a}){\bf T}_{\rm sa}^{(1)}{\rm d}{\bf r}_{\rm a}+\sum_{I\in A}Z_{I}{\bf T}_{sI}^{(1)}\right)\mathbf{\mu}_{s}[\rho_{X}^{(0)}]+\cdots. \tag{10}\] In the above equation, we have defined the interaction operators \({\bf T}_{\rm sa}^{(k)}=\frac{\partial^{k_{x}+k_{y}+k_{z}}}{\partial x_{\rm a}^{k_{x}}\partial y_{\rm a}^{k_{y}}\partial z_{\rm a}^{k_{z}}}|{\bf r}_{\rm a}-{\bf R}_{s}|^{-1}\) that describe interactions at point \(a\) due to site \(s\). Moreover, the multipole moment operator of \(k\)'th order at site \(s\) is defined as \({\bf Q}_{s}^{(k)}[\rho_{X}^{(0)}]=\langle\Psi_{X}^{(0)}|\hat{\bf Q}_{\rm s}^{(k)}|\Psi_{X}^{(0)}\rangle\).
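Before detailing the multipole moments further in the next paragraph, the freeze-and-thaw procedure introduced above can be summarized in schematic form. The two callables below are placeholders for the embedding-potential construction and the subsystem SCF step and do not correspond to any particular program; this is a structural sketch only.

```python
def freeze_and_thaw(fragments, build_embedding_potential, solve_scf, n_cycles=3):
    """Schematic freeze-and-thaw loop recovering mutual ground-state polarization.

    fragments : objects exposing an `isolated_density` attribute (rho_X^(0));
    `build_embedding_potential` stands in for the construction of eq. (6) from
    the frozen densities, and `solve_scf` for the subsystem SCF optimization.
    """
    densities = {frag: frag.isolated_density for frag in fragments}
    for _ in range(n_cycles):
        for active in fragments:
            frozen = {f: d for f, d in densities.items() if f is not active}
            v_emb = build_embedding_potential(active, frozen)   # eq. (6)
            densities[active] = solve_scf(active, v_emb)        # relax the thawed fragment
    return densities  # approximately the mutually polarized densities {rho_X^(2)}
```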
The term for zeroth-order moments represents the charge contribution \(\hat{\bf Q}_{\rm s}^{(0)}=\hat{\bf q}_{\rm s}\), the first-order term denotes the dipole contribution \(\hat{\bf Q}_{\rm s}^{(1)}=\hat{\mathbf{\mu}}_{\rm s}\) and so on. In the following, we combine the two sums over fragments \(X\) and sites \(s\) into one sum over all sites \(s\). The mutual polarization effects are approximately covered by the second-order correction[64] \[E^{(2)}=E_{\rm A}^{\rm pol}+E_{\rm env}^{\rm pol}+E^{\rm disp}. \tag{11}\] We focus on the following only on the polarization part, while neglecting \(E^{\rm disp}\). The environment polarization energy, \(E_{\rm env}^{\rm pol}\), can be described as \[E_{\rm env}^{\rm pol}[\rho_{\rm A}^{(0)}]=-\frac{1}{2}\mathbf{\cal E} ^{T}[\rho_{\rm A}^{(0)}]\cdot\mathbf{\mu}^{\rm ind}[\rho_{\rm A}^{(0 )}], \tag{12}\] and \(E_{\rm A}^{\rm pol}\) can in principle be obtained analogously. This part is, however, inherently included in the QM model for the active system. The field \(\cal E\) is defined as the sum of the fields from electrons in system A, nuclei in system A, and the multipoles in the environment \[\mathbf{\mathcal{E}}[\rho_{\mathrm{A}}^{(0)}]=\mathbf{\mathcal{E}}_{\mathrm{A}}^{\mathrm{ e}}[\rho_{\mathrm{A}}^{(0)}]+\mathbf{\mathcal{E}}_{\mathrm{A}}^{\mathrm{n}}+\mathbf{ \mathcal{E}}_{\mathrm{env}}^{\mathrm{mult}}. \tag{13}\] The induced dipole moment on site \(s\) can then be obtained as \[\mathbf{\mu}_{s}^{\mathrm{ind}}[\rho_{\mathrm{A}}^{(0)}]=\mathbf{\alpha}_{s}\cdot \left(\mathbf{\mathcal{E}}_{s}[\rho_{\mathrm{A}}^{(0)}]+\sum_{s^{\prime}\neq s} \mathbf{T}_{ss^{\prime}}^{(2)}\mathbf{\mu}_{s^{\prime}}^{\mathrm{ind}}\right), \tag{14}\] where \(\mathbf{\alpha}_{s}\) is the (static) point-polarizability localized in site \(s\) and \(\mathbf{\mathcal{E}}_{s}[\rho_{\mathrm{A}}^{(0)}]\) the field in eq. (13) on site \(s\). Note that the induced dipole on site \(s\) depends on the field generated from the induced dipoles on all remaining sites. Thus, a self-consistent optimization is required to obtain the induced dipole moment \(\mathbf{\mu}_{s}^{\mathrm{ind}}\). This optimization problem can be written as \[\mathbf{\mu}_{s}^{\mathrm{ind}}[\rho_{\mathrm{A}}^{(0)}]=\sum_{t}\mathbf{R}_{ts} \mathbf{\mathcal{E}}_{\mathrm{A},s}[\rho_{\mathrm{A}}^{(0)}], \tag{15}\] where the so-called classical response matrix, \(\mathbf{R}\), is given as \[\mathbf{R}=\begin{pmatrix}\mathbf{\alpha}_{1}^{-1}&-\mathbf{T}_{12}^{(2)}&\ldots &-\mathbf{T}_{1S}^{(2)}\\ -\mathbf{T}_{21}^{(2)}&\mathbf{\alpha}_{2}^{-1}&\ldots&-\mathbf{T}_{2S}^{(2)}\\ \vdots&\vdots&\ddots&\vdots\\ -\mathbf{T}_{S1}^{(2)}&-\mathbf{T}_{S2}^{(2)}&\ldots&\mathbf{\alpha}_{S}^{-1}\\ \end{pmatrix}\quad. \tag{16}\] The total energy (in eq. 1) is now defined by combining eqs. (8)-(11), \[E_{\mathrm{tot}}^{\mathrm{PE}}[\rho_{\mathrm{A}},\rho_{\mathrm{env}}^{(0)}]= E_{\mathrm{A}}^{\mathrm{PE}}[\rho_{\mathrm{A}},\rho_{\mathrm{env}}^{(0)}]+E_{ \mathrm{env}}^{\mathrm{PE}}[\rho_{\mathrm{env}}^{(0)},\rho_{\mathrm{A}}]+E_{ \mathrm{int}}^{\mathrm{mult}}[\rho_{\mathrm{A}},\rho_{\mathrm{env}}^{(0)}], \tag{17}\] where we skip the superscript for \(\rho_{\mathrm{A}}\) to denote that it is subject to change in the self-consistent-field (SCF) procedure performed during the optimization of the QM system (note that eq. (15) will then have to be solved within each SCF cycle). The environment density \(\rho_{\mathrm{env}}^{(0)}\) remains the isolated density of the environment fragments (represented by a multipole expansion). 
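Solving eqs. (14)-(16) amounts to a single linear system for all induced dipoles. The following minimal numpy sketch (bare interaction tensors, no damping or interaction exclusions, which practical implementations typically add) illustrates this step; all array layouts are assumptions of the example.

```python
import numpy as np

def dipole_field_tensor(r_vec):
    """T^(2) between two sites separated by r_vec (bare, undamped tensor)."""
    r = np.linalg.norm(r_vec)
    return (3.0 * np.outer(r_vec, r_vec) - r**2 * np.eye(3)) / r**5

def induced_dipoles(site_coords, site_alphas, fields):
    """Solve the coupled equations (14)/(15) for all induced dipoles at once.

    site_coords : (S, 3) positions R_s
    site_alphas : (S, 3, 3) point polarizabilities alpha_s
    fields      : (S, 3) total fields E_s[rho_A^(0)] at the sites, eq. (13)

    The matrix assembled below is the one shown in eq. (16); the classical
    response matrix acts on the fields through the solution of this system.
    """
    S = len(site_coords)
    A = np.zeros((3 * S, 3 * S))
    for s in range(S):
        A[3*s:3*s+3, 3*s:3*s+3] = np.linalg.inv(site_alphas[s])
        for t in range(S):
            if t != s:
                A[3*s:3*s+3, 3*t:3*t+3] = -dipole_field_tensor(site_coords[s] - site_coords[t])
    mu = np.linalg.solve(A, fields.reshape(-1))
    return mu.reshape(S, 3)
```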
For consistency with the definition in eq. (1), we have written the total energy in eq. (17) as a sum of the energies of the active system A and the environment plus an interaction energy, where we have combined the energy of the isolated subsystem and polarization in the term \(E_{\rm A}^{\rm PE}[\rho_{\rm A},\rho_{\rm env}^{(0)}]=E_{\rm A}[\rho_{\rm A}]+E_{ \rm A}^{\rm pol}[\rho_{\rm env}^{(0)}]\). The term \(E_{\rm env}^{\rm PE}[\rho_{\rm env}^{(0)},\rho_{\rm A}]\) is defined in a similar fashion. With this starting point, a PE embedding potential according to eq. (5) is derived in Ref. [17]. The emanating formalism can be summarized as follows \[v_{\rm A}^{\rm PE}= \frac{\delta E_{\rm tot}^{\rm PE}[\rho_{\rm A},\rho_{\rm env}^{( 0)}]}{\delta\rho_{\rm A}}-\frac{\delta E_{\rm A}^{\rm PE}[\rho_{\rm A}]}{ \delta\rho_{\rm A}}=v^{\rm mult}+v^{\rm pol}, \tag{18}\] where the two operators are defined as \[v^{\rm mult}=\sum_{s}\sum_{k=0}\frac{(-1)^{|k|}}{k!}{\bf T}_{\rm sa}^{(k)}({ \bf r}_{\rm a}){\bf Q}_{s}^{(k)} \tag{19}\] and \[v^{\rm pol}=\sum_{s}\left(\mathbf{\mu}_{s}^{\rm ind}[\rho_{\rm A}] \right)^{T}\,\mathbf{\varepsilon}_{{\rm A},s}^{\rm e}({\bf r}_{\rm a }), \tag{20}\] The field potential \(\mathbf{\varepsilon}_{{\rm A},s}^{\rm e}({\bf r}_{\rm a})\) is the component of the electronic part of the electric field operator at the site \(s\), defined in real-space coordinates as, \[\mathbf{\varepsilon}_{{\rm A},s}^{\rm e}({\bf r}_{\rm a})=\frac{ \delta\mathcal{E}_{{\rm A},s}^{\rm e}[\rho_{\rm A}({\bf r}_{\rm a})]}{\delta \rho_{\rm A}({\bf r}_{\rm a})}. \tag{21}\] where \(\mathcal{E}_{{\rm A},s}^{\rm e}[\rho_{\rm A}({\bf r}_{\rm a})]\) is the electronic component of the field in system A at site \(s\) (cf. eq. 13). Thus, \(v^{\rm mult}\) corresponds to a multipole approximation of eq. (7), where only \(\rho_{X}^{(0)}\) are employed. Similarly, \(v^{\rm pol}\) approximates the effect of mutual polarization, _i.e._, moving from \(\rho_{X}^{(0)}\) to \(\rho_{X}^{(2)}\) in eq. (7). The QM contributions from eq. (6) are not included in standard PE. Optical spectra are in this work obtained by linear response theory. To incorporate embedding contributions of the introduced models in local response calculations, we employ the commonly used framework of time-dependent DFT (TD-DFT). Therefore, we add the embedding potential \(v_{\rm emb}\) to the Kohn-Sham operator of the vacuum system \(\hat{f}_{\rm iso}\) \[\hat{f}_{\rm tot}=\hat{f}_{\rm iso}+v_{\rm emb}, \tag{22}\] with \(v_{\rm emb}\) being eqs. (6) or (18) for FDE or PE, respectively. Replacing \(\hat{f}_{\rm iso}\) with \(\hat{f}_{\rm tot}\) in the derivation of the response equations leads to a set of modified response equations, that are, \[\left[\left(\begin{array}{cc}\mathbf{A}&\mathbf{B}\\ -\mathbf{B}^{*}&-\mathbf{A}^{*}\end{array}\right)-\omega\left(\begin{array}[] {cc}\mathbf{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{1}\end{array}\right)\right]\left(\begin{array}{c}\mathbf{ X}\\ \mathbf{Y}\end{array}\right)=0, \tag{23}\] with the excitation energies \(\omega\) and \[A_{ai,bj}= \delta_{ij}\delta_{ab}\left(\varepsilon_{a}-\varepsilon_{i} \right)+B_{ai,jb} \tag{24}\] \[B_{ai,bj}= \frac{\partial F_{ai}^{\rm iso}}{\partial P_{bj}}+\frac{\partial \left\langle\phi_{a}|v^{\rm emb}|\phi_{i}\right\rangle}{\partial P_{bj}}. \tag{25}\] \(F_{ai}^{\rm iso}\) denotes a Fock matrix element of the isolated system and \(P_{bj}\) is an element of the density matrix. 
The quantities \(\varepsilon_{a}\) and \(\varepsilon_{i}\) are orbital energies, where occupied orbitals are labelled with \(i\) or \(j\) and the virtual ones with \(a\) or \(b\). The orbital energies are eigenvalues of the _total_ Fock operator \(\hat{f}_{\rm tot}\) in eq. (22). Thus, part of the environmental contribution enters through these energies. For FDE, \(v^{\rm emb}\) can be chosen to rely only on \(\{\rho_{X}^{(0)}\}\) densities, which we will denote FDE NOPOL. An equivalent contribution can be defined for PE (which we denote PE NOPOL) if only \(v^{\rm mult}\) of eq. (18) is included in \(v_{\rm emb}\). Employing the relaxed densities \(\{\rho_{X}^{(2)}\}\) in eq. (5) corresponds to including the ground-state polarization and we denote this model FDE GSPOL. The corresponding model in the PE framework (denoted PE GSPOL) corresponds to employing both \(v^{\rm mult}\) and \(v^{\rm pol}\) of eq. (18) in \(v_{\rm emb}\), while neglecting the \(v_{\rm emb}\) part of \(B_{ai,bj}\) in eqs. (24) and (25). The second term of B requires more attention since the physical content between PE and FDE models is rather different [17]. The term can be identified as \[\frac{\partial\left\langle\phi_{a}|v^{\rm emb}|\phi_{i}\right\rangle}{ \partial P_{bj}}=\left\langle\phi_{a}(\mathbf{r}_{\rm a})\phi_{b}(\mathbf{r}_ {\rm a}^{\prime})\left|\frac{\delta v_{\rm A}^{\rm emb}(\mathbf{r}_{\rm a})}{ \delta\rho_{\rm A}(\mathbf{r}_{\rm a}^{\prime})}\right|\phi_{i}(\mathbf{r}_{ \rm a})\phi_{j}(\mathbf{r}_{\rm a}^{\prime})\right\rangle. \tag{26}\] The functional derivative (\(\frac{\delta v_{\rm A}^{\rm emb}({\bf r}_{\rm a})}{\delta\rho_{\rm A}({\bf r}_{ \rm a}^{\prime})}\)) for the corresponding embedding scheme can be derived from eqs. (6) and (18) for FDE and PE, respectively. In the case of FDE, we have a static (ground-state) potential and the Coulomb terms vanish. Thus, only quantum-mechanical terms contribute [66], so that \[\frac{\delta v_{\rm A}^{\rm FDE}({\bf r}_{\rm a})}{\delta\rho_{\rm A}({\bf r}_ {\rm a}^{\prime})}= \frac{\delta^{2}E^{\rm QM}[\rho_{\rm tot}]}{\delta\rho_{\rm tot}({ \bf r}_{\rm a})\delta\rho_{\rm tot}({\bf r}_{\rm a}^{\prime})}-\frac{\delta^{2 }E^{\rm QM}[\rho_{\rm A}]}{\delta\rho_{\rm A}({\bf r}_{\rm a})\delta\rho_{\rm A }({\bf r}_{\rm a}^{\prime})}. \tag{27}\] The remaining QM embedding contributions to the response kernel, however, are often small, so that the major environmental effect in the electronic transitions and oscillator strengths are results of differences in the canonical orbitals and orbital energies.[35, 67] The QM contributions to the embedding in the response kernel are, hence, not included in the numerical examples of the present work. For PE, the functional derivative is obtained from eq. (18) as[17] \[\frac{\delta v_{\rm A}^{\rm PE}({\bf r}_{\rm a})}{\delta\rho_{\rm A}({\bf r}_ {\rm a}^{\prime})}=-\sum_{t}\sum_{s}{\bf T}_{at}^{(1)}({\bf r}_{\rm a}^{ \prime}){\bf R}_{ts}{\bf T}_{as}^{(1)}({\bf r}_{\rm a}). \tag{28}\] This contribution can be understood as an (approximate) treatment of differential polarization, _i.e._, the difference in the interaction between the ground-state and excited-state densities with the environment densities. We denote PE models with this effect included as PE DPOL. There is no corresponding term for FDE models, although extensions have been suggested that include differential polarization[8, 32, 68, 69, 70, 71, 72]. 
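In a PE response calculation, the differential-polarization term of eq. (28) ultimately enters the kernel as a contraction of electric-field integrals with the classical response matrix. The sketch below shows this contraction for a single matrix element of B; it is illustrative only, assumes the integrals and the response matrix are precomputed, and does not reflect how production codes organize the iterative response solver.

```python
import numpy as np

def pe_dpol_kernel_element(t_ai, t_bj, R):
    """Differential-polarization contribution to one element of B, cf. eqs. (25)-(26) and (28).

    t_ai, t_bj : (3*S,) arrays of electric-field integrals over all polarizable
                 sites for the orbital pairs (a, i) and (b, j), cf. eq. (21)
    R          : (3*S, 3*S) classical response matrix of eqs. (15)-(16), i.e. the
                 matrix mapping site fields onto induced dipoles
    """
    return -float(t_ai @ R @ t_bj)
```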
Most of them are, however, rather computationally demanding[68] or require embedded excited-state densities[8], which are somewhat tedious to obtain for TD-DFT methods[69]. While excitation energies are a fundamental part of a UV-vis spectrum, the associated intensities are often highly important for assignments. The intensity is usually estimated based in the oscillator strength which can also be extracted from eq. 23; for transition \(n\), the oscillator strength, \(f_{n}\), can be calculated as \[f_{n}=\frac{2}{3}\omega_{n}{\mathbf{\mu}}_{n}^{2}, \tag{29}\] where \(\omega_{n}\) is the excitation frequency and \(\mathbf{\mu}_{n}^{2}\) is the transition dipole moment. The latter can (for the \(\alpha\)-component) can be obtained from the converged response vectors in Eq. (23) as[73] \[\mu_{n}^{\alpha}=\frac{1}{\sqrt{\omega_{n}}}\mathbf{M}^{\alpha}(\mathbf{X}+\mathbf{Y}), \tag{30}\] where the \(\mathbf{M}^{\alpha}\) is a vector comprised of the \(\alpha=x,y\), and \(z\) components with the elements \[M_{ai}^{\alpha}=-\langle\phi_{a}|r_{\alpha}|\phi_{i}\rangle. \tag{31}\] Since the introduction of \(v^{\rm emb}\) in the response equations (eqs. 23-25) also affects the eigenvectors, the embedding also influences the calculated oscillator strengths. However, the external field employed to excite the solute also generates an induced dipole on environment sites, that effectively modifies \(\mathbf{M}\). This effect is not included in the standard local embedding schemes but can approximately be accounted for by an external effective field (EEF) term, \(\langle\phi_{a}|\hat{V}^{\rm loc}|\phi_{i}\rangle\), with [72, 74] \[\hat{V}^{\rm loc}=\sum_{ts}\mathbf{T}_{ta}^{(1)}\mathbf{R}_{ts}\mathcal{E}_{s} ^{\rm uni}=\sum_{t}\mathbf{T}_{ta}^{(1)}\mathbf{\mu}_{\rm ext,t}^{\rm ind}, \tag{32}\] where \(\mathbf{\mu}_{\rm ext,t}^{\rm ind}\) is the induced dipole to a unit field, \(\mathcal{E}_{s}^{\rm uni}\). We have summarized the contributions considered in the different embedding approaches in Tab. 1, and we refer to the labels used in this table in the following sections. ### Computational Setup The common theoretical comparison of density-based QM/MM embedding and PE is only a first step. We also aim for a setup that allows a one-to-one comparison between the two embedding models in practical calculations. Our setup is shown in fig. 2. In the following, we describe the employed workflows. The general procedure for the PE model involves the construction of the embedding poten \begin{table} \begin{tabular}{l l l l l l l l l} \hline Class & Label & Static & G.s. pol & QM\({}^{a)}\) & Diff. pol. & EEF & Comments \\ \hline QM/classical & PE NOPOL & ✓ & ✗ & ✗ & ✗ & ✗ & \(v^{\text{mult}}\) (based on \(\{\rho_{X}^{(0)}\}\)), see eqs. (18) \\ QM/QM & FDE NOPOL & ✓ & ✗ & ✓ & ✗ & ✗ & \(v^{\text{FDE}}_{\text{A}}(\mathbf{r}_{a})\) in eq. (6) \\ & & & & & & & based on \(\{\rho_{X}^{(0)}\}\). \\ \hline QM/classical & PE GSPOL & ✓ & ✓ & ✗ & ✗ & ✗ & \(v^{\text{mult}}\) (based on \(\{\rho_{X}^{(0)}\}\)) and \(v^{\text{pol}}\), see eqs. (18) \\ QM/QM & FDE GSPOL & ✓ & ✓ & ✓ & ✗ & ✗ & \(v^{\text{FDE}}_{\text{A}}(\mathbf{r}_{a})\) (eq. 6) based on \(\{\rho_{X}^{(2)}\}\). \\ \hline QM/classical & PE DPOL & ✓ & ✓ & ✗ & ✓ & ✗ & PE GSPOL and additionally eq. (28). \\ QM/classical & PE DPOL+EEF & ✓ & ✓ & ✗ & ✓ & ✓ & PE DPOL with modified dipole transition moments, eq. (32) \\ \hline \end{tabular} a) Can be included via orbital-free DFT. Note that we do not consider the response kernel, eq. (27), in this work. 
\end{table} Table 1: Overview of the included contributions in the different embedding models considered here. Figure 2: Flowchart of the performed workflows for molecule (A) in the environment of the molecule (env). The steps in dark blue boxes stand for subprograms used for a task. The large arrows indicate results that are passed in between the different programs. In either workflow, the supermolecular structure is passed to PEAS or PyADF, respectively, and split up into subsystems. (a) PE workflow utilizing PEAS and PElib[75], which is calling Openmoclas for subsystem calculations. (b) The FDE workflow uses the PyADF scripting framework, that is calling all programs and managing all results mentioned in this workflow. The green box refers to freeze-and-thaw cycles. tial. For this purpose we employ the PE Assistance Script (PEAS).[76] The script divides the environment into subsystems/fragments and constructs the densities \(\{\rho^{(0)}_{X}\}\) for the individual fragments, employing DFT calculations with Openmolcas[77]. From these densities localized multipoles, \(\{\mathbf{Q}_{s}[\rho^{(0)}_{X}]\}\), and static polarizabilities, \(\{\mathbf{\alpha}^{0}_{s}\}\), can be derived from the LoProp[78] method, implemented in Openmolcas. PEAS collects multipoles and polarizabilities in a potential file that is employed in the calculation of the excitation energies and oscillator strengths with TD-DFT. Optimization of the ground-state as well solving TD-DFT equations [eqs. (23)-(25)] are done while including the PE potential; this means that the induced dipole moments in eq. (28) are self-consistently optimized along with the SCF/linear response iterations. PElib handles this self-consistent calculation of the induced dipole moments and adds the resulting PE contributions to the Kohn-Sham or Kohn-Sham-like matrices used in the SCF or response calculations.[75] We employed four different models of increasing accuracy. (i) PE NOPOL which only contains multipoles up to quadrupoles and no polarizabilities (eq. 18) (ii) PE GSPOL where full ground-state polarization is included, but differential polarization (eq. 28) is ignored in the TD-DFT calculation and (iii) PE DPOL including both multipoles, ground-state, and differential polarization (eq. 28). (iv) A PE DPOL with modified dipole transition moments (eq. 30) due to the effect of the external effective field in eq. (32). This model is equivalent to (iii) for excitation energies but leads to a change in oscillator strengths, and we denote this model PE DPOL+EEF. For the FDE calculations, we employed the PyADF scripting framework (fig. 2b).[79; 80] A supermolecular DFT integration grid was obtained via the ADF program from AMS2020.103 program suite[81] to ensure that the grids include the full area of all subsystems to preempt grid artifacts that could affect our comparison. Subsequently, ground-state calculations of all subsystems were performed individually in Dalton[82]. The resulting molecular orbitals from these calculations were then translated to electron density and electrostatic potentials on the initially generated grid using the DensityEvaluator module of PyADF. The non-additive kinetic and the exchange-correlation term (eq. 6) were then evaluated by PyEmbed module of PyADF on the same grid. Finally, the embedding potential was obtained by adding the environmental electrostatic potential and the environmental non-additive potential for the kinetic and exchange-correlation term. 
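The PE branch of this workflow (fig. 2a) can be summarized schematically as follows. All function and attribute names are placeholders chosen for illustration and do not correspond to the actual PEAS, Openmolcas/LoProp, or PElib interfaces.

```python
def build_pe_potential(environment, run_fragment_dft, loprop_analysis):
    """Schematic construction of a PE potential (cf. fig. 2a).

    environment      : list of environment fragments (e.g. individual water molecules)
    run_fragment_dft : placeholder for an isolated DFT calculation on one fragment
    loprop_analysis  : placeholder for a LoProp-type localization of multipoles
                       and polarizabilities from that fragment's density
    """
    potential = []
    for fragment in environment:
        density = run_fragment_dft(fragment)                     # rho_X^(0)
        multipoles, polarizabilities = loprop_analysis(density)  # {Q_s}, {alpha_s}
        potential.append((fragment.sites, multipoles, polarizabilities))
    return potential  # consumed by the PE-enabled SCF/response calculation

# The resulting potential is then passed to the TD-DFT calculation, where the
# induced dipoles of eq. (15) are re-optimized in every SCF/response iteration.
```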
For the FDE calculations, we employed two potentials: (i) A static embedding potential from isolated environmental densities (skipping the performance of FDE cycles, see fig. 2b: blue underlaid box, FDE NOPOL, eq. 6). (ii) A potential including mutual polarization via freeze-and-thaw cycles of the active subsystem with environmental fragments in the ground state (FDE GSPOL, eq. 6). In the freeze-and-thaw procedure, we employ the DensityEvaluator to write the updated density and electrostatic potential of every ground-state subsystem calculation in Dalton on the integration grid. This results in an updated embedding potential when evaluating the embedding potential with PyEmbed. It should be noted that the current implementation in Dalton includes the non-additive parts of the embedding potential in the SCF process, but not in the response kernel for the TD-DFT calculation. The above-described framework enables us to dissect the embedding contributions and quantify the different approximations discussed above. For this, the presented embedding models were to a large degree implemented in Dalton to allow a fair side-by-side comparison and the stepwise inclusion of polarization effects.

## 3 Computational Details

The snapshots of _p_NA were taken from an MD simulation, using the AMBER software[83]. We parameterized the _p_NA molecule with the General AMBER force field (GAFF)[84] and RESP charges[85] calculated with B3LYP[86, 87, 88] and the 6-31+G* basis set[89, 90, 91] (with PCM[92] using the dielectric constant of water). The system was set up with tleap of the Amber package and _p_NA was solvated with 3160 water molecules, represented by the OPC model.[93] We first ran a minimization using 10000 steps of steepest descent, followed by 10000 steps of conjugate gradient minimization. We next equilibrated the system by running a 1 ns simulation (in the NPT ensemble), heating the system from 0 to 298 K (at 1 atm. pressure) over the first 20 ps. This was followed by a 100 ns production run, using the NPT ensemble (at 298 K), a Langevin thermostat, and a Monte Carlo barostat. Electrostatics were treated with Particle Mesh Ewald[94], and non-bonded interactions were cut off at 12 Å. The hydrogen bonds were constrained with the SHAKE algorithm.[95, 96] For further calculations we arbitrarily selected seven out of the total one hundred obtained snapshots. For these snapshots, we constructed systems where all environment molecules within 3, 4, 5, and 12 Å of _p_NA were included. The snapshots of pFTAA were taken from an MD simulation, using the GROMACS software[97, 98, 99, 100, 101, 102, 103, 104]. We parameterized the pFTAA molecule with an adapted CHARMM force field.[42, 43, 105, 106] The pFTAA molecule was solvated with 4028 water molecules, represented by the TIP3P model.[107] We first ran a minimization using 50000 steps of steepest descent. We next equilibrated the system by running a 10 ns simulation (in the NPT ensemble), heating the system from 0 to 300 K (at 1 atm. pressure) over the first 0.2 ps. All employed snapshots were taken from a 100 ns production run, using the NVT ensemble (at 300 K), a velocity-rescaling thermostat[108], and a Berendsen barostat[109]; electrostatics were treated with Particle Mesh Ewald[110], and non-bonded interactions were cut off at 10 Å. All bonds were constrained with the LINCS algorithm.[111] For pFTAA, we only consider solvated models with a 3 Å water environment for a selection of eight independent snapshots.
We note that for some snapshots there are sodium ions in the 3 Å environment, while for others there are no sodium ions in close proximity to the dye. The reference calculations were performed with Dalton 2020[82] in a supermolecular TD-DFT calculation with the CAM-B3LYP[112] xc functional. The workflow for the embedding calculation is shown in fig. 2. The construction of the environment potential in the polarizable embedding approach was performed with LoProp[78] in Openmolcas[77] in combination with the Polarizable Embedding Assistant Script (PEAS) [113]. In these fragment calculations, the B3LYP[86, 87, 88] xc functional was employed together with ANO-type recontractions of the aug-cc-pVDZ or aug-cc-pVTZ basis set, respectively.[114, 115, 116, 117] For sodium ions, the ANO-L basis sets were applied.[118] The linear response calculation for _p_NA was then carried out with Dalton including the constructed PE potential using PElib[75]. For PE pFTAA calculations with sodium counterions, the QM core region was adapted to account for the lack of repulsion via transferable atomic all-electron pseudopotentials for the sodium ions.[119] In the FDE approach, the supermolecular grid with a "good" Becke grid quality[120] was obtained with the ADF[121] code and the TD-DFT calculations with the Dalton code via the PyADF scripting environment.[79, 80] In line with the calculations for the PE approach, the linear response calculations for _p_NA were performed with the CAM-B3LYP xc functional, whereas for the environment molecules, a B3LYP xc functional was employed. In all FDE calculations, the additive xc functional BP86[122, 86] and the kinetic energy functional PW91k[123, 124] were applied for the non-additive contributions to the embedding potential. Three freeze-and-thaw cycles have been used throughout as this setting had been found to generally yield sufficient results[125]. All calculations for _p_NA were performed with an aug-cc-pVDZ [see supporting information (SI)] or aug-cc-pVTZ basis set. For pFTAA an aug-cc-pVDZ basis set was employed in all calculations. After calculating the five lowest excitations for the reference as well as for the embedding calculations, they were sorted by the oscillator strength of the transition. The strongest \(\pi\rightarrow\pi^{*}\) transition (ensured _via_ inspection of response vectors and orbitals) was chosen to be compared with other results.

## 4 Results and Discussion

We numerically compare calculated excitation energies and oscillator strengths for the models NOPOL, GSPOL, PE DPOL, and PE DPOL+EEF introduced in Section 2 (see tab. 1 for an overview). We generally report on shifts, _i.e._, differences in excitation energy or oscillator strength of a solvation model to the vacuum case with the same structure of the dye. We denote these shifts as solvatochromic (\(\mathcal{S}\)-)shifts and \(\mathcal{F}\)-shifts for excitation energies and oscillator strengths, respectively. We compare the shifts from embedding models to reference shifts obtained as the difference of a full quantum-chemical result to the vacuum case (\(\Delta\)REF). We generally denote these shifts with \(\Delta\), _i.e._, \(\Delta\)NOPOL is the shift obtained with the NOPOL approximation, and \(\Delta\)GSPOL and \(\Delta\)DPOL are defined analogously.
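In terms of bookkeeping, every model shift is a difference to the vacuum result for the same dye structure, and the stepwise contributions defined in the next paragraph are differences between successive models. A small sketch with placeholder numbers (not taken from the tables of this work) illustrates the arithmetic:

```python
def shifts(vacuum, model):
    """S- and F-shifts of one solvation model relative to the vacuum calculation.
    `vacuum` and `model` are (excitation energy in eV, oscillator strength) pairs."""
    return model[0] - vacuum[0], model[1] - vacuum[1]

# hypothetical numbers for one snapshot: (omega / eV, f)
vacuum = (4.50, 0.40)
results = {"NOPOL": (4.20, 0.43), "GSPOL": (4.17, 0.45), "DPOL": (4.15, 0.47)}

delta = {name: shifts(vacuum, val) for name, val in results.items()}  # Delta-model shifts
# stepwise contributions, e.g. DeltaDelta-GSPOL = Delta-GSPOL - Delta-NOPOL (S-shift part)
dd_gspol = delta["GSPOL"][0] - delta["NOPOL"][0]
dd_dpol = delta["DPOL"][0] - delta["GSPOL"][0]
```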
The individual contributions are then defined with respect to the next lower model, _i.e._, \(\Delta\Delta\)GSPOL\(=\)\(\Delta\)GSPOL\(-\)\(\Delta\)NOPOL, \(\Delta\Delta\)DPOL\(=\)\(\Delta\)DPOL\(-\)\(\Delta\)GSPOL, and \(\Delta\Delta\)EEF\(=\)\(\Delta\)(DPOL+EEF)\(-\)\(\Delta\)DPOL. Additionally, we define \(\Delta\Delta\)(DPOL+EEF)\(=\)\(\Delta\)(DPOL+EEF)\(-\)\(\Delta\)GSPOL. When regarding the proportion of the single contributions (\(\Delta\)NOPOL, \(\Delta\Delta\)GSPOL, \(\Delta\Delta\)DPOL) to the total shift, we refer to the total supermolecular shift (\(\Delta\)REF) for both models, whenever available, and to the total DPOL shift (\(\Delta\)DPOL) when a supermolecular reference is unavailable (_c.f._ tabs. 2-3).

#### _para_-Nitroaniline

Our first test system is _para_-nitroaniline (_p_NA) in different water environments (see fig. 3).

Figure 3: Example MD configuration of _para_-nitroaniline in a 4 Å and 12 Å water environment selection.

First, we investigate environment sizes of 3 Å and 4 Å for seven snapshots. Fig. 4 shows the contributions to the total \(\mathcal{S}\)-shifts of _p_NA for the different solvent models and compares them to the supermolecular reference. It can be seen that the total \(\mathcal{S}\)-shift varies largely for the different snapshots, independent of the environment size or embedding scheme used. Both PE and FDE models reproduce the changes obtained in the reference calculations qualitatively correctly: For both FDE and PE, \(\Delta\)NOPOL is the largest contribution to the \(\mathcal{S}\)-shift (on average a proportion of more than 86%). Thus, the GSPOL contribution (\(\Delta\Delta\)GSPOL) is small (7% and 9% of the supermolecular \(\mathcal{S}\)-shift for FDE and PE, respectively). The PE \(\Delta\Delta\)DPOL proportion lies below 5%. Thus, \(\Delta\)NOPOL and \(\Delta\)GSPOL for PE as well as FDE are in very good agreement with \(\Delta\)REF (the largest average differences are \(-0.01\) eV, see tabs. S-2.3, S-2.4, S-2.7 and S-2.8 in the SI). Ultimately, the total \(\mathcal{S}\)-shifts for \(\Delta\)GSPOL and \(\Delta\)DPOL are in good agreement with the \(\Delta\)REF (fig. 4). \(\Delta\)GSPOL on average slightly underestimates the total \(\mathcal{S}\)-shift, whereas adding the \(\Delta\Delta\)DPOL contribution leads to an (equally small) overestimation: For most snapshots, the \(\Delta\)DPOL from PE is slightly higher than for the \(\Delta\)REF, with an average deviation of \(-0.01\) eV and \(-0.02\) eV for the 3 Å and 4 Å system, respectively (see tables S-2.3-S-2.4 in the SI for the 3 Å system and tables S-2.7-S-2.8 in the SI for the 4 Å system). These differences are smaller than differences we would expect from the differences in the applied xc functionals (the reference calculation is a full CAM-B3LYP calculation and in the determination of the PE embedding potential B3LYP was employed for environment fragments). It should be noted that for individual snapshots, the \(\Delta\Delta\)GSPOL proportion can exceed the average considerably. This is most pronounced for snapshot 5, which shows an overall small total \(\mathcal{S}\)-shift: Here, the \(\Delta\Delta\)GSPOL proportion for both FDE and PE constitutes between 13% and 16% of the supermolecular \(\mathcal{S}\)-shift for the 3 and 4 Å environments, respectively. PE \(\Delta\Delta\)DPOL takes a proportion of 10% and 13% in the 3 and 4 Å environments.
We further observe a slight change in the proportions when increasing the environment size: When going from the 3 Å to the 4 Å environment, the average \(\Delta\Delta\)GSPOL proportion remains around 7% of the supermolecular shift for FDE and slightly increases from 7% to 9% for PE. The PE \(\Delta\Delta\)DPOL proportion on average increases from 4% to 5%. In absolute values, however, these contributions for all environment sizes are rather low, _i.e._ at most \(-0.10\) eV for \(\Delta\Delta\)GSPOL for both PE and FDE and \(-0.06\) eV for \(\Delta\Delta\)DPOL in PE.

Figure 4: Contributions from the different models to the total \(\mathcal{S}\)-shifts and their average for different configurations of _p_NA in 3 and 4 Å environments of water obtained from an MD simulation and subsequently calculated in a PE and FDE framework and different orders of polarization contributions obtained in calculations with an aug-cc-pVTZ basis set.

For PE, we extend the environment further to 5 Å and 12 Å. The results for the \(\mathcal{S}\)-shifts from all these calculations are depicted in fig. 5.

Figure 5: Contributions from the different models to the total \(\mathcal{S}\)-shifts and their average for different configurations of _p_NA in 3, 4, 5, and 12 Å environments of water obtained from an MD simulation and subsequently calculated in a PE framework and different orders of polarization contributions obtained in calculations with an aug-cc-pVTZ basis set.

The overall trend is a distinct increase in the size of the total \(\mathcal{S}\)-shifts when extending from a 3 Å to a 12 Å environment (_i.e._ the shift becomes more negative): on average it increases by \(-0.27\) eV. Again, \(\Delta\)NOPOL is the largest contribution and it increases with enlarged environment size: With the extension from the 3 Å to the 4 Å environment it increases by \(-0.02\) eV on average, from the 4 Å to the 5 Å environment it increases by \(-0.06\) eV on average, and by \(-0.07\) eV when further extending to the 12 Å environment. \(\Delta\Delta\)GSPOL also increases: the increase is on average \(-0.02\) eV from the 3 Å to the 4 Å environment, an additional \(-0.02\) eV from the 4 Å to the 5 Å environment, and \(-0.05\) eV when extending to the 12 Å environment. \(\Delta\Delta\)DPOL also shows an increase when going from a 3 Å to a 12 Å environment. The absolute contribution on average increases from \(-0.03\) eV to \(-0.06\) eV. However, the average proportion of the total shift does not steadily increase: From a 3 to a 4 Å environment it changes from \(-0.03\) eV (4% of \(\Delta\)DPOL) to \(-0.04\) eV (5% of \(\Delta\)DPOL), for a 5 Å environment it decreases to \(-0.04\) eV (4% of \(\Delta\)DPOL) and increases for a 12 Å environment to \(-0.06\) eV (6% of \(\Delta\)DPOL). As discussed above, for the snapshots with smaller total shifts, \(\Delta\Delta\)GSPOL can exceed the average considerably: Here, we again look at snapshot 5, for which the \(\Delta\Delta\)GSPOL proportion changes from 13% (\(-0.05\) eV) in a 3 Å environment to 15% (\(-0.05\) eV) in a 4 Å environment, 14% (\(-0.05\) eV) in a 5 Å environment, and 37% (\(-0.22\) eV) in a 12 Å environment, where all percentages refer to the \(\Delta\)DPOL shift. Thus, for this particular snapshot, the \(\Delta\)NOPOL accounts for 55% (\(-0.33\) eV) of \(\Delta\)DPOL for a 12 Å environment model.
According to previous studies on _p_NA in a water environment with an EOM-CCSD/EFP scheme, Slipchenko _et al._[39] found the \(\Delta\)NOPOL proportion of the excitation energy to be of a similar magnitude (80%) to that obtained in our calculations (86-88%). The \(\Delta\Delta\)DPOL proportion was determined to be 3-8%, which is in good agreement with our result of 4-5%. In their study, increasing the number of water molecules used in the solvation (2-6 molecules) led to an increase in \(\Delta\Delta\)DPOL, similar to the increase observed in our results. In a study by Sneskov _et al._[40], the average polarization contribution was obtained from 100 snapshots. Both the \(\Delta\Delta\)GSPOL and the \(\Delta\Delta\)DPOL proportions are higher than in the present study's result: 19-21% and 13% for the \(\Delta\Delta\)GSPOL and \(\Delta\Delta\)DPOL proportion, respectively. These authors also noted that the variation of these values is partially dependent on the individual snapshot. In an FDE context, absolute values for \(\Delta\Delta\)DPOL were obtained from mutual optimization with excited-state densities in the study of Daday _et al._[8]. The magnitudes (0.01-0.22 or \(-0.02\)-0.15 eV, depending on the description of the excited-state density) are similar to our results for \(\Delta\Delta\)DPOL, which range between 0.02-0.06 eV.

Equivalently to the \(\mathcal{S}\)-shifts discussed above, fig. 6 displays the change in the oscillator strength (\(\mathcal{F}\)-shift) of the strongest (\(\pi\rightarrow\pi^{*}\)) transition for the various PE and FDE solvation models.

Figure 6: Contributions from the different models to the total \(\mathcal{F}\)-shifts and their average for different configurations of _p_NA in 3 and 4 Å environments of water obtained from an MD simulation and subsequently calculated in a PE and FDE framework, different orders of polarization contributions and added EEF effects obtained in calculations with an aug-cc-pVTZ basis set. (Full data in tabs. S-2.15, S-2.16, S-2.19, S-2.20)

We find that the \(\mathcal{F}\)-shift of the reference calculations displays larger sensitivity than the \(\mathcal{S}\)-shifts with respect to both the size of the system and the snapshot. This is in line with previous comparisons of different electronic structure methods, showing oscillator strengths to be more sensitive to the employed electronic structure methods[126]. In contrast to the discussion of \(\mathcal{S}\)-shifts, we here omit the presentation of single contributions (\(\Delta\)NOPOL, \(\Delta\Delta\)GSPOL, and \(\Delta\Delta\)DPOL) as a percentage of the total shift since the oscillator strengths are generally smaller than the excitation energies and even small changes can lead to large percent-wise changes. The reference \(\mathcal{F}\)-shifts are on average 0.06 for both 3 and 4 Å. The FDE and PE NOPOL models both give an average \(\mathcal{F}\)-shift of 0.03 for 3 Å and 0.04 for 4 Å. Generally, \(\Delta\)NOPOL is estimated similarly by FDE and PE, the largest deviation being 0.01. \(\Delta\)NOPOL is _often_ the largest contribution to the total \(\mathcal{F}\)-shift, but it is much less dominant compared to the \(\mathcal{S}\)-shift. Notably, the \(\Delta\)NOPOL results alone are often rather far from the total shifts (most obvious in snapshots 6 and 7 for both 3 and 4 Å). \(\Delta\Delta\)GSPOL generally improves the results for PE and FDE similarly: The largest deviation between FDE and PE amounts to less than 0.01 for both the 3 Å and 4 Å systems.
In contrast to the \(\mathcal{S}\)-shift, \(\Delta\Delta\)DPOL can be rather large for \(\mathcal{F}\)-shifts: In some cases (see snapshots 2 and 6) both \(\Delta\Delta\)GSPOL (for FDE and PE) and \(\Delta\Delta\)DPOL (for PE) correct the \(\mathcal{F}\)-shift in the opposite direction of \(\Delta\)NOPOL, where \(\Delta\)GSPOL is in better agreement with \(\Delta\)REF than \(\Delta\)DPOL. This occurs both for snapshots 2 and 6 and on average. Especially for the 4 Å environment, we observe large \(\Delta\)DPOL values. This over-correction of \(\Delta\)DPOL led us to investigate local field effects on the oscillator strength by means of an external effective field (fig. 6). While for the 3 Å system on average only a small increase in the \(\mathcal{F}\)-shift can be observed (below 0.01), the total \(\mathcal{F}\)-shift for the 4 Å system decreases significantly (for all snapshots), leading to an improved result compared to the reference: The average deviation is 0.03 for \(\Delta\)DPOL compared to less than 0.01 for \(\Delta\)DPOL+EEF. We finally note that all the discussed results are obtained with aug-cc-pVTZ, but for calculations with an aug-cc-pVDZ basis set the same trends can be observed (see figs. SI-2.2-SI-2.3 and tabs. SI-2.1-SI-2.23 in the SI). The proportions of the single contributions are in line with those obtained with an aug-cc-pVTZ basis set.

In summary, we observe similar results for FDE GSPOL and PE GSPOL, suggesting that the additional quantum-mechanical contribution and real-space treatment in FDE have only a minor effect in this case. The supermolecular reference of excitation energies of _p_NA in the 3 Å and 4 Å water environment is well in line with the FDE results as well as the PE values with or without including differential polarization effects. When going to larger system sizes within the PE model, we observe an increased \(\Delta\Delta\)GSPOL proportion of the \(\mathcal{S}\)-shift, but an unclear trend for the differential polarization. The observations for the oscillator strengths are similar, though not identical to those for the electronic excitation energies. In particular, we observe a larger snapshot dependence, and the differential polarization contribution in the PE calculations is larger than for the excitation energies. We further observe an overcorrection for the 4 Å environments due to \(\Delta\Delta\)DPOL, which can be largely cancelled by accounting for local field effects (EEF).

#### Pentameric formyl thiophene acetic acid (pFTAA)

Our second test case, pFTAA, is in contrast to _p_NA highly negatively charged (4\(-\)). It is, hence, more challenged by possible electron-spill-out effects, which makes it difficult to describe with classical models like PE. FDE is expected to be less prone to electron-spill-out effects due to the approximate quantum contributions in the embedding potential (_c.f._ eq. (6)).[127, 128] Indeed, we found that in the five snapshots that contained sodium cations close to the pFTAA chromophore, the standard PE model broke down. The electron spill-out was revealed by analyzing the contributing orbitals in the response solution vectors. We counteracted the electron spill-out in these cases by placing atomic pseudopotentials on the sodium ions.[119] This led in all cases to meaningful results.

Figure 7: Selected snapshots of pFTAA in a pure 3 Å water environment (left, snapshot 1) and additionally including sodium ions in close vicinity to pFTAA (right, snapshot 2).

The PE results for all these snapshots can be found in tabs.
SI-3.2 and SI-3.4 in the SI. Here, we focus on the discussion of two, representative snapshots: one with and one without a sodium ion in close proximity and again compare the performance of the PE and FDE models (see tab 2). Again, the supersystem reference shifts (\(\Delta\)REF) are well reproduced for both the FDE and PE models. \(\Delta\)NOPOL is by far the largest contribution, while the \(\Delta\Delta\)GSPOL is small. Both, \(\Delta\)NOPOL and \(\Delta\Delta\)GSPOL, are close to identical for FDE and PE. \(\Delta\Delta\)DPOL is of similar magnitude as \(\Delta\Delta\)GSPOL but points in the opposite direction. Tab. 3 shows the \(\mathcal{F}\)-shifts for the PE and FDE embedding model employing the two snapshots. \(\Delta\)NOPOL and \(\Delta\Delta\)GSPOL are similar for FDE and PE, where the \(\Delta\Delta\)GSPOL contributions are significantly smaller than the \(\Delta\)NOPOL contributions. \(\Delta\Delta\)DPOL in the PE model for the two investigated snapshots is also rather small and of similar magnitude as the \(\Delta\Delta\)GSPOL contributions. The \(\Delta\Delta\)EEF contribution is of similar magnitude as the \(\Delta\Delta\)DPOL contribution but of opposing sign. For snapshot 1, both \(\Delta\)GSPOL and \(\Delta\)DPOL are in reasonable agreement with the reference value of 0.388: For \(\Delta\)GSPOL we obtain deviations of 0.04 and 0.01, respectively for the FDE and the PE \begin{table} \begin{tabular}{c|c c c c c c c c c} \hline \hline \multirow{2}{*}{\(\Delta\)supp} & \multirow{2}{*}{\(\mathcal{F}\)} & \multicolumn{3}{c}{**FDE**} & \multicolumn{3}{c}{**PE**} & \multicolumn{3}{c}{**PE**} & \multirow{2}{*}{\(\Delta\)**REF**} \\ \cline{2-2} \cline{5-10} & \(\Delta\)NOPOL & \(\Delta\)GSPOL & \(\Delta\)GSPOL & & & & & & & \\ \hline 1 & 0.15 & 0.01 & 0.17 & 0.16 & 0.01 & -0.03 & 0.17 & 0.14 & 0.14 \\ 2-2 & 0.04 & 0.02 & 0.07 & 0.01 & 0.05 & -0.08 & 0.07 & 0.04 & 0.03 \\ \hline \hline \end{tabular} \end{table} Table 2: Contributions from the different models to the total \(\mathcal{S}\)-shift \(\mathcal{S}\) in eV for different configurations of pFTAA in a 3 Å environment of water obtained from an MD simulation and subsequently calculated in a PE and FDE framework and different orders of polarization contributions. Snapshots marked with \({}^{*}\) incorporate pseudopotentials in the PE calculations. \begin{table} \begin{tabular}{c|c c c c c c c c c c} \hline \hline \multirow{2}{*}{\(\Delta\)supp} & \multirow{2}{*}{\(\mathcal{F}\)} & \multicolumn{3}{c}{**FDE**} & \multicolumn{3}{c}{**FE**} & \multirow{2}{*}{\(\Delta\)**REF**} \\ \cline{2-2} \cline{5-10} & & \(\Delta\)NOPOL & \(\Delta\)GSPOL & & & & & & & & \\ \hline 1 & 0.294 & 0.053 & 0.348 & 0.239 & 0.000 & 0.063 & -0.029 & 0.380 & 0.443 & 0.444 & 0.388 \\ 2-2 & 0.140 & 0.044 & 0.183 & 0.143 & 0.044 & 0.052 & -0.075 & 0.167 & 0.219 & 0.143 & -0.007 \\ \hline \hline \end{tabular} \end{table} Table 3: Contributions from the different models to the total \(\mathcal{F}\)-shifts for different configurations of pFTAA in a 3 Å environment of water obtained from an MD simulation and subsequently calculated in a PE and FDE framework and different orders of polarization contributions obtained with a aug-cc-pVDZ basis set. Snapshot marked with \({}^{*}\) incorporate pseudopotentials in the PE calculations models. 
As also seen for _p_NA, \(\Delta\Delta\)DPOL overcorrects the \(\mathcal{F}\)-shift (leading to a deviation of 0.05 from the reference), whereas introducing EEF effects (\(\Delta\)(DPOL+EEF)) again brings the value closer to the reference.

## 5 Summary and Conclusions

We have investigated the influence of different approximations in classical and quantum-based embedding schemes on the excitation energies and oscillator strengths for _p_NA and pFTAA in different sizes of water environments. Frozen-density embedding (FDE) and polarizable embedding (PE) schemes have been compared. To enable a one-to-one comparison of these two methods, we employed an FDE framework that complies to a large degree with the PE implementation in Dalton[129, 65, 113]. In particular, we performed the mutual polarization of subsystems within FDE in the PyADF scripting environment[79, 80] using Dalton[82] for all (TD-)DFT calculations. With this computational setup at hand, we performed a detailed analysis of the different contributions, _i.e._, static electrostatics (no polarization), ground-state polarization, differential polarization, and quantum-mechanical effects in the FDE and PE models to the solvent shift of _p_NA and pFTAA in an explicit water solvent. We compared the obtained excitation energies and oscillator strengths to supermolecular TD-DFT calculations. We find that FDE and PE perform similarly with the inclusion of static environmental densities and ground-state polarization, respectively. Since these two contributions dominate the solvatochromic (\(\mathcal{S}\)-)shift for both _p_NA and pFTAA, FDE and PE both achieve good agreement with the reference \(\mathcal{S}\)-shifts. This also holds when neglecting differential polarization effects. The effect of differential polarization on the \(\mathcal{F}\)-shifts seems more pronounced than that on the \(\mathcal{S}\)-shifts in standard PE. This effect on the \(\mathcal{F}\)-shifts is, however, reduced by the incorporation of external effective field effects, so that for a 4 Å environment of _p_NA, the average \(\mathcal{F}\)-shift is similarly well described with and without differential polarization. For individual snapshots, however, the effect of differential polarization can be sizeable, both with and without including external effective field effects. In these cases, external field effects improve the agreement with the supersystem reference. We could further show that the severe electron-spill-out issues preventing traditional PE calculations on the highly anionic pFTAA dye with sodium ions in close proximity could be largely reduced by atomic pseudopotentials on the sodium ions. All in all, we find a similar performance for FDE and PE on excitation energies as well as average oscillator strengths. Accurate oscillator strengths with PE, however, required the incorporation of external effective field effects. For the anionic pFTAA example, further effective core potentials on nearby cations were essential to avoid electron-spill-out effects.

## Author contributions

MJ: Methodology, Software, Validation, Formal Analysis, Investigation, Data Curation, Writing - Original Draft, Visualization; PR: Molecular Dynamics for _p_NA in water; EDH: Conceptualization, Validation, Formal Analysis, Writing - Review & Editing; CK: Conceptualization, Validation, Formal Analysis, Writing - Review & Editing, Supervision, Project Administration.
## Acknowledgments This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) through the Emmy Noether Young Group Leader Programme (project KO 5423/1-1). EDH thanks The Villum Foundation, Young Investigator Program (grant no. 29412), the Swedish Research Council (grant no. 2019-04205), and Independent Research Fund Denmark (grant no. 0252-00002B and grant no. 2064-00002B) for support.
2303.15443
GeoNet: Benchmarking Unsupervised Adaptation across Geographies
In recent years, several efforts have been aimed at improving the robustness of vision models to domains and environments unseen during training. An important practical problem pertains to models deployed in a new geography that is under-represented in the training dataset, posing a direct challenge to fair and inclusive computer vision. In this paper, we study the problem of geographic robustness and make three main contributions. First, we introduce a large-scale dataset GeoNet for geographic adaptation containing benchmarks across diverse tasks like scene recognition (GeoPlaces), image classification (GeoImNet) and universal adaptation (GeoUniDA). Second, we investigate the nature of distribution shifts typical to the problem of geographic adaptation and hypothesize that the major source of domain shifts arise from significant variations in scene context (context shift), object design (design shift) and label distribution (prior shift) across geographies. Third, we conduct an extensive evaluation of several state-of-the-art unsupervised domain adaptation algorithms and architectures on GeoNet, showing that they do not suffice for geographical adaptation, and that large-scale pre-training using large vision models also does not lead to geographic robustness. Our dataset is publicly available at https://tarun005.github.io/GeoNet.
Tarun Kalluri, Wangdong Xu, Manmohan Chandraker
2023-03-27T17:59:34Z
http://arxiv.org/abs/2303.15443v1
# GeoNet: Benchmarking Unsupervised Adaptation across Geographies

###### Abstract

In recent years, several efforts have been aimed at improving the robustness of vision models to domains and environments unseen during training. An important practical problem pertains to models deployed in a new geography that is under-represented in the training dataset, posing a direct challenge to fair and inclusive computer vision. In this paper, we study the problem of geographic robustness and make three main contributions. First, we introduce a large-scale dataset **GeoNet** for geographic adaptation containing benchmarks across diverse tasks like scene recognition (GeoPlaces), image classification (GeoImNet) and universal adaptation (GeoUniDA). Second, we investigate the nature of distribution shifts typical to the problem of geographic adaptation and hypothesize that the major source of domain shifts arise from significant variations in scene context (context shift), object design (design shift) and label distribution (prior shift) across geographies. Third, we conduct an extensive evaluation of several state-of-the-art unsupervised domain adaptation algorithms and architectures on GeoNet, showing that they do not suffice for geographical adaptation, and that large-scale pre-training using large vision models also does not lead to geographic robustness. Our dataset is publicly available at [https://tarun005.github.io/GeoNet](https://tarun005.github.io/GeoNet).

## 1 Introduction

In recent years, domain adaptation has emerged as an effective technique to alleviate dataset bias [80] during training and improve transferability of vision models to sparsely labeled target domains [27, 36, 40, 42, 49, 50, 68, 69, 87, 90]. While being greatly instrumental in driving research forward, methods and benchmark datasets developed for domain adaptation [56, 57, 64, 84] have been restricted to a narrow set of divergences between domains. However, the geographic origin of data remains a significant source of bias, attributable to several factors of variation between train and test data. Training on geographically biased datasets may cause a model to learn the idiosyncrasies of their geographies, preventing generalization to novel domains with significantly different geographic and demographic composition. Besides robustness, this may have deep impact towards fair and inclusive computer vision, as most modern benchmark datasets like ImageNet [63] and COCO [47] suffer from a significant US or UK-centric bias in data [24, 73], with poor representation of images from various other geographies like Asia.

Figure 1: **Summary of our contributions. (a): Training computer vision models on geographically biased datasets suffers from poor generalization to new geographies. We propose a new dataset called GeoNet to study this problem and take a closer look at the various types of domain shifts induced by geographic variations. (b) Prior unsupervised adaptation methods that efficiently handle other variations do not suffice for improving geographic transfer. (c) We highlight the limitations of modern convolutional and transformer architectures in addressing geographic bias, exemplified here by USA\(\rightarrow\)Asia transfer on GeoImNet.**

In this paper, we study the problem of geographic adaptation by introducing a new large-scale dataset called GeoNet, which constitutes three benchmarks - GeoPlaces for scene classification, GeoImNet for object recognition and GeoUniDA for universal domain adaptation.
These benchmarks contain images from USA and Asia, which are two distinct geographical domains separated by various cultural, economic, demographic and climatic factors. We additionally provide rich metadata associated with each image, such as GPS location, captions and hashtags, to facilitate algorithms that leverage multimodal supervision. GeoNet captures the multitude of novel challenges posed by varying image and label distributions across geographies. We analyze GeoNet through new sources of domain shift caused by geographic disparity, namely (i) _context shift_, where the appearance and composition of the background in images changes significantly across geographies, (ii) _design shift_, where the design and make of various objects changes across geographies, and (iii) _prior shift_, caused by different per-category distributions of images in both domains. We illustrate examples of performance drop caused by these factors in Fig. 0(a), where models trained on images from USA fail to classify common categories such as _running track_ and _mailbox_ due to context and design shifts, respectively. GeoNet is an order of magnitude larger than previous datasets for geographic adaptation [58, 61], allowing the training of modern deep domain adaptation methods. Importantly, it allows comparative analysis of new challenges posed by geographic shifts for algorithms developed on other popular adaptation benchmarks [56, 57, 64, 84]. Specifically, we evaluate the performance of several state-of-the-art unsupervised domain adaptation algorithms on GeoNet, and show their limitations in bridging domain gaps caused by geographic disparities. As illustrated in Fig. 0(b) for the case of DomainNet [56] vs. GeoNet, state-of-the-art models on DomainNet often lead to accuracies even worse than a source only baseline on GeoNet, resulting in negative _relative gain_ in accuracy (defined as the gain obtained by an adaptation method over a source-only model as a percentage of gap between a source-only model and the target-supervised upper bound). Furthermore, we also conduct a study of modern architectures like vision transformers and various pre-training strategies, to conclude that larger models with supervised and self-supervised pre-training offer improvements in accuracy, which however are not sufficient to address the domain gap (Fig. 0(c)). This highlights that the new challenges introduced by geographic bias such as context and design shift are relatively under-explored, where our dataset may motivate further research towards this important problem. In summary, our contribution towards geographic domain adaptation is four-fold: * A new large-scale dataset, GeoNet, with benchmarks for diverse tasks like scene classification and object recognition, with labeled images collected from geographically distant locations across hundreds of categories (Sec. 3). * Analysis of domain shifts in geographic adaptation, which may be more complex and subtle than style or appearance variations (Sec. 3.4). * Extensive benchmarking of unsupervised adaptation algorithms, highlighting their limitations in addressing geographic shifts (Sec. 4.2). * Demonstration that large-scale pretraining and recent advances like vision transformers do not alleviate these geographic disparities (Sec. 4.3). ## 2 Related Works **Domain Adaptation** Unsupervised domain adaptation enables training models on a labeled source domain along with unlabeled samples from a different target domain to improve the target domain accuracy. 
A large body of prior works aim to minimize some notion of divergence [4, 5] between the source and target distributions based on MMD [77, 78, 49, 51, 79] adversarial [9, 81, 82, 83, 93, 13, 70], generative [8, 36, 70], class-level [89, 55, 44, 52, 55, 41] or instance-level alignment [87, 74, 85] techniques. Clustering [23, 41, 42, 54, 39] and memory-augmentation approaches [40] have also been shown to be effective. However, most of these works are shown to improve performance using standard datasets such as Office-31 [64], visDA [57], OfficeHome [84] or DomainNet [56], where the distribution shifts typically arise from unimodal variations in style or appearance between source and target. While prior works also study semantic shift [6] and sub-population shift [10], we aim to address a more practical problem of geographic domain adaptation with more complex variations not covered by prior works. **Geographic Robustness** Many prior works study biases of CNNs towards 3D poses [1, 95], textures [29], styles [35], natural variations [7, 79, 60] and adversarial inputs [35], but robustness of computer vision towards shift induced by geography is relatively under-explored. While algorithms for bridging geographic domain gaps have been proposed in [86, 41, 18], they are restricted to road scenes with limited number of classes. A major hindrance has been the lack of suitable benchmark datasets for geographic adaptation, so several datasets have been recently proposed to address this issue [24, 58, 61, 72]. Datasets based on dollar street images [61] highlight the geographic differences induced by income disparities between various countries, Ego4D [30] contains egocentric videos with actions from various geogra plies, while researchers in [58] design an adaptation dataset with images from YFCC-100M [26] to analyze geographic shift. Adding to these efforts, we propose a much larger-scale dataset for geographic adaptation consisting of more diverse categories for place and object classification, across factors of variation beyond income disparities. ## 3 Dataset Creation and Analysis We present the overall summary of various datasets in our benchmark in Tab. 1, including the number of images and categories from each of our settings. In this paper, we broadly consider US and Asia as the two domains, as these two geographies have considerable separation in terms of underlying cultural, environmental and economical factors, while also providing the appropriate level of abstraction and leaving enough data from each domain to perform meaningful analysis. Although Asia is less homogeneous than USA with greater within-domain variance, our adopted geographical granularity follows from the amount of data we could retrieve from different countries using concepts in GeoNet, where we observed general paucity in images from many low-resource countries on Flickr. We also note that the domain shifts caused by geographic disparities are not restricted to these regions, and use images from Africa to show similar observations of domain gaps in the supplementary. ### GeoPlaces We propose GeoPlaces to study geographic adaptation in scene classification, which involves predicting the semantic category of the place or location present in the image [96]. In contrast to object classification, it is necessary to accurately identify and understand various interactions and relationships between the objects and people in the scene to predict the appropriate scene category. 
In spite of rapid progress in datasets [96, 88] and methods [14] for this task, robustness of scene classification networks to unseen domains in general, and across geographies in particular, has received little attention, for which we propose a suitable benchmark. **Selecting Concepts and Images** We use the 205 scene categories from Places-205 [96] to build GeoPlaces, as these semantic categories cover a wide range of real world scenes commonly encountered in most geographies. We build our GeoPlaces benchmark from the labeled Places-205 dataset [97]. We first collect the unique Flickr identifier (Flickr-id) associated with each image in the Places-205 dataset, and then use the publicly available Flickr API1 to extract the GPS location of the image. Since only a fraction of images belong to Flickr and a further smaller fraction \begin{table} \begin{tabular}{l l c c c} \hline \hline & Split & GeoPlaces & GeoImNet & GeoUniDA \\ \hline \multirow{2}{*}{USA} & Train & 178110 & 154908 & 100136 \\ & Test & 17234 & 16784 & 25034 \\ \hline \multirow{2}{*}{Asia} & Train & 187426 & 68722 & 33912 \\ & Test & 26923 & 9636 & 8478 \\ \hline \multirow{2}{*}{classes-shared classes-private} & 205 & 600 & 62 \\ & - & - & 138 \\ \hline \hline \end{tabular} \end{table} Table 1: **Summary of GeoNet** Number of images in train and test splits in each of our benchmarks. While GeoPlaces and GeoImNet are developed for unsupervised adaptation, GeoUniDA is developed for universal domain adaptation across geographies. Figure 2: **Class distribution in GeoNet** Percentage of images per class from USA and Asia domains shown for the GeoPlaces benchmark in (a) and GeoImNet benchmark in (b). The label distributions are long-tailed in both, and the dominant and tail classes are widely different across geographies in each setting indicating a strong prior shift. (Best viewed in color, zoom in to see the class names). contain valid geotags, we end up with around 400k images from 205 classes with associated geographical information. Of these, 190k images are from the US domain, and we use 178k of them for training and 17k for testing. In Asia domain however, we obtain only 27k images. To match the scale of images from both domains, we perform an additional step and manually collect more images as explained next. **Additional Data** Due to the inherent US-centric bias of photo-sharing websites like Flickr, a major portion of images are US-based. In order to collect more images from the Asia domain, we directly scrape images from Flickr using the 205 category names from Places-205 as the _seed concepts_. As many Asian users often post descriptions and tags for pictures in languages other than English, we use translations of these seed concepts in English to 6 Asian languages, namely {Hindi, Korean, Japanese, Chinese, Russian, Hebrew}, and use these along with the original concepts, as the augmented or _expanded concepts_. Then, we search Flickr for images which match the criterion that (i) they are geotagged in Asia, and (ii) the tags associated with the image match with exactly one of the categories in the expanded concept list (which we assign as the label). We collect around 190k images this way, and use this as the training set. Since images collected from web tend to be nosier than human labeled ones, we use the manually labeled 27k images from Places-205 as the test set for Asia domain to ensure robust benchmarking. 
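To make the geotag-lookup step of the collection pipeline above concrete, here is a minimal Python sketch (not the authors' released code) that queries Flickr's public REST API for a photo's GPS location and assigns it to a coarse USA/Asia domain; the API key, the example Flickr-id, and the bounding boxes are illustrative assumptions.

```
# Minimal sketch: look up the geotag of a Flickr photo by its Flickr-id and
# assign it to a coarse USA / Asia domain. FLICKR_API_KEY, the photo id, and
# the bounding boxes below are placeholder assumptions for illustration only.
import requests

FLICKR_API_KEY = "YOUR_API_KEY"  # assumption: a valid Flickr API key
REST_URL = "https://api.flickr.com/services/rest/"

def get_location(flickr_id: str):
    """Return (lat, lon) for a photo id, or None if it has no public geotag."""
    params = {
        "method": "flickr.photos.geo.getLocation",
        "api_key": FLICKR_API_KEY,
        "photo_id": flickr_id,
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get(REST_URL, params=params, timeout=10).json()
    if resp.get("stat") != "ok":
        return None
    loc = resp["photo"]["location"]
    return float(loc["latitude"]), float(loc["longitude"])

def coarse_domain(lat: float, lon: float) -> str:
    """Very rough bounding boxes, used only to illustrate the USA/Asia split."""
    if 24.0 <= lat <= 50.0 and -125.0 <= lon <= -66.0:
        return "USA"
    if -10.0 <= lat <= 55.0 and 60.0 <= lon <= 150.0:
        return "Asia"
    return "other"

if __name__ == "__main__":
    coords = get_location("1234567890")  # placeholder Flickr-id
    if coords is not None:
        print(coords, coarse_domain(*coords))
```

In practice, images without a public geotag are simply skipped, which is why only a fraction of the Places-205 images can be retained, as noted above.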
### GeoImNet We propose the GeoImNet benchmark to investigate the domain shift due to geographical disparities on object classification. Different from existing object-level datasets for domain adaptation [56, 57, 64, 84], GeoImNet provides domain shifts induced by geographic disparities. **Dataset curation** We collect images in the GeoImNet benchmark from the WebVision dataset [46], which itself is scraped from Flickr using queries generated from 5000 concepts in the Imagenet-5k dataset [22]. We then follow the same pipeline as explained above for the GeoPlaces benchmark, and identify the GPS coordinates of each image using its Flickr-id. **Concept Selection** Although the original dataset contains 5000 classes, many of these classes are indigenous to a particular geography. For example, _Bengal Tigers_ are found in the Indian subcontinent, and _Bald Eagle_ is a North-American bird. Since unsupervised domain adaptation typically demands matching label spaces across source and target, we select 600 categories out of the original 5000 with at least 20 images in each domain from each category. We then assign roughly 15% of images from each domain into the test set and use the remaining as the training images. Figure 4: **Design Shift in GeoNet We show examples illustrating the design shifts for the cases of _castle_ from GeoPlaces and _candle_ from GeoImNet. Note that differences in designs of castles as well as the variety of objects like candles found across geographies lead to design shifts between the domains.** Figure 3: **Context Shift in GeoNet A few examples showing the nature of context shifts across categories from GeoPlaces benchmark in (a), and GeoImNet benchmark in (b), arising due to a variety of geographical disparities. For example, outdoor scenes (shopfront, marketplace) reflect the demographics across geographies, indoor-scenes (living rooms, cafeteria) reflect cultural and economic variations and wildlife images reflect the habitat and climatic variations.** **Dataset filtering** WebVision is _web supervised_ [16], which does not guarantee object-centric images or clean labels. Therefore, we remove all the images from the dataset which have more than one tag that matches our selected concepts (the 600 chosen categories) to handle multi-labeled images. Furthermore, we manually quality-check all the test images and remove all the images with noisy labels. Finally, we perform de-duplication to remove images from the training set which are very similar to those in the test set. More insights into each step of our data collection and filtering process are provided in the supplementary material. The final label distribution for both US and Asia domains in both our benchmarks is shown in Fig. 2. ### GeoUniDA Universal Domain Adaptation (UniDA) [91] facilitates domain adaptation between source and target domains that have few private classes, in addition to shared classes which are common to both. While this is a realistic problem, prior works [91, 65, 45, 67] use benchmarks created from existing UDA datasets for evaluation. However, our proposed geographical adaptation setting gives us a unique opportunity to design benchmarks for UniDA such that the private categories from the source and the target are a natural reflection of the presence or absence of these categories in the respective geographical domains.
In order to select the shared and private categories for our Geo-UniDA benchmark, we first start with the 1000 categories in the original Imagenet-1k dataset [63], and select the top 200 categories in each of the USA and Asia domains that have the largest number of images from the WebVision dataset. Out of these, we use the 62 common classes as the shared categories, and the remaining 138 as the private classes in each domain. ### Analysis of Distribution Shifts We denote the source dataset using \(D_{s}\)=\(\{X_{s},Y_{s}\}\), and assume that \(X_{s}\)\(\sim\)\(P_{s}(x)\) and \((X_{s},Y_{s})\)\(\sim\)\(P_{s}(x,y)\) where \(P_{s}(x)\) and \(P_{s}(x,y)\) are the image marginal and image-label joint distribution respectively. Target dataset \(D_{t}=\{X_{t},Y_{t}\}\) and target distributions \(P_{t}(x)\) and \(P_{t}(x,y)\) are defined similarly, and the domain discrepancy assumption states that \(P_{s}(x,y)\neq P_{t}(x,y)\). In order to formulate domain shift across geographies, we define \(f_{x}\) as the part of the image referring to the foreground objects (corresponding to the salient objects in a scene) and \(b_{x}\) to be the rest of the image corresponding to the background regions (corresponding to the surrounding regions or context). For example, for the task of classifying _living room_ in Fig. 2(a) from GeoPlaces, common objects like sofa and table are foreground, while floor, roof and walls are backgrounds. We make a simplifying assumption that an image is completely explainable using its foreground and background and replace the class-conditional distribution of the images \(P(x|y)\) with the joint class-conditional \(P(b_{x},f_{x}|y)\). Further, we also assume that given a class label, the background is conditionally independent of the foreground. Then, \[P(x,y) = P(x|y)\cdot P(y)\] \[= P(b_{x},f_{x}|y)\cdot P(y)\] \[= P(b_{x}|y)\cdot P(f_{x}|b_{x},y)\cdot P(y)\] \[\implies P(x,y) = \underbrace{P(b_{x}|y)}_{\text{context}}\cdot\underbrace{P(f_{x}|y)}_{\text{design}}\cdot\underbrace{P(y)}_{\text{prior}} \tag{1}\] We define the class-conditional background distribution \(P(b_{x}|y)\) as context, class-conditional object distribution \(P(f_{x}|y)\) as design and the label distribution \(P(y)\) as prior. Note that the standard covariate shift assumption [4] assumes uniform domain discrepancy across all the images (\(P_{s}(x)\)\(\neq\)\(P_{t}(x)\)), which does not hold for geographic adaptation due to the diverse sources of variation. We analyze each of these from a geographic adaptation perspective next. **Context Shift** We define context shift to be the changes in the context around an object or scene given by \(P_{s}(b_{x}|y)\neq P_{t}(b_{x}|y)\). Deep learning models are generally sensitive to object contexts and backgrounds, and learn spurious correlations that impede their ability to recognize objects and scenes in novel contexts [19, 20, 62, 75]. In geographic adaptation, context shift can be caused by differences in cultural or economic factors across geographies, and a few examples illustrating context shift from GeoPlaces and GeoImNet are shown in Fig. 3. While prior works already introduce context shift for domain adaptation [58], a key difference lies in their modeling assumption that the context is irrelevant while training, while in our case context might play a key role in improving scene classification on GeoPlaces.
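Of the three factors in Eq. (1), the prior term \(P(y)\) can be estimated directly from per-class image counts. The following minimal Python sketch (with made-up counts, not the actual GeoNet statistics) quantifies prior shift between two domains as the total-variation distance between their label distributions.

```
# Illustrative sketch of quantifying prior shift P_s(y) != P_t(y) from label
# counts; the class names and counts below are invented for demonstration.
from collections import Counter

usa_counts = Counter({"baseball_stadium": 900, "railway_station": 150, "marketplace": 100})
asia_counts = Counter({"baseball_stadium": 80, "railway_station": 700, "marketplace": 500})

def label_distribution(counts):
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def total_variation(p, q):
    classes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in classes)

p_s = label_distribution(usa_counts)
p_t = label_distribution(asia_counts)
print(f"prior shift (TV distance): {total_variation(p_s, p_t):.3f}")
```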
**Design Shift** We define "design" shift as the change in object structure, shape and appearance, where the foreground objects belonging to the same semantic category look different across geographies, given by \(P_{s}(f_{x}|y)\neq P_{t}(f_{x}|y)\). Few examples are shown in Fig. 4, where categories like _castle_ from GeoPlaces and _candle_ from GeoImNet datasets look widely different due to high intra-class variance, although they belong to the same semantic category. It is important to note that context and design shifts might also occur within a domain or within a geography. However, it is easier to account for intra-domain variations on labeled source datasets than ensuring robustness to new and unlabeled geographies. **Prior Shift** The label distributions across the domains in our benchmarks widely differ due to natural prominence or rarity of the classes according to the geography, as shown in Fig. 2, where the head classes of one domain might be tail classes in another. This leads to a prior shift where \(P_{s}(y)\neq P_{t}(y)\). For example, categories like _railway station_, _outdoor markets_, _monasteries_ are common in Asia while _baseball stadiums_ are more common in USA. Prior works examining prior shift or label shift across domains [28, 2, 48, 92, 3] generally assume that the class conditionals remain the same, which is not true in the case of geographic adaptation due to context and design shifts as illustrated above. ## 4 Experiments ### Domain Shifts in Proposed Datasets We illustrate the severity of domain differences across geographies using the drop in accuracy caused by cross-geography transfer in Tab. 2. Specifically, we train a Resnet-50 [34] model using images only from one domain, and compute the accuracies on both within-domain and cross-domain test sets. Since a lot of categories in GeoNet are close (example, _train station_ vs. _subway station_), we use both top-1 and top-5 accuracies to report the performance. We observe a significant drop in accuracy caused by direct transfer of models across domains which can be attributed to the geographic bias in the training data. For example, a model trained on GeoPlaces benchmark on US images gives 56.35% Top-1 accuracy on US images, but only 36.27% on images from Asia with a notable drop of 20%. On the GeoImNet benchmark, within-domain testing on images collected from USA gives 56.35% top-1 accuracy while cross-domain testing on Asia images gives only 36.98% with a drop of 19.37%. The 36.98% accuracy is also much inferior to the supervised accuracy on the Asia domain (60.37%) which can be considered as the target upper bound. and 15.4% on GeoImNet, compared to 20.08% and 19.37% on the original datasets, showing that non-trivial accuracy drops caused by context and design shifts still exist even after accounting for label imbalance between the domains. ### Benchmarking Domain Adaptation We study the effectiveness of prior unsupervised adaptation algorithms in bridging novel notions of domain gaps like context shift and design shift on GeoNet. We review various standard as well as current state-of-the-art domain adaptation methods to examine their geographical robustness. **Architecture and training details** We follow the standard protocol established in prior works [40, 50, 69] and use an ImageNet pre-trained Resnet-50 [34] as the feature extractor backbone and a randomly initialized classifier layer. 
We use a batch size of 32 and SGD with a learning rate of 0.01 for the classifier head and 0.001 for the already pretrained backbone. We report the top-1 and top-5 accuracy numbers using the test splits from each benchmark. We perform comparisons between traditional adversarial methods (DANN [27], CDAN [50]), class-aware adaptation methods (MCC [38], MDD [94]), non-adversarial methods (SAFN [90], MCD [69]) as well as recent state-of-the-art (ToAlign [87], MemSAC [40]). We train prior works using their publicly available code and adopt all hyper-parameters as recommended in the respective papers. **Existing UDA methods do not suffice on GeoNet** We show the Top-1 and Top-5 accuracies of all the transfer settings from GeoNet in Tab. 4. A key observation is that most of the domain adaptation approaches are no better, or sometimes even worse, than the baseline model trained only using source domain data, indicating their limitations for geographic domain adaptation. For example, on GeoPlaces, training using data from USA achieves a top-1 accuracy of 36.27% on test images from Asia, while the best adaptation method (MemSAC) obtains a lower accuracy of 34.7%, indicating negative transfer. Likewise, on GeoImNet, a USA-trained source model achieves 36.98% on test images from Asia, which is comparable to the best adaptation accuracy of 36.71%. To further illustrate this, we define relative accuracy gain as the improvement in accuracy obtained by a method over a source-only model as a percentage of the gap between a source-only model and the target-supervised upper bound (which is 100% if the method achieves the target supervised upper bound). From Fig. 0(b), it is notable that the same adaptation methods that yield significantly high relative accuracy gains on DomainNet [56] yield negative relative accuracy gains on GeoNet, highlighting the unique nature of distribution shifts in real-world settings like geographic adaptation that challenge existing methods. These observations also suggest that future research should focus on context-aware and object-centric representations in addition to domain invariant features to improve cross-domain transfer amidst context and design shifts. **Universal domain adaptation on Geo-UniDA** We run SOTA universal domain adaptation methods (You et al. [91], DANCE [66] and OVANet [67]) on the Geo-UniDA benchmark of GeoNet. Following prior works [67], we adopt the H-score metric, which is the harmonic mean of the closed-set and open-set accuracies, giving equal importance to closed-set transfer and open-set accuracy. In Tab.
5, we \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{GeoPlaces} & \multicolumn{4}{c}{GeoImNet} \\ \cline{2-9} & USA \(\rightarrow\) Asia & Asia \(\rightarrow\) USA & USA & USA \(\rightarrow\) Asia & Asia \(\rightarrow\) USA \\ \hline \hline \multirow{3}{*}{Source Only} & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 \\ \cline{2-9} & **36.27** & **63.27** & **21.03** & **44.81** & **36.98** & **63.43** & **40.43** & **64.6** \\ DANN [27] & 29.58 & 55.23 & 16.59 & 35.32 & 32.88 & 57.77 & 38.42 & 62.90 \\ CDAN [50] & 30.48 & 55.94 & 17.01 & 36.26 & 35.94 & 60.21 & 39.88 & 63.74 \\ MCC [38] & 30.09 & 55.85 & 17.17 & 36.85 & 35.71 & 60.48 & 39.86 & 64.00 \\ SAFN [90] & 32.50 & 57.93 & 14.34 & 35.68 & 32.40 & 58.43 & 36.26 & 61.58 \\ MDD [94] & 34.18 & 59.10 & 17.81 & 36.44 & 36.26 & 62.13 & 40.15 & 63.91 \\ MCD [69] & 33.49 & 59.41 & 16.57 & 34.74 & 25.60 & 48.45 & 36.69 & 60.68 \\ ToAlign [87] & 29.86 & 56.16 & 16.32 & 33.58 & 32.13 & 58.64 & 37.98 & 63.17 \\ MemSAC [40] & 34.68 & 60.52 & 15.75 & 32.83 & 36.71 & 63.16 & 40.34 & 64.40 \\ \hline \hline \multicolumn{9}{l}{Tgt. Supervised} & 49.63 & 78.45 & 56.35 & 85.15 & 60.37 & 80.22 & 56.35 & 77.95 \\ \hline \hline \end{tabular} \end{table} Table 4: **UDA on GeoNet** Top-1 and Top-5 accuracies of various unsupervised adaptation methods on GeoNet. Most of the methods fail to sufficiently handle cross-geography transfer on both GeoPlaces and GeoImNet benchmarks and often give lower accuracies even compared to a baseline model trained only using source data calling attention to the need for novel methods that can handle domain shifts beyond style and appearance. \begin{table} \begin{tabular}{l c c c|c} Method & closed-set & open-set & H-Score & Target Sup. \\ \hline UniDA [91] & 27.64 & 43.93 & 33.93 & \\ DANCE [66] & 38.54 & 78.73 & 51.75 & 70.70\% \\ OVANet [67] & 36.54 & 66.89 & 47.26 & \\ \hline \hline \end{tabular} \end{table} Table 5: **Universal domain adaptation methods on GeoUniDA.**_closed-set_ and _open-set_ refer to the closed set and open set accuracies, and _H-Score_ is the harmonic-mean of the two. Note the significant gap that still exists with target supervised accuracy on closed-set labels with the best adaptation method DANCE [66]. show that DANCE [66] outperforms both You et.al. [91] and OVANet [67] on the Geo-UniDA benchmark. We also show that a significant gap still exists between target supervised accuracy when trained using supervision (70.7%) and best adaptation accuracy (38.5%) on our benchmark, highlighting the limitations of existing methods to efficiently address universal adaptation in a geographic context. ### Large-scale pre-training and architectures It is common to use large scale self-supervised [11, 12, 15, 17, 32, 33] and weakly-supervised [37, 53, 76] pre-trained models as starting points in various downstream applications. While recent works explored role of pre-training on domain robustness [43], we are interested in the extent to which large scale pre-training effectively preserved robustness when fine-tuned on geographically under-represented datasets. We investigate the performance of a variety of methods on GeoNet in terms of backbone architectures, pre-training strategies and supervision. **Experimental setup** Our backbone architectures include Resnet50 [34] as well as the small (ViT-S), base (ViT-B) and large (ViT-L) vision transformers [25]. 
In terms of supervision, in addition to the standard supervised pre-training on ImageNet-1k, we also consider self-supervised methods MoCo-V3 [17], SwAV [11], DINO [12], MAE [32] trained on ImageNet-1k, the weakly supervised SWAG [76] trained on 3.6B uncurated instagram images and CLIP [59] trained on 400M image-language pairs [71]. We denote {Backbone-Supervision-Data} for different model choices (for example, Resnet50-sup-IN1k indicates a Resnet50 pre-trained on supervised data from ImageNet-1k). For evaluating geographic robustness of these models, we first take the pre-trained model and fine-tune it on training data from a "source" geography, then evaluate the performance on test data from the "target" geography. We show the results using USA as the source and Asia as the target from the GeoPlaces benchmark in Fig. 7, and GeoImNet benchmark in Fig. 1c. For reference, we also report accuracy after fine-tuning on labeled data from the target geography for each {Backbone-Supervision-Data} pair (denoted as target-supervised), which serves as an upper bound for the transfer performance. **Large-scale pretraining is not geographically robust** From Fig. 7, we make a few observations. Firstly, comparison between Resnet50 and ViT-S which have roughly the same number of parameters suggests the superiority of the vision transformer architectures over CNNs. For example, ViT-S-sup-IN1k is better than Resnet50-sup-IN1k, and ViT-S-moco-IN1k is better than Resnet50-moco-IN1k, indicating that global reasoning using self-attention layers in vision transformers benefits context-dependent tasks like GeoPlaces. Next, comparing different pre-training strategies, we observe that MoCo gives best accuracy on ViT-S and ViT-B, while supervised pre-training outperforms other approaches on large models like ViT-L. However, the gap between target supervised accuracy and the best adaptation accuracy achieved using either Resnet50 or any of the vision transformers is still high, highlighting the need for better transfer strategies. In terms of data, weakly-supervised pre-training using billion-scale dataset IG3.6B (ViT-B-swag-3B) shows significant improvements over self-supervised training methods like MAE (ViT-B-mae-IN1k) and DINO (ViT-B-dino-IN1k). But despite training on massive-scale data, ViT-L-swag-3B and ViT-L-clip-400M are still inferior to the target supervised accuracies, revealing the limitations of current pre-training strategies towards robust cross-geography transfer after fine-tuning. While the success of large-scale pre-training strategies are well-documented on popular datasets like ImageNet, our results indicate that similar benefits might not be observed when application domains significantly differ from pre-training or fine-tuning datasets [21]. ## 5 Conclusion We introduce a new dataset called GeoNet for the problem of geographic adaptation with benchmarks covering the tasks of scene and object classification. In contrast to existing datasets for domain adaptation [56, 57, 64, 84], our dataset with images collected from different locations contains domain shifts captured by natural variations due to geographies, cultures and weather conditions from across the world, which is a novel and understudied direction in domain adaptation. Through GeoNet, we analyze the sources of domain shift caused by changes in geographies such as context and design shift. 
We conduct extensive benchmarking on GeoNet and highlight the limitations of current domain adaptation methods as well as large-scale pretraining methods towards geographical robustness. Finally, in spite of geographical diversity in GeoNet, we note a possible limitation of indirect bias towards USA as the user-base on photo-sharing sites like Flickr is dominated by the US. Creating datasets that are a more natural reflection of cultures and trends from diverse geographies and devising learning algorithms robust to those variations is an exciting proposition for the future. Figure 7: We show that most architectures and pre-training strategies exhibit significant cross-domain drops when fine-tuned on geographically biased datasets. Shown for USA\(\rightarrow\)Asia on GeoPlaces; refer to Fig. 1c for the plot on GeoImNet and to the supplementary material for other transfer settings. **Acknowledgements** We thank NSF CAREER 1751365, Google AI Award for Inclusion Research and National Research Platform for hardware access.
2309.00773
Deep Reinforcement Learning in Surgical Robotics: Enhancing the Automation Level
Surgical robotics is a rapidly evolving field that is transforming the landscape of surgeries. Surgical robots have been shown to enhance precision, minimize invasiveness, and alleviate surgeon fatigue. One promising area of research in surgical robotics is the use of reinforcement learning to enhance the automation level. Reinforcement learning is a type of machine learning that involves training an agent to make decisions based on rewards and punishments. This literature review aims to comprehensively analyze existing research on reinforcement learning in surgical robotics. The review identified various applications of reinforcement learning in surgical robotics, including pre-operative, intra-body, and percutaneous procedures, listed the typical studies, and compared their methodologies and results. The findings show that reinforcement learning has great potential to improve the autonomy of surgical robots. Reinforcement learning can teach robots to perform complex surgical tasks, such as suturing and tissue manipulation. It can also improve the accuracy and precision of surgical robots, making them more effective at performing surgeries.
Cheng Qian, Hongliang Ren
2023-09-02T01:04:31Z
http://arxiv.org/abs/2309.00773v1
# Deep Reinforcement Learning in Surgical Robotics: Enhancing the Automation Level Cheng Qian* and Hongliang Ren* #Department of Electrical Engineering and Information Technology, Technical University of Munich, Germany *Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong Abstract: Surgical robotics is a rapidly evolving field that is transforming the landscape of surgeries. Surgical robots have been shown to enhance precision, minimize invasiveness, and alleviate surgeon fatigue. One promising area of research in surgical robotics is the use of reinforcement learning to enhance the automation level. Reinforcement learning is a type of machine learning that involves training an agent to make decisions based on rewards. This literature review aims to comprehensively analyze existing research on reinforcement learning in surgical robotics. The review identified various applications of reinforcement learning in surgical robotics, including pre-operative, intra-body, and percutaneous procedures, listed the typical studies, and compared their methodologies and results. The findings show that reinforcement learning has great potential to improve the autonomy of surgical robots. Reinforcement learning can teach robots to perform complex surgical tasks, such as suturing and tissue manipulation. It can also improve the accuracy and precision of surgical robots, making them more effective at performing surgeries. Key Words: Surgical robotics, reinforcement learning, surgical autonomy, tissue manipulation, percutaneous procedures, suturing ## I Introduction The use of surgical robots has significantly increased in the last decade, driven by the need for precision, safety, and efficiency in surgeries [1]. Since the appearance of the da Vinci robotic-assisted surgical system in 2000 [3], surgical robots have proven to help perform minimally
2310.12214
InferDPT: Privacy-Preserving Inference for Black-box Large Language Model
Large language models (LLMs), like ChatGPT, have greatly simplified text generation tasks. However, they have also raised concerns about privacy risks such as data leakage and unauthorized data collection. Existing solutions for privacy-preserving inference face practical challenges related to computation time and communication costs. In this paper, we propose InferDPT, the first practical framework for the privacy-preserving Inference of black-box LLMs, implementing Differential Privacy in Text generation. InferDPT comprises two key modules: the "perturbation module" utilizes the exponential mechanism to generate a perturbed prompt, facilitating privacy-preserving inference with black-box LLMs, and the "extraction module", inspired by knowledge distillation and retrieval-augmented generation, extracts coherent and consistent text from the perturbed generation result, ensuring successful text generation completion. To address privacy concerns related to previous exponential mechanisms' susceptibility to embedding revision attacks, we introduce RANTEXT, a novel differential privacy mechanism integrated into the perturbation module of InferDPT, which introduces the concept of "RANdom adjacency" for TEXT perturbation within the prompt. Experimental results across three datasets demonstrate that the text generation quality of InferDPT is comparable to that of non-private GPT-4, and RANTEXT surpasses existing state-of-the-art mechanisms, namely, SANTEXT+ and CUSTEXT+ in the trade-off between privacy and utility. Even with an privacy parameter epsilon value of 6.0, RANTEXT achieves an average privacy protection rate exceeding 90% against embedding revision attacks, which is 0.58 times higher than that of SANTEXT+ and 3.35 times higher than that of CUSTEXT+.
Meng Tong, Kejiang Chen, Jie Zhang, Yuang Qi, Weiming Zhang, Nenghai Yu, Tianwei Zhang, Zhikun Zhang
2023-10-18T18:00:11Z
http://arxiv.org/abs/2310.12214v6
# PrivInfer: Privacy-Preserving Inference for Black-box Large Language Model ###### Abstract Large language models (LLMs), such as ChatGPT, have simplified text generation tasks, yet their inherent privacy risks are increasingly garnering attention. Existing solutions for privacy-preserving inference face significant challenges in practical deployment and implementation. In this paper, we propose PrivInfer, the first practical framework for privacy-preserving inference. It comprises two modules specifically designed for black-box LLMs in text generation. The perturbation module, employing differential privacy, generates perturbed prompts, thus enabling privacy-preserving inference with black-box LLMs. The restoration module extracts coherent and meaningful responses from the obtained perturbed results, thus ensuring the accomplishment of the text generation tasks. Additionally, to enhance privacy and utility further, we develop RANTEXT, a novel differential privacy mechanism integrated into the perturbation module of PrivInfer. This mechanism is specifically tailored for LLMs and utilizes random adjacency in text perturbations. Experimental results indicate that PrivInfer is comparable to GPT-4 in text generation quality, and RANTEXT outperforms the current leading scheme in privacy protection, even under its adaptive attack, our proposed GPT inference attack. ## 1 Introduction The rapid advancement of large language models has recently garnered widespread attention from the global academic and industrial communities. Among them, ChatGPT, one of the representative large language models, has attracted over 100 million monthly active users within just two months of its launch, making it one of the fastest-growing consumer applications in history [1]. The introduction of ChatGPT has significantly facilitated people's daily work and life. Through ChatGPT, users can send data and instructions to remote servers, thereby completing various daily work and writing tasks more efficiently, from drafting upcoming academic papers and documenting work to preparing information for products slated for market launch. However, with widespread application and increasing popularity, a substantial amount of user privacy information has been unwittingly leaked during usage, leading to irreversible damage to individuals, teams, and even companies. For instance, an employee of Samsung inadvertently leaked the company's meeting records and critical data of unreleased products while using a large language model [7]; previously, the Italian government also confirmed instances of user privacy infringement by ChatGPT, making Italy the first country to ban the use of ChatGPT [8]. In the field of natural language processing (NLP), existing work [9, 10, 11] has investigated the privacy issues of language models. Figure 1: Overview of PrivInfer. It consists of a perturbation module and a restoration module. As shown in Table 1, Yue et al. [2] and Chen et al. [3] leveraged differential privacy techniques [12] to sequentially replace tokens in the text with semantically similar tokens from a fixed word adjacency list. These approaches are used for the privacy-preserving training of models. However, these methods are not suitable for the model inference process of text generation tasks due to the inherent bias introduced by text perturbation.
Moreover, these approaches are constrained by fixed features of their word adjacency, and our experimental results show that even with their privacy parameter \(\epsilon\) set to 1.0, privacy adversaries can still recover over one-third of the original text. Hao et al. [13] and Chen et al. [14] applied homomorphic encryption techniques to transformer architecture models for encrypted data inference. Theoretically, they can be used for privacy-preserving text generation tasks. However, homomorphic encryption incurs significant time and communication overheads, as stated by Hou et al. [4]: they were the first to implement homomorphic encryption inference on GPT-2 architecture, but inferring a single token requires 24 minutes and 93 GB of bandwidth, rendering existing schemes impractical. Zhou et al. [5] and Du et al. [6] attempted to split a complete model inference process into local and remote steps, adding noise during data transmission for perturbation. Notably, these methods are not primarily designed for text generation tasks. It is important to note that previous work [15, 16, 17] has shown that privacy adversaries can recover the original text from their perturbed data. Additionally, due to the protection of intellectual property and commercial value, closed-source model owners such as OpenAI [18] and Claude [19] do not disclose information regarding their model architectures, rendering these methods unsuitable for the black-box scenario. To tackle the privacy protection challenges associated with text generation tasks in black-box scenarios, we introduce PrivInfer. Inspired by the property of article writing, which states that texts on correlated topics share common phrases, it consists of a perturbation module and a restoration module. The key part for protecting the privacy of PrivInfer is the perturbation module based on differential privacy. In the perturbation module, we perturb the raw prompt words using a differential privacy algorithm, establishing multiple perturbed prompts to achieve privacy protection. We then submit the perturbed prompts to the remote large language model for inference, obtaining multiple perturbed generated text results; eventually, in the restoration module, the text generation result corresponding to the raw prompt is inferred by feeding numerous perturbed generated text results to a local language model. To enhance the privacy of the perturbation module in PrivInfer, we design RANTEXT, a novel differential privacy mechanism based on random adjacency tailored for large language models. RANTEXT, abiding by differential privacy under the exponential mechanism, redefines word adjacency for each word during each invocation through the Laplace differential privacy mechanism, thereby altering the raw prompt on the token vocabulary of the large language model. While preserving the utility of text generation results, RANTEXT outperforms the current leading schemes, CUSTEXT+ [3] and SANTEXT+ [2], regarding security and privacy protection. We conducted extensive experiments on datasets for open-ended text generation tasks to validate the utility and security of our scheme. Since the previous attack strategies for differential privacy did not work well for RANTEXT, we propose a novel differential privacy attack method--the GPT inference attack to conduct in-depth testing on RANTEXT. The experimental results indicate that the GPT inference attack exhibits a higher success rate than the mask token inference attack [2]. 
Although the GPT inference attack worked, RANTEXT still maintains the highest privacy protection capability compared to the baseline. Our primary contributions can be summarized as follows: * We propose PrivInfer, the first practical privacy protection framework for LLM text generation tasks in black-box scenarios. PrivInfer employs differential privacy methods to generate perturbed prompts for remote LLM inference and extracts the meaningful response from the remote perturbed results. This scheme offers a new perspective for privacy research in black-box scenarios of LLM text generation tasks. * We introduce RANTEXT, a novel differential privacy mechanism within the perturbation module of PrivInfer, specifically designed for LLM text generation tasks. It is based on random adjacency. Adhering to differential privacy standards under the exponential mechanism and employing the Laplace differential privacy mechanism, RANTEXT dynamically ascertains random adjacency for tokens within the LLM token vocabulary and substitutes them accordingly. Through this approach, RANTEXT obfuscates the original semantic information while ensuring the utility of the generated text, courtesy of the vector proximity between pre- and post-replacement tokens in the LLM tokenization vocabulary. * We design the GPT inference attack, a novel attack method against text differential privacy techniques. Experimental validation shows that the success rate of the GPT inference attack surpasses that of the existing mask token inference attack, offering a new evaluation angle for the security of text differential privacy techniques. * We perform a series of experiments, including testing the utility of our solutions on three datasets of open-ended text generation tasks and validating their privacy in three different attack experiments. The results show that the PrivInfer scheme provides privacy protection while offering a text generation quality comparable to GPT-4's original. The text quality generated by RANTEXT is comparable to the best baseline solution, yet its privacy protection level significantly outperforms existing baseline solutions. Specifically, when \(\epsilon=10.0\) and top\({}_{K}=250\) are set for the embedding inversion attack, RANTEXT offers a privacy protection rate exceeding 90%. This performance is 19.89 times and 1.76 times superior to existing baseline solutions CUSTEXT+ and SANTEXT+, respectively. \begin{table} \begin{tabular}{c|c c c c} \hline \hline **Method** & **Text Generation** & **Black Box** & **Inference** & **Low Cost** \\ \hline SANTEXT+ [2] & & ✓ & & ✓ \\ CUSTEXT+ [3] & & ✓ & & ✓ \\ CipherGPT [4] & ✓ & & ✓ & \\ TextObfuscator [5] & & & ✓ & ✓ \\ DP-Forward [6] & & & ✓ & ✓ \\ PrivInfer+RANTEXT & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Qualitative comparison among different methods in terms of their support for specific features. A check mark indicates that a feature is supported. ## 2 Preliminaries In the following section, we introduce the foundational concepts vital to comprehending the content of this manuscript. For ease of reference, Table 2 lists the frequently used notations in this paper. ### Inference Service Assume that a user \(U\) aims to invoke the inference function \(\textit{Infer}:P\to G\) of a remote large language model \(S\) to fulfill their text generation requirements, where \(G\) denotes the text generated by \(S\). Here, \(P=I\parallel D\) represents the user-provided prompt.
The term \(I\) stands for the fundamental writing instruction, while document \(D=\langle x_{i}\rangle_{i=1}^{L}\) is composed of a sequence of \(L\) tokens \(x_{i}\), each belonging to the vocabulary \(V\) of size \(|V|\). In this scenario, we treat \(S\) as a commercial black-box model. To preserve its commercial value, \(S\) does not publicly disclose its internal architecture or parameters but only exposes its token vocabulary \(V\) to \(U\) for billing verification. ### Objectives and Capabilities of Users **Objectives.** User \(U\) has dual aims: first, to accomplish text generation tasks by invoking the inference function \(\textit{Infer}:P\to G\) of \(S\); second, to ensure the confidentiality of their document \(D\). **Capabilities.** To fulfill these dual objectives, \(U\) employs a randomized mechanism of differential privacy \(M:X\to Y\) to apply \(N\) perturbations to each token \(x_{i}\in V\) within the document. This results in perturbed tokens \(y_{i,j}=M(x_{i})\), \(y_{i,j}\in V\), for \(j=1,2,3,\ldots,N\). Consequently, \(U\) obtains multiple perturbed documents \(D_{j}{}^{\prime}=\langle y_{i,j}\rangle_{i=1}^{L}\), where \(j=1,2,3,\ldots,N\). To perform text generation while preserving privacy, \(U\) will invoke \(S\)'s inference function \(\textit{Infer}:P\to G\)\(N\) times, submitting a perturbed prompt \(P_{j}{}^{\prime}=I\parallel D_{j}{}^{\prime}\) and receiving perturbed generated text \(G_{j}{}^{\prime}=\langle g_{i}{}^{\prime}\rangle_{i=1}^{K}=\textit{Infer}(P_{ j}{}^{\prime})\) in each iteration, where \(j=1,2,3,\ldots,N\). ### Objectives and Capabilities of Server **Objectives.** While \(S\) is committed to executing text generation tasks faithfully, it harbors curiosity about \(U\)'s original document \(D\) and aspires to glean as much information as possible from the perturbed documents \(D_{j}{}^{\prime}\), where \(j=1,2,3,...,N\). **Capabilities.**\(S\) employs privacy invasion techniques to attempt the reconstruction of the original document \(D\) from its perturbed versions \(D_{j}{}^{\prime}\), where \(j=1,2,3,...,N\). ### Defining Differential Privacy Before formally introducing our scheme, we briefly revisit key concepts, including \(\epsilon\)-Differential Privacy (\(\epsilon\)-DP), Exponential Mechanism (EM), and Laplace Mechanism (LM). \begin{table} \begin{tabular}{l l} \hline \hline **Notation** & **Description** \\ \hline \(U\) & User \\ \(S\) & Server of LLM \\ \(D\) & Raw Document \\ \(D^{\prime}\) & Perturbed Document of \(D\) \\ \(D_{j}{}^{\prime}\) & \(j_{th}\) Perturbed Document of \(D\) \\ \(G\) & Text Generation of LLM \\ \(G^{\prime}\) & Perturbed Text Generation of LLM \\ \(G_{j}{}^{\prime}\) & \(j_{th}\) Perturbed Text Generation of LLM \\ \(I\) & Instruction for Writing \\ \(I^{\prime}\) & Instruction for Text Restoration \\ \(V\) & Token Vocabulary of LLM \\ \(M\) & Random Mechanism of Differential Privacy \\ \(u\) & Scoring Function of Differential Privacy \\ \(C_{W}\) & Adjacency of Token(or Word) \\ \(C_{E}\) & Adjacent Embeddings \\ _Infer_ & Inference Function of remote LLM \\ \hline \hline \end{tabular} \end{table} Table 2: Glossary of Frequent Notations Used in the Paper. 
**Definition 1** (\(\varepsilon\)-differential privacy [12]).: _In \(\varepsilon\)-DP, given a privacy parameter \(\varepsilon\geq 0\), a randomized mechanism \(M\) is \(\varepsilon\)-DP compliant if it satisfies the following condition for all adjacent inputs \(x,x^{\prime}\in X\) and every possible output \(y\in Y\):_ \[\frac{Pr[M(x)=y]}{Pr[M(x^{\prime})=y]}\leq e^{\varepsilon}. \tag{1}\] From the mathematical definition of differential privacy, it is evident that a smaller value of \(\varepsilon\) provides higher privacy protection, albeit at the cost of reduced data utility. Interpreting the meaning conveyed by the formula, \(\varepsilon\)-DP implies that given an output result \(y\), when \(\varepsilon\) is sufficiently small, an adversary with unlimited capabilities is unable to discern the probability distribution of two neighboring input sets \(x\). It is important to note that differential privacy is a robust privacy protection concept. It equally processes all input elements. Additionally, a critical definition here is that of neighboring inputs. In previous NLP research [2, 3, 20], most researchers have posited that any pair of inputs sharing the same output set \(Y\) are adjacent. We observe that such a definition leads to a challenging trade-off between text utility and privacy. In this paper, we will redefine the adjacency in differential privacy from the perspective of probabilistic sampling in subsequent sections. **Definition 2** (exponential mechanism [21] ).: _For a given scoring function \(u:X\times Y\rightarrow\mathbb{R}\), a randomized mechanism \(M(X,u,Y)\) is \(\varepsilon\)-DP compliant if it satisfies:_ \[Pr[y|x]\propto\exp\left(\frac{\varepsilon\cdot u(x,y)}{2\Delta u}\right), \tag{2}\] _where the sensitivity \(\Delta u\) is defined as:_ \[\Delta u=\max_{x,x^{\prime}\in X,y\in Y}|u(x,y)-u(x^{\prime},y)|. \tag{3}\] Typically, we can scale the upper bound of \(u\) to scale \(\Delta u\) to a specific real number. Due to the discrete nature of text, the exponential mechanism (EM) is often employed in the NLP domain for implementing differential privacy perturbations on text. To ensure text utility, when employing the exponential mechanism for differential privacy perturbations on text, it is essential to ensure that the variations in the scoring function are consistent with the semantic similarity between the input \(x\in X\) and the output \(y\in Y\). Specifically, we should ensure that as the semantic closeness between the input \(x\in X\) and the output \(y\in Y\) increases, the value of the corresponding scoring function \(u(x,y)\) also increases. This ensures that semantically similar elements are sampled for output with a higher probability, thereby maintaining text utility before and after the perturbation. Similarly, the smaller the value of \(\varepsilon\) is, the higher the security of privacy protection, but the lower the data utility. When a smaller \(\varepsilon\) is chosen, the scoring function \(u(x,y)\) no longer plays a decisive role in the output probability of any perturbation result. This ensures that when \(\varepsilon\) is small, the probability of neighboring elements being selected for output is very close, and an adversary with unlimited capabilities is unable to determine the input \(x\in X\) by observing the perturbed output \(y\in Y\). **Definition 3** (Laplace mechanism [22] ).: _The Laplace mechanism (LM) is one of the foundational mechanisms for achieving \(\varepsilon\)-differential privacy (\(\varepsilon\)-DP). 
For any function \(f:X\rightarrow\mathbb{R}^{d}\) and a given privacy parameter \(\varepsilon>0\), the Laplace mechanism ensures \(\varepsilon\)-DP by adding noise to the output of \(f\). Specifically, the noise follows a Laplace distribution centered at zero with a scale parameter \(\frac{\Delta f}{\varepsilon}\), where \(\Delta f\) is the sensitivity of \(f\), defined as the maximum change in \(f\) across all adjacent inputs \(x,x^{\prime}\in X\):_ \[\Delta f=\max_{x,x^{\prime}\in X}\left(|f(x)-f(x^{\prime})|\right). \tag{4}\] _The Laplace mechanism can be formally expressed as:_ \[M(x)=f(x)+\text{Lap}\left(\frac{\Delta f}{\varepsilon}\right), \tag{5}\] _where \(\text{Lap}(\cdot)\) denotes the Laplace noise distribution function._ The Laplace mechanism provides a simple and effective method to protect the privacy of sensitive continuous data while preserving data utility. In the Laplace mechanism, the smaller the value of \(\varepsilon\) is, the higher the level of privacy protection provided, albeit at the cost of reduced data utility. Similar to the exponential mechanism and the definition of \(\varepsilon\)-DP, the Laplace mechanism also balances the relationship between privacy protection and data utility by controlling the magnitude of the added noise through \(\varepsilon\). ## 3 The PrivInfer Framework ### Overview In this section, we pioneer the introduction of the PrivInfer mechanism, a framework meticulously crafted to safeguard privacy in text generation endeavors undertaken by expansive language models within black-box scenarios. The impetus behind PrivInfer is gleaned from a human compositional phenomenon we discerned: semantically congruent paragraphs often materialize within texts traversing multiple intertwined topics. Analogously, we ascertained within language models that semantically akin input texts are predisposed to manifesting similarity within the embedding space [23, 24, 25], thereby engendering semantically consonant paragraphs in the resultant text. This phenomenon provides the theoretical feasibility for PrivInfer. By harnessing the prowess of differential privacy techniques, we effectuate multiple perturbations upon the raw prompts, potentially laden with sensitive information, and consecutively tender these perturbed prompts, thus securing multiple perturbed text generation outcomes from the large language models. We leverage a locally deployed language model to restore the normal text generation result comparable to the raw prompt from these outputs. The PrivInfer framework, depicted in Figure 1, is architectured around two modules: the perturbation module and the restoration module. 
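Before turning to the module-level algorithms, the following minimal Python sketch illustrates the exponential-mechanism sampling step of Definition 2 for a single token; the candidate set and the scores stand in for the word adjacency \(C_{W}(x)\) and the scoring function \(u(\cdot)\), whose concrete choices depend on the mechanism used and are assumptions here.

```
# Minimal sketch of exponential-mechanism token sampling (Definition 2).
# The candidate list and scores are illustrative; in the scheme the candidates
# come from the adjacency C_W(x) and u(.) is the chosen scoring function.
import math
import random

def exponential_mechanism(candidates, scores, epsilon, sensitivity=1.0):
    """Sample one candidate with Pr[y|x] proportional to exp(eps * u(x,y) / (2 * Delta_u))."""
    weights = [math.exp(epsilon * s / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return cand
    return candidates[-1]

# hypothetical adjacency and scores for the token "doctor"
candidates = ["doctor", "physician", "nurse", "surgeon"]
scores = [1.0, 0.9, 0.6, 0.7]   # higher = semantically closer to "doctor"
print(exponential_mechanism(candidates, scores, epsilon=3.0))
```

Laplace noise for Definition 3 can be drawn analogously, e.g., with `numpy.random.laplace(0.0, sensitivity / epsilon)`.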
```
1: function Perturb(\(D,u,\varepsilon,N\))
2: Input: Document \(D=\langle x_{i}\rangle_{i=1}^{L}\), scoring function \(u\), privacy parameter \(\varepsilon\), number of perturbed documents \(N\)
3: Output: Perturbed documents \([D^{\prime}_{1},D^{\prime}_{2},\ldots,D^{\prime}_{N}]\)
4:   Initialize an empty list \(Perturbed\ Documents\)
5:   for \(j=1\) to \(N\) do
6:     \(D^{\prime}_{j}\leftarrow\emptyset\)
7:     for \(i=1\) to \(L\) do
8:       Compute \(C_{W}(x_{i})\)
9:       denom \(\leftarrow\sum_{x^{\prime}_{i}\in C_{W}(x_{i})}\exp\left(\frac{\varepsilon\cdot u(x_{i},x_{i}^{\prime})}{2\Delta u}\right)\)
10:      for each \(x_{i}^{\prime\prime}\in C_{W}(x_{i})\) do
11:        \(p(x_{i}^{\prime\prime}|x_{i})\leftarrow\exp\left(\frac{\varepsilon\cdot u(x_{i},x_{i}^{\prime\prime})}{2\Delta u}\right)/\text{denom}\)
12:      Sample \(x_{i}^{\prime}\sim p(x_{i}^{\prime\prime}|x_{i})\)
13:      Append \(x_{i}^{\prime}\) to \(D^{\prime}_{j}\)
14:    Append \(D^{\prime}_{j}\) to \(Perturbed\ Documents\)
15:  return \(Perturbed\ Documents\)
```
**Algorithm 1** Perturbation Module
### Perturbation Module Before delineating the perturbation module inherent in our scheme, we present Definition 4 and Assumption 1. We aim to provide a quantitative exposition of the observed phenomenon. **Definition 4** (\(\varepsilon\)-document privacy).: _For a given privacy parameter \(\varepsilon>0\), scoring function \(u(\cdot)\), token vocabulary \(V\), raw document \(D=\langle x_{i}\rangle_{i=1}^{L}\) and perturbed document \(D_{1}{}^{\prime}=\langle x_{j}^{\prime}\rangle_{j=1}^{L}\), \(D_{2}{}^{\prime}=\langle x_{k}^{\prime}\rangle_{k=1}^{L}\), where \(x_{i}\in V\) and \(x_{j}^{\prime},x_{k}^{\prime}\in V\), the randomized mechanism \(M(\cdot)\) is \(\varepsilon\)-document privacy if it meets the following conditions:_ \[d(x_{i},x_{j}^{\prime})\geq d(x_{i},x_{k}^{\prime})\Rightarrow u(x_{i},x_{j}^{\prime})\leq u(x_{i},x_{k}^{\prime}), \tag{6}\] \[\forall x^{\prime}\in\{x_{j}{}^{\prime},x_{k}{}^{\prime}\},\quad Pr[x^{\prime}|x]\propto\exp\left(\frac{\varepsilon\cdot u(x,x^{\prime})}{2\Delta u}\right), \tag{7}\] _where \(i,j=1,2,3,\ldots,L\), \(Pr[\cdot]\) denotes the probability output of the randomized mechanism \(M(\cdot)\), and \(d(\cdot)\) is a function used to measure the semantic similarity between two tokens (the smaller the output of \(d(\cdot)\) is, the more semantically similar the two words)._ Definition 4 endeavors to quantitatively delineate the association between differential privacy and the semantic alteration of documents, thereby aiding our explication of Assumption 1. As the privacy parameter \(\varepsilon\) adopts lower values, a probabilistic sampling standpoint assures the semantic divergence between texts before and after perturbation, consequently augmenting privacy safeguards, albeit compromising document utility. Within PrivInfer, the perturbation module receives document \(D=\langle x_{i}\rangle_{i=1}^{L}\) from the raw prompt \(P\) as input. Given a privacy parameter \(\varepsilon>0\), predicated on the scoring function \(u(\cdot)\) and randomized mechanism \(M(\cdot)\) within the EM framework, the perturbation module iteratively perturbs and substitutes each word \(x_{i}\) in document \(D=\langle x_{i}\rangle_{i=1}^{L}\), where \(C_{W}(x_{i})\) is the word adjacency of word \(x_{i}\). The result is \(N\) perturbed documents \(D_{j}{}^{\prime}=\langle x_{i}^{\prime}\rangle_{i=1}^{L}\), with \(j=1,2,3,\ldots,N\). The algorithmic rendition of the perturbation module is depicted in Algorithm 1.
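A compact Python rendering of Algorithm 1, given only as a sketch: `word_adjacency` and `score` are placeholders for \(C_{W}(\cdot)\) and \(u(\cdot)\), whose concrete instantiations depend on the chosen mechanism (SANTEXT+, CUSTEXT+, or RANTEXT).

```
# Sketch of the perturbation module (Algorithm 1): produce N perturbed copies
# of a document by exponential-mechanism sampling over each token's adjacency.
# word_adjacency and score are placeholders for C_W(.) and u(.).
import math
import random

def perturb_document(doc, word_adjacency, score, epsilon, n_copies, sensitivity=1.0):
    perturbed_docs = []
    for _ in range(n_copies):
        new_doc = []
        for token in doc:
            cands = word_adjacency(token)                      # C_W(x_i)
            weights = [math.exp(epsilon * score(token, c) / (2.0 * sensitivity))
                       for c in cands]
            new_doc.append(random.choices(cands, weights=weights, k=1)[0])
        perturbed_docs.append(new_doc)
    return perturbed_docs
```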
In our paper, we adopt three differential privacy algorithms that comply with Definition 4: SANTEXT+, CUSTEXT+, and RANTEXT, the last of which is introduced in Section 4.

### Black-box Inference

In the inference stage, user \(U\) repeatedly invokes the inference function \(\mathit{Infer}(\cdot)\) of the remote large language model \(S\), each time uploading a distinct perturbed prompt \(P^{\prime}_{j}=I\parallel D^{\prime}_{j}\), where \(j=1,2,3,\ldots N\). Through this mechanism, \(U\) obtains the perturbed text generation result \(G^{\prime}_{j}=\langle g_{i,j}{}^{\prime}\rangle_{i=1}^{K}=\mathit{Infer}(P^{\prime}_{j})\) from inference on the perturbed prompt \(P^{\prime}_{j}\). For the full prompt used in the inference stage, please refer to Appendix A.

### Restoration Module

Our objective in the restoration module is to recover the original text generation result \(G=\langle g_{i}\rangle_{i=1}^{K}=\mathit{Infer}(P)\) from the acquired \(N\) perturbed text generation results \(G^{\prime}_{j}=\langle g_{i,j}{}^{\prime}\rangle_{i=1}^{K}=\mathit{Infer}(P^{\prime}_{j})\). The design of the restoration module is based on the following phenomenon we observed:

**Assumption 1**.: _Consider a scoring function \(u(\cdot)\), a randomized function \(M(\cdot)\), a writing instruction \(I\), a vocabulary \(V\), and an inference function \(\mathit{Infer}(\cdot)\) of a Large Language Model (LLM). Perturbing a document \(D\) through \(u(\cdot)\) and \(M(\cdot)\) yields \(D^{\prime}\). If the randomized mechanism \(M(\cdot)\) satisfies \(\varepsilon\)-document privacy, then with some probability a token \(g_{i}\in G\) also satisfies \(g_{i}\in G^{\prime}\), and \(\mathit{corr}(\varepsilon,\ |\{g_{i}\mid g_{i}\in G^{\prime}\}|)>0\). Here, "corr" measures how two variables move together, ranging from -1 to 1; \(g_{i},g^{\prime}_{i}\in V\), \(G=\langle g_{i}\rangle_{i=1}^{K_{1}}=\mathit{Infer}(P)\), \(G^{\prime}=\langle g^{\prime}_{i}\rangle_{i=1}^{K_{2}}=\mathit{Infer}(P^{\prime})\), \(P=I\|D\), \(P^{\prime}=I\|D^{\prime}\), and \(\varepsilon>0\)._

Assumption 1 posits that if a document \(D^{\prime}\) is obtained from \(D\) by differential privacy perturbation and both are submitted with the same writing instruction \(I\), the texts \(G\) and \(G^{\prime}\) generated by the large language model inference function \(\mathit{Infer}(\cdot)\) should be broadly similar. Assumption 1 thus supports the claim that the PrivInfer framework can carry out text generation tasks effectively while preserving privacy. Based on Assumption 1, the perturbed generation results contain elements of the generation that corresponds to the raw prompt.

In the restoration module, we therefore extract text from the multiple perturbed generation results and reorganize it into the generated text corresponding to the raw prompt by stitching the text segments together and aligning them with the central theme. Thanks to open-source language models [26, 27], simple instruction interpretation and text generation tasks can be accomplished at a low deployment cost. Our restoration strategy deploys a lightweight language model locally on \(U\) to perform the text extraction and stitching over the \(G^{\prime}_{j}\). Explicitly, \(U\) invokes the \(\mathrm{restore}(\cdot)\) function to run inference on its locally deployed model. The input to \(\mathrm{restore}(\cdot)\) is the prompt \(\hat{P}=I\|I^{\prime}\|D\|G^{\prime}_{1}\|G^{\prime}_{2}\|\ldots\|G^{\prime}_{N}\), where \(I\) denotes the fundamental writing instruction, \(I^{\prime}\) contains directives for text extraction, text stitching, and theme alignment, \(D\) is the raw document, and \(G^{\prime}_{j}\) are the perturbed generation results. Finally, the local model of \(U\) completes the text generation task corresponding to the raw prompt \(P=I\|D\) through text extraction, text stitching, and topic alignment according to the instructions provided. The \(\mathrm{restore}(\cdot)\) procedure is shown in Algorithm 2. For detailed guidance on the prompt used by the restoration module, see Appendix B.

```
1: function \(\textsc{restore}(\hat{P})\)
2:   Input: Prompt \(\hat{P}=I\|I^{\prime}\|D\|G^{\prime}_{1}\|G^{\prime}_{2}\|\ldots\|G^{\prime}_{N}\)
3:   Output: Generated text \(G\) corresponding to the raw prompt \(P=I\|D\)
4:   Initialize an empty list \(segments\)
5:   for each \(G^{\prime}_{j}\) in \(\{G^{\prime}_{1},G^{\prime}_{2},\ldots,G^{\prime}_{N}\}\) do
6:     Extract segments from \(G^{\prime}_{j}\) as directed by \(I^{\prime}\)
7:     Append the extracted segments to \(segments\)
8:   Stitch \(segments\) together to form the text \(G\) corresponding to the raw prompt \(P=I\|D\)
9:   Align \(G\) with the central theme of \(D\) as specified in \(I^{\prime}\)
10:  return \(G\)
```
**Algorithm 2** Restore Function
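A minimal sketch of the prompt assembly behind Algorithm 2 is given below; `local_generate` stands for the locally deployed model and is supplied by the caller, and the extraction instruction \(I^{\prime}\) shown here is only a placeholder, not the exact prompt of Appendix B.

```python
def build_restore_prompt(instruction, raw_document, perturbed_generations,
                         extract_instruction=None):
    """Assemble P_hat = I || I' || D || G'_1 || ... || G'_N (the input of Algorithm 2)."""
    # Placeholder I'; the real directives for extraction, stitching and theme
    # alignment are given in Appendix B of the paper.
    extract_instruction = extract_instruction or (
        "Extract from each candidate output the segments that answer the writing "
        "instruction, stitch them together, and align the result with the topic "
        "of the original document."
    )
    parts = [instruction, extract_instruction, raw_document]
    parts += [f"Candidate output {j + 1}:\n{g}"
              for j, g in enumerate(perturbed_generations)]
    return "\n\n".join(parts)

def restore(instruction, raw_document, perturbed_generations, local_generate):
    """Run the locally deployed model on P_hat to obtain the restored generation G."""
    prompt = build_restore_prompt(instruction, raw_document, perturbed_generations)
    return local_generate(prompt)
```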
PrivInfer uses the exponential mechanism of differential privacy to ensure privacy-preserving text generation with black-box LLMs. It is widely applicable and works with any differential privacy algorithm that supports \(\varepsilon\)-document privacy, as stated in Assumption 1. In Section 5, we test PrivInfer with three different privacy algorithms and find strong support for Assumption 1. To further enhance privacy protection in the perturbation module, we design RANTEXT, a novel differential privacy mechanism used within the perturbation module. RANTEXT is designed specifically for LLMs and comprises two main steps: computing a random adjacency and sampling from that adjacency. It is formally introduced in Section 4.

## 4 The RANTEXT Mechanism

### Overview

In this section, we design a novel text differential privacy mechanism for large language models, named RANTEXT, which is based on random adjacency. Drawing inspiration from the Byte Pair Encoding (BPE) algorithm discussed in Section 4.2, RANTEXT operates directly on the model language composed of tokens, as seen by large language models. We revise the word adjacency design of the preceding works SANTEXT+ [2] and CUSTEXT+ [3], and find that RANTEXT strikes a more favorable balance between security and utility. RANTEXT consists of two main parts: computing the random adjacency and sampling from the adjacency, as depicted in Figure 2.

### Token Vocabulary

To better understand the idea of RANTEXT, we first introduce the concepts of tokenizer and token vocabulary [28]. In the black-box setting, \(S\) carries out the computation of the \(\mathit{Infer}(\cdot)\) function. To make the user input prompt \(P\) intelligible to the large language model, the \(\mathit{Infer}(\cdot)\) function first invokes the corresponding model's \(\mathit{tokenizer}(\cdot)\) algorithm.
This is necessary because language models interact not directly with human language but with a language formed by token combinations from a specified token vocabulary \(V\), and different language models use different \(V\). For instance, GPT-4 [29] operates with the token vocabulary cl100k_base [28]. The \(\mathit{tokenizer}(\cdot)\) algorithm predominantly employs BPE [30] to partition the input prompt into tokens that are intelligible to the LLM. In the inference stage, the calculation of \(\mathit{tokenizer}(\cdot)\) is executed by \(S\) in the black-box scenario. \(S\) invokes the \(\mathit{tokenizer}(\cdot)\) function, obtaining \(T=\langle t_{i}\rangle_{i=1}^{Z}=BPE(P)\), where \(t_{i}\in V\), \(P=I\parallel D\), and \(V\) is the token vocabulary employed for model training. After that, \(T\) is input into the black-box LLM, which eventually returns the generated text \(G=\langle g_{i}\rangle_{i=1}^{K}\) to \(U\). Nevertheless, in practical scenarios such as the service provided by GPT-4, \(S\) discloses its token vocabulary \(V\) to \(U\) for the purpose of billing verification. This necessary disclosure enables \(U\) to independently execute the \(\mathit{tokenizer}(\cdot)\) calculation of the LLM locally.

### Workflow of RANTEXT

Before delving into the workflow of RANTEXT, we extend Assumption 1 by introducing the following new assumption:

**Assumption 2**.: _Given a scoring function \(u(\cdot)\), a randomized function \(M(\cdot)\), a writing instruction \(I\), and an inference function \(\text{Infer}(\cdot)\) trained on a token vocabulary \(V\) of a Large Language Model (LLM), consider a document \(\hat{D}=\langle t_{i}\rangle_{i=1}^{L}\). Perturbing \(\hat{D}\) through \(u(\cdot)\) and \(M(\cdot)\) yields \(D^{\prime}=\langle t_{i}^{\prime}\rangle_{i=1}^{L}\), where \(t_{i}\),\(t_{i}^{\prime}\in V\). If the randomized mechanism \(M(\cdot)\) satisfies \(\varepsilon\)-document privacy, then with some probability a token \(g_{i}\in G\) also satisfies \(g_{i}\in G^{\prime}\), and \(corr(\varepsilon,\,|\{g_{i}\mid g_{i}\in G^{\prime}\}|)>0\). Here, "corr" measures the co-movement between two variables, ranging from -1 to 1; \(g_{i},g_{i}^{\prime}\in V\), \(G=\langle g_{i}\rangle_{i=1}^{K_{1}}=\text{Infer}(P)\), \(G^{\prime}=\langle g_{i}^{\prime}\rangle_{i=1}^{K_{2}}=\text{Infer}(P^{\prime})\), \(P=I\|\hat{D}\), \(P^{\prime}=I\|D^{\prime}\), and \(\varepsilon>0\)._

Assumption 2 states that if two documents \(D\) and \(D^{\prime}\) are semantically similar in the token vocabulary of the large language model, then, when the writing instructions \(I\) submitted with both are the same, the corresponding texts \(G\) and \(G^{\prime}\) generated by the large language model inference function \(\text{Infer}(\cdot)\) should also be broadly similar. In fact, the set of words a large language model can predict is determined by the token vocabulary used during its training; that is, the language actually understood and generated by the large language model is governed by its token vocabulary. In RANTEXT, for a given large language model \(LLM\), let the corresponding token vocabulary set be \(V\).
For the original document \(D=\langle x_{i}\rangle_{i=1}^{L}\) input by the user, RANTEXT uses the function \(tokenizer:X\to V\) to transform \(D\) into a sequence of tokens from \(V\), \(\hat{D}=\langle t_{i}\rangle_{i=1}^{K}\), and subsequently computes the random adjacency \(C_{W}(t_{i})\) of every token \(t_{i}\), where \(i=1,2,3,...K\). Then, based on its scoring function \(u(\cdot)\) and randomized mechanism \(M(\cdot)\), RANTEXT perturbs each token \(t_{i}\) in \(\hat{D}\) in sequence and selects \(t_{i}^{\prime}\) from its random adjacency \(C_{W}(t_{i})\) to replace \(t_{i}\), thereby obtaining the perturbed document \(\hat{D}^{\prime}=\langle t_{i}^{\prime}\rangle_{i=1}^{K}\). The RANTEXT algorithm is shown in Algorithm 3.

```
1:  function RANTEXT(\(D,LLM,tokenizer,u,M\))
2:    Input: Original document \(D\), Large Language Model \(LLM\), Tokenizer \(tokenizer\), Scoring function \(u(\cdot)\), Random mechanism \(M(\cdot)\)
3:    Output: Perturbed document \(D^{\prime}\)
4:    \(\hat{D}=\langle t_{i}\rangle_{i=1}^{K}=tokenizer(D)\)  \(\triangleright\) Tokenize the document
5:    for \(i=1\) to \(K\) do
6:      Compute \(C_{W}(t_{i})\)  \(\triangleright\) Compute adjacency for each token
7:    for \(i=1\) to \(K\) do
8:      \(p_{i}=M(t_{i},u,C_{W}(t_{i}))\)
9:      Normalize \(p_{i}\) such that \(\sum p_{i}=1\)
10:     \(t_{i}^{\prime}=\text{sample}(C_{W}(t_{i}),p_{i})\)
11:   \(D^{\prime}=\langle t_{i}^{\prime}\rangle_{i=1}^{K}\)  \(\triangleright\) Construct the perturbed document
12:   return \(D^{\prime}\)
```
**Algorithm 3** RANTEXT Mechanism

### Computing Random Adjacency

To balance utility and privacy for the perturbed document \(\hat{D}^{\prime}=\langle t_{i}^{\prime}\rangle_{i=1}^{K}\) derived from the raw document \(\hat{D}=\langle t_{i}\rangle_{i=1}^{K}\) through the exponential mechanism of differential privacy in RANTEXT, the random adjacency function \(C_{W}(\cdot)\) is designed to satisfy the following two theorems:

Figure 2: The workflow of RANTEXT comprises two main steps: Computing Random Adjacency and Sampling from Adjacency.

**Theorem 1**.: _Consider arbitrary tokens \(t_{j}\),\(t_{k}\in V\) and any \(t_{i}\in V\). For the random adjacency mechanism \(C_{W}(\cdot)\) employed by RANTEXT, the following inequality should hold:_ \[d(t_{i},t_{j})\leq d(t_{i},t_{k})\implies\mathbb{E}[t_{j}\in C_{W}(t_{i})]\geq\mathbb{E}[t_{k}\in C_{W}(t_{i})] \tag{8}\]

Theorem 1 expresses that, in RANTEXT, the algorithm \(C_{W}(\cdot)\) should ensure, from a probabilistic perspective, that for any \(t_{i}\in V\) words semantically closer to it appear in its token adjacency \(C_{W}(t_{i})\) with higher expected probability, which preserves the utility of the final perturbed text.

**Theorem 2**.: _Considering an arbitrary token \(t_{i}\in V\), under repeated applications of the random adjacency mechanism \(C_{W}(\cdot)\), the maximum possible cardinality of the intersection set \(|\{\bigcap C_{W}(t_{i})_{j}\}|\) must satisfy the following condition:_ \[\max\left(|\{\bigcap C_{W}(t_{i})_{j}\}|\right)=|V|. \tag{9}\]

Theorem 2 expresses that, for any \(t_{i}\in V\), the random adjacency function \(C_{W}(\cdot)\) gives adversaries no additional knowledge with which to narrow down the possible range of attacks on the raw text, thereby ensuring the privacy of the raw document.
In fact, the definition of the random adjacency function \(C_{W}(\cdot)\) in RANTEXT satisfies the above two theorems, ensuring both the utility and the privacy of the perturbed text. After giving the complete definition of the random adjacency function \(C_{W}(\cdot)\) later in this section, we return to prove these two theorems.

To introduce the random adjacency function \(C_{W}(\cdot)\) in RANTEXT, we first propose the following definition:

**Definition 5** (**random adjacent embeddings**).: _For a given token \(t_{i}\in V\), its random adjacent embeddings are obtained through the function \(C_{E}(\cdot)\) as follows:_

\[C_{E}(t_{i})=\left\{emb\,|\,d_{e}\left(emb,\phi(t_{i})\right)\leq d_{e}\left(\hat{\phi}(t_{i}),\phi(t_{i})\right)\right\}, \tag{10}\]

_where \(emb\in\mathbb{R}^{N}\) represents an \(N\)-dimensional vector within the real number domain. The function \(d_{e}(\cdot)\) computes the distance between two vectors and is defined as \(d_{e}(a,b)=\sqrt{\sum_{i=1}^{N}(a_{i}-b_{i})^{2}}\). The function \(\phi:t_{i}\rightarrow\mathbb{R}^{N}\) maps any given token to a vector in the \(N\)-dimensional real vector space (note: the set of tokens used by \(\phi(\cdot)\) should be consistent with \(\text{Infer}(\cdot)\)). The function \(\hat{\phi}(t_{i})=\phi(t_{i})+Y\), where \(Y\sim L(0,\frac{\Delta f}{\epsilon})\), adds noise following the Laplace mechanism._

The algorithm for \(C_{E}(\cdot)\) is outlined in Algorithm 4. For any \(t_{i}\in V\), its \(C_{E}(t_{i})\) is determined by adding noise to the corresponding vector in the \(N\)-dimensional real vector space. The notion of \(C_{E}(t_{i})\) aims to obtain a randomized adjacent set for \(t_{i}\) in the vector space through a differential privacy algorithm adhering to the Laplace mechanism.

```
1:  procedure ComputeRandomAdjacentEmbeddings(\(t_{i},\phi,\hat{\phi},d_{e},\epsilon\))
2:    Input: token \(t_{i}\), embedding function \(\phi\), perturbed embedding function \(\hat{\phi}\), distance function \(d_{e}\), privacy parameter \(\epsilon\)
3:    Output: set of random adjacent embeddings \(C_{E}(t_{i})\)
4:    Initialize \(C_{E}(t_{i})\leftarrow\emptyset\)
5:    Compute embedding \(emb_{i}\leftarrow\phi(t_{i})\)
6:    Compute perturbed embedding \(\widehat{emb}_{i}\leftarrow\hat{\phi}(t_{i})\)
7:    \(d_{\text{threshold}}\leftarrow d_{e}(\widehat{emb}_{i},emb_{i})\)
8:    for each \(emb\in\mathbb{R}^{N}\) do
9:      if \(d_{e}(emb,emb_{i})\leq d_{\text{threshold}}\) then
10:       \(C_{E}(t_{i})\leftarrow C_{E}(t_{i})\cup\{emb\}\)
11:   return \(C_{E}(t_{i})\)
```
**Algorithm 4** Random Adjacent Embeddings

To ensure the utility of the random adjacency, it is also important that the token vocabulary used by \(\phi(\cdot)\) aligns with that of \(\mathit{Infer}(\cdot)\). As discussed in Section 4.2, the right training tokens could, in theory, make LLMs understand our perturbed prompts better. We now derive the definition of random adjacency from the random adjacent embeddings.

**Definition 6** (**random adjacency**).: _For a given token \(t_{i}\in V\), we define its random adjacency as the following set obtained through the function \(C_{W}(\cdot)\):_

\[C_{W}(t_{i})=\left\{t_{i}^{\prime}|\phi(t_{i}^{\prime})\in C_{E}(t_{i})\right\}. \tag{11}\]

Our definition of random adjacency comes from the random adjacent embeddings. Specifically, the random adjacency set for any given token consists of the tokens whose embeddings belong to its random adjacent embeddings in the vector space.
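A minimal sketch of Definitions 5-6 over a finite token vocabulary follows; iterating over all of \(\mathbb{R}^{N}\) is replaced by scanning the rows of a precomputed embedding matrix, which is all that is needed to obtain \(C_{W}(t_{i})\), and the function name is our own.

```python
import numpy as np

def random_adjacency(token_id, embeddings, epsilon, sensitivity=1.0, rng=None):
    """Random adjacency C_W(t_i) per Definitions 5-6.

    embeddings : (|V|, N) matrix whose row v is phi(token v)
    Returns the ids of tokens whose embedding lies within the Laplace-perturbed radius.
    """
    rng = rng or np.random.default_rng()
    emb = embeddings[token_id]                                   # phi(t_i)
    noise = rng.laplace(0.0, sensitivity / epsilon, size=emb.shape)
    emb_hat = emb + noise                                        # phi_hat(t_i) = phi(t_i) + Y
    threshold = np.linalg.norm(emb_hat - emb)                    # d_e(phi_hat(t_i), phi(t_i))
    dists = np.linalg.norm(embeddings - emb, axis=1)             # d_e(phi(t'), phi(t_i)) for all t'
    return np.flatnonzero(dists <= threshold)                    # token ids in C_W(t_i)
```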
After introducing the definition of random adjacency in RANTEXT, we can give the proofs of Theorem 1 and Theorem 2. These theorems offer theoretical insight into how RANTEXT balances utility and privacy in the perturbation of text:

**Proof of Theorem 1.**_Given \(t_{j},t_{k}\in V\) and any \(t_{i}\in V\), we define \(d(t_{i},t_{j})=d_{e}(\phi(t_{i}),\phi(t_{j}))\) and \(d(t_{i},t_{k})=d_{e}(\phi(t_{i}),\phi(t_{k}))\). Assuming \(d(t_{i},t_{j})\leq d(t_{i},t_{k})\), we have:_

\[d_{e}(\phi(t_{i}),\phi(t_{k}))-d_{e}(\phi(t_{i}),\phi(t_{j}))\geq 0. \tag{12}\]

_For \(\mathbb{E}[t_{j}\in C_{W}(t_{i})]\), we use Equation (11) and introduce Laplace noise \(Y\sim L(0,\frac{\Delta f}{\epsilon})\) whose components are independently distributed over the \(N\)-dimensional real vector space:_

\[f(Y_{n})=\frac{\epsilon}{2\Delta f}\exp\left(-\frac{\epsilon|Y_{n}|}{\Delta f}\right). \tag{13}\]

_For \(t_{k}\) and \(\mathbb{E}[t_{k}\in C_{W}(t_{i})]\), we obtain:_

\[\mathbb{E}[t_{k}\in C_{W}(t_{i})]=\mathbb{E}[t_{k}\in\{t_{i}^{\prime}\,|\,\phi(t_{i}^{\prime})\in C_{E}(t_{i})\}] \tag{14}\]
\[=\int_{d_{e}(\phi(t_{k}),\phi(t_{i}))\leq d_{e}(\hat{\phi}(t_{i}),\phi(t_{i}))}\prod_{n=1}^{N}f(Y_{n})\,dY_{1}\ldots dY_{N}. \tag{15}\]

_Similarly, for \(t_{j}\) and \(\mathbb{E}[t_{j}\in C_{W}(t_{i})]\), we obtain:_

\[\mathbb{E}[t_{j}\in C_{W}(t_{i})]=\mathbb{E}[t_{j}\in\{t^{\prime}_{i}\,|\,\phi(t^{\prime}_{i})\in C_{E}(t_{i})\}] \tag{16}\]
\[=\int_{d_{e}(\phi(t_{j}),\phi(t_{i}))\leq d_{e}(\hat{\phi}(t_{i}),\phi(t_{i}))}\prod_{n=1}^{N}f(Y_{n})\,dY_{1}\ldots dY_{N} \tag{17}\]
\[=\int_{d_{e}(\phi(t_{k}),\phi(t_{i}))\leq d_{e}(\hat{\phi}(t_{i}),\phi(t_{i}))}\prod_{n=1}^{N}f(Y_{n})\,dY_{1}\ldots dY_{N}+\int_{d_{e}(\phi(t_{j}),\phi(t_{i}))\leq d_{e}(\hat{\phi}(t_{i}),\phi(t_{i}))<d_{e}(\phi(t_{k}),\phi(t_{i}))}\prod_{n=1}^{N}f(Y_{n})\,dY_{1}\ldots dY_{N}. \tag{18}\]

_Since \(f(Y_{n})>0\), subtracting \(\mathbb{E}[t_{k}\in C_{W}(t_{i})]\) from \(\mathbb{E}[t_{j}\in C_{W}(t_{i})]\) gives:_

\[\mathbb{E}[t_{j}\in C_{W}(t_{i})]-\mathbb{E}[t_{k}\in C_{W}(t_{i})]=\int_{d_{e}(\phi(t_{j}),\phi(t_{i}))\leq d_{e}(\hat{\phi}(t_{i}),\phi(t_{i}))<d_{e}(\phi(t_{k}),\phi(t_{i}))}\prod_{n=1}^{N}f(Y_{n})\,dY_{1}\ldots dY_{N}\geq 0. \tag{19}\]

_Thus, we have proven that the random adjacency mechanism \(C_{W}(\cdot)\) in RANTEXT satisfies Theorem 1._

**Proof of Theorem 2.**_In RANTEXT, the random adjacency mechanism \(C_{W}(\cdot)\) probabilistically selects the adjacency of any given input \(t_{i}\) using Laplace differential privacy noise. Since the magnitude of the Laplace noise can take any value in \([0,\infty)\), so can the perturbed radius \(d_{e}(\hat{\phi}(t_{i}),\phi(t_{i}))\), which ensures that for any \(t^{\prime}_{i}\in V\) it is possible that \(t^{\prime}_{i}\in C_{W}(t_{i})\). Thus, we have:_

\[\max\Big{(}|\{\bigcap C_{W}(t_{i})_{j}\}|\Big{)}=|V|. \tag{20}\]

The application of the \(C_{W}(\cdot)\) algorithm within RANTEXT therefore does not furnish adversaries with extra knowledge that would narrow the potential range of attacks on the original text. This completes the proof that the design of \(C_{W}(\cdot)\) satisfies Theorem 2.
### Sampling from Random Adjacency In RANTEXT, to guarantee the utility of the perturbed document, the scoring function \(u(\cdot)\) is required to conform to Equation (6): \[d(x_{i},x^{\prime}_{j})\geq d(x_{i},x^{\prime}_{k})\Rightarrow u(x_{i},x^{ \prime}_{j})\leq u(x_{i},x^{\prime}_{k}), \tag{21}\] Given the imperative to yield, for each \(t_{i}\in V\), an adjacency \(C_{W}(t_{i})\) populated predominantly by semantically similar words, RANTEXT employs a scoring function aligned with token semantic similarity. The scoring function \(u(\cdot)\) in RANTEXT is formulated as follows: For \(\phi(t_{i})=\hat{\phi}(t_{i})\), the function \(u(t_{i},t^{\prime}_{i})\) is set to 1. In the case where \(\phi(t_{i})\neq\hat{\phi}(t_{i})\), the function takes the form \[u(t_{i},t^{\prime}_{i}) =\frac{d_{E}(\phi(t_{i}),\hat{\phi}(t^{\prime}_{i}))-d_{E}(\phi(t^ {\prime}_{i}),\phi(t_{i}))}{d_{E}(\phi(t_{i}),\hat{\phi}(t^{\prime}_{i}))} \tag{22}\] \[=\frac{d_{E}(\phi(t^{\prime}_{i}),\hat{\phi}(t^{\prime}_{i}))}{d _{E}(\phi(t_{i}),\hat{\phi}(t^{\prime}_{i}))}, \tag{23}\] where the ultimate functional values for \(u(\cdot)\) are consequently given by: \[u(t_{i},t^{\prime}_{i})=\begin{cases}\frac{d_{E}(\phi(t^{\prime}_{i}),\hat{ \phi}(t_{i}))}{d_{E}(\phi(t_{i}),\hat{\phi}(t_{i}))}&\text{if }\phi(t_{i})\neq\hat{\phi}(t_{i}),\\ 1&\text{if }\phi(t_{i})=\hat{\phi}(t_{i}).\end{cases} \tag{24}\] The function \(d_{E}(\cdot)\) is derived by normalizing the function \(d_{e}(\cdot)\), as demonstrated by equation \[d_{E}(\phi(t^{\prime}_{i}),\hat{\phi}(t_{i}))=\frac{d_{e}(\phi(t^{\prime}_{i}),\hat{\phi}(t_{i}))-\min(d_{e})}{\max(d_{e})-\min(d_{e})}, \tag{25}\] where \(\min(d_{e})\) and \(\max(d_{e})\) are defined as \[\min(d_{e})=\min_{t^{\prime}_{i}\in C_{W}(t_{i})}d_{e}(\phi(t^{\prime}_{i}), \hat{\phi}(t_{i})), \tag{26}\] \[\max(d_{e})=\max_{t^{\prime}_{i}\in C_{W}(t_{i})}d_{e}(\phi(t^{\prime}_{i}), \hat{\phi}(t_{i})). \tag{27}\] Given that \(d_{E}(\cdot)\in[0,1]\), and in accordance with the definition of \(\Delta u\), we conclude that \(\Delta u=1\). Utilizing the scoring function \(u(\cdot)\), and in reference to Equation (2), for any given input \(x\in X\) and output \(y\in Y\), the stochastic mechanism \(M(\cdot)\) within RANTEXT adheres to the following probabilistic output relationship: \[Pr[y|x]=\frac{\exp\left(\frac{\varepsilon\cdot u(x,y)}{2\Delta u}\right)}{\sum_{y \in Y}\exp\left(\frac{\varepsilon\cdot u(x,y)}{2\Delta u}\right)}. \tag{28}\] By capitalizing on RANTEXT's specialized design for large language models and the introduction of the pivotal concept of random adjacency, experiments in Section 5 provide evidence that RANTEXT surpasses existing differential privacy methods designed for black-box scenarios in both text utility and privacy. Moreover, RANTEXT fulfills the subsequent theorems, thereby validating that it aligns with the foundational preconditions set forth in Assumption 2. **Theorem 3.** For any privacy parameter \(\varepsilon\geq 0\), RANTEXT ensures \(\varepsilon\)-differential privacy. **Theorem 4.** For any privacy parameter \(\varepsilon\geq 0\), RANTEXT guarantees \(\varepsilon\)-document privacy. 
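Before turning to the proofs, the sampling step can be sketched as follows under one literal reading of Eqs. (24)-(28); the embedding matrix is assumed to be precomputed, and the handling of degenerate adjacencies is our own simplification rather than part of the mechanism.

```python
import numpy as np

def rantext_replace(token_id, embeddings, epsilon, sensitivity=1.0, rng=None):
    """Sample a replacement for t_i from its random adjacency, scoring with Eq. (24)."""
    rng = rng or np.random.default_rng()
    emb = embeddings[token_id]                                          # phi(t_i)
    emb_hat = emb + rng.laplace(0.0, sensitivity / epsilon, size=emb.shape)  # phi_hat(t_i)
    radius = np.linalg.norm(emb_hat - emb)
    cand = np.flatnonzero(np.linalg.norm(embeddings - emb, axis=1) <= radius)  # C_W(t_i)

    d = np.linalg.norm(embeddings[cand] - emb_hat, axis=1)              # d_e(phi(t'), phi_hat(t_i))
    span = d.max() - d.min()
    if span == 0.0:                                                     # degenerate adjacency
        return int(rng.choice(cand))
    d_norm = (d - d.min()) / span                                       # d_E, Eqs. (25)-(27)
    d_self = (np.linalg.norm(emb - emb_hat) - d.min()) / span           # d_E(phi(t_i), phi_hat(t_i))
    u = np.ones_like(d_norm) if d_self == 0.0 else d_norm / d_self      # scores u, Eq. (24)
    probs = np.exp(epsilon * u / 2.0)                                   # Eq. (28) with Delta u = 1
    probs /= probs.sum()
    return int(cand[rng.choice(len(cand), p=probs)])
```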
**Proof of Theorem 3.**_Given inputs \(x,x^{\prime}\in X\) and output \(y\in Y\), and utilizing the scoring function \(u(\cdot)\) and randomization mechanism \(M(\cdot)\) in RANTEXT, we establish the following equation:_ \[\frac{\mathrm{Pr}[y|x]}{\mathrm{Pr}[y|x^{\prime}]} =\frac{\frac{\exp\left(\frac{\epsilon\cdot u(x,y)}{2\Delta u} \right)}{\frac{\sum_{y\in Y}\exp\left(\frac{\epsilon\cdot u (x,y)}{2\Delta u}\right)}}}{\frac{\exp\left(\frac{\epsilon\cdot u(x^{\prime},y) }{2\Delta u}\right)}{\sum_{y\in Y}\exp\left(\frac{\epsilon\cdot u (x^{\prime},y)}{2\Delta u}\right)}} \tag{29}\] \[=\frac{\exp\left(\frac{\epsilon\cdot u(x,y)}{2\Delta u}\right)}{ \exp\left(\frac{\epsilon\cdot u(x^{\prime},y)}{2\Delta u}\right)} \tag{30}\] _By substituting the value range of the scoring function \(u(\cdot)\) and \(\Delta u\) in RANTEXT, we derive the following inequality:_ \[\frac{Pr[y|x]}{Pr[y|x^{\prime}]}\leq\exp\left(\frac{\epsilon\cdot u(x,y)}{2} \right)\leq e^{\epsilon}. \tag{31}\] **Proof of Theorem 4.**_From Theorem 1, we know that RANTEXT satisfies Equation (7). In RANTEXT, we use \(d_{E}(\cdot)\) to represent the function \(d(\cdot)\). Given \(d_{E}(x_{i},x_{j}^{\prime})\geq d_{E}(x_{i},x_{k}^{\prime})\), we substitute into RANTEXT's computation for the function \(u(\cdot)\), obtaining \(u(x_{i},x_{j}^{\prime})\leq u(x_{i},x_{k}^{\prime})\). Thus, we have completed the proof of Theorem 4._ ## 5 Experiments ### Experiment Setup In our experiment, the utility and privacy of our proposals were assessed on open-ended text generation tasks. Since there has been no prior privacy protection solution designed for black-box LLMs in text generation tasks, we are the first to propose a practical solution to this problem formally. Consequently, we did not establish a baseline in the PrivInfer framework experiment. Within the perturbation module, implementations were made for SANTEXT+ [2], CUSTEXT+ [3], and our proposed RANTEXT scheme. In the inference stage, GPT-4 [29] was selected as our remote black-box large language model with its temperature parameter set to 0.5. For the Restoration Module, Vicuna-7B [31] was chosen as our local language model, with its temperature parameter set to 0 to mitigate Vicuna-7B's inherent capabilities' impact on the experiment. The experiments adhered to the default recommended parameters for SANTEXT+ and CUSTEXT+. For the function \(\phi(\cdot)\) in the RANTEXT scheme, the text-embedding-ada-002 [32] was selected, utilizing the same token vocabulary as employed in GPT-4 training. They were trained using the cl100k_base vocabulary, with RANTEXT deploying the initial 11,000 tokens of cl100k_base to ensure the vector space computation within a 32 GB memory space. In the utility assessment, PrivInfer was employed on three disparate datasets for open-ended text generation tasks, and the outcomes were juxtaposed with those obtained from the GPT-4 model under a temperature parameter of 0.5. Ablation experiments were conducted to elucidate the local model's impact further, solely utilizing the Vicuna-7B model set at a temperature of 0 for text generation. As per the configurations in [33, 34, 35], we define the document \(D\) as the first 50 tokens of the articles, and then use GPT-4 [29] to generate the next 100 tokens. The token count of the text was determined using the GPT-2 tokenizer method as described in [36]. 
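As an illustration of this setup, the 50-token document prefix and the generation-length counts can be obtained with the tiktoken library as sketched below; which tokenizer is used for the 50-token cut is an assumption of this sketch, while the GPT-2 count mirrors the convention of [36].

```python
import tiktoken

cl100k = tiktoken.get_encoding("cl100k_base")   # GPT-4 token vocabulary
gpt2_enc = tiktoken.get_encoding("gpt2")        # used here only for counting tokens

def first_k_tokens(text, k=50):
    """Return the document D: the first k tokens of an article, decoded back to text."""
    return cl100k.decode(cl100k.encode(text)[:k])

def gpt2_token_count(text):
    """Token count of a generated continuation, following the GPT-2 tokenizer convention."""
    return len(gpt2_enc.encode(text))
```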
In the privacy assessment, attacks were mounted on the perturbed prompts \(P_{j}^{\prime}\) uploaded by user \(U\). We first used the mask token inference attack, following the choices of [2, 3]. Since the transparency of the algorithm may give adversaries additional knowledge, we also selected embedding inversion attacks [37] as adaptive attacks based on the token adjacency information of each algorithm. Moreover, we propose an attack scheme targeting RANTEXT, termed the GPT inference attack. Since the remote model is GPT-4 and RANTEXT perturbs GPT-4's training token set cl100k_base, RANTEXT should, in principle, make the perturbed prompts easier for GPT-4 to understand. The GPT inference attack therefore exploits GPT-4's strong text analysis and comprehension capabilities: the perturbed prompts are uploaded to GPT-4, which is asked to analyze them and reconstruct the original documents. All experiments were run on a server equipped with two Intel Xeon Gold 6130 2.10 GHz CPUs and one NVIDIA RTX A6000 graphics card.

### Utility Evaluation

To validate the utility of our proposals, the schemes SANTEXT+, CUSTEXT+, and RANTEXT were selected within our perturbation module, with privacy parameters set to \(\epsilon=1,2,3\). Open-ended text generation tasks were executed on three distinct datasets containing private information. These datasets are described as follows:

* **CNN/Daily Mail** [38] is a widely used dataset of news articles, encompassing approximately 300,000 unique pieces written by journalists from CNN and the Daily Mail. It includes a vast array of real individuals and events.
* **Wikitext-103-v1** [39] is a wiki dataset with over 100 million words and 260,000 unique tokens, extracted from verified, high-quality, and featured Wikipedia articles. It includes numerous real names and events.
* **ArXiv Dataset** [40] is a scientific corpus that aggregates a large number of academic papers and scholarly results. With several million research papers, it represents the case of researchers using large language models to draft unpublished, key-technology-centric research papers.

Following [34], three metrics were employed to appraise the quality of the generated text:

**Diversity.** This metric gauges the text's diversity by computing n-gram repetition rates as follows: \[\text{DIV}=\sum_{n=2}^{4}\frac{|\text{unique n-grams}(d_{cont})|}{|\text{total n-grams}(d_{cont})|}.\] A low diversity score indicates that the model suffers from repetition, whereas a high score denotes lexical diversity in the generated text.

**MAUVE.** [41] This metric assesses the resemblance between language-model-generated and human-authored text; a higher score is desirable.

**Coherence.** Following [34, 42], coherence was approximated by the cosine similarity between sentence embeddings of the prompt prefix \(d_{pre}\) and the generated continuation \(d_{cont}\): \[COH(d_{pre},d_{cont})=\frac{\text{SimCSE}(d_{pre})\cdot\text{SimCSE}(d_{cont})}{\|\text{SimCSE}(d_{pre})\|\cdot\|\text{SimCSE}(d_{cont})\|},\] where \(\text{SimCSE}(x)\) represents the pretrained SimCSE sentence embedding [43].
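A small sketch of the diversity and coherence computations as written above; the SimCSE embeddings are assumed to be produced elsewhere and passed in as vectors.

```python
import numpy as np

def ngram_diversity(tokens, n_range=(2, 3, 4)):
    """DIV: sum over n of (#unique n-grams / #total n-grams) of the continuation."""
    score = 0.0
    for n in n_range:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        if ngrams:
            score += len(set(ngrams)) / len(ngrams)
    return score

def coherence(emb_prefix, emb_cont):
    """COH: cosine similarity between SimCSE embeddings of prefix and continuation."""
    return float(np.dot(emb_prefix, emb_cont) /
                 (np.linalg.norm(emb_prefix) * np.linalg.norm(emb_cont)))
```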
To quantify how strongly the different schemes in the perturbation module perturb the prompt, the **Levenshtein distance** [44] was used to measure the disparity between the pre- and post-perturbation prompts \(p\) and \(p^{\prime}_{j}\); a larger value implies lower similarity between the two strings.

Table 3 presents the principal outcomes of our strategies compared with the baselines in the text utility experiments.

\begin{table} \begin{tabular}{l|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**diversity\(\uparrow\)**} & \multicolumn{3}{c|}{**MAUVE\(\uparrow\)**} & \multicolumn{3}{c}{**coherence\(\uparrow\)**} \\ \cline{3-11} & & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=3\) & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=3\) & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=3\) \\ \hline \multirow{5}{*}{**CNN/Daily Mail**} & GPT-4 & \multicolumn{3}{c|}{0.983} & \multicolumn{3}{c|}{0.671} & \multicolumn{3}{c}{0.632} \\ & Vicuna-7B & \multicolumn{3}{c|}{0.945} & \multicolumn{3}{c|}{0.648} & \multicolumn{3}{c}{0.669} \\ \cline{2-11} & PrivInfer+SANTEXT\({}^{+}\) & 0.970 & **0.971** & **0.971** & **0.598** & **0.609** & 0.636 & **0.828** & **0.828** & **0.828** \\ & PrivInfer+CUSTEXT\({}^{+}\) & 0.963 & 0.960 & 0.958 & 0.552 & 0.555 & 0.575 & 0.803 & 0.809 & 0.819 \\ & PrivInfer+RANTEXT & **0.971** & 0.970 & **0.971** & 0.566 & 0.597 & **0.659** & **0.828** & **0.828** & **0.828** \\ \hline \hline \multirow{5}{*}{**Wikitext-103-v1**} & GPT-4 & \multicolumn{3}{c|}{0.987} & \multicolumn{3}{c|}{0.453} & \multicolumn{3}{c}{0.650} \\ & Vicuna-7B & \multicolumn{3}{c|}{0.965} & \multicolumn{3}{c|}{0.142} & \multicolumn{3}{c}{0.880} \\ \cline{2-11} & PrivInfer+SANTEXT\({}^{+}\) & 0.953 & 0.954 & 0.954 & **0.443** & 0.451 & **0.468** & **0.874** & **0.874** & **0.874** \\ & PrivInfer+CUSTEXT\({}^{+}\) & **0.959** & **0.957** & **0.958** & 0.377 & 0.389 & 0.404 & 0.823 & 0.829 & 0.834 \\ & PrivInfer+RANTEXT & 0.954 & 0.954 & 0.955 & 0.389 & **0.458** & 0.467 & **0.874** & **0.874** & **0.874** \\ \hline \hline \multirow{5}{*}{**ArXiv Dataset**} & GPT-4 & \multicolumn{3}{c|}{0.935} & \multicolumn{3}{c|}{0.736} & \multicolumn{3}{c}{0.726} \\ & Vicuna-7B & \multicolumn{3}{c|}{0.928} & \multicolumn{3}{c|}{0.400} & \multicolumn{3}{c}{0.833} \\ \cline{2-11} & PrivInfer+SANTEXT\({}^{+}\) & 0.938 & 0.939 & **0.939** & 0.558 & 0.594 & 0.601 & **0.864** & 0.864 & 0.864 \\ & PrivInfer+CUSTEXT\({}^{+}\) & **0.939** & **0.940** & **0.939** & 0.592 & **0.634** & **0.655** & **0.864** & **0.865** & **0.865** \\ & PrivInfer+RANTEXT & **0.939** & 0.939 & **0.939** & **0.612** & 0.633 & 0.641 & **0.864** & 0.864 & 0.864 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison on open-ended text generation tasks across different methods, datasets, and privacy settings (\(\epsilon=1,2,3\)), evaluated based on diversity, MAUVE, and coherence.

Several observations follow from these data: (1) Relative to the text directly generated by GPT-4, PrivInfer maintains the inherent text generation quality of GPT-4 while ensuring privacy. Our findings show that PrivInfer retains a text generation quality on par with GPT-4 across the datasets, reinforcing the viability of the PrivInfer approach. (2) The text generated by PrivInfer is of better quality than that of the local model used in the restoration module. In the CNN/Daily Mail and ArXiv datasets, PrivInfer outperforms the local Vicuna-7B model in terms of diversity and coherence. Additionally, in the Wikitext-103-v1 and ArXiv datasets, PrivInfer scores higher than Vicuna-7B on the MAUVE metric, indicating that the comparable text quality of PrivInfer and GPT-4 is not due to the local model. (3) Employing RANTEXT as the perturbation module yields better text quality than CUSTEXT+ and quality comparable to SANTEXT+. Across the various metrics, RANTEXT outperforms the CUSTEXT+ scheme in open-ended text generation tasks on the three datasets and shows performance on par with SANTEXT+. Moreover, Table 4 indicates that both RANTEXT and SANTEXT+ have smaller Levenshtein distances between the pre- and post-perturbation prompts \(p\) and \(p_{j}^{\prime}\), notably lower than that of CUSTEXT+. This analysis supports the observations made earlier.

\begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**CNN/Daily Mail**} & \multicolumn{3}{c|}{**Wikitext-103-v1**} & \multicolumn{3}{c}{**ArXiv Dataset**} \\ \cline{2-10} & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=3\) & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=3\) & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=3\) \\ \hline SANTEXT\({}^{+}\) & 223.24 & 213.31 & 201.50 & 225.93 & 217.59 & 205.61 & 255.19 & 242.90 & 227.95 \\ CUSTEXT\({}^{+}\) & 945.81 & 853.35 & 785.50 & 939.07 & 881.45 & 846.58 & 906.18 & 870.59 & 787.36 \\ RANTEXT & 322.16 & 308.63 & 289.78 & 329.81 & 3313.79 & 299.26 & 359.29 & 343.15 & 319.66 \\ \hline \hline \end{tabular} \end{table} Table 4: Edit distance scores across methods and datasets under varying privacy settings (\(\epsilon=1,2,3\)).

### Privacy Evaluation

In the **CNN/Daily Mail** dataset, we performed multiple adjustments to the privacy parameter \(\epsilon\), with values of 0.01, 2.0, 6.0, 10.0, 14.0, and 18.0. We employed three distinct text recovery methods to carry out privacy attacks.
The success rate (ASR) of these attacks was computed, and \(1-\text{ASR}\) was used as the metric for assessing privacy protection (Privacy).

**Mask token inference attack [2].** As shown in Figure 3, RANTEXT provides stronger privacy protection under this attack than the other methods. The experimental results show that, over the range \(\epsilon=0.01\)-18.0, the RANTEXT scheme consistently provides over 90% privacy protection. This is mainly because RANTEXT is built on the GPT-4 token vocabulary: BERT has difficulty recognizing the text conveyed by the RANTEXT scheme, and therefore struggles to decipher and recover the original textual information protected by RANTEXT. By contrast, the privacy protection of the baseline schemes is clearly weaker than that of RANTEXT. With \(\epsilon\) values of 0.01, 2.0, and 6.0, CUSTEXT+ offers marginally better privacy protection than SANTEXT+; however, as \(\epsilon\) increases, its protection degrades quickly. At \(\epsilon\) values of 10.0, 14.0, or 18.0, the privacy protection of SANTEXT+ markedly surpasses that of CUSTEXT+.

Figure 3: Results of mask token inference attack.

**Embedding inversion attack [37].** Proposed by Qu et al., this attack computes the distance between the embedding of each token in the perturbed document and the embeddings of the other words in the vocabulary, and returns the closest top \(K\) tokens as the attack outcome. For the vocabulary, we used the word vector sets of the differential privacy algorithms inherent in each scheme. Experiments were conducted with top \(K=250\) and 500. Figures 4 and 5 show that, under both settings, SANTEXT+ and CUSTEXT+ are susceptible to embedding inversion attacks, indicating a relatively lower level of privacy protection: even at \(\epsilon=0.01\), these methods could only provide privacy protection for just over 40% of the original documents. As the top \(K\) changes from 250 to 500, the privacy protection capability of SANTEXT+ and CUSTEXT+ remains largely unchanged. RANTEXT, on the other hand, benefits from its random token adjacency design, which prevents attackers from exploiting adjacency information, and thus demonstrates a stronger privacy protection capability under this attack.

Figure 4: Results of embedding inversion attack (\(K=250\)).

Figure 5: Results of embedding inversion attack (\(K=500\)).
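A sketch of the nearest-neighbour recovery step of the embedding inversion attack and of the resulting attack success rate; the attacker's embedding matrix is supplied by the caller, and the function names are our own.

```python
import numpy as np

def embedding_inversion_topk(perturbed_ids, embeddings, k=250):
    """For each perturbed token, return the ids of the k nearest tokens in embedding space."""
    guesses = []
    for t in perturbed_ids:
        dists = np.linalg.norm(embeddings - embeddings[t], axis=1)
        guesses.append(np.argsort(dists)[:k])
    return guesses

def attack_success_rate(original_ids, guesses):
    """ASR: fraction of original tokens contained in the attacker's top-k candidate sets.

    The privacy score reported in the evaluation is then 1 - ASR.
    """
    hits = sum(int(orig in set(g.tolist())) for orig, g in zip(original_ids, guesses))
    return hits / max(len(original_ids), 1)
```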
**GPT inference attack.** Given that RANTEXT perturbs the token vocabulary of the GPT-4 model, it is hypothesized that the perturbed text produced in this way should be more discernible to GPT-4. Accordingly, we devised the GPT inference attack. In this attack, the adversary exploits GPT-4's text comprehension ability by feeding the perturbed text into the GPT-4 model and instructing it to recover each token. The attack is considered successful if the recovered tokens coincide with the original tokens. For details on the prompt involved in GPT inference attacks, see Appendix C. Figure 6 shows the outcomes of the GPT inference attack: in contrast to BERT, GPT-4 achieves a higher attack success rate against all schemes.

Figure 6: Results of GPT inference attack.

We conjecture that this stems from GPT-4's larger model scale and stronger text generation and understanding capacities, which make it more adept at decoding and recovering the perturbed text. Confronted with the GPT inference attack, the privacy preservation levels of both SANTEXT+ and CUSTEXT+ drop markedly, trailing RANTEXT. This underlines RANTEXT's robustness in more challenging attack settings. It is noteworthy that, although RANTEXT is designed for better synergy with the GPT-4 model, it retains a strong defense against GPT-4: under the configuration of \(\epsilon=18.0\), it still maintains a 90% privacy preservation rate.

In summary, across the evaluated privacy attack techniques, RANTEXT exhibits superior privacy protection under a variety of assaults, while the other methods, notably SANTEXT+ and CUSTEXT+, are comparatively vulnerable under certain assaults. These experimental outcomes validate that the proposed RANTEXT method, while enabling high-quality text generation, also safeguards user privacy.

## 6 Discussion

This section examines the text generation results of the PrivInfer and RANTEXT schemes under varying configurations. For PrivInfer, we focus on the impact of the number of perturbed prompts (\(N\)) on the generated text; for RANTEXT, we focus on the influence of different privacy parameters (\(\epsilon\)) on the quality of the produced text.

As illustrated in Table 5, increasing the number of perturbed prompts from 1 to 5 does not produce a notable change in PrivInfer across the three metrics of diversity, MAUVE, and coherence. This may suggest that, although PrivInfer is the first privacy-preserving inference scheme for large language models in black-box settings and already achieves a degree of privacy protection together with good text quality, it may not yet fully exploit all the information available in the perturbed prompts. This also indicates that considerable room for exploration remains in this direction, which we leave to future studies.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline **Metric** & **N=1** & **N=2** & **N=3** & **N=4** & **N=5** \\ \hline diversity & 0.971 & 0.971 & 0.971 & 0.971 & 0.971 \\ \hline MAUVE & 0.659 & 0.658 & 0.660 & 0.659 & 0.658 \\ \hline coherence & 0.828 & 0.828 & 0.828 & 0.828 & 0.828 \\ \hline \end{tabular} \end{table} Table 5: Performance of RANTEXT with various \(N\) values.

Concerning the RANTEXT scheme, Table 6 shows that as the privacy parameter (\(\epsilon\)) increases, MAUVE gradually rises, while the diversity and coherence metrics remain stable across the different \(\epsilon\) values. This observation matches our expectation: as \(\epsilon\) grows, the perturbed prompts converge toward the raw prompt, which naturally increases the similarity between the perturbed generated text and the raw prompt and thereby raises MAUVE.

## 7 Related Work

Feyisetan et al. [20] formally defined a differential privacy perturbation algorithm for text. The authors achieved text privacy by directly adding noise to the word vectors in a high-dimensional space defined by a word-embedding model and then substituting each word with its nearest word. In their implementation, they assumed that any two words in the vocabulary are adjacent. Their categorization experiments demonstrated the practical utility of their method, especially in training binary classifiers. However, their approach does not theoretically guarantee that semantically similar words are replaced with a higher probability; it only ensures that words that are close in vector space are more likely to be selected.

Yue et al. [2] extended the framework originally proposed by Feyisetan et al. They presented a version termed SANTEXT+. This method incorporates differential privacy using the exponential mechanism. A notable characteristic of SANTEXT+ is its approach to word substitution: rather than replacing words with their nearest neighbors, SANTEXT+ gives priority to substituting semantically similar words with higher probability during each operation. Consistent with prior research, this approach was evaluated on classification tasks.

Chen et al. [3] argued that the assumption that any two words are adjacent limits the ability of the SANTEXT+ scheme to find a satisfactory trade-off between privacy and utility. They proposed CUSTEXT+, which considers two words in a fixed small set as adjacent. In CUSTEXT+, only the top \(K\) nearest words in the word vector space are selected as the adjacency. In their experiments, \(K\) was set to at most 50, ensuring greater semantic similarity of the replaced words. Similar to previous works, the CUSTEXT+ scheme was also only evaluated on classification tasks.

Other researchers have attempted to divide a complete model inference process into local and remote parts, adding noise during data transmission. Du et al. [6] perturbed the forward-pass embeddings of each sequence to achieve privacy. Zhou et al. [5] also introduced random perturbations into functionally similar representations, making it difficult for privacy attackers to distinguish between them. Their methods also focused on classification tasks and required the model owners to disclose part of the model architecture, which conflicts with black-box settings. Additionally, works such as [13, 4, 14] have tried applying homomorphic encryption to transformer-based [45] models to perform encrypted inference, although the computational overhead remains a significant challenge.

## 8 Conclusion

This study explores the challenge of privacy leakage in text generation tasks executed by black-box large language models and introduces PrivInfer as a proposed solution. Additionally, we propose RANTEXT, a differential privacy algorithm designed for large language models following the exponential mechanism, to enhance user privacy protection. These methodologies aim to contribute to user privacy protection while maintaining text generation quality. Through multi-perturbation prompt inference, PrivInfer strives for privacy-protected text generation in black-box scenarios. RANTEXT, in turn, aims to improve the level of privacy protection in text generation tasks by introducing the concept of random token adjacency and devising a perturbation scheme, compliant with differential privacy, on the token set of large language models.
Furthermore, we introduce a new privacy attack strategy named the GPT inference attack, offering a tool for evaluating privacy protection schemes. Experimental results show that, compared to the existing mask token inference attack strategy, the GPT inference attack has a higher attack success rate, supporting its use in evaluating privacy schemes. In summary, this research aims to provide technical insights into the current privacy challenges and sheds light on potential future explorations in privacy protection within large language models. As the field of artificial intelligence continues to develop, privacy protection is expected to emerge as a key research topic, with a likely increase in the number of researchers working collaboratively in this domain to further the advancement of privacy protection technology. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline **Metric** & \(\epsilon=0.01\) & \(\epsilon=2.0\) & \(\epsilon=6.0\) & \(\epsilon=10.0\) & \(\epsilon=14.0\) & \(\epsilon=18.0\) \\ \hline diversity & 0.971 & 0.971 & 0.971 & 0.971 & 0.971 \\ \hline MAUVE & 0.556 & 0.603 & 0.660 & 0.670 & 0.695 & 0.735 \\ \hline coherence & 0.828 & 0.828 & 0.828 & 0.828 & 0.828 \\ \hline \end{tabular} \end{table} Table 6: Performance of RANTEXT with various \(\epsilon\) values.
2301.04337
Sensitivity of $CP$ Violation of $Λ$ decay in $J/ψ\to Λ \barΛ$ at STCF
The process of $J/\psi \to \Lambda \bar{\Lambda}$ is studied using $1.0\times10^{12}$ $J/\psi$ Monte Carlo (MC) events at $\sqrt{s}$=3.097 GeV with a fast simulation software at future Super Tau Charm Facility (STCF). The statistical sensitivity for $CP$ violation is determined to be the order of $\mathcal{O} (10^{-4})$ by measuring the asymmetric parameters of the $\Lambda$ decay. Furthermore, the decay of $J/\psi \to \Lambda \bar{\Lambda}$ also serves as a benchmark process to optimize the detector responses using the interface provided by the fast simulation software.
Yue Xu, Xiaorong Zhou, Xiaodong Shi, Yongxin Guo, Kuiyong Liu, Li Gong, Xiaoshen Kang
2023-01-11T07:19:31Z
http://arxiv.org/abs/2301.04337v1
# Sensitivity of \(Cp\) Violation of \(\Lambda\) decay in \(J/\psi\to\Lambda\bar{\Lambda}\) at STCF ###### Abstract The process of \(J/\psi\to\Lambda\bar{\Lambda}\) is studied using \(1.0\times 10^{12}\)\(J/\psi\) Monte Carlo (MC) events at \(\sqrt{s}\)=3.097 GeV with a fast simulation software at future Super Tau Charm Facility (STCF). The statistical sensitivity for \(CP\) violation is determined to be the order of \(\mathcal{O}\) (\(10^{-4}\)) by measuring the asymmetric parameters of the \(\Lambda\) decay. Furthermore, the decay of \(J/\psi\to\Lambda\bar{\Lambda}\) also serves as a benchmark process to optimize the detector responses using the interface provided by the fast simulation software. ## 1 Introduction The electromagnetic force, weak nuclear force, and strong nuclear force are addressed with the Standard Model (SM), which is established as a well-tested physics theory. Although SM is so successful, there are still some unresolved issues including the source of \(CP\) violation [1]. In SM, \(CP\) violation can be included by introducing a complex phase in the quark mixing matrix, which is named Cabibbo-Kobayashi-Maskawa (CKM) matrix. Experimentally, starting in 1964, people subsequently observed \(CP\) violation in the weak decay process of the \(K\), \(B\), and \(D\) meson systems [2, 3, 4, 5, 6, 7]. The CKM quark mixing matrix can give a wonderful explanation of the observed \(CP\) violation in the meson systems. However, the magnitude of \(CP\) violation predicted by the SM cannot explain the matter-antimatter asymmetry in the universe [8, 9, 10]. Moreover, many extensions of the SM imply that the CKM matrix may not be the only source of \(CP\) violation [11, 12]. So more experimental studies are required to further test the \(CP\) violation mechanism in SM and search for other sources of \(CP\) violation. In 1956, _Lee_ and _Yang_ first proposed the violation of parity (\(P\)) conservation in the weak decays of baryons [13]. The degree of violation can be expressed in terms of the asymmetry parameters, \(\alpha=2Re\left(s*p\right)\)/ (\(|s|^{2}+|p|^{2}\)), where \(s\) and \(p\) stand for the parity-violating \(s\)-wave and parity-conserving \(p\)-wave amplitudes in the weak decay. In 1986, theoretical physicist _Pakvasa_ proposed that the observable quantity of \(CP\) violation could be constructed using asymmetric parameters in the decay of baryons, and predicted that the \(CP\) violation of baryons in the SM is \(\mathcal{O}\) (\(10^{-5}\)) [14, 15]. The processes of pionic decays of hyperons provide a good place to explore \(CP\) violation as they have a large branch ratio close to 1 [16, 17]. The \(CP\) asymmetry can be described as \(A_{CP}=\frac{\alpha+\bar{\alpha}}{\alpha-\bar{\alpha}}\), and the asymmetric parameters are \(CP\)-odd for the charge conjugate decay of \(B/\bar{B}\) (\(B\) is a spin-1/2 baryon). Therefore, if \(CP\) is conserved, \(\alpha=-\bar{\alpha}\), \(A_{CP}\) is equal to 0 [16, 17]. The Fermilab has specially designed HyperCP (E871) experiment to study \(CP\) violation of baryons in charged-\(\Xi\) and \(\Lambda\) hyperon decays. They have analyzed 11.7\(\times 10^{7}\)\(\Xi^{-}\rightarrow\Lambda\pi^{-}\to p\pi^{-}\pi^{-}\) and 4.1\(\times 10^{7}\)\(\Xi^{+}\rightarrow\Lambda\pi^{+}\to p\pi^{+}\pi^{+}\) events to determine the products \(\alpha_{\Xi}\alpha_{\Lambda}\) and \(\bar{\alpha}_{\Xi}\bar{\alpha}_{\Lambda}\)[18]. The sum \(A^{\Lambda}_{CP}\)+\(A^{\Xi}_{CP}\) was estimated to be \((0.0\pm 5.1\pm 4.4)\times 10^{-4}\)[18]. 
In 2019, by studying the quantum entanglement of baryon pairs in the \(J/\psi\rightarrow\Lambda\bar{\Lambda}\) process and using a multi-dimensional fitting method, the BESIII experiment obtained an independent measurement of \(A^{\Lambda}_{CP}\) with matching precision: \(A^{\Lambda}_{CP}=-0.006\pm 0.012\pm 0.007\), under the statistics of 0.4\(\times 10^{6}\)\(J/\psi\rightarrow\Lambda\bar{\Lambda}\to p\pi^{-}\bar{p}\pi^{+}\) events [19]. Recently, the asymmetries from the direct and subsequent \(J/\psi\rightarrow\Xi^{-}\bar{\Xi}^{+}\) decays were measured for the first time at BESIII and found to be \(A^{\Xi}_{CP}=-0.0029\pm 0.0133\pm 0.0057\) and \(\Delta\phi_{\Xi}=-0.0075\pm 0.0137\pm 0.0037\) rad [20]. Despite these, the \(CP\) violation measurement accuracy of the current experiment still does not meet the prediction of the SM and is mainly dominated by statistics uncertainty [14, 15]. To test for the existence of new sources of \(CP\) violation other than SM, a hyperon sample with larger statistics is required. The STCF is a futural high-luminosity collider and also one of major options for the accelerator-based high-energy project in China in the post-BEPCII era. The center-of-mass energy (\(\sqrt{s}\)) of the STCF collision will cover \(2\sim 7\) GeV, which has been doubled compared to BEPCII. The peaking luminosity is expected to be over \(0.5\times 10^{35}\) cm\({}^{-2}\) s\({}^{-1}\) or higher at \(\sqrt{s}=4\) GeV. It is expected to provide more than \(1.0\times 10^{12}\)\(J/\psi\) events per year and has great potential for improving luminosity and realizing beam polarization. So STCF will be an ideal place to study \(CP\) violation of \(\Lambda\) decay. In this analysis, we performed the sensitivity study of decay asymmetries of \(\Lambda\) decay and the decay channel is \(e^{+}e^{-}\to J/\psi\rightarrow\Lambda\) (\(\to p\pi^{-}\))\(\bar{\Lambda}\) (\(\to\bar{n}\pi^{0}\)) with the statistics of \(1.0\times 10^{12}J/\psi\) MC events. The amplitude of the signal process follows the helicity amplitude method which is described explicitly, as shown in Eq. 6. Furthermore, the final states of \(\Lambda\to p\pi^{-}\) decay have one low-momentum \(\pi^{-}\) particle, which plays a key role in limiting the overall reconstruction efficiency. Therefore, it is essential to improve the reconstruction efficiency of the low-momentum \(\pi^{-}\) to get better sensitivity, so the decay of \(J/\psi\rightarrow\Lambda\bar{\Lambda}\) is also used as a benchmark process in this analysis to perform optimization of detector performance design. ## 2 Formalism The production process \(e^{+}e^{-}\to J/\psi\rightarrow\Lambda\bar{\Lambda}\) is described in the c.m. system of \(J/\psi\). The scattering angle \(\theta\) of \(\Lambda\) is defined by \[\cos\theta=\mathbf{\hat{p}\cdot\hat{k}}, \tag{1}\] where \(p\) and \(k\) are the three momenta of outgoing \(\Lambda\) and initial positron, respectively. The scattering plane with the vector \(p\) and \(k\) is used to form the \(xz\)-plane, and the corresponding \(y\)-axis is perpendicular to the scattering plane. The right-handed coordinate system is defined as follows: \[\begin{split}\mathbf{e_{x}}&=\frac{1}{ \sin\theta}(\mathbf{\hat{k}\times\hat{p}})\times\mathbf{\hat{p} },\\ \mathbf{e_{y}}&=\frac{1}{\sin\theta}(\mathbf{\hat{k}\times\hat{p}}),\\ \mathbf{e_{z}}&=\mathbf{\hat{p}}. 
\end{split} \tag{2}\] The spin density matrix for a two spin 1/2 particle state can be expressed in terms of a set of 4 \(\times\) 4 matrices obtained from the outer product, \(\otimes\), of \(\sigma_{\mu}\) and \(\sigma_{\bar{\nu}}\)[21]: \[\rho=\frac{1}{4}\sum_{\mu\bar{\nu}}C_{\mu\bar{\nu}}\sigma_{\mu}^{\Lambda} \otimes\sigma_{\bar{\nu}}^{\bar{\Lambda}}, \tag{3}\] where \(\sigma_{\mu,\bar{\nu}}\) with \(\mu,\bar{\nu}\)= 0, 1, 2, 3, represent spin-1/2 base matrices for baryon \(\Lambda\)/\(\bar{\Lambda}\) in the rest frame. The 2 \(\times\) 2 matrices are \(\sigma_{0}=1_{2}\), \(\sigma_{1}=\sigma_{x}\), \(\sigma_{2}=\sigma_{y}\), and \(\sigma_{3}=\sigma_{z}\). In particular, the spin matrices \(\sigma_{\mu}\) and \(\sigma_{\bar{\nu}}\) are given in the helicity frames of the baryons \(\Lambda\) and \(\bar{\Lambda}\), respectively. We define the coordinate system for \(\Lambda\bar{\Lambda}\) decay, as shown in Fig. 1. The real coefficients \(C_{\mu\bar{\nu}}\) for \(e^{+}e^{-}\to J/\psi\rightarrow\Lambda\bar{\Lambda}\) with non-polarized inject beams are given by Eq. 4, \[C_{\mu\bar{\nu}}= \tag{4}\] \[\begin{pmatrix}1+\alpha\cos^{2}\theta&0&\beta\sin\theta\cos\theta&0 \\ 0&\sin^{2}\theta&0&\gamma\sin\theta\cos\theta\\ -\beta\sin\theta\cos\theta&0&\alpha\sin^{2}\theta&0\\ 0&-\gamma\sin\theta\cos\theta&0&-\alpha-\cos^{2}\theta\end{pmatrix}\] , where \(\beta=\sqrt{\,(1-\alpha^{2})}\sin\ \left(\Delta\Phi\right)\) and \(\gamma=\sqrt{\,(1-\alpha^{2})}\cos\ \left(\Delta\Phi\right)\), are functions of the scattering angle \(\theta\) of \(\Lambda\). In the real coefficients \(C_{\mu\bar{\nu}}\) of Eq. 4, there are two parameters related to the production process of \(e^{+}e^{-}\to J/\psi\to\Lambda\bar{\Lambda}\), the ratio of two helicity amplitudes \(\alpha\), and the relative phase of the two helicity amplitudes \(\Delta\Phi\). After considering the subsequent two-body weak decays into \(p\pi^{-}/\bar{n}\pi^{0}\), the joint angular distribution of the \(p/\bar{n}\) pair is given within the present formalism as [21]: \[Tr\rho_{p\bar{n}}\propto\sum_{\mu,\bar{\nu}=0}^{3}C_{\mu\bar{\nu}}\ (\theta)a_{\mu 0}^{ \Lambda}a_{\bar{\nu}0}^{\bar{\Lambda}}, \tag{5}\] where the \(a_{\mu 0}^{\Lambda}\ (\theta_{1},\phi_{1};\alpha_{1})\) and \(a_{\bar{\nu}0}^{\bar{\Lambda}}\ (\theta_{2},\phi_{2};\alpha_{2})\) represent the correlation of the spin density matrices in the sequential decays and the full expressions can be found in Ref. [21]. \(\alpha_{1}/\alpha_{2}\) are the decay asymmetries for \(\Lambda\to p\pi^{-}/\bar{\Lambda}\to\bar{n}\pi^{0}\). The variables \(\theta_{1}\) and \(\phi_{1}\) are the proton spherical coordinates in the \(\Lambda\) helicity frame with the axes \(\mathbf{x_{1}},\mathbf{y_{1}},\mathbf{z_{1}}\) defined in Fig. 1. The variables \(\theta_{2}\) and \(\phi_{2}\) are the anti-neutron spherical angles in the \(\bar{\Lambda}\) helicity frame with the axes \(\mathbf{x_{2}},\mathbf{y_{2}},\mathbf{z_{2}}\). 
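To make Eq. (4) concrete, a short numpy sketch (not part of the analysis software) that builds \(C_{\mu\bar{\nu}}(\theta)\) for given \(\alpha\) and \(\Delta\Phi\):

```python
import numpy as np

def spin_correlation_matrix(theta, alpha, delta_phi):
    """C_{mu nu-bar}(theta) of Eq. (4) for e+e- -> J/psi -> Lambda Lambda-bar."""
    s, c = np.sin(theta), np.cos(theta)
    beta = np.sqrt(1.0 - alpha**2) * np.sin(delta_phi)
    gamma = np.sqrt(1.0 - alpha**2) * np.cos(delta_phi)
    return np.array([
        [1.0 + alpha * c**2,  0.0,             beta * s * c,     0.0            ],
        [0.0,                 s**2,            0.0,              gamma * s * c  ],
        [-beta * s * c,       0.0,             alpha * s**2,     0.0            ],
        [0.0,                 -gamma * s * c,  0.0,              -alpha - c**2  ],
    ])
```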
An event of the reaction \(e^{+}e^{-}\to J/\psi\to\Lambda\ (\to p\pi^{-})\bar{\Lambda}\ (\to\bar{n}\pi^{0})\) is specified by the five-dimensional vector \(\xi=\ (\theta,\Omega_{1}\ (\theta_{1},\phi_{1}),\Omega_{2}\ (\theta_{2},\phi_{2}))\), and the joint angular distribution \(\mathcal{W}\ (\xi)\) can be expressed as: \[\mathcal{W}\ (\xi) =\mathcal{F}_{0}\ (\xi)+\alpha\mathcal{F}_{5}\ (\xi) \tag{6}\] \[+\alpha_{1}\alpha_{2}\ (\mathcal{F}_{1}\ (\xi)+\sqrt{1-\alpha^{2}}\cos\ (\Delta\Phi)\mathcal{F}_{2}\ (\xi)+\alpha\mathcal{F}_{6}\ (\xi))\] \[+\sqrt{1-\alpha^{2}}\sin\ \left(\Delta\Phi\right)\ (-\alpha_{1}\mathcal{F}_{3}\ (\xi)+\alpha_{2}\mathcal{F}_{4}\ (\xi))\] with a set of angular functions \(\mathcal{F}_{i}\ (\xi)\) defined as: \[\mathcal{F}_{0}\ (\xi) =1\] \[\mathcal{F}_{1}\ (\xi) =\sin^{2}\theta\sin\theta_{1}\sin\theta_{2}\cos\phi_{1}\cos\phi_{2}-\cos^{2}\theta\cos\theta_{1}\cos\theta_{2}\] \[\mathcal{F}_{2}\ (\xi) =\sin\theta\cos\theta\ (\sin\theta_{1}\cos\theta_{2}\cos\phi_{1}-\cos\theta_{1}\sin\theta_{2}\cos\phi_{2})\] \[\mathcal{F}_{3}\ (\xi) =\sin\theta\cos\theta\sin\theta_{1}\sin\phi_{1}\] \[\mathcal{F}_{4}\ (\xi) =\sin\theta\cos\theta\sin\theta_{2}\sin\phi_{2}\] \[\mathcal{F}_{5}\ (\xi) =\cos^{2}\theta\] \[\mathcal{F}_{6}\ (\xi) =\sin^{2}\theta\sin\theta_{1}\sin\theta_{2}\sin\phi_{1}\sin\phi_{2}-\cos\theta_{1}\cos\theta_{2}. \tag{7}\] There are four terms in Eq. 6: the first two (\(\mathcal{F}_{0}+\alpha\mathcal{F}_{5}\)) describe the production angular distribution, and the third and fourth terms give the spin correlation and polarization, respectively. The polarization is in the \(\mathbf{e_{y}}\) direction and is related to the phase \(\Delta\Phi\) via [22] \[P_{y}=-\frac{\sqrt{1-\alpha^{2}}\sin\theta\cos\theta}{1+\alpha\cos^{2}\theta}\sin\ (\Delta\Phi). \tag{8}\] The polarization occurs only when \(\Delta\Phi\) is nonzero, and as a consequence the decay asymmetries can be determined when \(\Delta\Phi\) is nonzero. Exploiting this, the BESIII experiment observed a nonzero relative phase \(\Delta\Phi\) in a baryon system for the first time through an angular distribution analysis, and then measured the decay asymmetry of the \(\Lambda\) decay [19].

Figure 1: The reaction system with the defined helicity angles in \(\Lambda\bar{\Lambda}\) decay.

## 3 Detector and MC simulations

From the interaction point outwards, the STCF detector mainly consists of a tracking system, a particle identification (PID) system, an electromagnetic calorimeter (EMC), a superconducting solenoid and a muon detector (MUD). The detailed conceptual design of each sub-detector can be found in [23, 24]. The STCF detector and offline software system are currently under research and development. To study the physics potential of the STCF and further optimize the detector design, a fast simulation software package dedicated to the STCF detector has been developed [23, 24], and it has proven to be a useful tool for STCF analyses. The fast simulation is simple to use and models the response of each sub-detector without running Geant4, including the efficiency and the resolution (position, momentum, energy, time, etc.).
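As a toy illustration of what such a parameterized response amounts to (and not of the actual STCF fast-simulation interface), the following sketch keeps a track with a given efficiency and smears its momentum with a Gaussian resolution; all numbers are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_track_response(p_true, eff=0.95, sigma_rel=0.005,
                       eff_scale=1.0, res_scale=1.0):
    """Keep each true track with probability eff*eff_scale and smear its momentum
    with a relative Gaussian resolution sigma_rel*res_scale. The scale factors
    mimic how a fast simulation lets one vary the assumed detector performance."""
    kept = rng.random(p_true.shape) < eff * eff_scale
    p_smeared = p_true * (1.0 + rng.normal(0.0, sigma_rel * res_scale, p_true.shape))
    return p_smeared[kept]
```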
By default, all the parameterized responses of each sub-detector are based on the BESIII performance [25], but they can be adjusted flexibly by a scale factor according to the expected performance of the STCF detector, or by implementing a special interface to model any performance described by an external histogram, an input curve, or a series of discrete data [23]. In this analysis, the default scale factor is set to 1.0 and can then be varied to optimize the detector design according to the physics requirements.

## 4 Analysis of \(J/\psi\rightarrow\Lambda\bar{\Lambda}\) with fast simulation

The \(J/\psi\rightarrow\Lambda\bar{\Lambda}\) reaction is identified with the \(\Lambda\) subsequently decaying into \(p\pi^{-}\) and the \(\bar{\Lambda}\) decaying into \(\bar{n}\pi^{0}\), resulting in a final state of \(p\pi^{-}\bar{n}\gamma\gamma\). Candidate events are therefore required to have at least two oppositely charged tracks and at least three showers. The combination of positive and negative charged tracks with invariant mass closest to the PDG mass of the \(\Lambda\) is chosen as the \(\Lambda\) candidate [26]. In addition, the two daughter tracks are constrained to originate from a common decay vertex. The most energetic shower with an energy deposition greater than 350 MeV is selected as the \(\bar{n}\). The showers other than the \(\bar{n}\) candidate that are consistent with photons are used to reconstruct the \(\pi^{0}\) candidates, and at least one good \(\pi^{0}\) is required. In order to select the \(J/\psi\rightarrow\Lambda\)\((p\pi^{-})\bar{\Lambda}\)\((\bar{n}\pi^{0})\) candidate events, a two-constraint (2C) kinematic fit is performed, where the \(\bar{n}\) is treated as a missing particle with its mass fixed to 0.938 GeV [26]; the constraints are the four-momentum conservation of the \(J/\psi\) and the requirement that the photon pair has an invariant mass equal to the \(\pi^{0}\) mass. Furthermore, \(\theta_{\bar{n}}\) is required to be less than 5\({}^{\circ}\), where \(\theta_{\bar{n}}\) is defined as the angle between the \(\bar{n}\) direction obtained from the kinematic fit and the most energetic shower. To further suppress the background, the \(\Lambda\) and \(\bar{\Lambda}\) candidates are required to satisfy 1.110 GeV/\(c^{2}<M_{p\pi^{-}}<1.120\) GeV/\(c^{2}\) and 1.098 GeV/\(c^{2}<M_{\bar{n}\pi^{0}}<1.127\) GeV/\(c^{2}\). A sample of \(1.0\times 10^{6}\) events of the \(J/\psi\rightarrow\Lambda\bar{\Lambda}\to p\pi^{-}\bar{n}\pi^{0}\) process was generated to optimize the selection criteria and evaluate the selection efficiencies for the baryon pair production. Based on the above selection conditions and with the help of the fast simulation software, 129575 candidate events of \(J/\psi\rightarrow\Lambda\bar{\Lambda}\to p\pi^{-}\bar{n}\pi^{0}\) are selected. The step-by-step selection efficiency is shown in Table 1. These MC samples are also used to optimize the detector response, and \(1.0\times 10^{12}\) events of the signal process were generated to test the sensitivity to \(CP\) violation. To analyze the potential background processes, \(1.0\times 10^{6}\) events of \(J/\psi\to anything\) were generated as the inclusive MC. After applying the above event selection criteria to the inclusive MC and performing a topology analysis, the \(J/\psi\rightarrow\Lambda\bar{\Sigma}^{0}\to p\pi^{-}\bar{n}\pi^{0}\gamma\) process is found to be the dominant background.
So, \(1.0\times 10^{12}\) events of the \(J/\psi\rightarrow\Lambda\bar{\Sigma}^{0}\to p\pi^{-}\bar{n}\pi^{0}\gamma\) process were generated to do the background test in the next chapter. Furthermore, 0.7\(\times 10^{9}\) events of the signal process were generated using the phase space (PHSP) generator to estimate the normalization coefficient in Maximum Likelihood (MLL) fit. ## 5 Optimization of detector performance After the above event selection, the final selection efficiency is about 12.96%. The performance of the detector can be optimized from the following aspects: the selection efficiency of the charged tracks, the momentum resolution of the charged tracks, and the position resolution of the photons. Utilizing the signal MC sample and with the help of fast simulation software tools, the optimized results of the detector response are as follows: _a.Tracking efficiency_ The charged particles in the final state that can be identified by the detector include electrons, muons, pions, kaons, and protons. These charged particles have a wide range of momentum, some can be as high as 3.5 GeV/c, and some can be less than 1 GeV/c. This situation requires the detector to have the ability to cover a large momentum range and high-reconstruction efficiency. In the part of track system design of STCF, different materials or advanced tracking algorithms can be used to further improve the ability of low-momentum track reconstruction. The \(J/\psi\to\Lambda\bar{\Lambda}\to p\pi^{-}\bar{n}\pi^{0}\) decay has low-momentum final state particle \(\pi^{-}\), which is a good choice for optimizing the detector response, improving the resolution of low-momentum particles. In this analysis, we gradually adjusted the scale factor of tracking efficiency from 1.0 to 2.0. It can be seen from Fig. 2 that the final selection efficiency has increased significantly in the range from 1.0 to 1.1 of the scale factor, and the selection efficiency will increase from 12.96% to 13.67%. _b.Momentum resolution of the charged tracks_ The momentum resolution of the charged tracks can also be optimized by the fast simulation. \(\sigma_{xy}\) and \(\sigma_{z}\) are the spatial resolutions of tracks in the \(xy\)-plane and \(z\)-direction. By default, \(\sigma_{xy}=130\)\(\mu\)m and \(\sigma_{z}\)=2480 \(\mu\)m. Optimizing \(\sigma_{xy}\) from 52 \(\mu\)m to 130 \(\mu\)m, and the corresponding \(\sigma_{z}\) is optimized from 992 \(\mu\)m to 2480 \(\mu\)m. There is no significant change in efficiency, as shown in Fig. 3. In addition, the transverse momentum \(P_{T}\) and polar angle \(\cos\theta\) are two characteristic quantities of track reconstruction in MDC. They are related to the level of track bending and hit positions of tracks in the MDC. The optimization curve of the transverse momentum of low-momentum \(\pi^{-}\) is shown in Fig. 4, where the black and red points represent the ratio of signal efficiency to MC truth before and after all the above optimization, respectively. _c.Position resolution of photon_ The decay of \(J/\psi\to\Lambda\bar{\Lambda}\to p\pi^{-}\bar{n}\pi^{0}\) has a final state particle \(\pi^{0}\), \(\pi^{0}\) is reconstructed by two photons, so this process is also very sensitive to the EMC performance. With the increase in the resolution of the \(\pi^{0}\), there will be a better signal-to-background ratio and higher detection efficiency. Optimizing the signal-to-background ratio can provide a reference for the EMC design. 
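Since the \(\pi^{0}\) is reconstructed from two photons, the connection between the photon position (i.e. angular) resolution and the \(\pi^{0}\) mass resolution can be illustrated with the diphoton invariant mass; the helper below is a generic kinematic sketch, not code taken from this analysis.

```python
import numpy as np

def diphoton_mass(e1, n1, e2, n2):
    """Invariant mass of a massless photon pair from energies e1, e2 and unit
    direction vectors n1, n2: m^2 = 2*e1*e2*(1 - cos(theta_12))."""
    cos12 = np.sum(np.asarray(n1) * np.asarray(n2), axis=-1)
    return np.sqrt(2.0 * e1 * e2 * (1.0 - cos12))

# Smearing the photon directions (their measured positions in the EMC) broadens
# the reconstructed pi0 peak, so a better position resolution allows a tighter
# mass window and a better signal-to-background ratio.
```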
Figure 2: Charged track efficiency scale versus the selection efficiency. Figure 3: Momentum resolution of charged tracks versus the selection efficiency.

In this analysis, the signal process \(J/\psi\rightarrow\Lambda\bar{\Lambda}\to p\pi^{-}\bar{n}\pi^{0}\) and the main background process \(J/\psi\rightarrow\Lambda\bar{\Sigma^{0}}\to p\pi^{-}\bar{n}\pi^{0}\gamma\) were studied. By fitting the distribution of the invariant mass M\({}_{\bar{n}\pi^{0}}\), a 3\(\sigma\) mass interval of M\({}_{\bar{n}\pi^{0}}\) is obtained to further reduce the impact of the background process. Figure 5 shows the signal selection efficiency and the background rejection as the photon position resolution is varied; the scale factor of the photon position resolution is varied from 0.4 to 1.0. The red and blue points represent the cases of the nominal \(\bar{\Lambda}\) mass window and the optimized \(\bar{\Lambda}\) mass window, respectively. Although the optimized window loses some signal events, it removes more background and makes the signal cleaner. It is appropriate to set the scale factor to 0.7, which corresponds to a position resolution of 4 mm. The signal selection efficiency then increases from 12.96% to 15.11%, while the main background is reduced from 3.27% to 3.17%. After all the optimizations of the detector response, the signal selection efficiency increases from 12.96% to 15.97%, while the main background (\(J/\psi\rightarrow\Lambda\bar{\Sigma^{0}}\to p\pi^{-}\bar{n}\pi^{0}\gamma\)) is reduced from 3.27% to 3.09%. The selection efficiencies are shown in Table 1.

## 6 Extraction of the parameters

In this analysis, the parameters are extracted by applying an unbinned MLL fit. The probability density function of the \(i\)th event can be expressed by \[{\cal P}\ (\xi_{i};pars)={\cal W}\ (\xi_{i};pars)\epsilon\ (\xi_{i})/{\cal N}\ (pars), \tag{9}\] where \(\epsilon\ (\xi_{i})\) is the efficiency of each event, and \(\xi_{i}\) and \(pars\) are the set of angular variables and parameters: \(\xi_{i}=\ (\theta,\Omega_{1},\Omega_{2})\), \(pars\)= \((\alpha,\alpha_{1},\alpha_{2},\Delta\Phi)\), as described in Sec. 2. The joint probability density for observing \(N\) events in the data sample is [27]: \[\begin{split}{\cal P}\ (\xi_{1},\xi_{2},...,\xi_{N};pars)=\prod_{i=1}^{N}{\cal P}\ (\xi_{i};pars)\\ =\prod_{i=1}^{N}\frac{{\cal W}\ (\xi_{i};pars)\epsilon\ (\xi_{i})}{{\cal N}\ (pars)}.\end{split} \tag{10}\] By taking the natural logarithm of the joint probability density, the efficiency function can be separated: \[\ln{\cal P}\ (\xi_{1}...,\xi_{N};pars)=\sum_{i}^{N}\ln\frac{{\cal W}\ (\xi_{i};pars)}{{\cal N}\ (pars)}+\sum_{i}^{N}\ln\epsilon\ (\xi_{i}). \tag{11}\]

Figure 4: The optimization curve of the transverse momentum of \(\pi\). Figure 5: The change of signal selection efficiency and background rejection with the position resolution of the photon.

Usually, the minimization of \(-\ln\mathcal{L}\) is performed using MINUIT [28], \[-\ln\!\mathcal{L}=-\sum_{i}^{N}\ln\!\frac{\mathcal{W}\ (\xi_{i};pars)\epsilon\ (\xi_{i})}{\mathcal{N}\ (pars)}, \tag{12}\] where \(\mathcal{N}\) is the normalization factor, given by \[\mathcal{N}=\int\mathcal{W}\ (\xi)\epsilon\ (\xi)d\cos\theta d\Omega_{1}d\Omega_{2}. \tag{13}\] For a given set of \(pars\), \(\mathcal{N}\ (pars)\) can be rewritten as an integration over each \(\mathcal{F}_{i}\) term according to Eq. 6. To test the statistical sensitivity, the fit was applied to \(J/\psi\) samples with different statistics.
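A minimal sketch of how these fit quantities can be evaluated is given below: it implements \(\mathcal{W}(\xi)\) from Eqs. 6–7 and the \(-\ln\mathcal{L}\) of Eq. 12, approximating the normalization of Eq. 13 with an accepted phase-space MC sample. The event arrays `xi_data` and `xi_phsp_accepted` (tuples of \(\theta,\theta_{1},\phi_{1},\theta_{2},\phi_{2}\) arrays) are hypothetical inputs, the efficiency term of Eq. 11 is dropped because it is constant in the parameters, and in practice the function would be handed to MINUIT; this is an illustration, not the analysis code.

```python
import numpy as np

def angular_funcs(th, th1, ph1, th2, ph2):
    """Angular functions F_0..F_6 of Eq. 7."""
    s, c = np.sin(th), np.cos(th)
    s1, c1 = np.sin(th1), np.cos(th1)
    s2, c2 = np.sin(th2), np.cos(th2)
    return np.array([
        np.ones_like(th),
        s**2 * s1 * s2 * np.cos(ph1) * np.cos(ph2) - c**2 * c1 * c2,
        s * c * (s1 * c2 * np.cos(ph1) - c1 * s2 * np.cos(ph2)),
        s * c * s1 * np.sin(ph1),
        s * c * s2 * np.sin(ph2),
        c**2,
        s**2 * s1 * s2 * np.sin(ph1) * np.sin(ph2) - c1 * c2,
    ])

def W(xi, pars):
    """Joint angular distribution of Eq. 6; xi = (theta, theta1, phi1, theta2, phi2)."""
    alpha, a1, a2, dphi = pars
    F = angular_funcs(*xi)
    r = np.sqrt(1.0 - alpha**2)
    return (F[0] + alpha * F[5]
            + a1 * a2 * (F[1] + r * np.cos(dphi) * F[2] + alpha * F[6])
            + r * np.sin(dphi) * (-a1 * F[3] + a2 * F[4]))

def nll(pars, xi_data, xi_phsp_accepted):
    """-ln L of Eq. 12. The normalization N(pars) of Eq. 13 is estimated as the
    mean of W over an accepted phase-space MC sample, which folds in the detector
    efficiency; constant factors drop out of the minimization."""
    norm = np.mean(W(xi_phsp_accepted, pars))
    return -np.sum(np.log(W(xi_data, pars) / norm))
```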
The precision for the decay parameters is shown in Fig. 6. It is found that the precision of the parameters is proportional to the square root of the \(J/\psi\) sample. The correlation matrix among the parameters is shown in Table 2. According to Eq. 6, the moment of \(\sin\theta_{1}\sin\phi_{1}\) is \begin{table} \begin{tabular}{c c c c} \hline \hline \(pars\) & \(\alpha\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(\Delta\Phi\) \\ \(\alpha\) & 1.000 & -0.089 & 0.104 & 0.339 \\ \(\alpha_{1}\) & -0.089 & 1.000 & 0.853 & -0.120 \\ \(\alpha_{2}\) & 0.104 & 0.853 & 1.000 & 0.058 \\ \(\Delta\Phi\) & 0.339 & -0.120 & 0.058 & 1.000 \\ \hline \hline \end{tabular} \end{table} Table 2: Correlation Matrix for the parameters, obtained with MINUIT. \begin{table} \begin{tabular}{c c c c} \hline \hline & **No optimized eff. (\%)** & **Optimized eff. (\%)** & \begin{tabular}{c} **Increased efficiency after** \\ **optimization in step (\%)** \\ \end{tabular} \\ \hline \begin{tabular}{c} Charged tracks \\ \(\Lambda\) reconstruction \\ \end{tabular} & 66.27 & 70.88 & 4.61 \\ \begin{tabular}{c} Good showers \\ \(\pi^{0}\) 1C fit (N\({}_{\gamma}\geq\)2) \\ \end{tabular} & 28.76 & 29.93 & 1.17 \\ \begin{tabular}{c} Kinematic 2-C fit \\ \end{tabular} & 25.33 & 27.31 & 1.98 \\ \begin{tabular}{c} Energy deposition of \(\bar{n}>\)0.35 GeV \\ \end{tabular} & 21.18 & 22.88 & 1.70 \\ \begin{tabular}{c} \(\theta_{\bar{n}}<5^{\circ}\) \\ \end{tabular} & 14.54 & 18.34 & 3.80 \\ \begin{tabular}{c} \(\Lambda\) and \(\bar{\Lambda}\) mass window \\ \end{tabular} & 12.96 & 15.97 & 3.01 \\ \hline \hline \end{tabular} \end{table} Table 1: Events selection efficiency. Figure 6: The statistical sensitivity of \(J/\psi\) samples with different statistics. given by \[\langle\sin\theta_{1}\sin\phi_{1}\rangle =\frac{1}{N_{norm}}\int\mathcal{W}\ (\xi)\sin\theta_{1}\sin\phi_{1}d\Omega_{1}d\Omega_{2} \tag{14}\] \[\approx-\frac{\sqrt{1-\alpha^{2}}\alpha_{1}\sin\ (\Delta\Phi)}{3+\alpha}\sin\theta\cos\theta.\] In the analysis of experimental data, \(\langle\sin\theta_{1}\sin\phi_{1}\rangle\) can be calculated by the average of \(\sin\theta_{1}\sin\phi_{1}\) in each \(\cos\theta\) bin. The moment of \(\sin\theta_{1}\sin\phi_{1}\) can be connected with the polarization according to Eq. 8, \[\langle\sin\theta_{1}\sin\phi_{1}\rangle\approx\frac{(1+\alpha\cos^{2}\theta) \alpha_{1}}{3+\alpha}P_{y}. \tag{15}\] The distribution of polarization versus \(\cos\theta\) as shown in Fig. 7 and the events are not corrected with detection efficiency. ## 7 Prospect of \(Cp\) sensitivity at STCF The asymmetry parameters used to observe \(CP\) violation are affected by statistics and proportional to the \(\sqrt{N_{J/\psi}}\), where \(N_{J/\psi}\) is the number of \(J/\psi\) events. By generating a \(1.0\times 10^{12}\) MC sample, after event selection and detector optimization, the statistical accuracy of \(CP\) violation is \(10^{-4}\). The STCF has great potential in improving luminosity and realizing beam polarization. It is expected that more than 1 ab\({}^{-1}\) experimental data and 3.4\(\times 10^{13}\)\(J/\psi\) events will be obtained per year, with the substantial increase in statistics, larger data samples will be generated on STCF, and in the future, it will hopefully reach a level of accuracy and theoretical prediction compatibility. ## 8 Summary and prospect With the fast simulation software package, the MC samples of \(J/\psi\rightarrow\Lambda\bar{\Lambda}\to p\pi^{-}\bar{n}\pi^{0}\) process were generated. 
After the optimization of the detector performance, the event selection efficiency of the signal process is increased by 23.22% compared to the unoptimized case, and the main background process is reduced by 5.5%. Furthermore, the \(1.0\times 10^{12}\)\(J/\psi\) MC sample was used to perform a preliminary study of the sensitivity to \(CP\) violation in the \(J/\psi\rightarrow\Lambda\bar{\Lambda}\) process at the future STCF. The statistical accuracy of the \(CP\) violation measurement for the \(\Lambda\) hyperon is \(10^{-4}\), which is close to the SM prediction of \(CP\) violation in \(\Lambda\) hyperon decay [29].

Figure 7: Polarization as a function of \(\cos\theta\) for \(J/\psi\rightarrow\Lambda\bar{\Lambda}\to p\pi^{-}\bar{n}\pi^{0}\). The points with error bars are the signal MC, and the blue dashed histogram is the no-polarization scenario of PHSP MC.

## Acknowledgments

The authors would like to thank the USTC Supercomputing Center and the Hefei Comprehensive National Science Center for their strong support. This work is supported in part by the National Natural Science Foundation of China (NSFC) under Contracts Nos. 11872030, 11905092, 11972177, 12122509, 11625523, 12105132, 11705078; the international partnership program of the Chinese Academy of Sciences under Grant No. 211134KYSB20200057; USTC Research Funds of the Double First-Class Initiative and the Fundamental Research Funds for the Central Universities; the Doctoral Scientific Research Foundation of Liaoning Province No. 2019-BS-113; the Foundation of Liaoning Educational Committee No. LQN201902; the Natural Science Foundation of Liaoning Provincial Department of Education No. LCJ202003; the China Postdoctoral Science Foundation under Contract No. 2021M693181; the PhD Start-up Fund of the Natural Science Foundation of Liaoning Province of China under Contract No. 2019-BS-113; the Scientific Research Foundation of Liaoning Provincial Department of Education under Contract No. LQN201902; the Foundation of Innovation Team 2020, Liaoning Province; and the Opening Foundation of Songshan Lake Materials Laboratory, Grant No. 2021SLABFK04.
2308.10649
Reinforcement Learning Based Sensor Optimization for Bio-markers
Radio frequency (RF) biosensors, in particular those based on inter-digitated capacitors (IDCs), are pivotal in areas like biomedical diagnosis, remote sensing, and wireless communication. Despite their advantages of low cost and easy fabrication, their sensitivity can be hindered by design imperfections, environmental factors, and circuit noise. This paper investigates enhancing the sensitivity of IDC-based RF sensors using novel reinforcement learning based Binary Particle Swarm Optimization (RLBPSO), and it is compared to Ant Colony Optimization (ACO), and other state-of-the-art methods. By focusing on optimizing design parameters like electrode design and finger width, the proposed study found notable improvements in sensor sensitivity. The proposed RLBPSO method shows best optimized design for various frequency ranges when compared to current state-of-the-art methods.
Sajal Khandelwal, Pawan Kumar, Syed Azeemuddin
2023-08-21T11:36:54Z
http://arxiv.org/abs/2308.10649v1
# Reinforcement Learning Based Sensor Optimization for Bio-markers ###### Abstract Radio frequency (RF) biosensors, in particular those based on inter-digitated capacitors (IDCs), are pivotal in areas like biomedical diagnosis, remote sensing, and wireless communication. Despite their advantages of low cost and easy fabrication, their sensitivity can be hindered by design imperfections, environmental factors, and circuit noise. This paper investigates enhancing the sensitivity of IDC-based RF sensors using novel reinforcement learning based Binary Particle Swarm Optimization (RLBPSO), and it is compared to Ant Colony Optimization (ACO), and other state-of-the-art methods. By focusing on optimizing design parameters like electrode design and finger width, the proposed study found notable improvements in sensor sensitivity. The proposed RLBPSO method shows best optimized design for various frequency ranges when compared to current state-of-the-art methods. 1International Institute of Information Technology C R Rao Road, Gachibowli Hyderabad, 500032, India [email protected] ## Introduction Radio frequency (RF) bio-sensors have become a crucial component in various fields, including biomedical diagnosis, remote sensing, and wireless communication. Among different types of RF sensors, inter-digitated capacitors (IDCs) have emerged as a popular choice due to their low cost, simple design, and ease of fabrication. However, the sensitivity of IDC-based RF sensors can be limited by various factors, such as non-idealities in device design, environmental interference, and circuit noise. Several methods have been proposed to overcome these limitations and improve the sensitivity of IDC-based RF sensors, including the use of heuristic optimization algorithms such as Binary Particle Swarm Optimization (BPSO) [12], Ant Colony Optimization (ACO) [1], Artificial Bee Colony (ABC) [1], Simulated Annealing (SA) [13], Ant Lion Optimization (ALO) [14], and Reinforcement Learning based Binary Particle Swarm Optimization(RLBPSO) [15]. These algorithms can analyze and optimize parameters affecting IDC-based RF bio-sensor sensitivity, such as the inter-digitated electrode design, the number of fingers, finger width, finger gap, and the electrode material. This research paper presents a comprehensive study on enhancing the sensitivity of IDC-based RF sensors using heuristic optimization algorithms with modification. The parameters that impact the sensitivity of the sensor are analyzed, and these algorithms are applied to identify the optimal design parameters. In this work, the performance of the optimized IDC-based RF sensor is compared with the work published in [14], and a significant improvement in sensitivity is demonstrated. The proposed approach has the potential to advance the development of highly sensitive RF sensors, enabling their use in diverse applications such as environmental monitoring, medical diagnostics, and wireless communication. The optimized sensor design may pave the way for new and innovative applications by improving the sensitivity, accuracy and reliability of RF sensors. ## Methodology ### Initial IDC Design The initial design of the IDC (Inter-Digitated Capacitor) was inspired by the work done in [14] but with modifications in dimensions. The IDC sensor was fabricated on a 1.6 mm FR-4 substrate measuring \(94.5\times 36\) mm2. An epoxy-resin-based mask measuring \(87\times 36\) mm2 was deposited on the IDC side. 
A sample chamber made of grey PLA material with dimensions of \(27\times 19.5\) mm2 and a cavity of \(24\times 16.5\) mm2 was 3D printed. To achieve impedance matching, the width of the microstrip line was set at 3.13 mm to obtain an input characteristic impedance of 50 ohms. The figure shows the IDC-based RF sensor used for simulation and how the inter-digitated region is divided into a cell pattern of \(16\times 11\) cells, each with a size of 1.5 mm. A specific design structure is created by combining \(k\) cells, where \(k\) is the total number of unknown parameters (96) in the optimization problem for the IDC. The cell pattern of the IDC is kept anti-symmetric across the horizontal axis, similar to the conventional IDC structure. The IDC behaves like a band-pass filter, and the S11 parameter is used to observe the change in the sensor's resonant frequency.

### Cost Function

Designing an efficient IDC RF-based biosensor involves several parameters that must be optimised to achieve optimal performance. To this end, we propose a methodology for optimising an IDC-based biosensor using a heuristic optimisation algorithm. The methodology aims to improve the sensitivity of the biosensor and enhance its accuracy in detecting analytes. The sensitivity of the biosensor is directly related to the shift in resonant frequency (\(\Delta f\)): a larger shift corresponds to a higher sensitivity. \[\Delta f=f_{\text{sam}}-f_{\text{ref}}.\] The normalised frequency (NF) of the sensor is [2]: \[NF=\frac{\Delta f}{f_{\text{ref}}}.\] Since the sensitivity is directly proportional to the normalised frequency of the sensor, the inverse of NF is taken as the optimisation algorithm's objective function (OF) in this work. The optimisation algorithms then aim to find the minimum of the OF, which is defined as: \[OF=\frac{f_{\text{ref}}}{\Delta f}=\frac{f_{\text{ref}}}{f_{\text{sam}}-f_{\text{ref}}}, \tag{1}\] where \(f_{\text{ref}}\) and \(f_{\text{sam}}\) are the resonant frequencies of the sensor when the reference and the sample are placed on it, respectively.

### Optimisation Algorithms

The sensitivity of the biosensor depends on multiple design factors, which makes its optimisation a complex problem. Heuristic optimization algorithms are commonly used for complex and large-scale problem spaces because they offer efficient and practical approaches to finding near-optimal solutions. In this work, we cover popular, well-proven heuristic algorithms as well as recent developments in this area.

#### Binary Particle Swarm Optimisation

The Particle Swarm Optimization (PSO) [1] algorithm is a metaheuristic optimization technique that draws inspiration from the social behaviour of birds and insects. This study uses a Binary Particle Swarm Optimization (BPSO) with a modified position-update rule, as described in [1]. It searches for the cell pattern that minimizes the cost value. The optimization process involves initializing the swarm of particles with random positions and velocities within the search space, evaluating the cost of each particle in the swarm, updating the personal and global best positions and costs, updating the velocities and positions of the particles, and repeating the process until the convergence criteria are met. The optimal cell pattern of the IDC sensor is obtained from the global best position of the swarm.
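To make the procedure concrete, a minimal Python/NumPy sketch of the objective of Eq. 1 and of the BPSO loop described above (with the update rules spelled out in Eqs. 2–5 below) is given here. `simulate_resonance` is a placeholder for the full-wave solver (CST in this work), and the loop is an illustrative re-implementation under the settings quoted in the text (96 cells, 25 particles, \(w=1\), \(c_{1}=c_{2}=2\), \(e=2\), \(d=1\)) rather than the code actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(design, simulate_resonance):
    """Objective of Eq. 1: OF = f_ref / (f_sam - f_ref). `simulate_resonance` is a
    placeholder returning the resonant frequency of a 96-bit cell pattern with the
    reference (loaded=False) or the sample (loaded=True) in the chamber."""
    f_ref = simulate_resonance(design, loaded=False)
    f_sam = simulate_resonance(design, loaded=True)
    return np.inf if f_sam == f_ref else f_ref / (f_sam - f_ref)

def transfer(v, a):
    """Transfer function of Eq. 3 with transfer factor a from Eq. 4."""
    return np.where(v > 0, 2.0 / (1.0 + np.exp(-a * v)) - 1.0,
                    1.0 - 2.0 / (1.0 + np.exp(-a * v)))

def bpso(cost, k=96, n_particles=25, max_iter=50,
         w=1.0, c1=2.0, c2=2.0, e=2.0, d=1.0):
    """Minimal BPSO loop over k-bit cell patterns (Eqs. 2-5)."""
    X = rng.integers(0, 2, size=(n_particles, k))        # binary positions
    V = rng.uniform(-1.0, 1.0, size=(n_particles, k))    # velocities
    pbest = X.copy()
    pbest_cost = np.array([cost(x) for x in X])
    g = pbest[np.argmin(pbest_cost)].copy()              # global best
    for it in range(1, max_iter + 1):
        a = e - (e - d) / it                             # iterative transfer factor, Eq. 4
        r1, r2 = rng.random((2, n_particles, k))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)   # Eq. 2
        flip = transfer(V, a) > rng.random((n_particles, k))    # Eq. 5: flip the bit
        X = np.where(flip, 1 - X, X)
        costs = np.array([cost(x) for x in X])
        better = costs < pbest_cost
        pbest[better] = X[better]
        pbest_cost[better] = costs[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()
```

In practice `cost` would simply wrap the objective with the solver attached, e.g. `cost = lambda x: objective(x, simulate_resonance)`.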
In BPSO, finding the best solution involves a balance between exploration and exploitation. The swarm size directly affects the exploration and exploitation process, as mentioned in [1]. The swarm size is taken as 25 in this study, which is near optimum as mentioned in [1]. The BPSO equations are \[{V_{n}}^{t+1} ={w{V_{n}}^{t}}+{c_{1}}{r_{1}}{({P_{n}}^{t}-{X_{n}}^{t})} \tag{2}\] \[+{c_{2}}{r_{2}}({G^{t}}-{X_{n}}^{t}),\] where \(V_{n}\), \(X_{n}\) and \(P_{n}\) are the velocity, position and personal best of the \(n^{th}\) particle, respectively, \(G\) is the global best solution of the whole swarm, \(w\) is called the inertia weight, \(c_{1}\) and \(c_{2}\) are personal and social acceleration coefficient. Here, \(V\), \(X\), \(P\), and \(G\) are \(k\)-dimensional vectors. The values of \(w\), \(a_{1}\), \(a_{2}\) and \(k\) are \(1\), \(2\), \(2\), and \(96\) respectively. The values of \(w\), \(c_{1}\) and \(c_{2}\) are selected for the better optimal solution as mentioned in [1]. \[\texttt{TF}({V_{n}}^{t+1})=\begin{cases}\frac{2}{1+\exp(-a{V_{n}}^{t+1})}-1,&{ V_{n}}^{t+1}>0\\ 1-\frac{2}{1+\exp(-a{V_{n}}^{t+1})},&{V_{n}}^{t+1}\leq 0,\end{cases} \tag{3}\] where \(a\) in 3 is defined as \[a=e-((e-d)/i), \tag{4}\] where \(\texttt{TF}({V_{n}}^{t+1})\) is the transfer function of the velocity of \(n^{th}\) particle at \((t+1)\) iterations, as mentioned in [1], \(a\) is an iterative parameter called transfer factor, \(i\) is the number of iterations, \(e\) and \(d\) are the maximum and minimum transfer factors, respectively. The values of \(e\) and \(d\) are \(2\) and \(1\), respectively, for early convergence of the algorithm. The position of the \(n^{th}\) particle at the \((t+2)\) iteration is given by \[{X_{n}}^{t+2}=\begin{cases}1,&\texttt{TF}({V_{n}}^{t+1})>r,\,{X_{n}}^{t+1}=0\\ 0,&\texttt{TF}({V_{n}}^{t+1})>r,\,{X_{n}}^{t+1}=1\\ {X_{n}}^{t+1},&\texttt{TF}({V_{n}}^{t+1})\leq r.\end{cases} \tag{5}\] The simulation results from Table 1 show that the BPSO algorithm effectively optimizes the sensitivity of the IDC sensor by optimizing the cell pattern. The optimal cell pattern obtained from the algorithm maximizes the sensitivity of the IDC sensor by maximizing the normalized frequency. The proposed approach can be used for designing and optimizing IDC sensors for various applications. #### Artificial Bee Colony Algorithm Artificial Bee Colony (ABC) optimization [1] is a metaheuristic algorithm based on the foraging behaviour of honeybees. The algorithm mimics the food source exploration of bees and uses a population of artificial bees to search for the optimal solution in a given search space. The ABC algorithm works by initializing a population of artificial bees and assigning them to search for food sources (candidate solutions). Three types of bees are used in the algorithm: employed bees, onlookers bees and scouts bees. Each food source represents a possible solution to the optimization problem: a combination of cell patterns for the IDC sensor. During the search process, the bees can exploit the information contained in the previously found food sources and explore new ones using local and global search strategies. The global search is performed by scout bees that randomly generate new solutions, while the local search is performed by employed bees that exploit the information contained in the food sources already found. The ABC algorithm iteratively performs the search process until a termination criterion is met. 
At the end of the search process, the best solution found is returned as the optimal cell pattern for the IDC sensor. **Ant Colony Optimisation.** Ant Colony Optimization (ACO) (Dorigo, Birattari, and Stutzle 2006) is another meta-heuristic algorithm inspired by the foraging behaviour of ants. The ACO algorithm works by initializing a population of artificial ants and assigning them to search for a food source (candidate solution). Each food source represents a possible solution to the optimization problem, which is a combination of cell patterns for the IDC sensor. During the search process, the ants can exploit the information in the previously found food sources and explore new ones using local and global search strategies. The global search is performed by pheromone trails, which represent the information about the quality of food sources found so far. The ants are more likely to choose food sources with higher pheromone concentrations, which indicates better solutions. The local search is performed by the ants' ability to evaluate the quality of the cell pattern and adjust it accordingly. The ACO algorithm updates the quality of each food source (candidate solution) by using the fitness function. The algorithm also employs the concept of pheromones to encourage the exploration of new solutions. The amount of pheromone deposited on a food source is proportional to its quality, and the ants are more likely to choose food sources with high pheromone concentrations. The ACO algorithm iteratively performs the search process until a termination criterion is met, such as a maximum number of iterations or reaching a predefined fitness value. At the end of the search process, the best solution found is returned as the optimal cell pattern for the IDC sensor. **Simulated Annealing.** Simulated Annealing (SA) Optimisation (Nikolaev and Jacobson 2010) is a probabilistic optimization algorithm inspired by the annealing process in metallurgy. The SA algorithm works by initializing a random solution (a cell pattern for the IDC sensor) and evaluating its fitness value (the normalized frequency). The algorithm then randomly perturbs the current solution and evaluates the fitness value of the new solution. If the new solution has a better fitness value, it is accepted as the current solu tion. However, if the new solution has a worse fitness value, it may still be accepted with a certain probability that decreases over time. The acceptance probability is controlled by a parameter called the temperature, which decreases over time according to a cooling schedule. The cooling schedule determines the rate at which the temperature decreases and is crucial for the success of the algorithm. At high temperatures, the algorithm is allowed to accept solutions with worse fitness values, which can help the algorithm escape from local optima. As the temperature decreases, the algorithm becomes more selective and only accepts solutions that improve the fitness value. The SA algorithm iteratively performs the perturbation and acceptance process until a termination criterion is met, such as a maximum number of iterations or reaching a predefined fitness value. At the end of the search process, the best solution found is returned as the optimal cell pattern for the IDC sensor. 
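For concreteness, a minimal sketch of such an SA loop over the 96-bit cell pattern is given below; the single-bit-flip neighbourhood, the geometric cooling schedule, and all numerical settings are illustrative assumptions rather than the configuration used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_annealing(cost, k=96, max_iter=500, T0=1.0, cooling=0.995):
    """Minimal SA loop: random single-bit-flip neighbours, Metropolis acceptance,
    geometric cooling. `cost` plays the role of the objective of Eq. 1."""
    x = rng.integers(0, 2, size=k)
    fx = cost(x)
    best, best_cost, T = x.copy(), fx, T0
    for _ in range(max_iter):
        y = x.copy()
        y[rng.integers(k)] ^= 1                      # flip one randomly chosen cell
        fy = cost(y)
        # always accept improvements; accept worse solutions with prob. exp(-dE/T)
        if fy < fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_cost:
                best, best_cost = x.copy(), fx
        T *= cooling                                 # geometric cooling schedule
    return best, best_cost
```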
``` 1:Initialize population of ants with random cell patterns 2:Calculate cost for each solution 3:Initialize pheromone matrix 4:Initialize global best solution and its cost 5:foriteration\(=1,2,\ldots,\texttt{maxIter}\)do 6:foreachAnt do 7: Construct a solution using pheromone trail and heuristic information 8: Calculate Cost of new position 9:ifcost\(\_\)new\(<\)globalBestCost then 10: Update Global best solution 11:endif 12: Update pheromone trail using eq. [needs to be added] 13:endfor 14: Update pheromone trail using elitist strategy 15:endfor 16:Return global best solution ``` **Algorithm 3** Ant Colony Optimization Ant Lion Optimization.The Ant Lion Optimisation (ALO) [11] algorithm is inspired by the interaction between antlons and ants, where antlons construct traps to capture ants, and ants exhibit gradient-based movement to avoid falling into the traps. This interaction is adapted into an optimization algorithm that efficiently explores the search space. The ALO algorithm initializes a population of antlons and ants, with each individual representing a candidate solution (cell pattern) for the IDC sensor. Antlons update their positions based on the fitness values of the current solutions. Stronger antlons, corresponding to higher fitness values, construct traps in regions likely to contain better solutions. Ants move within the search space using a gradient-based strategy, adjusting their positions towards regions with higher fitness values. This movement encourages the exploration of potential solutions. The fitness function evaluates each ant's cell pattern's fitness (normalized frequency). This step provides the reinforcement signal for the subsequent updates. Ants perform local searches in the vicinity of their positions, further exploring the search space. This local search enables the exploitation of regions with high fitness, enhancing the algorithm's convergence capabilities. Based on their local search and the positions of the traps, ants update their positions to avoid falling into traps and navigate towards high-fitness regions. ``` 1:Initialize the positions of antlons 2:Calculate the cost values of antlons 3:Save the best antlon and its position (elite antlion) 4:foriteration\(=1,2,\ldots,\texttt{maxIter}\)do 5:foreachAntlon do 6: Select antlion using the roulette wheel method 7: Slide randomly walking ants in a trap 8: Generate ant's random walk route around elite antlion 9: Generate the ant's random walk route around the selected antlion 10: Normalize random walks 11: Calculate the position of the ant 12:endfor 13: Calculate the cost values of ants 14: Combine ants and antlions 15: Sort according to their costs and take the first population size 16: Update the elite antlion 17:endfor 18:Return best antlion ``` **Algorithm 5**: ALO Reinforcement Learning based Binary Particle Swarm Optimization.The Reinforcement Learning-based Binary Particle Swarm Optimization (RLBPSO) algorithm is an advanced hybrid optimization method that combines the strengths of reinforcement learning and the Binary Particle Swarm Optimization (BPSO) algorithm. This method is used to guide the adaption of parameters of BPSO in a way that improves the overall performance of the algorithm. The flow of this algorithm involves states, action space, reward function, learning algorithm, and online adaption process. 
The state is defined by three parameters which are iteration percentage, diversity function and the no improvement iteration, where iteration percentage means the percentage of the iterations during the execution of the BPSO, and it ranges from 0-1, diversity function can be defined as the average of the Euclidean distances between each particle and the best particle and the no improvement iteration means the stagnant growth duration input, as stated by Equations 7, 8, 9 and 10 in [23]. The action or the output of the action network is used to control the BPSO parameters \(w,c_{1}\) and \(c_{2}\), referenced from Equations 11 and 12 in [23]. The reward function for the RL agent is used to measure the reward value after the execution of an action to find the better global optimum. The reward function is mentioned in Equation 13 of [23]. This technique employs two neural networks to enhance the performance of the optimization process: the actor-network and the action-value network. The actor network is specifically trained to assist the particles in the PSO in selecting optimal parameters for their current states. It takes as input three components: the proportion of iterations, the percentage of iterations without improvements, and the diversity of the swarm. Based on this information, the actor-network divides all particles into several groups, each with its action. In PSO, actions typically control parameters such as \(w,c_{1},\) and \(c_{2}\), but they can control any parameter if necessary. The action-value network, on the other hand, is responsible for evaluating the performance of the actor network. It provides gradients for training the actor network and helps to fine-tune its performance. A reward function is used to train both networks. The design of this function is straightforward and aims to motivate the PSO to produce better solutions with each iteration. The velocity update equation, after calculating the parameters from the action network, is \[V(t+1)_{i}^{d} =w\times V(t)_{i}^{d} \tag{6}\] \[+c_{1}\times r_{1}\times(\texttt{pbest}_{fi(d)}^{d}-X_{i}^{d})\] (7) \[+c_{2}\times r_{2}\times(\texttt{gbest}_{i}^{d}-X_{i}^{d})\] (8) \[+c_{3}\times r_{3}\times(\texttt{pbest}_{i}^{d}-X_{i}^{d}). \tag{9}\] ## Numerical Experiments ### Experimental Setup In this work, we are using CST studio suite software to find the sensitivity of the biosensor. The input to this software is a 96-size binary array, which states the design of the biosensor to the software. In optimizing the biosensor's design, we use the inverse of biosensor sensitivity as a cost function for the optimization algorithm. Our optimization goal is to increase biosensor sensitivity, so algorithms aim to find the minimum cost value. Assuming that the time CST software takes to find the cost is f (independent of the time taken by algorithms) and The time complexity of finding the cost of one design pattern is O(f) for all algorithms. ### Comparisons In Table 1 (a), we compare the costs of the proposed method RLBPSO with other methods for 1.5 GHz IDC design and in Table 1 (b), we show for 5GHz IDC design. We find that the proposed method gives the lowest cost among all the methods compared. The second best is achieved by ALO followed by BPSO. Hence, the reinforcement learning approach improves on BPSO method. For SA, random mutations gave better results compared to SA with swap mutation. In Table 2, we compare the time for the proposed method RLBPSO with other methods. We show times for three runs. 
We observe that SA annealing takes the least time, followed by BPSO and RLBPSO. Although, SA is fastest, the cost achieved by SA is very high as seen in Table 1 for both 1.5GHz and 5GHz IDC design. Hence, there is a trade-off for better sensor design at the cost of higher run time. In Table 3, we show the optimal hyperparameters used for experiments. In BPSO, The particles are randomly initialized in the search space. The time and space complexity of initialization is \(O(ND)\), where \(N\) is the number of particles, and \(D\) is the problem's dimensionality. For each particle, the fitness function is evaluated to determine its quality. So, the time complexity of fitness evaluation will be \(O(Nf)\). In each iteration, the particles update their positions and velocities based on their current positions, velocities, and the best positions. The time complexity of updating the positions and velocities is \(O(ND)\), and for updating the particle's best solution and the global best solution is \(O(N)\). The algorithm continues iterating until a termination condition is met, which could be a maximum number of iterations (maxItr) or reaching a desired fitness value. The time complexity of checking the termination condition is typically \(O(\texttt{maxItr})\). The total time taken by PSO algorithm is \(O(\texttt{maxIt}(O(ND)+O(Nf)+O(N)+O(N)))\). So, the overall time complexity is \(O(\texttt{maxIt}(N(D+f+2))\) and space complexity is \(O(ND)\). ### Abc In the Artificial Bee Colony (ABC) algorithm, Initialize the populations of employed, onlooker and scout bees in the search space, which takes \(O((EB+OB+SB)D)\) space and time complexity, where \(EB\), \(OB\) and \(SB\) are the total numbers of employed, onlooker and scout bees and \(D\) is the space dimensions. For every iteration step, Each employed bee explores a solution within its neighbourhood, calculates its cost function, and shares all information with onlooker bees. Onlooker bees select employer bees' location based on the cost using probabilistic algorithms, perform some local search around the employed bees' solution, and evaluate the cost function of these solutions. And then, the scout bees identify solutions that have yet to improve for a certain number of iterations and randomly generate new solutions. So, the overall time taken by every iteration step is \(O((EB+OB+SB)f)\). Moreover, the maximum number of iterations maxItr is used as a termination criterion. The total time complexity is \(O(\texttt{maxIt}((EB+OB+SB)(f+D)))\) and space complexity is \(O((EB+OB+SB)D)\). ### Aco In Ant Colony Optimization (ACO) algorithm, the ants' population and the pheromone trail matrix are randomly initialized in the search space; both the time and space will take \(O(ND)\) for the ants' population and \(O(D^{2})\) for the pheromone trail matrix, where \(N\) is the total number of ants. \(D\) is the dimensionality of the search space. For every iteration, the ants construct solutions by iteratively choosing the next component based on the pheromone trails and heuristic information and calculating its cost function, which takes \(O(Nf)\). After, all ants complete finding their solutions, the pheromone trails are updated based on their cost function value. Moreover, using the graph traversal method, the pheromone matrix is updated in \(O(N+D)\) complexity. The total number of iterations used is maxIt as termination criteria. So, the overall time taken by ACO is \(O(\texttt{maxIt}(Nf+N+D))\). 
The time complexity is \(O(\texttt{maxIt}(N(f+1)+D))\) and the space complexity is \(O(ND+D^{2})=O(D(N+D))\).

Table 1: Best cost optimization algorithms comparison. Table 2: Time Comparisons. Time shown in seconds. Table 3: Hyperparameters.

### Sa

In the Simulated Annealing (SA) optimization algorithm, the complexity analysis considers several factors, including the number of iterations, the dimensionality of the problem space (\(D\)), the chosen cooling schedule, and the generation of neighbouring solutions. The SA algorithm begins by randomly initializing a design pattern. During each iteration, neighbouring solutions are generated by applying perturbations or transformations to the current solution. We used random and swap mutation methods to generate neighbouring solutions in our implementation. The time complexity for finding neighbouring solutions using these methods is \(O(D)\), where \(D\) represents the dimensionality of the problem space. After generating a neighbouring solution, the algorithm evaluates its cost using a cost function. The evaluation of the cost function takes \(O(f)\) time. The algorithm then updates the best cost and best pattern by comparing them with the previous best pattern, which takes \(O(D)\) time. The algorithm terminates after a maximum number of iterations, which serves as a termination criterion. Therefore, the overall time complexity of the SA algorithm can be expressed as \(O(\texttt{maxIt}(2D+f))\), where maxIt denotes the maximum number of iterations. Additionally, the space complexity of the SA algorithm is \(O(D)\), since it requires storing the current pattern and the best pattern during the optimization process.

### Alo

In the Ant Lion Optimization (ALO) algorithm, the time complexity depends on the number of ants and antlions, the roulette-wheel selection used to pick antlions, the dimensionality, and the termination condition. The initial population of antlions and ants is randomly initialized in the search space. The time complexity of initialization is \(O(ND)\), where \(N\) is the number of antlions and \(D\) is the dimensionality of the problem, and the space complexity is \(O(2ND)\) for initializing the positions of both ants and antlions. Then, the cost of each antlion in the population is evaluated using the objective function and the antlions are sorted based on their fitness values in descending order, which takes \(O(Nf)\) time. In each iteration, the algorithm creates a pit for each antlion based on its fitness value; the cost value determines the depth and size of the pit, where a lower cost value corresponds to a deeper and larger pit. Each antlion performs a random walk within a certain radius around its current position. As one ant is prey to only one antlion, the ant follows a random walk, and a roulette-wheel selection is used to pick the antlion based on its fitness, which takes \(O(N)\) time in the worst case. In every iteration, one antlion is selected with the roulette-wheel operator for each ant, because one ant is prey to only one antlion, and the ant updates its position based on the random walks around the selected antlion and the elite antlion. This step takes \(O(N^{2}+ND)\). Then, the cost of each ant is calculated at its updated position and the antlion positions are updated with their prey positions, which takes \(O(Nf)\).
The best antlion position is then determined from the fitness values and the elite (best) solution is updated, which takes \(O(N)\). The algorithm terminates when the total number of iterations (maxIt) is reached. The total time taken by ALO is therefore \(O(\texttt{maxIt}(N^{2}+ND+Nf+N))\), i.e., the overall time complexity is \(O(\texttt{maxIt}(N(N+D+f+1)))\), and the space complexity is \(O(2ND)\).
2303.05650
Kramers-Kronig relation in gravitational lensing
The Kramers-Kronig relation is a well-known relation, especially in the field of optics. The key to this relation is the causality that output comes only after input. We first show that gravitational lensing obeys the causality in the sense that (electromagnetic/gravitational) waves emitted from the source arrive at an observer only after the arrival of the signal in geometrical optics. This is done by extending the previous work which is based on the thin lens approximation. We then derive the Kramers-Kronig relation in gravitational lensing, as the relation between real and imaginary parts of the amplification factor, which is the amplitude ratio of the lensed wave to the unlensed wave. As a byproduct, we find a new relation that equates integration of the square of the real part of the amplification factor over frequency to that for the imaginary part of the amplification factor. We also obtain a sum rule which relates the integral of the imaginary part of the amplification factor with the magnification of the first arrival image in geometrical optics. Finally, we argue that an incorrect separation of the observed gravitational waveform into the amplification factor and the unlensed waveform generically leads to the violation of the Kramers-Kronig relation. Our work suggests that examining the violation of the Kramers-Kronig relation may be used for correctly extracting the lensing signal in the gravitational wave observations.
So Tanaka, Teruaki Suyama
2023-03-10T01:56:16Z
http://arxiv.org/abs/2303.05650v2
# Kramers-Kronig relation in gravitational lensing ###### Abstract The Kramers-Kronig relation is a well-known relation, especially in the field of optics. The key to this relation is the causality that output comes only after input. We first show that gravitational lensing obeys the causality in the sense that (electromagnetic/gravitational) waves emitted from the source arrive at an observer only after the arrival of the signal in geometrical optics. This is done by extending the previous work which is based on the thin lens approximation. We then derive the Kramers-Kronig relation in gravitational lensing, as the relation between real and imaginary parts of the amplification factor, which is the amplitude ratio of the lensed wave to the unlensed wave. As a byproduct, we find a new relation that equates integration of the square of the real part of the amplification factor over frequency to that for the imaginary part of the amplification factor. We also obtain a sum rule which relates the integral of the imaginary part of the amplification factor with the magnification of the first arrival image in geometrical optics. Finally, we argue that an incorrect separation of the observed gravitational waveform into the amplification factor and the unlensed waveform generically leads to the violation of the Kramers-Kronig relation. Our work suggests that examining the violation of the Kramers-Kronig relation may be used for correctly extracting the lensing signal in the gravitational wave observations. ## 1 Introduction Light passing through a gravitational field is bent. This phenomenon is known as gravitational lensing (GL) [1, 2], and gravitational waves (GWs) are also subject to this effect [3, 4]. One of the prominent features of GWs over light is their long wavelength nature. Because of this, in some cases, geometrical optics which only deals with null geodesics breaks down and the propagation of GWs should be described by wave optics [5, 6]. In the regime of wave optics, the lensed GWs carry more information about a lens object since they pass through a more extended region due to diffraction [7]. Recently, there is a discussion about the arrival time difference between light and GWs due to GL [8, 9, 10]. It was argued in [9] that GWs never arrive earlier than light if they depart from the source at the same time. This can be rephrased as that GL signal in wave optics regime comes only after the signal in geometrical optics. This fact gives us inspiration that the Kramers-Kronig (K-K) relation, which is satisfied as long as any system under consideration satisfies the causality condition that output comes only after input, also holds in GL. The K-K relation is directly derived from causality, and actually, it is the relation between real and imaginary parts of a response function [11]. It is often used in the field of optics, for example, as a relation between the refractive index and the extinction coefficient. The typical application is optical data inversion [12]: we can obtain data of the refractive index from that of the extinction coefficient, or vice versa. However, to the best of our knowledge, the K-K relation has never been discussed in the context of GL. This observation is sufficient to motivate us to clarify the K-K relation in GL and to investigate potential applications to the observations of GL of GWs. In this paper, we first revisit the causality of GL. 
While the previous study used the so-called thin-lens approximation [13] to show the causality, we provide an explicit proof that the causality holds true without resorting to the thin-lens approximation. In Sec. 3, we derive the K-K relation in GL which gives a non-trivial relation between the real part of and the imaginary part of the amplification factor. We also derive some relations which directly follow from the K-K relation. In Sec. 4, we argue that an incorrect separation of the observed gravitational waveform into the amplification factor and the unlensed waveform generically leads to the violation of the Kramers-Kronig relation. Given that it is observationally challenging to discern a lensing effect from a characteristic of a source [14], examining the violation of the Kramers-Kronig relation has a potential to correctly extract the lensing signal in the gravitational wave observations. ## 2 Causality of gravitational lensing In this section, we investigate the causality of GL, which is needed to derive the K-K relation. Propagating waves are either GWs or electromagnetic waves both of which have polarization degrees and the waves can be written as a product of wave amplitude \(\phi\) and polarization vector/tensor. The change of the polarization vector/tensor due to GL is suppressed by the gravitational potential (\(\ll 1\)) [2] and we ignore the polarization in this paper. The background metric \(g^{B}_{\mu\nu}\) on which the wave propagates is given by \[ds^{2}=g^{B}_{\mu\nu}dx^{\mu}dx^{\nu}=-(1+2\Phi)dt^{2}+(1-2\Phi)d\mathbf{x}^{2}, \tag{2.1}\] where \(\Phi\) is the gravitational potential of the lensing objects #1. The wave equation for \(\phi\) is that for a massless scalar field [15] Footnote #1: Here we ignore the expansion of the Universe because it is not important in this discussion. \[\partial_{\mu}(\sqrt{-g^{B}}g^{\mu\nu}_{B}\partial_{\nu}\phi)=0. \tag{2.2}\] he lensed wave \(\phi_{L}\) is a solution of this equation. To represent the effects of GL, it is customary to move to the frequency domain where the lensed waveform is simply given a product of the unlensed waveform and the amplification factor \(F(\omega)\): \[\phi_{L}(\omega)=F(\omega)\phi_{0}(\omega), \tag{2.3}\] where \(\phi_{L}(\omega)/\phi_{0}(\omega)\) are the lensed/unlensed wave in the frequency domain, evaluated at the observer's position. Then Eq. (2.2) can be solved in terms of \(F\) and the formal solution is given by the path integral [5]: \[F(\omega)=\int\mathcal{D}\boldsymbol{\theta}(r)e^{i\omega T[\boldsymbol{ \theta}]}, \tag{2.4}\] where \[T[\boldsymbol{\theta}]=\int_{0}^{r_{0}}dr\Bigg{[}\frac{1}{2}r^{2}\bigg{(} \frac{d\boldsymbol{\theta}}{dr}\bigg{)}^{2}-\Phi(r,\boldsymbol{\theta}(r)) \Bigg{]}, \tag{2.5}\] and \(\boldsymbol{\theta}\) is a two-dimensional angular vector perpendicular to the line of sight, \(r\) is a radial coordinate along the line of sight (the observer is located at \(r=0\) and the source is at \(r=r_{0}\)) (see Fig. 1). The time dependence of \(\Phi\), which can arise when the source varies in time, is encoded in \(r\) dependence through \(r-t=\text{const.}\). The first term of Eq. (2.5) represents the deviation of a path from the straight line, while the second term represents the time delay caused by the gravitational potential. What we want to compute is the Fourier transform of \(F(\omega)\), and to do so we first Figure 1: Schematic picture of GL. The dotted line represents a path of the waves, and all paths contribute to the path integral. 
discretize the path integral. We divide the distance to the source into N parts and define \[r_{j} \equiv j\Delta r, \tag{2.6}\] \[\mathbf{\theta}_{j} \equiv \mathbf{\theta}(r_{j}),\] (2.7) \[\Phi_{j} \equiv \Phi(r_{j},\mathbf{\theta}(r_{j})),\] (2.8) \[T_{j} \equiv \Delta r\Bigg{[}\frac{1}{2}r_{j}^{2}\Bigg{(}\frac{\mathbf{\theta}_{j +1}-\mathbf{\theta}_{j}}{\Delta r}\Bigg{)}^{2}-\Phi_{j}\Bigg{]}, \tag{2.9}\] where \(\Delta r=r_{0}/N\). Then we get \[F(\omega) = \int\Bigg{(}\prod_{j=1}^{N-1}N_{j}d^{2}\theta_{j}\Bigg{)}\exp \Bigg{(}i\omega\sum_{i=1}^{N-1}T_{j}\Bigg{)} \tag{2.10}\] \[= \int\prod_{j=1}^{N-1}d^{2}\theta_{j}F_{j},\] where \[F_{j}\equiv N_{j}\exp\Bigg{(}i\omega\sum_{i=1}^{N-1}T_{j}\Bigg{)}, \tag{2.11}\] and \[N_{j}=\frac{\omega r_{j}^{2}}{2\pi i\Delta r} \tag{2.12}\] is the normalization factor required for F=1 in the absence of the gravitational potential. The Fourier transform of \(F_{j}(\omega)\) is \[f_{j}(t) = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}F_{j}(\omega)e^{-i \omega t} \tag{2.13}\] \[= \frac{r_{j}^{2}}{2\pi\Delta r}\frac{d}{dt}\delta(t-T_{j}),\] hence that of \(F(\omega)\) becomes \[f(t) = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}F(\omega)e^{-i\omega t} \tag{2.14}\] \[= \int\Bigg{(}\prod_{j}d^{2}\theta_{j}\Bigg{)}\int_{-\infty}^{ \infty}dt_{2}\cdots dt_{N-1}f_{1}(t-t_{2})\cdots f_{N-2}(t_{N-2}-t_{N-1})f_{ N-1}(t_{N-1})\] \[= \int\Bigg{(}\prod_{j}\frac{r_{j}^{2}}{2\pi i\Delta r}d^{2}\theta _{j}\Bigg{)}\int_{-\infty}^{\infty}dt_{2}\cdots dt_{N-1}\] \[\frac{d}{dt}\delta(t-t_{2}-T_{1})\cdots\frac{d}{dt_{N-2}}\delta( t_{N-2}-t_{N-1}-T_{N-2})\frac{d}{dt_{N-1}}\delta(t_{N-1}-T_{N-1})\] \[= \int\Bigg{(}\prod_{j}\frac{r_{j}^{2}}{2\pi i\Delta r}d^{2}\theta _{j}\Bigg{)}\frac{d^{N-1}}{dt^{N-1}}\delta(t-\sum_{j}T_{j}),\] where we have used the fact that the Fourier transform of a product becomes a convolution integral and the properties of the delta function: \[g(t^{\prime})\frac{d^{n}}{dt^{n}}\delta(t-t^{\prime})=g(t^{\prime})(-1)^{n}\frac{ d^{n}}{dt^{\prime n}}\delta(t^{\prime}-t)=\delta(t^{\prime}-t)\frac{d^{n}}{dt^{n}}g(t). \tag{2.15}\] Taking \(N\rightarrow\infty\) limit, Eq. (2.14) yields \[f(t)\propto\frac{d^{\infty}}{dt^{\infty}}\int\mathcal{D}\boldsymbol{\theta} \ \delta(t-T[\boldsymbol{\theta}]), \tag{2.16}\] thus we conclude that \[f(t)=0\quad\text{if}\quad t<\,T_{min}\equiv\min_{\boldsymbol{\theta}}\{\,T[ \boldsymbol{\theta}]\}. \tag{2.17}\] This means that there is no lensing signal before \(t=T_{min}\). Since \(T_{min}\) is the time delay in the geometric optics limit, we can conclude that GWs never arrive earlier than light if these are emitted at the same time. This is the causality of GL which is crucial to prove the Kramers-Kronig relation for the amplification factor in the next section. ## 3 Kramers-Kronig relation ### Derivation In this subsection, we derive the K-K relation in GL. All that is needed for this is the causality and the asymptotic behavior of \(F(\omega)\)[11]. First, let us verify the analytic behavior of \(F(\omega)\) that is related to the causality. From Eq. (2.17), \(F(\omega)\) can be written as \[F(\omega)=\int_{T_{min}}^{\infty}dt\ f(t)e^{i\omega t}, \tag{3.1}\] or \[F_{ph}(\omega)\equiv F(\omega)e^{-i\omega T_{min}}=\int_{0}^{\infty}dt\ f(t+T _{min})e^{i\omega t}. \tag{3.2}\] Then \(F_{ph}(\omega)\) can be analytically continued to the upper half of the complex \(\omega\)-plane (we shall write \(I_{+}\)): \[F_{ph}(u+iv)=\int_{0}^{\infty}dt\ f(t+T_{min})e^{iut}e^{-vt}, \tag{3.3}\] where \(v>0\). 
If we assume that \(F_{ph}(\omega)\) does not have any poles on the real axis (this is physically reasonable), then \(F_{ph}(\omega)\) is also regular in \(I^{+}\), because the term \(e^{-vt}\) only improves the convergence of the integral. Furthermore, \(F_{ph}(\omega)\) has its physical meaning. Since the time delay \(T_{min}\) itself is not directly measurable, it is sensible to remove this degree of freedom and to use \(F_{ph}(\omega)\) rather than \(F(\omega)\). From now on, we focus on \(F_{ph}(\omega)\) and use \(F(\omega)\) for \(F_{ph}(\omega)\). Besides, we have to know the asymptotic behavior of \(F(\omega)\). In \(\omega\to\infty\) limit, excepting some special cases#2, \(F(\omega)\) does not diverge: Footnote #2: In the case of the point mass lens with the impact parameter \(y=0\), \(F(\omega)\) diverges as \(F(\omega)\propto\sqrt{\omega}\). However, in this case Eq. (3.7) also holds because \(|G(\omega)|\to 0\). \[|F(\omega)|<C, \tag{3.4}\] where \(C\) is some constant. On the other hand, in \(\omega\to 0\) limit, \(F(0)=1\) because waves with extremely long wavelengths do not feel the gravitational field. This can also be understood from Eq. (2.2). When \(\omega=0\), any time derivatives in Eq. (2.2) disappear and the wave equation coincides with the free propagation equation, thus \(\phi_{L}(0)=\phi_{0}(0)\). From above, if we define \(G(\omega)\) by \[G(\omega)\equiv\frac{F(\omega)-1}{\omega}, \tag{3.5}\] then \(G(\omega)\) has no poles on the real axis #3 and in \(I_{+}\), and in \(\omega\to\infty\) limit \(|G(\omega)|\to 0\). Therefore, by using Cauchy's integral theorem with the path \(\Gamma\) shown in Fig. 2, the following equation holds: Footnote #3: It is possible for \(G(\omega)\) to diverge at \(\omega=0\), for example, in the case that \(F(\omega)\) contains a term like \(\omega\ln\omega\). However, even in this case, there is no divergence on \(\Gamma\) in Fig. 2 since \(|G(\omega)|<1/\omega\) (\(\omega\to 0\)) and improper integral converges. \[G(\omega)=\frac{1}{\pi i}\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{ \omega^{\prime}-\omega}G(\omega^{\prime}) \tag{3.6}\] or \[F(\omega)=1+\frac{\omega}{\pi i}\int_{-\infty}^{\infty}\frac{d\omega^{\prime }}{\omega^{\prime}-\omega}\frac{F(\omega^{\prime})-1}{\omega^{\prime}}, \tag{3.7}\] where \[\int_{-\infty}^{\infty}\equiv\lim_{\epsilon\to 0}\left(\int_{-\infty}^{ \omega-\epsilon}+\int_{\omega+\epsilon}^{\infty}\right) \tag{3.8}\] Figure 2: Path of the complex integral. The small semicircle’s contribution is the residue at \(\omega^{\prime}=\omega\), and the larger semicircle’s contribution is 0. denotes Cauchy's principal value. Eq. (3.7) is the K-K relation in GL, and this is the relation between the real and imaginary parts of \(F(\omega)\): \[\operatorname{Re}F(\omega) = 1+\frac{\omega}{\pi}\mathop{\mathchoice{\vbox{\hbox{$-$}} \kern-7.499886pt}{\vbox{\hbox{$-$}}\kern-6.374903pt}{\vbox{ \hbox{$-$}}\kern-4.499931pt}{\vbox{\hbox{$-$}} \kern-3.749943pt}}_{-\infty}^{\infty}\frac{d\omega^{\prime}}{\omega^{\prime}- \omega}\frac{\operatorname{Im}F(\omega^{\prime})}{\omega^{\prime}}, \tag{3.9}\] \[\operatorname{Im}F(\omega) = -\frac{\omega}{\pi}\mathop{\mathchoice{\vbox{\hbox{$-$}} \kern-7.499886pt}{\vbox{\hbox{$-$}}\kern-6.374903pt}{\vbox{ \hbox{$-$}}\kern-4.499931pt}{\vbox{\hbox{$-$}} \kern-3.749943pt}}_{-\infty}^{\infty}\frac{d\omega^{\prime}}{\omega^{\prime}- \omega}\frac{\operatorname{Re}F(\omega^{\prime})-1}{\omega^{\prime}}. 
\tag{3.10}\] These two are equivalent, and the K-K relation states that the real and imaginary parts must be related to make the system causal. In order to follow the notation used in the literature, let us define \(K(\omega)\) and \(S(\omega)\) by#4 Footnote #4: \(K\) and \(S\) coincide with the magnification and phase shift when \(K,S\ll 1\)[16]. Away from the weak lensing regime, \(K\) and \(S\) just represent the real and imaginary parts of \(F\), respectively. \[\operatorname{Re}F(\omega) \equiv 1+K(\omega), \tag{3.11}\] \[\operatorname{Im}F(\omega) \equiv S(\omega). \tag{3.12}\] In terms of these quantities, the K-K relation becomes \[\frac{K(\omega)}{\omega}=\frac{1}{\pi}\mathop{\mathchoice{ \vbox{\hbox{$-$}}\kern-7.499886pt}{\vbox{\hbox{$-$}} \kern-6.374903pt}{\vbox{\hbox{$-$}}\kern-4.499931pt}{\vbox{ \hbox{$-$}}\kern-3.749943pt}}_{-\infty}^{\infty}\frac{d\omega^{\prime}}{ \omega^{\prime}-\omega}\frac{S(\omega^{\prime})}{\omega^{\prime}}, \tag{3.13}\] \[\frac{S(\omega)}{\omega}=-\frac{1}{\pi}\mathop{\mathchoice{ \vbox{\hbox{$-$}}\kern-7.499886pt}{\vbox{\hbox{$-$}} \kern-6.374903pt}{\vbox{\hbox{$-$}}\kern-4.499931pt}{\vbox{ \hbox{$-$}}\kern-3.749943pt}}_{-\infty}^{\infty}\frac{d\omega^{\prime}}{ \omega^{\prime}-\omega}\frac{K(\omega^{\prime})}{\omega^{\prime}}. \tag{3.14}\] ### Confirmation of the Kramers-Kronig relation for some examples In this subsection, we consider two examples to confirm that the amplification factor, whose analytic expression is known in the literature, actually obeys the K-K relation derived above. #### 3.2.1 A point-mass lens The first example is the point-mass lens. The analytic expression of \(F(\omega)\) is given by [2] \[F(\omega)=\exp{\left[\frac{\pi w}{4}+\frac{iw}{2}\Bigl{(}\ln{\left(\frac{w}{2} \right)}-2\tau_{min}\Bigr{)}\right]}\Gamma\biggl{(}1-\frac{iw}{2}\biggr{)}{}_{ 1}F_{1}\biggl{(}\frac{iw}{2},1;\frac{iwy^{2}}{2}\biggr{)}, \tag{3.15}\] where \(w\equiv 4GM\omega\) with the lens mass \(M\) and \(y\) is the impact parameter, and \(\tau_{min}\) is the dimensionless time delay \[\tau_{min}=\frac{2}{(y+\sqrt{y^{2}+4})^{2}}-\ln{\left(\frac{y+\sqrt{y^{2}+4}}{ 2}\right)}. \tag{3.16}\] It is obvious from Fig. 3 that \(F(0)=1\) and \(|F(\infty)|<C\). Besides, \(F(u+iv)\) has poles at \(w=-2ni\) (\(n=1,2,\dots\)) that come from \(\Gamma\big{(}1-\frac{iw}{2}\big{)}\) but does not have any poles in \(I^{+}\). There is also a branch cut that comes from \(\ln w\), but this must be placed on the lower half of the complex \(\omega\)-plane in order to satisfy \(F(-\omega)=F^{*}(\omega)\) which comes from the condition that the wave in time domain is not complex but real. Therefore, \(F\) is regular in \(I^{+}\) and satisfies the K-K relation. #### 3.2.2 Born approximation The second example is the weak lensing in which the amplification factor is computed to linear order in \(\Phi\)[17]. 
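Before turning to the Born approximation in detail, a quick numerical aside on the point-mass example above may be useful. The following minimal sketch (our own helper names; it assumes the mpmath library is available) evaluates Eqs. (3.15)-(3.16) and checks the two properties used in the derivation of the K-K relation, namely \(F(0)=1\) and boundedness of \(|F(\omega)|\) at large frequencies.

```python
# Hedged sketch: point-mass-lens amplification factor of Eqs. (3.15)-(3.16).
import mpmath as mp

def tau_min(y):
    return 2 / (y + mp.sqrt(y**2 + 4))**2 - mp.log((y + mp.sqrt(y**2 + 4)) / 2)

def F_point_mass(w, y):
    """F as a function of the dimensionless frequency w = 4*G*M*omega."""
    phase = mp.exp(mp.pi * w / 4 + 1j * w / 2 * (mp.log(w / 2) - 2 * tau_min(y)))
    return phase * mp.gamma(1 - 1j * w / 2) * mp.hyp1f1(1j * w / 2, 1, 1j * w * y**2 / 2)

for w in (1e-6, 1.0, 10.0, 50.0):
    print(w, complex(F_point_mass(w, y=1.0)))   # tends to 1 as w -> 0, stays bounded
```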
In this approximation, \(K\) and \(S\) are given by (we also use the thin lens approximation for simplicity): \[K(\omega) = \int\frac{d^{2}k_{\perp}}{(2\pi)^{2}}\frac{\tilde{\Sigma}(\mathbf{k}_ {\perp})}{\Sigma_{0}}\frac{\sin{(r_{F}^{2}k_{\perp}^{2}/2)}}{r_{F}^{2}k_{ \perp}^{2}/2}, \tag{3.17}\] \[S(\omega) = \int\frac{d^{2}k_{\perp}}{(2\pi)^{2}}\frac{\tilde{\Sigma}(\mathbf{k}_ {\perp})}{\Sigma_{0}}\frac{\cos{(r_{F}^{2}k_{\perp}^{2}/2)}-1}{r_{F}^{2}k_{ \perp}^{2}/2}, \tag{3.18}\] where \(\tilde{\Sigma}(\mathbf{k}_{\perp})\) is the Fourier transformed surface mass density and \(\Sigma_{0}\) is a constant that has the dimension of surface mass density, and \[r_{F}(\omega)\equiv\sqrt{\frac{r_{l}(r_{0}-r_{l})}{\omega r_{0}}} \tag{3.19}\] is called the Fresnel scale [18] with lens position \(r=r_{l}\). Here, we show that Eq. (3.18) can be obtained from Eq. (3.17) by applying the K-K relation (3.14). In preparation for that, we define \(\Omega\equiv r_{l}(r_{0}-r_{l})k_{\perp}^{2}/2r_{0}\). Then using Eq. (3.17), the K-K relation (3.14) requires Figure 3: \(\operatorname{Re}F\) (solid) and \(\operatorname{Im}F\) (dashed) in the case of point mass lens with \(y=1\). that \(S(\omega)\) should be given by \[S(\omega) =-\frac{\omega}{\pi}\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{ \omega^{\prime}-\omega}\frac{1}{\omega^{\prime}}\int\frac{d^{2}k_{\perp}}{(2\pi )^{2}}\frac{\tilde{\Sigma}(\mathbf{k}_{\perp})}{\Sigma_{0}}\frac{\sin{(r_{F}^{2}( \omega^{\prime})k_{\perp}^{2}/2)}}{r_{F}^{2}(\omega^{\prime})k_{\perp}^{2}/2}\] \[=-\frac{\omega}{\pi}\int\frac{d^{2}k_{\perp}}{(2\pi)^{2}}\frac{ \tilde{\Sigma}(\mathbf{k}_{\perp})}{\Sigma_{0}}\int_{-\infty}^{\infty}\frac{d \omega^{\prime}}{\omega^{\prime}-\omega}\frac{1}{\Omega}\sin{\left(\frac{ \Omega}{\omega^{\prime}}\right)}. \tag{3.20}\] Then, using the formula [19, Eq. (5.129)] \[\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{\omega-\omega^{ \prime}}\sin{\left(\frac{\Omega}{\omega^{\prime}}\right)}=\mathcal{H}\!\left[ \sin{\frac{1}{t}}\right](\omega/\Omega)=\cos\!\left(\frac{\Omega}{\omega} \right)-1,\] where \(\mathcal{H}[\ ]\) denotes the Hilbert transform, \(S(\omega)\) given by Eq. (3.20) reproduces Eq. (3.18). Thus, the K-K relation holds, or in other words, the causality is satisfied in the Born approximation. ### Implications of the Kramers-Kronig relation In this subsection, we report some implications that directly follow from the K-K relation. #### 3.3.1 Relation between squares First, we show a new relation between the square of the real and imaginary parts of \(F\). Substituting Eq. (3.14) to Eq. 
(3.13), we get \[\frac{K(\omega)}{\omega} = -\frac{1}{\pi^{2}}\int_{-\infty}^{\infty}\frac{d\omega^{\prime}} {\omega^{\prime}-\omega}\int_{-\infty}^{\infty}\frac{d\omega^{\prime\prime}} {\omega^{\prime\prime}-\omega^{\prime}}\frac{K(\omega^{\prime\prime})}{\omega ^{\prime\prime}}, \tag{3.21}\] then we have \[\int_{-\infty}^{\infty}d\omega\frac{K^{2}(\omega)}{\omega^{2}} = -\frac{1}{\pi^{2}}\int_{-\infty}^{\infty}d\omega\frac{K(\omega)} {\omega}\int_{-\infty}^{\infty}\frac{d\omega^{\prime}}{\omega^{\prime}- \omega}\int_{-\infty}^{\infty}\frac{d\omega^{\prime\prime}}{\omega^{\prime \prime}-\omega^{\prime}}\frac{K(\omega^{\prime\prime})}{\omega^{\prime\prime}} \tag{3.22}\] \[= \frac{1}{\pi^{2}}\int_{-\infty}^{\infty}d\omega^{\prime}\int_{- \infty}^{\infty}\frac{d\omega}{\omega-\omega^{\prime}}\frac{K(\omega)}{ \omega}\int_{-\infty}^{\infty}\frac{d\omega^{\prime\prime}}{\omega^{\prime \prime}-\omega^{\prime}}\frac{K(\omega^{\prime\prime})}{\omega^{\prime\prime}}\] \[= \int_{-\infty}^{\infty}d\omega^{\prime}\frac{S^{2}(\omega^{ \prime})}{\omega^{\prime 2}},\] where we have exchanged the order of integration with respect to \(\omega\) and \(\omega^{\prime}\), and used Eq. (3.14) again. Finally, we get the new relation #5 Footnote #5: As it is clear from the derivation, the relation (3.23) holds not only for the amplification factor but also for any other response functions as long as the K-K relation of the type (3.13) and (3.14) holds true. \[\int_{-\infty}^{\infty}d\omega\frac{K^{2}(\omega)}{\omega^{2}}=\int_{-\infty }^{\infty}d\omega\frac{S^{2}(\omega)}{\omega^{2}}. \tag{3.23}\] This relation may become useful in future observations of GL caused by the dark matter inhomogeneities. In such observations, measurements of \(\langle K^{2}(\omega)\rangle\) and \(\langle S^{2}(\omega)\rangle\) enable us to determine the matter powerspectrum on sub-galactic scales and provide a novel avenue to probe small-scale matter fluctuations [17, 20]. In this respect, Eq. (3.23) can be used in principle to verify the correctness of observed \(K(\omega)\) and \(S(\omega)\) if the measurements cover a wide frequency range to allow estimation of both left and right hand sides of Eq. (3.23) to a good approximation. As a final remark of this subsection, it is straightforward to show that the relation (3.23) leads to \[\int_{-\infty}^{\infty}d\omega\frac{K^{2}(\omega)+S^{2}(\omega)}{\omega^{2}}= \int_{-\infty}^{\infty}d\omega\frac{K^{2}(2\omega)}{\omega^{2}}. \tag{3.24}\] On the other hand, it was shown in [21] that within the Born approximation for GL caused by dark matter fluctuations the variances of \(K\) and \(S\) satisfy the universal relation \[\langle K^{2}(\omega)\rangle+\langle S^{2}(\omega)\rangle=\langle K^{2}(2 \omega)\rangle. \tag{3.25}\] Because of the simplicity and universality of the relation, it is expected that there is a simple explanation for the relation (3.25) based on some fundamental physical principles. The coincidence between the relation (3.25) and the ensemble average of the integrand of Eq. (3.24) may suggest that the causality is partially responsible for the relation (3.25) to hold. #### 3.3.2 Sum rule Second, we show the so-called sum rule. The sum rule is known in the field of optics as the relation between the sum over all frequencies of absorption of a medium and its electric density [12]. To derive the GL version of this, we deform Eq. (3.13) like #6 Footnote #6: If we take the same procedure for Eq. (3.14), the result is trivial and we obtain nothing new. 
\[K(\omega) =\frac{1}{\pi}\fint_{-\infty}^{\infty}d\omega^{\prime}\bigg{(} \frac{\omega^{\prime}}{\omega^{\prime}-\omega}-1\bigg{)}\frac{S(\omega^{\prime })}{\omega^{\prime}}\] \[=\frac{1}{\pi}\fint_{-\infty}^{\infty}du\frac{S(\omega+u)}{u}- \frac{2}{\pi}\int_{0}^{\infty}d\omega^{\prime}\frac{S(\omega^{\prime})}{\omega ^{\prime}}, \tag{3.26}\] and take \(\omega\to\infty\) limit. To compute the right-hand side in this limit, we use the general expression of \(F\) in the geometric optics limit [5]: \[F(\omega)=\sqrt{\mu_{1}}+\sum_{j=2}^{n}\sqrt{\mu_{j}}\exp{[i(\omega T_{1j}-\pi n _{j})]}, \tag{3.27}\] where \(n\) is the number of images, \(\mu_{j}\) is the magnification factor of the j-th image size, \(T_{1j}\) is the arrival time difference between the 1st and j-th image, and \(n_{j}\) is some numbers but not important here. As can be seen from Eq. (3.27), the real part has the constant term \(\sqrt{\mu_{1}}\) while the imaginary part only has the oscillating terms. Therefore, if we take the average for \(\omega\) in \(\omega\to\infty\) limit, all the oscillating terms in Eq. (3.26) drop and we get \[\sqrt{\mu_{1}}=1-\frac{2}{\pi}\int_{0}^{\infty}d\omega\frac{S(\omega)}{\omega}. \tag{3.28}\] This is the GL version of the sum rule, whose meaning is that summing up the imaginary part of \(F\) for all frequencies yields the magnification factor of the earliest arriving image. We expect that this relation could be used as a consistency check in future observations. ## 4 Application of the Kramers-Kronig relation ### Method for determination of the amplification factor In this section, we investigate a potential application of the K-K relation to observations of GL of GWs. In observations of GWs, what we directly observe is the lensed waveform and the separation of the observed waveform into the unlensed waveform and the amplification factor requires additional procedures. One approach is to employ the unlensed waveform based on templates of some typical sources characterized by the source parameters and to determine the best fit parameters [22]. In this case, the amplification factor is determined by dividing the measured waveform with the unlensed template. However, if the template is determined incorrectly, the obtained amplification factor will also be different from the true one. Such an error will occur, for example, when the source is the precessing binary stars [23, 24] since the waveform of the unlensed precessing binary and the microlensed unprecessing binary are very similar [14]. Hence, one may mistake the effect of precession for that of GL, resulting in the wrong amplification factor. Here we consider the ideal situation with no measurement error and argue that the K-K relation, in principle, can tell us whether the measured amplification factor is truly due to GL or not. If the true unlensed waveform is \(\phi_{0}\) but we mistakenly employ a different template \(\hat{\phi}_{0}\), we obtain the incorrect amplification factor \(\hat{F}\) given by \[\hat{F}=\frac{\phi_{0}}{\hat{\phi}_{0}}F, \tag{4.1}\] where \(F\) is the correct one that satisfies the K-K relation. Now, we demonstrate that \(\hat{F}\), in general, does not satisfy the K-K relation. To this end, we decompose \(\hat{F}\) as \(\hat{F}=F+\delta F\) and assume an extreme case where the error \(\delta F\) only occurs in \(\omega_{1}\leq\omega\leq\omega_{2}\)#7: Footnote #7: This assumption is reasonable, at least in the case of the precessing binary. 
Because precession is the post-Newtonian correction [23], it is not negligible only at the end of the inspiral phase, which means that only the high-frequency region is modulated. \[\delta F(\omega)\begin{cases}\neq 0&\omega_{1}\leq\omega\leq\omega_{2}\\ =0&\omega<\omega_{1},\ \omega_{2}<\omega\end{cases}. \tag{4.2}\] Substituting \(\hat{F}\) for both sides of Eq. (3.7) and using that \(F\) satisfies the K-K relation, we have \[\Delta_{tem}(\omega) \equiv \hat{F}(\omega)-1-\frac{\omega}{\pi i}\fint_{-\infty}^{\infty}\frac {d\omega^{\prime}}{\omega^{\prime}-\omega}\frac{\hat{F}(\omega^{\prime})-1}{ \omega^{\prime}} \tag{4.3}\] \[= \delta F(\omega)-\frac{\omega}{\pi i}\bigg{(}\fint_{-\omega_{2}} ^{-\omega_{1}}+\fint_{\omega_{1}}^{\omega_{2}}\bigg{)}\frac{d\omega^{\prime}} {\omega^{\prime}-\omega}\frac{\delta F(\omega^{\prime})}{\omega^{\prime}}.\] In general, the second term is nonzero even when \(\omega<\omega_{1}\) or \(\omega_{2}<\omega\), for which case the first term is zero by assumption. Thus the K-K relation must be broken by the misselection of templates. This shows that testing whether \(\Delta_{tem}\) vanishes or not enables us to conclude whether the claimed lensing signal is correct or not #8. Footnote #8: If one uses templates not only for the unlensed waveform but also for the amplification factor based on some particular lens model, violation of the K-K relation does not appear. In real observations, there is another cause that leads to the violation of the K-K relation: truncation of the frequency range due to the limited sensitivity of GW detectors. When the observable frequency range is restricted to \([\omega_{min},\omega_{max}]\), computation of the integral in the RHS of Eq. (3.7) by using observational data is possible only when the range of integration is restricted to this range. Thus we must limit the integration as \[\fint_{-\infty}^{\infty}\rightarrow\fint_{-\omega_{max}}^{-\omega_{min}}+\fint _{\omega_{min}}^{\omega_{max}}. \tag{4.4}\] Then, this causes a further violation of the K-K relation in addition to \(\Delta_{tem}\). From above, the violation becomes \[\Delta(\omega) \equiv \hat{F}(\omega)-1-\frac{\omega}{\pi i}\bigg{(}\fint_{-\omega_{max }}^{-\omega_{min}}+\fint_{\omega_{min}}^{\omega_{max}}\bigg{)}\frac{d\omega^{ \prime}}{\omega^{\prime}-\omega}\frac{\hat{F}(\omega^{\prime})-1}{\omega^{ \prime}} \tag{4.5}\] \[= \Delta_{tr}(\omega)+\Delta_{tem}(\omega),\] where \[\Delta_{tr}(\omega) \equiv \frac{\omega}{\pi i}\bigg{(}\int_{-\omega_{min}}^{\omega_{min}}+ \int_{-\infty}^{-\omega_{max}}+\int_{\omega_{max}}^{\infty}\bigg{)}\frac{d \omega^{\prime}}{\omega^{\prime}-\omega}\frac{F(\omega^{\prime})-1}{\omega^{ \prime}} \tag{4.6}\] \[= \frac{2\omega}{\pi i}\bigg{(}\int_{0}^{\omega_{min}}+\int_{\omega _{max}}^{\infty}\bigg{)}\frac{d\omega^{\prime}}{\omega^{\prime 2}-\omega^{2}} \bigg{[}K(\omega^{\prime})+i\frac{\omega}{\omega^{\prime}}S(\omega^{\prime}) \bigg{]}\] is the contribution of the truncation, and we have used \(F(-\omega)=F^{*}(\omega)\) in the last line. 
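To see the effect of this truncation concretely before estimating it analytically, here is a hedged numerical sketch (it assumes scipy and mpmath are available, and re-implements the point-mass amplification factor of Eq. (3.15) inline so the snippet is self-contained). It evaluates the right-hand side of Eq. (3.13) with the integration range cut to \([\omega_{min},\omega_{max}]\) as in Eq. (4.4), folding the negative frequencies via \(S(-\omega)=-S(\omega)\), and compares the result with \(K(\omega)\); the gap is \(\operatorname{Re}\Delta_{tr}\).

```python
# Hedged sketch: truncated K-K reconstruction of K(w) for a point-mass lens,
# illustrating the violation caused by cutting the integration range.
import numpy as np
import mpmath as mp
from scipy.integrate import quad

def F(w, y=0.3):
    tmin = 2 / (y + np.sqrt(y**2 + 4))**2 - np.log((y + np.sqrt(y**2 + 4)) / 2)
    phase = mp.exp(mp.pi * w / 4 + 1j * w / 2 * (mp.log(w / 2) - 2 * tmin))
    return complex(phase * mp.gamma(1 - 1j * w / 2) * mp.hyp1f1(1j * w / 2, 1, 1j * w * y**2 / 2))

K = lambda w: F(w).real - 1.0
S = lambda w: F(w).imag

def K_truncated(w, wmin=0.05, wmax=20.0):
    # positive frequencies: the principal value is handled by the 'cauchy' weight
    pv, _ = quad(lambda u: S(u) / u, wmin, wmax, weight='cauchy', wvar=w, limit=200)
    # folded negative frequencies, using S(-u) = -S(u); no singularity here
    reg, _ = quad(lambda u: S(u) / (u * (u + w)), wmin, wmax, limit=200)
    return w / np.pi * (pv - reg)

for w in (0.5, 1.0, 2.0):
    print(w, K(w), K_truncated(w))   # offset is roughly constant, of order K(w_min)
```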
Considering the region \(\omega_{min}\ll\omega\ll\omega_{max}\), we get \[\mathop{\rm Re}\Delta_{tr}(\omega) \simeq C_{1}+C_{2}\bigg{(}\frac{\omega}{\omega_{max}}\bigg{)}^{2}, \tag{4.7}\] \[\mathop{\rm Im}\Delta_{tr}(\omega) \simeq D_{1}\frac{\omega_{min}}{\omega}+D_{2}\frac{\omega}{\omega_{max}}, \tag{4.8}\] here \[C_{1} = -\frac{2}{\pi}\int_{0}^{\omega_{min}}d\omega\frac{S(\omega)}{\omega}, \tag{4.9}\] \[C_{2} = \frac{2\omega_{max}^{2}}{\pi}\int_{\omega_{max}}^{\infty}d\omega \frac{S(\omega)}{\omega^{3}},\] (4.10) \[D_{1} = \frac{2}{\pi\omega_{min}}\int_{0}^{\omega_{min}}d\omega\ K( \omega),\] (4.11) \[D_{2} = -\frac{2\omega_{max}}{\pi}\int_{\omega_{max}}^{\infty}d\omega \frac{K(\omega)}{\omega^{2}}, \tag{4.12}\] are dimensionless constants. To know the magnitude of \(\Delta_{tr}\), we focus on \(\mathop{\rm Re}\Delta_{tr}\)#9. In the region \(\omega_{min}\ll\omega\ll\omega_{max}\), the second term of Eq. (4.7) is negligible. Moreover, assuming that \(\omega_{min}\) is so small that the weak lensing is a good description, we can calculate \(C_{1}\) explicitly by using the Born approximation. Substituting Eq. (3.18) into Eq. (4.9) yields Footnote #9: If we focus on \(\mathop{\rm Im}\Delta_{tr}\), we have to know the value of \(D_{1}\) and \(D_{2}\). However, we can not use the same method for \(D_{2}\) as for \(C_{1}\), because it is not reasonable to assume that \(\omega_{max}\) is in the weak lensing regime and use the Born approximation. This is why we only focus on \(\mathop{\rm Re}\Delta_{tr}\) \[C_{1}=\int\frac{d^{2}k_{\perp}}{(2\pi)^{2}}\frac{\tilde{\Sigma}(\mathbf{k}_{\perp} )}{\Sigma_{0}}W\big{(}r_{*}^{2}k_{\perp}^{2}/2\big{)}, \tag{4.13}\] where \(r_{*}\equiv r_{F}(\omega_{min})\) and #10 Footnote #10: Si(\(x\)) is the sine integral defined by \(\mathop{\rm Si}(x)\equiv\int_{0}^{x}\frac{\sin t}{t}dt\). \[W(x)\equiv-\frac{2}{\pi}\bigg{[}\frac{\cos x-1}{x}+\mathop{\rm Si}(x)-\frac{ \pi}{2}\bigg{]}. \tag{4.14}\] Figure 4: Comparison of \(W(x)\) and \(\sin x/x\). They have different shapes, but the order coincides in all regions. Let us compare \(W(x)\) with \(\sin x/x\), which is the filter function of Eq. (3.17). As can be seen from Fig. 4, \(W(x)\) and \(\sin x/x\) have similar shapes. Thus we can roughly say that \[C_{1}\sim K(\omega_{min}), \tag{4.15}\] which means that from the observed value of \(K(\omega_{min})\), we can predict how much the K-K relation is violated by the truncation. If the actual violation is larger than that prediction, we can assert that the selection of the template is wrong, by which we may be able to find correct unlensed waveform and the amplification factor. Finally, it is important to note that the above discussion does not depend on what the GWs source and lensing object are. ### Example: A point mass lens As a demonstration of the methodology described above for general lensing objects, we show how much the K-K relation is violated only by the truncation in the case of the point mass lens for which the amplification factor is given by Eq. (3.15). In Fig. 5, we plot \(K(\omega)\) and \[K^{(K-K)}(\omega)\equiv\frac{\omega}{\pi}\bigg{(}\int_{-\omega_{max}}^{-\omega _{min}}+\int_{\omega_{min}}^{\omega_{max}}\bigg{)}\frac{d\omega^{\prime}}{ \omega^{\prime}-\omega}\frac{S(\omega^{\prime})}{\omega^{\prime}}, \tag{4.16}\] which is identical to \(K(\omega)\) if there is no truncation. It can be seen from Fig. 
5 that \(K^{(K-K)}(\omega)\) is shifted from \(K(\omega)\) by approximately a constant, whose value is close to \(K(\omega_{min})\), as predicted in Eq. (4.15). Figure 5: Violation of the K-K relation by the truncation. The parameters are \(M=2\times 10^{6}M_{\odot}\), \(y=0.3\) and the frequency range is decided by \(f_{min}=10^{-3}\)[Hz] and \(f_{max}=10^{-1}\)[Hz]. The two lines are shifted by the constant \(C_{1}=0.25\), which is close to \(K_{min}=0.16\) as predicted. ## 5 Conclusion It is known that the Kramers-Kronig relation holds true when any system under consideration respects the causality that output comes only after input. We showed that the signal of the gravitational lensing obeys the causality: waves from a distant source, which propagate in the gravitational potential created by the lensing objects during their journey, never arrive earlier than the null geodesics emitted from the same source simultaneously. Inspired by the fact that gravitational lensing has such causality, we showed that the Kramers-Kronig relation holds for the amplification factor \(F\). Since this is a completely new attempt, there are some interesting implications that have not been mentioned in the literature. One of them is Eq. (3.23), which is expected to be used in observations of gravitational lensing caused by the dark matter inhomogeneities. And the other is the sum rule Eq. (3.28) which relates the integral of the imaginary part of the amplification factor with the magnification of the first arrival image in geometrical optics. We also proposed the potential application of the Kramers-Kronig relation to observations of gravitational lensing of GWs. To determine the amplification factor correctly, we need to use the correct template of the GWs source. We argued that the false selection of templates can be detected by examining the violation of the K-K relation and also calculated the limit of this detection due to the truncation of the frequency range caused by the detector's sensitivity. Our work suggests that examining the violation of the Kramers-Kronig relation may be used for correctly extracting the lensing signal in the gravitational wave observations. ## Acknowledgements We would like to thank Morifumi Mizuno for useful comments and discussions. This work is supported by the MEXT KAKENHI Grant Number 17H06359 (TS), JP21H05453 (TS), and the JSPS KAKENHI Grant Number JP19K03864 (TS).
2303.14985
When does subtracting a rank-one approximation decrease tensor rank?
Subtracting a critical rank-one approximation from a matrix always results in a matrix with a lower rank. This is not true for tensors in general. Motivated by this, we ask the question: what is the closure of the set of those tensors for which subtracting one of its critical rank-one approximations from it and repeating the process eventually yields zero? In this article, we show how to construct this variety of tensors, we show how it is connected to the bottleneck points of the variety of rank-one tensors (and in general to the singular locus of the hyperdeterminant), and we show how this variety can be equal to, and in some cases strictly larger than, the variety of (weakly) orthogonally decomposable tensors.
Emil Horobet, Ettore Teixeira Turatti
2023-03-27T08:29:04Z
http://arxiv.org/abs/2303.14985v3
# When does subtracting a rank-one approximation decrease tensor rank? ###### Abstract Subtracting a critical rank-one approximation from a matrix always results in a matrix with a lower rank. This is not true for tensors in general. We ask the question: what is the closure of the set of those tensors for which subtracting a critical rank-one approximation does result in lowering the rank? In this article we show how to construct this variety of tensors and we show how this is connected to the bottleneck points of the variety of rank-one tensors (and in general to the singular locus of the hyperdeterminant) and how this variety can be more than orthogonally decomposable tensors. ## 1 Introduction Low-rank approximation of matrices is used for mathematical modelling and data compression, more precisely in principal component analysis, factor analysis, orthogonal regression, etc. In order to get all critical rank-one approximations of a given matrix one can find all critical points of the distance function from the said matrix to the variety of rank-one matrices. By the Eckart-Young theorem, this is done by computing a singular value decomposition. The number of such critical rank-one approximations (which is more generally equal to the _Euclidean Distance Degree_ of the variety of rank-one matrices [6]) is always the minimum of the column and row dimensions of the matrix. Furthermore, by subtracting any such critical rank-one approximation from the matrix we get a drop in the rank, hence obtaining a suitable algorithm to construct any low-rank approximation of the matrix. Low-rank approximations of tensors have even more application potential, but they are much more challenging both mathematically and computationally (tensor rank and many related problems are NP-hard, see [9, 10]). Despite this fact, many algorithms exist for finding rank-one approximations of a tensor. A way to do this, similarly to the matrix case, is by finding all critical points of the distance function from the said tensor to the variety of rank-one tensors (luckily this is an algebraically closed set). The generic number of such critical approximations was computed in [5] and shows that the degree of complexity of this problem for tensors is substantially higher than for matrices. For higher-rank approximations, though, we have that tensors of bounded rank do not form a closed subset, so the best low-rank approximation of a tensor on the boundary does not exist (see [16]). From this it follows that subtracting a rank-one approximation from a tensor might even increase its rank (see [17]). In this article, to resolve this obstacle for higher-rank approximations we turn our attention to the definition of _border rank_ of tensors (see for example [7]) and we ask the question: what is the closure of the set of those tensors for which subtracting a rank-one approximation does result in lowering the (border) rank? We approach this problem by constructing the variety \(\mathrm{DL}_{1}\) of tensors for which subtracting a critical rank-one approximation yields a rank-one tensor. Then we construct the variety \(\mathrm{DL}_{2}\) of tensors for which subtracting a critical rank-one approximation yields an element of \(\mathrm{DL}_{1}\), and so on. Our main finding can be formulated as follows. **Main Theorem 1.1** (Theorem 4.2).: _Let \(V=\mathbb{C}^{n_{1}}\otimes\ldots\otimes\mathbb{C}^{n_{p}}\) and let \(X\) be the variety of rank-one tensors.
Then the sequence_ \[X\subseteq\mathrm{DL}_{1}\subseteq\mathrm{DL}_{2}\subseteq\ldots\subseteq V,\] _stabilizes. This limit \(\mathrm{DL}_{N}\) (for some sufficiently large \(N\)) is the closure of all tensors \(T\) in \(V\) for which subtracting a critical rank-one approximation of \(T\) from it and repeating this process finitely many times eventually yields zero._ Furthermore, we study in depth the variety \(\mathrm{DL}_{1}\) of tensors for which subtracting one of its critical rank-one approximations yields a rank-one tensor. We will see that this variety is determined by the _bottleneck points_ of the variety of rank-one tensors and is related to the nodal singularities of the _hyperdeterminant_ (see Remark 3.2). We also show the relation between the \(\mathrm{DL}_{i}\)'s and orthogonally decomposable tensors (see Proposition 3.5 and Proposition 4.1). Finally, in Section 5 we show examples of different behaviours of the limit variety \(\mathrm{DL}_{N}\). ## 2 Preliminaries Let us fix \(n_{1},n_{2},\ldots,n_{p}\in\mathbb{N}\) and let \(V=\mathbb{C}^{n_{1}}\otimes\ldots\otimes\mathbb{C}^{n_{p}}\). Now let us denote by \(X\subseteq V\) the variety of **rank-one** tensors, or pure tensors, of the form \(x_{1}\otimes x_{2}\otimes\ldots\otimes x_{p}\in V\). This indeed is a variety and is always defined by the vanishing of all \(2\times 2\)-subdeterminants of all flattenings of the tensor into matrices. Such a flattening is obtained by partitioning the \(p\) dimensions \(n_{1},\ldots,n_{p}\) into two sets, for example \(n_{1},\ldots,n_{q}\) and \(n_{q+1},\ldots,n_{p}\), and viewing the tensor as a \((n_{1}\cdot\ldots\cdot n_{q})\times(n_{q+1}\cdot\ldots\cdot n_{p})\)-matrix. Now take a tensor \(T\in V\); we want to minimize the squared distance from \(T\) to the variety \(X\). So \[\begin{cases}\text{minimize }d_{T}(x)=\sum_{i_{1},\ldots,i_{p}}(T_{i_{1},\ldots,i_{p}}-x_{i_{1},\ldots,i_{p}})^{2},\\ \text{subject to }x\in X.\end{cases} \tag{1}\] The constrained critical points \(x_{1},\ldots,x_{m}\) of the function \(d_{T}(x)\) are called the **critical rank-one approximations** of the tensor \(T\). For a generic tensor \(T\) the number \(m\) of such constrained critical points is constant and is called the _Euclidean Distance Degree_ of the variety \(X\) (for more details on this topic see [6]); it is given by a precise formula in [5]. Now we are interested in what happens to the rank of the tensors \(T-x_{i}\). It was shown in [17] that in the case \(n_{1}=n_{2}=n_{3}=2\), that is in \(V=\mathbb{C}^{2}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{2}\), for a generic tensor \(T\) (which has rank 2 or 3) subtracting an \(x_{i}\) results in a tensor \(T-x_{i}\) that has rank 3. So the rank does not decrease and may even increase. Motivated by this phenomenon, in what follows we want to construct the closure of the set of all tensors for which subtracting a critical rank-one approximation DOES decrease the rank. ### 2.1 Joint ED correspondence and ED Duality Let us suppose that \(X\) is minimally generated by the homogeneous polynomials \(f_{1},\ldots,f_{s}\). A classical approach to solving constrained optimization problems like (1) is to use Lagrange multipliers.
Any constrained critical point of \(d_{T}(x)\) is a solution to the following system, see [2, Section 5.5.3],) \[\begin{cases}\nabla d_{T}(x)+\sum_{i=1}^{s}\lambda_{i}\nabla f_{i}(x)=0\\ f_{1}(x)=f_{2}(x)=\ldots=f_{s}(x).\end{cases} \tag{2}\] The vector space spanned by the gradients of the defining polynomials at a point \(x\in X\) is called the **normal space** of \(X\) at \(x\) and we will denote it by \(N_{x}X\subseteq V\). Indeed for regular points of \(X\) we have that \(N_{x}X\) is the orthogonal complement (with respect to the standard scalar product) of the tangent space at \(x\) to \(X\). We will be interested only in solutions that are regular points (in this case only 0 is a singular point) of \(X\). We will denote the collection of regular points by \(X_{reg}\) (and in general the singular points of a variety \(X\) by \(X_{sing}\)). Using this notation (by [6]) the above system is equivalent to \[\begin{cases}T-x\in N_{x}X,\\ x\in X_{reg}.\end{cases}\] So a point \(x\in X_{reg}\) is a solution to (2) if and only if there exists \(y\in N_{x}X\), such that \(x+y=T.\) We have seen so far that pairs of points \((x,y)\) such that \(x\in X\) and \(y\in N_{x}X\) play a crucial role in our analysis. The closure of the collection of all such pairs with \(x\in X_{reg}\) is called the **conormal variety** and we denote it by \(\operatorname{Con}(X)\). Formally we have \[\operatorname{Con}(X)=\overline{\{(x,y)\in V_{x}\times V_{y},\ \ x\in X_{reg},\ y \in N_{x}X\}}. \tag{3}\] We use the notation \(V_{x}\times V_{y}\) instead of just simply \(V\times V\) to keep track that the first tuple of coordinates represents a point \(x\) in \(X\subseteq V\) and the second tuple of coordinates represents a point \(y\) in \(N_{x}X\subseteq V\). There is a natural pair of projections \(\pi_{1}:\operatorname{Con}(X)\to V_{x}\) to the first tuple of coordinates and \(\pi_{2}:\operatorname{Con}(X)\to V_{y}\) to the second tuple of coordinates. The image of the first projection is the variety \(X\) itself and the closure of the image of the second projection \[X^{*}:=\overline{\pi_{2}(\operatorname{Con}(X))}\] is called the **dual variety** of \(X\) (see [15, Section 5.4.2]) or in other words the **hyper-determinant** of format \(n_{1}\times\ldots\times n_{p}\) (see [8]). This way we realize both \(X\) and the dual \(X^{*}\) in the same ambient space \(V\). So we have that \(x\in X_{reg}\) is a solution to (2) if and only if there exists a point \((x,y)\) in \(\operatorname{Con}(X)\) such that \(x+y=T\). To encapsulate this relationship we construct the so called **joint ED correspondence**\(\mathcal{E}_{X,X^{*}}\) (see [6, Section 5]), to be the closure of all triples \[\{(x,y,T)\text{ such that, }\ (x,y)\in\operatorname{Con}(X)\subseteq V,T\in V \text{ and }T=x+y\}.\] So \(\mathcal{E}_{X,X^{*}}\) is the closure of the graph of the Minkowski sum over \(\operatorname{Con}(X)\). We have the following diagram of projections. Now we remind our reader of the duality property from [6, Theorem 5.2] governed by the joint ED correspondence. **Theorem 2.1** (ED Duality).: _Let \(X\subseteq V\) be an irreducible affine cone, \(X^{*}\subseteq V\) its dual variety and \(T\in V\) a general data point. The map \(x\mapsto T-x\) gives a bijection from the critical points of \(d_{T}\) on \(X\) to the critical points of \(d_{T}\) on \(X^{*}\)._ So this means that subtracting a critical rank-one approximation from \(T\) results in a critical point of the distance function \(d_{T}\) to the dual variety \(X^{*}\). 
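As a concrete, purely numerical illustration of problem (1) and of the duality just stated (and not of the algebraic machinery developed below), the following sketch computes one critical rank-one approximation of a small tensor with the standard higher-order power iteration; the helper names and the fixed iteration count are ours, and convergence to a critical point is only heuristic.

```python
# Hedged sketch: one critical rank-one approximation of a 3-way tensor via
# higher-order power iteration (a standard local heuristic for problem (1)).
import numpy as np

def critical_rank_one(T, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(n) for n in T.shape)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
    return sigma * np.einsum('i,j,k->ijk', u, v, w)

T = np.random.default_rng(1).standard_normal((2, 2, 2))
x = critical_rank_one(T)
# If the iteration has converged, x is a critical rank-one approximation, and by
# Theorem 2.1 the residual T - x is a critical point of d_T on the dual variety.
# For a generic 2x2x2 tensor the residual is still not rank-one (cf. [17]):
print(np.linalg.matrix_rank((T - x).reshape(2, 4)))   # typically 2
```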
So in continuation we will deal with tensors \(T\) for which we will have additional requirements on the critical points of the distance function \(d_{T}\) to the dual variety \(X^{*}\). ### Special data locus on the dual variety \(X^{*}\) So we want to determine the set of all tensors \(T\in V\), such that the optimization problem \[\begin{cases}\text{minimize }d_{T}(x)=\sum_{i_{1},\ldots,i_{p}}(T_{i_{1}, \ldots,i_{p}}-x_{i_{1},\ldots,i_{p}})^{2},\\ \text{subject to }x\in X^{*}.\end{cases} \tag{4}\] on the dual variety \(X^{*}\) has at least one critical point in a given subvariety of \(X^{*}\) (namely \(X\cap X^{*}\) to start with). By the ED duality [6, Section 5] we have that the joint ED correspondence \(\mathcal{E}_{X^{*},X}\) for the distance optimization problem on the dual variety \(X^{*}\) is the same as the joint ED correspondence \(\mathcal{E}_{X,X^{*}}\) for the distance optimization problem on \(X\), up to swapping the first to tuples \((x,y,T)\leftrightarrow(y,x,T)\) before taking the closures. This holds for the corresponding conormal varieties as well, so \(\text{Con}(X)\) is equal to \(\text{Con}(X^{*})\) up to the swap \((x,y)\leftrightarrow(y,x)\), before taking the closures. So we have the following dual diagram of projections. \(X^{*}\subseteq V_{y}\)\(\mathcal{E}_{X^{*},X}\subseteq V_{y}\times V_{x}\times V_{T}\)\(\pi_{1}\)\(\pi_{2}\)\(X^{*}\subseteq V_{y}\times V_{x}\)\(\pi_{3}\)\(\pi_{1}\)\(\pi_{2}\)\(X^{*}\subseteq V_{y}\times V_{x}\)\(\pi_{12}\)\(\pi_{2}\)\(X^{*}\subseteq V_{x}\) For any subvariety \(A\subseteq X^{*}\), following [12], we define the **data-locus of \(A\)** to be the closure of the projection \(\pi_{3}\) into the space \(V_{T}\) of tensors \(T\), such that corresponding to such a tensor \(T\) there is at least one pair of points \((y,x)\in\text{Con}(X^{*})\) in the conormal variety of \(X^{*}\), such that \(y\in A\). We will denote the data locus of \(A\) by \(\text{DL}_{A}\). Formally we have that \[\text{DL}_{A}=\overline{\pi_{3}\left(\mathcal{E}_{X^{*},X}\cap(A\times V_{x} \times V_{T})\right)}. \tag{5}\] **Remark 2.2**.: We choose not use the theorem in [12, Theorem 5], which describes the structure of a general \(\text{DL}_{A}\), because there we have the condition that \(X^{*}_{reg}\cap A_{reg}\neq\emptyset\). For our discussion in general we can not have such an assumption on \(A\). ## 3 The construction of the first layer of special tensor data locus \(\text{DL}_{1}\) In continuation we want to construct the closure of the set of all tensors for which subtracting a critical rank-one approximation does decrease the rank. We will do this by first constructing the set of tensors \(T\) for which subtracting a critical rank-one approximation we get a tensor that is rank-one. We denote this variety by \(\mathrm{DL}_{1}\). So first we are interested in tensors \(T\) with \(x_{1},\ldots,x_{m}\) critical rank-one approximations, such that \(T-x_{i}\in X\) for some \(x_{i}\). By the ED Duality (Theorem 2.1) this is equivalent to searching for tensors \(T\) with \(T-x_{1},\ldots,T-x_{m}\) critical points of the distance function \(d_{T}\) from \(T\) to the dual variety \(X^{*}\), such that \(T-x_{i}\in X\) for some \(x_{i}\). So we want to determine the special data locus of the subvariety \(X\cap X^{*}\subseteq X^{*}\) of the distance optimization problem to \(X^{*}\). 
So we have that \[\mathrm{DL}_{1}=\mathrm{DL}_{X\cap X^{*}}\,.\] We have the following theorem describing the structure of \(\mathrm{DL}_{1}\) **Theorem 3.1** (Structure of \(\mathrm{DL}_{1}\)).: _Let \(V=\mathbb{C}^{n_{1}}\otimes\ldots\otimes\mathbb{C}^{n_{p}}\) and let \(X\) be the variety of rank-one tensors and \(X^{*}\) its dual. Then_ \[\mathrm{DL}_{1}=\overline{\{x+y|\text{ such that }x\in X_{reg}\text{ and }y\in N_{x}X\cap X.\}}\] Proof.: By definition 5 of the data locus we have that \[\mathrm{DL}_{1}=\overline{\pi_{3}\left(\mathcal{E}_{X^{*},X}\cap((X\cap X^{*} )\times V_{x}\times V_{T})\right)}.\] So \(\mathrm{DL}_{1}\) is the closure of \[\{y+x|\text{ }(y,x,y+x)\in\mathcal{E}_{X^{*},X},\text{ }y\in X\cap X^{*}\}.\] For any triple \((y,x,y+x)\in\mathcal{E}_{X^{*},X}\), we have that \((y,x)\in\mathrm{Con}(X^{*})\). Now there exists a sequence \(y_{i}\to y\), with \(y_{i}\in X_{reg}^{*}\), such that \(x\in N_{y_{i}}X^{*}\). We have that generically \(x\) is a regular point of \(X\) (here actually \(X_{sing}=\{0\}\)) so by applying the swap from the ED duality (Theorem 2.1) we get that \(y_{i}\in N_{x}X\), that is \((x,y_{i})\in\mathrm{Con}(X)\). But now since \(\mathrm{Con}(X)\) is closed, by taking limits we get that \((x,y)\in\mathrm{Con}(X)\), hence \(y\in N_{x}X\). So we have that \[\mathrm{DL}_{1}\subseteq\overline{\{x+y|\text{ such that }x\in X_{reg}\text{ and }y\in N_{x}X\cap X\}}.\] For the other inclusion, take any pair \((x,y)\), such that \(x\in X_{reg}\text{ and }y\in N_{x}X\cap X\). Then it is clear that \(x\) is a critical point of \(x+y\), because \(y\in N_{x}X\) and subtracting \(x\) from \(x+y\) we get \(y\) which is an element of \(X\). Hence \(x+y\in\mathrm{DL}_{1}\). **Remark 3.2**.: We have that if for a pair \(x,y\in X\), with \(y\in N_{x}X\) in addition we require that \(x\) and \(y\) are projectively distinct points and \(x\in N_{y}X\), then the given pair \((x,y)\in\mathrm{BN}(X)\), where by \(\mathrm{BN}(\mathrm{X})\) we denote the **projective bottleneck pairs** of \(X\) (see [4, Lemma 2.4]). So we have that \[\{x+y|\text{ }(x,y)\in\mathrm{BN}(X)\}\subseteq\mathrm{DL}_{1}\,.\] Moreover we have that the projective bottleneck points are corresponding to the nodal singularities of \(X^{*}\) (see [18, Theorem 8.14]or [19, Theorem 0.3]). We will see in the upcoming examples, that indeed the structure of the singular locus of \(X^{*}\) plays an important role in our discussion. We will continue with presenting a set of properties of \(\mathrm{DL}_{1}\). We start with the dimension of the data locus. First we want to understand how the dimension of \(\mathrm{DL}_{1}\) relates to the dimension of \(X\cap X^{*}\). For this we have to distinguish two separate cases. First if \(\dim((X\cap X^{*})\cap X^{*}_{sing})<\dim(X\cap X^{*})\) or if \(\dim((X\cap X^{*})\cap X^{*}_{sing})=\dim(X\cap X^{*})\). In the second case we can not say anything about the dimension. For the first case though we get the following general description for the dimension of data loci (this can be rewritten for arbitrary varieties). **Proposition 3.3** (Dimension of \(\mathrm{DL}_{A}\) non-singular case).: _Let \(V=\mathbb{C}^{n_{1}}\otimes\ldots\otimes\mathbb{C}^{n_{p}}\) and let \(X\) be the variety of rank-one tensors and \(X^{*}\) its dual. Furthermore let \(A\subset X^{*}\) be a proper subvariety such that \(\dim(A\cap X^{*}_{sing})<\dim A\). 
Then_ \[\dim\mathrm{DL}_{A}=\dim A+\mathrm{codim}X^{*}.\] Proof.: In this case we can use the structure theorem of \(\mathrm{DL}_{A}\) according to [12, Theorem 5] and we get that \(\mathrm{DL}_{A}\) is the closure of the image under the Minkowski sum of \[\mathrm{Con}(A)\cap\mathrm{Con}(X^{*})=\{(x,y)|\ x\in A\setminus X^{*}_{sing}, \ y\in N_{x}X^{*}\}.\] Just like in the proof of [6, Theorem 4.1] this means that \(\pi_{1}:\mathrm{Con}(A)\cap\mathrm{Con}(X^{*})\to A\) is an affine vector bundle of rank equal to the codimension of \(X^{*}\), since the fibre over an \(x\in A\setminus X^{*}_{sing}\) is equal to \(\{x\}\times N_{x}X^{*}\), where the second factor is an affine space of dimension equal to the codimension of \(X^{*}\), varying smoothly with \(x\). Because we have that \(A\cap X^{*}_{sing}\) is of codimension higher than zero in \(A\), we get that the dimension of \(\mathrm{Con}(A)\cap\mathrm{Con}(X^{*})\) is equal to \(\dim A+\mathrm{codim}X^{*}\). Now the Minkowski sum map \(\Sigma:\mathrm{Con}(A)\cap\mathrm{Con}(X^{*})\to\mathrm{DL}_{A}\subseteq V\), with \((x,y)\mapsto x+y\), can not have positive dimensional fibres over a general point \(T\in\mathrm{DL}_{A}\) because for \((x,y)\in\Sigma^{-1}(T)\), the corresponding \(x\) is a critical point of the distance function from \(T\) to \(X^{*}\), hence the number of such \(x\)-s is finite and equal to the ED degree of \(X^{*}\). So in this case the dimension of \(\mathrm{DL}_{A}\) equals the dimension of \(\mathrm{Con}(A)\cap\mathrm{Con}(X^{*})\), hence the claim. **Remark 3.4**.: So in particular for \(\mathrm{DL}_{1}\), where \(A=X\cap X^{*}\), we get that if we have that \(\dim((X\cap X^{*})\cap X^{*}_{sing})<\dim(X\cap X^{*})\), then \(\dim\mathrm{DL}_{1}=\dim(X\cap X^{*})+\mathrm{codim}X^{*}\). Now we have the following set of properties of \(\mathrm{DL}_{1}\), which are about the relation of \(\mathrm{DL}_{1}\) to the variety of rank-one tensors, to border rank two tensors and to orthogonally decomposable or odeco tensors (see [3, 13] for more details on this topic). **Proposition 3.5**.: _Let \(V=\mathbb{C}^{n_{1}}\otimes\ldots\otimes\mathbb{C}^{n_{p}}\) and let \(X\) be the variety of rank-one tensors and \(X^{*}\) its dual. Then we have that_ 1. \(X\subseteq\mathrm{DL}_{1}\)_;_ 2. _Any_ \(T\in\mathrm{DL}_{1}\) _has border rank less than or equal to_ \(2\)_;_ 3. \(\mathrm{DL}_{1}\) _contains all rank at most_ \(2\) _orthogonally decomposable tensors._ Proof.: 1. First we observe that \(\{0\}\in X\cap X^{*}\), so \(\mathrm{DL}_{\{0\}}\subseteq\mathrm{DL}_{1}\), now by applying [11, Theorem 1] and following biduality we have that \(X=(X^{*})^{*}\subseteq\mathrm{DL}_{\{0\}}\subseteq\mathrm{DL}_{1}\). Hence the inclusion. (We remark that the \(X\cap X^{*}\subseteq\mathrm{DL}_{1}\) inclusion also follows from [12, Corollary 3.2].) 2. For any \(T\in\mathrm{DL}_{1}\), there exists a sequence of \(T_{i}\to T\), such that for each \(T_{i}\) there exists an \(x_{i}\in X\) with \(T-x_{i}\in X\) and this way \(T_{i}=x_{i}+(T-x_{i})\), so \(T_{i}\) is of rank at most \(2\). But by definition this exactly means that \(T\) is of border rank at most \(2\) (is the limit of rank at most \(2\) tensors). 3. By construction all rank at most two orthogonally decomposable tensors \(T\) are of the form \(x_{1}^{1}\otimes\ldots\otimes x_{p}^{1}+x_{1}^{2}\otimes\ldots\otimes x_{p}^{2}\), with \(x_{i}^{1}\perp x_{i}^{2}\). 
It is classical, that the summands in this decomposition are critical rank-one approximations to the given tensor (see for instance [14, Section 1]), hence such tensors are elements of \(\mathrm{DL}_{1}\). ## 4 The sequence of higher border rank special tensor data loci Now to continue our pursuit for the variety of tensors for which subtracting a critical rank-one approximation decreases the rank, we construct the second layer of data locus, the \(\mathrm{DL}_{2}\), the variety of those tensors for which subtracting a critical rank-one approximation we get a tensor on \(\mathrm{DL}_{1}\). By the ED Duality (Theorem 2.1) this is equivalent to searching for tensors \(T\) with \(T-x_{1},\ldots,T-x_{m}\) critical points of the distance function \(d_{T}\) from \(T\) to the dual variety \(X^{*}\), such that \(T-x_{i}\in\mathrm{DL}_{1}\) for some \(x_{i}\). So we want to determine the special data locus of the subvariety \(\mathrm{DL}_{1}\cap X^{*}\subseteq X^{*}\) of the distance optimization problem to \(X^{*}\). This means that we have that \(\mathrm{DL}_{2}=\mathrm{DL}_{X^{*}\cap\mathrm{DL}_{1}}\). We continue with the construction of \(\mathrm{DL}_{2},\mathrm{DL}_{3},\ldots\) so we define in general \[\mathrm{DL}_{i}=\mathrm{DL}_{X^{*}\cap\mathrm{DL}_{i-1}}\,.\] We have the following properties of the sequence of \(\mathrm{DL}_{i}\)-s. **Proposition 4.1**.: _Let \(V=\mathbb{C}^{n_{1}}\otimes\ldots\otimes\mathbb{C}^{n_{p}}\) and let \(X\) be the variety of rank-one tensors and \(X^{*}\) its dual. Then we have that_ 1. \(\mathrm{DL}_{i}\subseteq\mathrm{DL}_{i+1}\)_;_ 2. _Any_ \(T\in\mathrm{DL}_{i}\) _has border rank at most_ \(i+1\)_;_ 3. \(\mathrm{DL}_{i}\) _contains all rank at most_ \(i+1\) _orthogonally decomposable tensors._ Proof.: 1. For any tensor \(T\) which is not on the boundary of \(\mathrm{DL}_{i}\), subtracting a critical rank-one approximation from itself by definition we get a tensor on \(\mathrm{DL}_{i-1}\), which by induction is a subvariety of \(\mathrm{DL}_{i}\), hence after subtraction we get a tensor on \(\mathrm{DL}_{i}\). So the original \(T\) is in \(\mathrm{DL}_{i+1}\). But now also the algebraic closure of such tensors must be in \(\mathrm{DL}_{i+1}\), because this latter one is algebraically closed. Hence the inclusion. (We remark that the \(\mathrm{DL}_{i}\cap X^{*}\subseteq\mathrm{DL}_{i+1}\) inclusion also follows from [12, Corollary 3.2].) 2. For any tensor \(T\) which is not on the boundary of \(\mathrm{DL}_{i}\), there exist an \(x\in X\) critical approximation such that \(T-x\in\mathrm{DL}_{i-1}\). By induction \(T-x\) has border rank at most \(i\), so there exist a sequence \(T_{n}\to T\), with each \(T_{n}\) having rank at most \(i\). This way the sequence \(T_{n}+x\) converges to \(T\) and all \(T_{n}+x\) have rank at most \(i+1\). So \(T\) is of border rank at most \(i+1\). But having border rank at most \(i+1\) is an algebraically closed condition, so all tensors \(T\in\mathrm{DL}_{i}\) have border rank at most \(i+1\); 3. By construction all rank at most \(i+1\) orthogonally decomposable tensors \(T\) are of the form \(\sum_{k=1}^{i+1}x_{1}^{k}\otimes\ldots\otimes x_{p}^{k}\), with \(x_{j}^{k}\perp x_{j}^{l}\), for all \(k\neq l\). It is classical, that the summands in this decomposition are critical rank-one approximations to the given tensor (see for instance [14, Section 1]), hence such tensors are elements of \(\mathrm{DL}_{i}\). Now we switch our attention to the question of the stabilization of the sequence of varieties \(\mathrm{DL}_{i}\). 
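Before stating the main theorem, a small numerical sanity check of the orthogonally decomposable case covered by Proposition 4.1 may help: for an odeco tensor each term of the decomposition is a critical rank-one approximation, and peeling the terms off one by one terminates at zero. The sketch below (sizes and names are ours) verifies one of the stationarity conditions at every step.

```python
# Hedged sketch: for an odeco tensor, subtracting its (critical) rank-one
# terms one after another ends at zero.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (np.linalg.qr(rng.standard_normal((3, 3)))[0] for _ in range(3))  # orthonormal factors
lam = np.array([3.0, 2.0, 1.0])
T = np.einsum('k,ik,jk,lk->ijl', lam, A, B, C)   # odeco tensor of rank 3

R = T.copy()
for k in range(3):
    a, b, c = A[:, k], B[:, k], C[:, k]
    # one of the three stationarity conditions of the k-th term (the others hold too):
    assert np.allclose(np.einsum('ijl,j,l->i', R, b, c), lam[k] * a)
    R = R - lam[k] * np.einsum('i,j,l->ijl', a, b, c)
print(np.linalg.norm(R))   # ~ 0: the subtraction process terminates at zero
```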
**Theorem 4.2** (**Main Theorem**).: _Let \(V=\mathbb{C}^{n_{1}}\otimes\ldots\otimes\mathbb{C}^{n_{p}}\) and let \(X\) be the variety of rank-one tensors. Then the sequence_ \[X\subseteq\mathrm{DL}_{1}\subseteq\mathrm{DL}_{2}\subseteq\ldots\subseteq V,\] _stabilizes. This limit \(\mathrm{DL}_{N}\) (for some sufficiently large \(N\)) is the closure of all tensors \(T\) in \(V\) for which subtracting a critical rank-one approximation of \(T\) from it and repeating this process finitely many times eventually yields zero._ Proof.: Indeed we have seen from Proposition 4.1 point \((a)\) that \(\mathrm{DL}_{i}\subseteq\mathrm{DL}_{i+1}\), so the sequence \((\mathrm{DL}_{i})_{i}\) is increasing. Also we have that \(\mathrm{DL}_{i}\subseteq V\), so the sequence is bounded from above, hence it must converge. Now this limit must be some \(\mathrm{DL}_{N}\), for a sufficiently large \(N\), because otherwise \(\bigcup_{i}(\mathrm{DL}_{i}\setminus\mathrm{DL}_{i-1})_{i}\) would be a countably infinite partition of the limit variety. Indeed, from some index on, the affine dimension of \(\mathrm{DL}_{i}\setminus\mathrm{DL}_{i-1}\) would be equal to the affine dimension of the limit variety, and then the limit variety would be the _countable_ union of equal dimensional open sets. This is a contradiction. Then for any generic enough (outside of the algebraic boundary of \(\mathrm{DL}_{N}\)) tensor \(T\in\mathrm{DL}_{N}\) we have that, subtracting a suitable critical rank-one approximation \(x_{N}\in X\), we get \(T-x_{N}\in\mathrm{DL}_{N-1}\). Now again by genericity, we have that \(T-x_{N}\) is outside the algebraic boundary of \(\mathrm{DL}_{N-1}\), so there is a suitable critical rank-one approximation \(x_{N-1}\) of \(T-x_{N}\), such that \(T-x_{N}-x_{N-1}\in\mathrm{DL}_{N-2}\). We repeat the process until eventually we get to \(T-x_{N}-\ldots-x_{1}\in X\), so there is a rank-one tensor \(x_{0}=T-x_{N}-\ldots-x_{1}\), hence \(T=x_{N}+\ldots+x_{0}\) and the process stops. **Remark 4.3**.: We have the following remarks. 1. The minimum number of steps to reach the limit \(\mathrm{DL}_{N}\) is equal to the minimum of the dimensions \(n_{1},\ldots,n_{p}\) minus one. This is because, first of all, by their definition orthogonally decomposable tensors are a subvariety of \(\mathrm{DL}_{N}\). Then by Proposition 4.1 point \((c)\) we have that at step \(i\) in \(\mathrm{DL}_{N}\) we get included all the rank \(i+1\) orthogonally decomposable tensors. And finally the maximal rank of an orthogonally decomposable tensor is the minimum of the dimensions \(n_{1},\ldots,n_{p}\). 2. By Proposition 4.1 point \((b)\) we have that any tensor \(T\in\mathrm{DL}_{N}\) has border rank at most \(N+1\). 3. By Proposition 3.3 we can see that we will not get stabilization of the sequence \(\mathrm{DL}_{i}\) whilst we have that \(\dim((X\cap X^{*})\cap X^{*}_{sing})<\dim(X\cap X^{*})\), because in this case \(\dim\mathrm{DL}_{i}<\dim\mathrm{DL}_{i+1}\). So \(\dim((X\cap X^{*})\cap X^{*}_{sing})=\dim(X\cap X^{*})\) is a necessary condition for stabilization of the sequence. But it is not a sufficient condition for stabilization; this can be seen for instance in Example 5.3, where we already started the sequence with the subvariety \(X\cap X^{*}=X\subseteq X^{*}_{sing}\) and there is no stabilization yet at that step. ## 5 Examples of \(\mathrm{DL}_{N}\) In this section we will present several examples to show that \(\mathrm{DL}_{N}\) can sometimes be the entire ambient space or it can be something smaller.
It can be equal to the variety of orthogonally decomposable tensors or it can be something bigger than that. In our first example we will see that \(\mathrm{DL}_{N}\) can be the entire ambient space. **Example 5.1** (Matrices).: It is classical that, for all matrices subtracting a rank-one approximation from the given matrix the result is of a lower rank. Let \(M\) be the variety of \(n\times n\) matrices and let \(M^{\leq i}\) be the subvariety of matrices of rank less than or equal to \(i\). Then \(M^{\leq 1}\) is the variety of rank-one matrices and \(M^{\leq n-1}\) is its dual. Now by [8, Chapter 1, Prop. 4.11 and Lemma 4.12] we know that the conormal variety of \(M^{\leq i}\) is the closure of the set of pairs \[\{(A,B)\in M\times M,\text{ s.t. }A\in M^{\leq i}\text{ and }B\in M^{\leq n-i}\}.\] So this way by Proposition 3.1 we get that \(\mathrm{DL}_{1}\) is the closure of \[\{A+B|A\in M^{\leq 1}\text{ and }B\in\left(M^{\leq n-1}\cap M^{\leq 1}\right) \}=\{A+B|A\in M^{\leq 1}\text{ and }B\in M^{\leq 1}\}.\] So \(\mathrm{DL}_{1}=M^{\leq 2}\). Now we claim that \(\mathrm{DL}_{i}=M^{\leq i+1}\). Indeed if by induction we suppose that \(\mathrm{DL}_{i-1}=M^{\leq i}\), then \(\mathrm{DL}_{i}\) is the closure of the \(\pi_{3}\) projection of \[\mathcal{E}_{M^{n-1},M^{\leq 1}}\cap\left(M^{\leq i}\times M\times M\right).\] We also have that \(\mathcal{E}_{M^{\leq n-1},M^{\leq 1}}=\{(A,B,A+B)|A\in M^{n-1},B\in M^{\leq 1}\}\}\). So \(\mathrm{DL}_{i}\) is the closure of the \(\pi_{3}\) projection of \[\{(A,B,A+B)|A\in M^{\leq i},B\in M^{\leq 1}\},\] and hence \(\mathrm{DL}_{i}\) is the closure of \(M^{\leq i}+M^{\leq 1}=M^{\leq i+1}\). So finally we have that \[M^{\leq 1}\subset\mathrm{DL}_{1}=M^{\leq 2}\subset\ldots\mathrm{DL}_{n-2}=M^{ \leq n-1}\subset\mathrm{DL}_{n-1}=M^{\leq n}=M.\] In our next example we will see that stabilization of the \(\mathrm{DL}_{i}\) sequence can be happening before reaching the entire ambient space. Moreover in our next example the limit variety will be equal to the variety of orthogonally decomposable tensors. **Example 5.2** (Symmetric \(2\times 2\times 2\) tensors).: Now we switch our attention to \(2\times 2\times 2\) tensors. Let \(V=\mathbb{C}^{2}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{2}\). Let \(X\) be the variety of symmetric rank-one tensors in \(V\). This variety is defined by the symmetry conditions \(x_{2}=x_{3}=x_{5}\) and \(x_{4}=x_{6}=x_{7}\) and the corresponding \(2\times 2\) minors of all the flattenings. Its dual \(X^{*}\) is of codimension one and of degree four, generated by the polynomial \[-y_{2}^{2}y_{4}^{2}-2y_{2}y_{3}y_{4}^{2}-\ldots-18y_{1}y_{5}y_{7}y_{8}+27y_{1} ^{2}y_{8}^{2}.\] Now if we run the computation (from [12, Example 4.2]) to find the data locus of tensors which have at least a critical point to \(X^{*}\) on \(X\) we find that this locus, which is \(\mathrm{DL}_{1}\), has dimension \(3\) and besides the symmetry conditions is generated by the polynomial \[-x_{3}^{2}+x_{1}x_{6}-x_{6}^{2}+x_{3}x_{8},\] which is exactly the generating polynomial of the variety of \(2\times 2\times 2\) symmetric orthogonally decomposable tensors (for the equations see [13, Example 3.4] and for specific format restrictions see [1]). We remark that \(\mathrm{DL}_{1}\) is an irreducible variety and so is the variety \(X^{*}_{sing}\). 
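The generating polynomial just displayed can be checked numerically (hedged: we assume that the coordinates \(x_{1},\ldots,x_{8}\) enumerate the entries \(T_{ijk}\) of a \(2\times 2\times 2\) tensor in lexicographic order, which is consistent with the stated symmetry conditions \(x_{2}=x_{3}=x_{5}\) and \(x_{4}=x_{6}=x_{7}\)): it vanishes on symmetric rank-two orthogonally decomposable tensors and is generically nonzero on random symmetric tensors.

```python
# Hedged sketch: evaluate -x3^2 + x1*x6 - x6^2 + x3*x8 on symmetric tensors,
# with the coordinate ordering of a 2x2x2 tensor assumed to be lexicographic.
import numpy as np

def dl1_sym_poly(T):
    x = T.reshape(-1)   # x1..x8 = T_{ijk} in (assumed) lexicographic order
    x1, x3, x6, x8 = x[0], x[2], x[5], x[7]
    return -x3**2 + x1 * x6 - x6**2 + x3 * x8

def sym_odeco(theta, lam, mu):
    a = np.array([np.cos(theta), np.sin(theta)])
    b = np.array([-np.sin(theta), np.cos(theta)])      # orthonormal pair
    return lam * np.einsum('i,j,k->ijk', a, a, a) + mu * np.einsum('i,j,k->ijk', b, b, b)

print(dl1_sym_poly(sym_odeco(0.7, 2.0, -1.3)))          # ~ 0 on symmetric odeco tensors
rng = np.random.default_rng(0)
T_random = sum(np.einsum('i,j,k->ijk', v, v, v) for v in rng.standard_normal((3, 2)))
print(dl1_sym_poly(T_random))                           # generically nonzero
```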
Now if we proceed to compute \(\mathrm{DL}_{2}\) we find that stabilization occurs at this step, so \[X\subset\mathrm{DL}_{1}=\mathrm{DL}_{2}=\text{symmetric odeco tensors}.\] So for \(2\times 2\times 2\) tensors the symmetric orthogonally decomposable tensors are the only ones for which iteratively subtracting symmetric critical rank-one approximations will (eventually) drop their rank. In our next example we will see that \(\mathrm{DL}_{N}\) can be more than the orthogonally decomposable tensors.

**Example 5.3** (Regular \(2\times 2\times 2\) tensors).: Let \(V=\mathbb{C}^{2}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{2}\). Let \(X\) be the variety of regular rank-one tensors in \(V\). This variety is defined by the \(2\times 2\) minors of all the flattenings. Its dual \(X^{*}\), Cayley's hyperdeterminant, is of codimension one, of degree four and generated by the polynomial \[y_{4}^{2}y_{5}^{2}-2y_{3}y_{4}y_{5}y_{6}+y_{3}^{2}y_{6}^{2}-\ldots-2y_{1}y_{3}y_{6}y_{8}-2y_{1}y_{2}y_{7}y_{8}+y_{1}^{2}y_{8}^{2}.\] After running the computations for \(\mathrm{DL}_{1}\) we find out that it is a codimension 2, degree 12 variety with 3 components. Here again \(\mathrm{DL}_{1}\) has as many components as \(X^{*}_{sing}\), and the precise description of this singular locus can be found in [19, Theorem 0.3]. The first component \(\mathrm{DL}_{11}\) is generated by \[-x_{2}x_{5}+x_{1}x_{6}-x_{4}x_{7}+x_{3}x_{8},\quad-x_{2}x_{3}+x_{1}x_{4}-x_{6}x_{7}+x_{5}x_{8},\] the second component \(\mathrm{DL}_{12}\) is generated by \[-x_{2}x_{5}+x_{1}x_{6}-x_{4}x_{7}+x_{3}x_{8},\quad-x_{3}x_{5}-x_{4}x_{6}+x_{1}x_{7}+x_{2}x_{8},\] and the third component \(\mathrm{DL}_{13}\) is generated by \[-x_{2}x_{3}+x_{1}x_{4}-x_{6}x_{7}+x_{5}x_{8},\quad-x_{3}x_{5}-x_{4}x_{6}+x_{1}x_{7}+x_{2}x_{8}.\] So we can see that each component is generated by two out of the three generating polynomials for odeco tensors; moreover, the odeco tensors are exactly \(\mathrm{DL}_{11}\cap\mathrm{DL}_{12}\cap\mathrm{DL}_{13}\). Now if we continue to compute \(\mathrm{DL}_{2}\) we get that it is of codimension 1 and of degree 24 with 3 components. So we do not yet have stabilization of the \(\mathrm{DL}_{i}\)'s. Unfortunately, due to the lack of the necessary computational power, we do not know what \(\mathrm{DL}_{3}\) will be. But because the maximal rank in \(V\) is 3 and by [17] we know that the limit \(\mathrm{DL}_{N}\neq X^{*}\), we conjecture that \(\mathrm{DL}_{2}=\mathrm{DL}_{3}\).

**Remark 5.4**.: We have seen throughout our examples that \(\mathrm{DL}_{1}\) always has as many components as \(X^{*}_{sing}\), and that this number of components carries through to the rest of the \(\mathrm{DL}_{i}\)'s. This shows, in harmony with Remark 3.2, that there is indeed a strong connection between our variety \(\mathrm{DL}_{N}\) and the (nodal) singularities of the hyperdeterminant \(X^{*}\). This research direction should be further pursued.

**Remark 5.5**.: It also remains open to describe exactly after which step the \(\mathrm{DL}_{i}\)'s stabilize, as a function of the given tensor format, of the typical (border) rank, or of the maximal possible rank.

**Acknowledgements.** The author is grateful to Jan Draisma for the stimulating discussions on the topic of this paper.
2310.04476
Strong transitivity of a graph
A vertex partition $\pi = \{V_1, V_2, \ldots, V_k\}$ of $G$ is called a \emph{transitive partition} of size $k$ if $V_i$ dominates $V_j$ for all $1\leq i<j\leq k$. For two disjoint subsets $A$ and $B$ of $V$, we say $A$ \emph{strongly dominates} $B$ if for every vertex $y\in B$, there exists a vertex $x\in A$, such that $xy\in E$ and $deg_G(x)\geq deg_G(y)$. A vertex partition $\pi = \{V_1, V_2, \ldots, V_k\}$ of $G$ is called a \emph{strong transitive partition} of size $k$ if $V_i$ strongly dominates $V_j$ for all $1\leq i<j\leq k$. The \textsc{Maximum Strong Transitivity Problem} is to find a strong transitive partition of a given graph with the maximum number of parts. In this article, we initiate the study of this variation of transitive partition from algorithmic point of view. We show that the decision version of this problem is NP-complete for chordal graphs. On the positive side, we prove that this problem can be solved in linear time for trees and split graphs.
Subhabrata Paul, Kamal Santra
2023-10-06T06:25:09Z
http://arxiv.org/abs/2310.04476v1
# Strong transitivity of a graph ###### Abstract A vertex partition \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\) of \(G\) is called a _transitive partition_ of size \(k\) if \(V_{i}\) dominates \(V_{j}\) for all \(1\leq i<j\leq k\). For two disjoint subsets \(A\) and \(B\) of \(V\), we say \(A\)_strongly dominates_\(B\) if for every vertex \(y\in B\), there exists a vertex \(x\in A\), such that \(xy\in E\) and \(deg_{G}(x)\geq deg_{G}(y)\). A vertex partition \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\) of \(G\) is called a _strong transitive partition_ of size \(k\) if \(V_{i}\) strongly dominates \(V_{j}\) for all \(1\leq i<j\leq k\). The Maximum Strong Transitivity Problem is to find a strong transitive partition of a given graph with the maximum number of parts. In this article, we initiate the study of this variation of transitive partition from algorithmic point of view. We show that the decision version of this problem is NP-complete for chordal graphs. On the positive side, we prove that this problem can be solved in linear time for trees and split graphs. **Keywords.** Strong transitivity, NP-completeness, Linear-time algorithm, Trees, Split graphs, Chordal graphs. ## 1 Introduction Partitioning a graph is one of the fundamental problems in graph theory. In the partitioning problem, the objective is to partition the vertex set (or edge set) into some parts with desired properties, such as independence, minimal edges across partite sets, etc. A _dominating set_ of \(G=(V,E)\) is a subset of vertices \(D\) such that every vertex \(x\in V\setminus D\) has a neighbour \(y\in D\), that is, \(x\) is dominated by some vertex \(y\) of \(D\). For two disjoint subsets \(A\) and \(B\) of \(V\), we say \(A\)_dominates_\(B\) if every vertex of \(B\) is adjacent to at least one vertex of \(A\). Many variants of partitioning problem have been studied in literature based on some domination relationship among the partite sets. For example _domatic partition_[12, 13] (each partite set is a dominating set), _Grundy partition_[10, 11, 12] (each partite set is independent and dominates every other partite sets after itself), _transitive partition_[10, 10, 11, 12] (a generalization of Grundy partition where partite sets need not be independent), _upper domatic partition_[10, 11, 12] (a generalization of transitive partition where for any two partite sets \(X\) and \(Y\) either \(X\) dominates \(Y\) or \(Y\) dominates \(X\) or both). In 1996, Sampathkumar and Pushpa Latha introduced the notion of _strong domination_[13]. A _strong dominating set_ of \(G=(V,E)\) is a subset of vertices \(D\) such that for every vertex \(x\in V\setminus D\), \(x\) is dominated by some vertex \(y\in D\) and \(deg_{G}(y)\geq deg_{G}(x)\). Recently, based on this strong domination, a variation of domatic partition, namely _strong domatic partition_, has been studied in [1]. In the _strong domatic partition_, the vertex set is partitioned into \(k\) parts, say \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\), such that each \(V_{i}\) is a strong dominating set of \(G\). In this article, we introduce a variation of transitive partition based on strong domination, namely _strong transitive partition_. A vertex partition \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\) of \(G\) is called a _strong transitive partition_ of size \(k\) if \(V_{i}\) strongly dominates \(V_{j}\) for all \(1\leq i<j\leq k\). The maximum order of such a strong transitive partition is called _strong transitivity_ of \(G\) and is denoted by \(Tr_{st}(G)\). 
The Maximum Strong Transitivity Problem and its corresponding decision version are defined as follows:

Maximum Strong Transitivity Problem (MSTP)
_Instance:_ A graph \(G=(V,E)\)
_Solution:_ A strong transitive partition of \(G\)
_Measure:_ Order of the strong transitive partition of \(G\)

Maximum Strong Transitivity Decision Problem (MSTDP)
_Instance:_ A graph \(G=(V,E)\), an integer \(k\)
_Question:_ Does \(G\) have a strong transitive partition of order at least \(k\)?

Note that every strong transitive partition is also a transitive partition. Therefore, for any graph \(G\), \(1\leq Tr_{st}(G)\leq Tr(G)\leq\Delta(G)+1\), where \(\Delta(G)\) is the maximum degree of \(G\). From the definition of a strong transitive partition, it is clear that for regular graphs, transitivity is the same as strong transitivity; as a consequence, for the graph classes \(K_{n}\) and \(C_{n}\), the two parameters coincide. However, a transitive partition of a graph is not always a strong transitive partition, even when both parameters have the same value. For a path \(P_{3}\) with vertex set \(\{a,b,c\}\) and edges \(ab\), \(bc\), the partition \(\pi=\{V_{1}=\{a,c\},V_{2}=\{b\}\}\) is a transitive partition but not a strong transitive partition, as \(deg(b)>deg(a)\) and \(deg(b)>deg(c)\). But the partition \(\pi^{\prime}=\{V_{1}=\{b,c\},V_{2}=\{a\}\}\) is both a strong transitive and a transitive partition of size \(2\). It can be easily verified that for the graph class \(P_{n},n\geq 6\), transitivity and strong transitivity are the same and equal to \(3\). So, we see that there are graph classes where both parameters have the same value, but in general their difference can be arbitrarily large. If \(G\) is a complete bipartite graph of the form \(K_{m,m-1}\), then \(Tr_{st}(K_{m,m-1})=2\). Let \(V(G)=X\cup Y\), where \(X\) is the part of size \(m\) and \(Y\) is the part of size \(m-1\). Let \(x\in X\) and consider a vertex partition \(\pi=\{V_{1},V_{2}\}\), where \(V_{1}=(X\setminus\{x\})\cup Y\) and \(V_{2}=\{x\}\). Since \(m\geq 2\), there exists \(y\in Y\) and \(m=deg(y)\geq deg(x)=m-1\). So, \(\pi\) is a strong transitive partition of \(G\). Therefore, \(Tr_{st}(K_{m,m-1})\geq 2\). To prove \(Tr_{st}(K_{m,m-1})=2\), we now show \(Tr_{st}(K_{m,m-1})<3\) by contradiction. Assume \(Tr_{st}(K_{m,m-1})\geq 3\) and let \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\) be a strong transitive partition of \(G\) of size \(k\geq 3\). Let \(y\in V_{i}\), \(3\leq i\leq k\), with \(y\in Y\). Since \(\pi\) is a strong transitive partition, \(V_{1}\) strongly dominates \(V_{i}\). So, for \(y\in V_{i}\), there must exist a vertex \(x\in V_{1}\) such that \(xy\in E(G)\) and \(deg_{G}(x)\geq deg_{G}(y)\). But for every vertex \(x\in X\), \(deg_{G}(x)=m-1<deg_{G}(y)=m\). So, \(V_{i}\) cannot contain vertices from \(Y\). Therefore, \(V_{i}\) contains only vertices from \(X\). Let \(x\in V_{i}\) with \(x\in X\), \(3\leq i\leq k\). As \(\pi\) is a strong transitive partition, \(V_{2}\) strongly dominates \(V_{i}\). So, for \(x\in V_{i}\), there must exist a vertex \(y\in V_{2}\) such that \(xy\in E(G)\) and \(deg_{G}(y)\geq deg_{G}(x)\); note that \(y\in Y\). Now, \(V_{1}\) strongly dominates \(V_{2}\) and \(y\in V_{2}\). To strongly dominate \(y\), we need a vertex \(x^{\prime}\in V_{1}\cap X\) such that \(x^{\prime}y\in E(G)\) and \(deg_{G}(x^{\prime})\geq deg_{G}(y)\). Again, this is not possible, as \(deg_{G}(x^{\prime})=m-1<deg_{G}(y)=m\). Therefore, \(V_{i}\), \(3\leq i\leq k\), cannot contain vertices from \(X\) either, a contradiction. So, \(k\leq 2\). Therefore, if \(G\) is a \(K_{m,m-1}\), \(m\geq 2\), then \(Tr_{st}(K_{m,m-1})=2\). 
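The small cases above are easy to verify by exhaustive search. The following brute-force sketch (our own code and naming, meant only for tiny graphs) enumerates all vertex partitions and checks the strong domination condition; it reproduces \(Tr_{st}(K_{3,2})=2\) and \(Tr_{st}(P_{6})=3\).

```python
from itertools import product

def strongly_dominates(A, B, adj, deg):
    # every vertex of B has a neighbour in A of at least the same degree
    return all(any(x in adj[y] and deg[x] >= deg[y] for x in A) for y in B)

def strong_transitivity(adj):
    verts = sorted(adj)
    deg = {v: len(adj[v]) for v in adj}
    best = 1
    for k in range(2, max(deg.values()) + 2):
        for lab in product(range(k), repeat=len(verts)):
            parts = [{v for v, l in zip(verts, lab) if l == i} for i in range(k)]
            if all(parts) and all(strongly_dominates(parts[i], parts[j], adj, deg)
                                  for i in range(k) for j in range(i + 1, k)):
                best = k
                break
    return best

K32 = {0: {3, 4}, 1: {3, 4}, 2: {3, 4}, 3: {0, 1, 2}, 4: {0, 1, 2}}
P6 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(strong_transitivity(K32), strong_transitivity(P6))   # expected: 2 3
```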
We know that \(Tr(K_{m,m-1})=\min\{m+1,m\}=m\), while we have just shown that \(Tr_{st}(K_{m,m-1})=2\). So the difference \(Tr(G)-Tr_{st}(G)=m-2\), which can be made arbitrarily large by choosing \(m\) large. For transitivity, if \(H\) is a subgraph of a graph \(G\), then \(Tr(H)\leq Tr(G)\) [HH18]. But for strong transitivity, this is not true. Consider \(G=K_{3,2}\) and \(H=C_{4}\). Clearly, \(H\) is a subgraph of \(G\), and \(Tr_{st}(K_{3,2})=2\) while \(Tr_{st}(C_{4})=3\). So, in this example, \(Tr_{st}(H)>Tr_{st}(G)\). Moreover, the behaviour of strong transitivity is the same as that of transitivity when the graph is disconnected. It is also known that every connected graph \(G\) with \(Tr(G)=k\geq 3\) has a transitive partition \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\) such that \(|V_{k}|=|V_{k-1}|=1\) and \(|V_{k-i}|\leq 2^{i-1}\) for \(2\leq i\leq k-2\) [19]. This implies that the maximum transitivity problem is fixed-parameter tractable [19]. Since a strong transitive partition of a graph is also a transitive partition, MSTP is also fixed-parameter tractable. In this paper, we study the computational complexity of this problem. The main contributions are summarized below: 1. The MSTDP is NP-complete for chordal graphs. 2. The MSTP can be solved in linear time for trees and split graphs. The rest of the paper is organized as follows. Section 2 shows that the MSTDP is NP-complete in chordal graphs. Section 3 describes linear-time algorithms for trees and split graphs. Finally, Section 4 concludes the article.

## 2 NP-completeness of strong transitivity for chordal graphs

This section shows that the Maximum Strong Transitivity Decision Problem is NP-complete for chordal graphs. A graph is called _chordal_ if there is no induced cycle of length more than \(3\). Clearly, MSTDP is in NP. We prove the NP-completeness of this problem by showing a polynomial-time reduction from the Proper 3-Coloring Decision Problem in graphs, which is known to be NP-complete [1]. A proper 3-coloring of a graph \(G=(V,E)\) is a function \(g\) from \(V\) to \(\{1,2,3\}\) such that for any edge \(uv\in E\), \(g(u)\neq g(v)\). The Proper 3-Coloring Decision Problem is defined as follows:

Proper 3-Coloring Decision Problem
_Instance:_ A graph \(G=(V,E)\)
_Question:_ Does \(G\) have a proper 3-coloring?

Given a graph \(G=(V,E)\) with \(m\) edges, we construct a chordal graph \(G^{\prime}=(V^{\prime},E^{\prime})\) as follows:

1. For each vertex \(v_{i}\in V\), we consider a tree \(T\) (shown in Figure 1) with \(v_{i}\) as the root, where the degree of the root is \((m+3)-deg_{G}(v_{i})\). Also, for each edge \(e_{j}\in E\), we consider a vertex \(v_{e_{j}}\) and another tree \(T^{\prime}\) (shown in Figure 1) with \(v_{e_{j}}\) as the root, where the degree of the root is \(m+2\).
2. For each edge \(e_{j}\in E\), we take another vertex \(e_{j}\) in \(G^{\prime}\) and also take an extra vertex \(e\) in \(G^{\prime}\). Let \(A=\{e_{1},e_{2},\ldots,e_{m},e\}\). We make a complete graph on the vertex set \(A\).
3. We take three more vertices \(v_{a}\), \(v_{e}\) and \(v_{b}\) and consider three trees \(T^{\prime}\) (shown in Figure 1) with \(v_{a}\), \(v_{e}\) and \(v_{b}\) as the roots, respectively.
4. Next we add the following edges: for every edge \(e_{k}=(v_{i},v_{j})\in E\), we add the edges \((e_{k},v_{i})\), \((e_{k},v_{j})\), \((e_{k},v_{e_{k}})\). Also we add the edges \((e,v_{a})\), \((e,v_{e})\), \((e,v_{b})\).
5. Finally, we set \(k=m+4\).

Note that \(G^{\prime}\) is a chordal graph. The construction from \(G\) to \(G^{\prime}\) is illustrated in Figure 2. Next, we show that \(G\) has a proper 3-coloring if and only if \(G^{\prime}\) has a strong transitive partition of size \(k\). 
For the forward direction, we have the following lemma.

**Lemma 1**.: _If \(G=(V,E)\) has a proper 3-coloring, then \(G^{\prime}=(V^{\prime},E^{\prime})\) has a strong transitive partition of size \(k\)._

Figure 1: The trees \(T\) and \(T^{\prime}\)

Figure 2: Construction of \(G^{\prime}\) from \(G\)

Proof.: Given a proper \(3\)-coloring \(g\) from \(V\) to \(\{1,2,3\}\), a strong transitive partition of size \(k\), say \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\), can be obtained in the following way:

1. If \(g(v_{i})=q\), then \(v_{i}\in V_{q}\), for all \(v_{i}\in V(G)\).
2. \(v_{a}\in V_{3}\), \(v_{e}\in V_{2}\) and \(v_{b}\in V_{1}\).
3. For each vertex \(v_{e_{j}}\) corresponding to an edge \(e_{j}\) with end points \(v_{x}\) and \(v_{y}\) in \(G\), assign \(v_{e_{j}}\in V_{l}\), where \(l\) is the unique element of \(\{1,2,3\}\setminus\{g(v_{x}),g(v_{y})\}\). Put the other vertices of the trees \(T\) and \(T^{\prime}\) in \(V_{1},V_{2}\) and \(V_{3}\) based on their root. This is illustrated in Figure 3.
4. Let \(e_{j}\in V_{3+j}\), \(1\leq j\leq m\), and \(e\in V_{m+4}\).

Let \(H\) be the complete graph induced by \(A\). Since \(H\) is a complete graph, \(V_{i}\) strongly dominates \(V_{j}\) for \(4\leq i<j\leq k\). Also, for each \(i=1,2,3\), every vertex of \(A\) is adjacent to a vertex of \(V_{i}\), and the degree of that vertex is equal to the degree of a vertex of \(A\). Therefore, for each \(i=1,2,3\), \(V_{i}\) strongly dominates \(V_{j}\) for all \(j>3\). At the end, from Figure 3, it is clear that \(V_{i}\) strongly dominates \(V_{j}\) for \(1\leq i<j\leq 3\). Hence, \(\pi\) is a strong transitive partition of \(G^{\prime}\) of size \(k\). Therefore, if \(G\) has a proper \(3\)-coloring, then \(G^{\prime}\) has a strong transitive partition of size \(k\).

Next, we show the converse of the statement. For this, we first prove the following claim.

**Claim 2**.: _Let \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\) be a strong transitive partition of \(G^{\prime}\) of size \(k\) such that \(|V_{k}|=1\). Then the sets \(V_{4},V_{5},\ldots,V_{k}\) contain only vertices from \(A\) and the sets \(V_{1},V_{2},V_{3}\) contain only vertices from \(V^{\prime}\setminus A\)._

Proof.: We divide the proof into two cases:

**Case 1.** \(e\in V_{m+4}\)

Since the degree of each vertex from \(\{v_{a},v_{e},v_{b}\}\) is \(m+3\) and each of them is adjacent to only three vertices having degree greater than or equal to \(m+3\), they cannot be in \(V_{p}\), \(p\geq 5\). Now, if any vertex from \(\{v_{a},v_{e},v_{b}\}\) is in \(V_{4}\), then \(e\) must be in \(V_{i}\), \(1\leq i\leq 3\), a contradiction as \(e\in V_{k}\) and \(k\geq 4\). Therefore, the vertices from \(\{v_{a},v_{e},v_{b}\}\) belong to \(V_{p}\), \(1\leq p\leq 3\).

Figure 3: Partition of \(T\) and \(T^{\prime}\) into \(V_{1},V_{2}\) and \(V_{3}\). All the leaves are in \(V_{1}\).

The vertex \(e\) is in \(V_{k}\); to strongly dominate \(e\), each set in \(\{V_{1},V_{2},\ldots,V_{m+3}\}\) must contain at least one vertex from \(N_{G^{\prime}}(e)=\{e_{1},e_{2},\ldots,e_{m},v_{a},v_{e},v_{b}\}\). Since \(e\) is adjacent to exactly \(m+3\) vertices and the set \(N_{G^{\prime}}(e)=\{e_{1},e_{2},\ldots,e_{m},v_{a},v_{e},v_{b}\}\) contains exactly \(m+3\) vertices, each \(V_{i}\), \(1\leq i\leq m+3\), contains exactly one vertex from \(N_{G^{\prime}}(e)\). Since \(\{v_{a},v_{e},v_{b}\}\) belong to \(V_{p}\) for some \(1\leq p\leq 3\), it follows that the vertices from \(\{e_{1},e_{2},\ldots,e_{m}\}\) belong to \(V_{p}\), \(p\geq 4\). Hence, the vertices of \(A\) belong to \(\{V_{4},V_{5},\ldots,V_{m+4}\}\). 
Note that none of the vertices from \(\{v_{1},v_{2},\ldots,v_{n},v_{e_{1}},v_{e_{2}},\ldots,v_{e_{m}}\}\) belong to \(V_{p}\) for some \(p\geq 4\). Because otherwise, there exists a vertex of \(A\) must be in \(V_{3}\). But this contradicts the fact that the vertices of \(A\) belong to \(\{V_{4},V_{5},\ldots V_{k}\}\). Since the number of neighbours having more than or equal degree of every other vertices is at most \(2\), they cannot belong to \(V_{p}\), \(p\geq 4\). Therefore, \(V_{4},V_{5},\ldots V_{m+4}\) contain only vertices from \(A\) and \(V_{1},V_{2},V_{3}\) contain only vertices from \(V^{\prime}\setminus A\). **Case 2**.: \(e\notin V_{m+4}\)__ Since \(\pi\) is a strong transitive partition, for any \(x\in V_{m+4}\), \(deg(x)\geq m+3\) and has at least \(m+3\) neighbour with a degree at least \(deg(x)\). As \(deg(v_{i})=m+3\) and \(v_{i}\) has at most \(m+2\) neighbour having degree at least \(m+3\), so \(v_{i}\) cannot be in \(V_{m+4}\). Similarly, we can prove that any vertex other than \(\{e_{1},e_{2},\ldots,e_{m},e\}\) cannot belong to \(V_{m+4}\). Since, \(e\notin V_{m+4}\), without loss of generality assume \(e_{1}\in V_{m+4}\), where \(e_{1}\) is the vertex of \(G^{\prime}\) corresponding to the edge \(e_{1}=v_{1}v_{2}\in E\). Now we show that \(v_{1}\) and \(v_{2}\) belong to the first three sets in \(\pi\). Let \(v_{1}\in V_{l}\) and \(v_{2}\in V_{t}\), where \(t\leq l\). If possible, let \(l\geq 4\). Since \(e_{1}\in V_{m+4}\), to dominate \(e_{1}\), each set in \(\{V_{1},V_{2},\ldots,V_{m+4}\}\) must contain at least one vertex from \(N_{G^{\prime}}(e_{1})=\{e_{2},e_{3},\ldots,e_{m},e,v_{1},v_{2},v_{e_{1}}\}\). Since \(e_{1}\) is adjacent with exactly \(m+3\) vertices, each \(V_{i}\), \(1\leq i\leq m+3\) contains exactly one vertex from \(N_{G^{\prime}}(e_{1})\). So, if \(l\geq 4\), then to strongly dominate \(v_{1}\), each set in \(\{V_{3},V_{4},\ldots,V_{l-1}\}\) contains exactly one vertex from \(\{e_{2},e_{3},\ldots,e_{m}\}\). Now, to strongly dominate \(e_{1}\), each set in \(\{V_{l+1},V_{l+2},\ldots,V_{m+3}\}\) contains exactly one vertex from \(\{e_{2},e_{3},\ldots,e_{m},e\}\). The vertex \(e\) cannot belong to \(V_{q},q\geq l+1\), because \(V_{l}\) contains \(v_{1}\) and any \(\{v_{a},v_{e},v_{b}\}\) cannot be in \(V_{l}\) as \(l\geq 4\). Also \(e\) cannot belong to \(V_{p},3\leq p\leq l-1\), as \(v_{1}\in V_{l}\) and each of \(\{V_{3},V_{4},\ldots,V_{l-1}\}\) contains exactly one vertex from \(\{e_{2},\ldots,e_{m}\}\). Therefore, the vertex \(e\) must be in \(V_{i},1\leq i\leq 2\). So each of \(\{V_{3},V_{4},\ldots,V_{l-1},V_{l+1},\ldots,V_{m+3}\}\) contains exactly one vertex from \(\{e_{2},\ldots,e_{m}\}\) only. But the number of vertices in \(\{e_{2},\ldots,e_{m}\}\) is \(m-1\) whereas we need \(k-4=m\) vertices. Therefore, \(l\) cannot be more than \(3\). Note that \(v_{e_{1}}\) cannot be in \(V_{j}\) for \(j\geq 4\) as no of neighbour of \(v_{e_{1}}\) having degree at least \(deg(V_{e_{1}})=m+3\) other than \(e_{1}\in 3\). Therefore, the vertices \(\{v_{1},v_{e_{1}},v_{2}\}\) belong to \(V_{p}\) for \(1\leq p\leq 3\). With similar arguments as in Case 1, we can say that the vertices of \(A\) belong to \(\{V_{4},V_{5},\ldots V_{k}\}\). We can further claim that, the vertices of \(\{v_{1},v_{2},\ldots,v_{n},v_{e_{1}},v_{e_{2}},\ldots,v_{e_{m}}\}\) belong to \(V_{p}\) for \(1\leq p\leq 3\) and the other vertices of \(G^{\prime}\) belong to \(V_{p}\) for \(1\leq p\leq 3\). 
Therefore, \(V_{4},V_{5},\ldots V_{k}\) contain only vertices from \(A\) and the sets \(V_{1},V_{2},V_{3}\) contain only vertices from \(V^{\prime}\setminus A\). Using the claim, we show that \(G\) has a proper \(3\)-coloring. **Lemma 3**.: _If \(G^{\prime}\) has a strong transitive partition of size \(k\), then \(G\) has a proper \(3\)-coloring._ Proof.: Let \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\) be a strong transitive partition of \(G^{\prime}\) of size \(k\). Since \(\pi\) is also a transitive partition, from [1] we can assume that \(|V_{k}|=1\). Let us define a coloring of \(G\), say \(g\), by labelling \(v_{i}\) with color \(p\) if its corresponding vertex \(v_{i}\) is in \(V_{p}\). The previous claim ensures that \(g\) is a \(3\)-coloring. Now we show that \(g\) is a proper coloring. Let \(e_{t}=v_{i}v_{j}\in E\) and let its corresponding vertex \(e_{t}\) in \(G^{\prime}\) belong to some set \(V_{p}\) with \(p\geq 4\). This implies that the vertices \(\{v_{i},v_{j},v_{e_{t}}\}\) must belong to different sets from \(V_{1},V_{2},V_{3}\). Therefore, \(g(v_{i})\neq g(v_{j})\) and hence \(g\) is a proper coloring of \(G\). Therefore, we have the following main theorem of this section: **Theorem 4**.: _The MSTDP is NP-complete for chordal graphs._ ## 3 Linear-time algorithms ### Trees In this subsection, we design a linear-time algorithm for finding the strong transitivity of a given tree \(T=(V,E)\). We design our algorithm in a similar way to the algorithm for finding the Grundy number of an input tree presented in [1]. First, we give a comprehensive description of our proposed algorithm. #### 3.1.1 Description of the algorithm: Let \(T^{c}\) denote a rooted tree rooted at a vertex \(c\) and \(T^{c}_{v}\) denote the subtree of \(T^{c}\) rooted at a vertex \(v\). With a small abuse of notation, we use \(T^{c}\) to denote both the rooted tree and the underlying tree. To find the strong transitivity of \(T=(V,E)\), we first define the _strong transitive number_ of a vertex \(v\) in \(T\). The strong transitive number of a vertex \(v\) in \(T\) is the maximum integer \(p\) such that \(v\in V_{p}\) in a strong transitive partition \(\pi=\{V_{1},V_{2},\ldots,V_{k}\}\), where the maximum is taken over all strong transitive partition of \(T\). We denote the strong transitive number of a vertex \(v\) in \(T\) by \(st(v,T)\). Note that the strong transitivity of \(T\) is the maximum strong transitive number that a vertex can have; that is, \(Tr_{st}(T)=\max\limits_{v\in V}\{st(v,T)\}\). Therefore, our goal is to find a strong transitive number of every vertex in the tree. Now we define another parameter, namely the _rooted strong transitive number_ of \(v\) in \(T^{c}\) is the strong transitive number of \(v\) in the tree \(T^{c}_{v}\) and it is denoted by \(st^{r}(v,T^{c})\). Therefore, \(st^{r}(v,T^{c})=st(v,T^{c}_{v})\). To this end, we define another parameter, namely the _modified rooted strong transitive number_. The _modified rooted strong transitive number_ of \(v\) in \(T^{c}\) is the strong transitive number of \(v\) in the tree \(T^{c}_{v}\), considering the \(deg(v)\) as \(deg(v)+1\) if \(v\) is a non-root vertex and for root vertex \(deg(v)\) as \(deg(v)\). We denote it by \(mst^{r}(v,T^{c})\). Note that the value modified rooted strong transitive number of a vertex depends on the rooted tree, whereas the strong transitive number is independent of the rooted tree. Also, for the root vertex \(c\), \(st^{r}(c,T^{c})=mst^{r}(c,T^{c})=st(c,T)\). 
We recursively compute the modified rooted strong transitive number of the vertices of \(T^{c}\) in a bottom-up approach. First, we consider a vertex ordering \(\sigma\), which is the reverse of BFS ordering of \(T^{c}\). For a leaf vertex \(c_{i}\), we set \(mst^{r}(c_{i},T^{c})=1\). For a non-leaf vertex \(c_{i}\), we call the function Strong_Transitive_Number(), which takes the modified rooted strong transitive number of children of \(c_{i}\) in \(T^{c}\) as input and returns the modified rooted strong transitive number of \(c_{i}\) in \(T^{c}\). At the end of the bottom-up approach, we have the modified rooted strong transitive number of \(c_{i}\) in \(T^{c}\), that is, \(mst^{r}(c,T^{c})\), which is the same as the strong transitive number of \(c\) in \(T\), that is, \(st(c,T)\). After the bottom-up approach, we have the strong transitive number of the root vertex \(c\), and the modified rooted strong transitive number of every other vertices in \(T^{c}\). Next, we compute the strong transitive number of every other vertex. For a vertex \(c_{i}\), other than \(c\), we compute the strong transitive number using the function Strong_Transitive_Number(), which takes the modified rooted strong transitive number of children of \(c_{i}\) in \(T^{c_{i}}\) as input. Let \(y\) be the parent of \(c_{i}\) in \(T^{c}\). Note that, except \(y\), the modified rooted strong transitive number of children of \(c_{i}\) in \(T^{c_{i}}\) is the same as the modified rooted strong transitive number in \(T^{c}\). We only need to compute the modified rooted strong transitive number of \(y\) in \(T^{c_{i}}\). We use another function called Strong_Mark_Required() for this. This function takes a strong transitive number of a vertex \(x\) and modified rooted strong transitive number of its children in \(T^{x}\) as input and marks the status of whether a child, say \(v\), is required or not to achieve the strong transitive number of \(x\). We mark \(R(v)=1\) if the child \(v\) is required, otherwise \(R(v)=0\). We compute the strong transitive number of every vertex, other than \(c\), by processing the vertices in the reverse order of \(\sigma\), that is, in a top-down approach in \(T^{c}\). While processing the vertex \(c_{i}\), first based on the status marked by Strong_Mark_Required() function, we calculate the modified rooted strong transitive number of \(p(c_{i})\) in \(T^{c_{i}}\), where \(p(c_{i})\) is the parent of \(c_{i}\) in the rooted tree \(T^{c}\). Then, we call Strong_Transitive_Number() to calculate the strong transitive number of \(c_{i}\). Next, we call the Strong_Mark_Required() to mark the status of the children, which will be used in subsequent iterations. At the end of this top-down approach, we have a strong transitive number of all the vertices and hence the strong transitivity of the tree \(T\). The process of finding \(Tr_{st}(T)\) is described in Algorithm 1. #### 3.1.2 Proof of correctness In this subsection, we give the proof of the correctness of Algorithm 1. It is clear that the correctness of Algorithm 1 depends on the correctness of the functions used in the algorithm. First, we show the following two lemmas, which prove the correctness of Strong_Transitive_Number() function. **Lemma 5**.: _Let \(x\) be a child of \(T^{c}\) and \(y\) be its parent in \(T^{c}\). Also, let \(mst^{r}(v,T)=t\). 
Then there exists a strong transitive partition of \(T^{c}_{x}\), say \(\{V_{1},V_{2},\ldots,V_{i}\}\) such that \(x\in V_{i}\), for all \(1\leq i\leq t\)._ Proof.: Since \(mst^{r}(x,T^{c})=t\), there exists a strong transitive partition \(\pi=\{U_{1},U_{2},\ldots,U_{t}\}\) of \(T^{c}_{x}\) such that \(x\in U_{t}\). For each \(1\leq i\leq t\), let us define another strong transitive partition \(\pi^{\prime}=\{V_{1},V_{2},\ldots,V_{i}\}\) of \(T^{c}_{x}\) as follows: \(V_{j}=U_{j}\) for all \(1\leq j\leq(i-1)\) and \(V_{i}=\bigcup_{j=i}^{t}U_{j}\). Clearly, \(\pi^{\prime}\) is a strong transitive partition of \(T^{c}_{x}\) of size \(i\) such that \(x\in V_{i}\). Hence, the lemma follows. **Lemma 6**.: _Let \(v_{1},v_{2},\ldots,v_{k}\) are the children of \(x\) in a rooted tree \(T^{c}\) and \(y\) be the parent of \(x\) in \(T^{c}\). Also, let for each \(1\leq i\leq k\), \(l_{i}\) denote the modified rooted strong transitive number of \(v_{i}\) in \(T^{c}\) with \(l_{1}\leq l_{2}\leq\ldots\leq l_{k}\) and \(p(y)=1\) when \(y\) exists in \(T^{c}\) otherwise \(p(y)=0\). Let \(z\) be the largest integer such that there exists a subsequence of \(\{l_{i}:1\leq i\leq k\}\), say \((l_{i_{1}}\leq l_{i_{2}}\leq\ldots\leq l_{i_{z}})\) such that \(l_{i_{p}}\geq p\), for all \(1\leq p\leq z\) and \(deg(v_{i_{j}})\geq deg(x)+p(y)\). Then, the modified rooted strong transitive number of \(x\) in the underlying tree \(T^{c}\) is \(1+z\), that is, \(mst^{r}(x,T^{c})=1+z\)._ Proof.: For each \(1\leq j\leq z\), let us consider the subtrees \(T^{c}_{v_{i_{j}}}\). It is also given that \(mst^{r}(v_{i_{j}},T^{c})=l_{i_{j}}\), for \(j\in\{1,2,\ldots,z\}\). For all \(1\leq p\leq z\), since \(l_{i_{p}}\geq p\), by Lemma 5, we know that there exists strong transitive partitions \(\pi^{p}=\{V_{1}^{p},V_{2}^{p},\ldots,V_{p}^{p}\}\) of \(T^{c}_{v_{i_{p}}}\) such that \(v_{i_{p}}\in V_{p}^{p}\) and \(deg(v_{i_{p}})\geq deg(x)+1\). Let us consider the partition of \(\pi=\{V_{1},V_{2},\ldots,V_{z},V_{z+1}\}\) of \(T^{c}_{x}\) as follows: \(V_{i}=\bigcup_{j=i}^{z}V_{i}^{j}\), for \(\leq i\leq z\), \(V_{z+1}=\{x\}\) and every other vertices of \(T\) are put in \(V_{1}\). Clearly, \(\pi\) is a strong transitive partition of \(T^{c}_{x}\). Also it is given that \(deg(v_{i_{j}})\geq deg(x)+1\). Therefore, \(mst^{r}(x,T^{c})\geq 1+z\). Next, we show that \(mst(x,T^{c})\) cannot be more than \(1+z\). If possible, let \(mst(x,T^{c})\geq 2+z\). Then by Lemma 5, we have that there exists a strong transitive partitions \(\pi=\{V_{1},V_{2},\ldots,V_{2+z}\}\) such that \(x\in V_{2+z}\). This implies that for each \(1\leq i\leq 1+z\), \(V_{i}\) contains a neighbour of \(x\), say \(v_{i}\) such that the modified rooted strong transitive number of both \(v_{i}\) is greater or equal to \(i\), that is, \(l_{i}\geq i\) and \(deg(v_{i})\geq deg(x)+1\). The set \(\{l_{i}|1\leq i\leq 1+z\}\) forms a desired subsequence of \(\{l_{i}:1\leq i\leq k\}\), contradicting the maximality of \(z\). Hence, \(mst(x,T)=1+z\). Note that in line 6 of Algorithm 1, when Strong_Transitive_Number() is called, then it returns the strong transitive number of \(c_{i}\) in \(T^{c}_{c_{i}}\) which is in fact the modified rooted strong transitive number of \(c_{i}\) in \(T^{c}\). And in line 13 of Algorithm 1, when Strong_Transitive_Number() is called, then it returns the strong transitive number of \(c_{i}\) in \(T^{c_{i}}\) which is same as \(st(c_{i},T)\). From Lemma 5 and 6, we have the function Next, we prove the correctness of Strong_Mark_Required(). 
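The listing of Strong_Transitive_Number() itself is not reproduced here; the following short Python sketch (our own naming and input encoding, written as an illustration of the counting rule established in Lemma 6) keeps the children that are allowed to strongly dominate \(x\), scans their modified rooted strong transitive numbers in non-decreasing order, and greedily extends the subsequence.

```python
def strong_transitive_number(children, deg_x, has_parent):
    # children: list of (mst_r value, degree) pairs for the children of x
    need = deg_x + (1 if has_parent else 0)      # p(y) term of Lemma 6
    eligible = sorted(l for l, d in children if d >= need)
    z = 0
    for l in eligible:          # greedy scan, smallest values first
        if l >= z + 1:
            z += 1
    return 1 + z                # Lemma 6: the value is 1 + z

# toy check: the centre of a star K_{1,3} taken as the root (degree 3, no parent);
# its three leaves have mst_r = 1 and degree 1 < 3, so none is eligible
print(strong_transitive_number([(1, 1), (1, 1), (1, 1)], 3, False))  # 1
```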
Let \(T^{x}\) be a rooted tree and \(st(x,T)=z\). A child \(v\) of \(x\) is said to be required if the \(st(x,T^{x}\setminus T_{v}^{x})=z-1\). The function returns the required status of every child of \(x\) by marking \(R(v)=1\) if it is required and \(R(v)=0\) otherwise. The children of \(x\) that are required can be identified using the following lemma. **Lemma 7**.: _Let \(T^{x}\) be a tree rooted at \(x\) and \(v_{1},v_{2},\ldots,v_{k}\) be its children in \(T^{x}\). Also, let the strong transitive number of \(x\) be \(z\) and for each \(1\leq i\leq k\), let \(l_{i}\) denote the modified rooted strong transitive number of \(v_{i}\) in \(T^{x}\). Moreover, let for all \(1\leq i\leq p\), \(deg(v_{i})<k\) and for all \(p+1\leq i\leq k\), \(deg(v_{i})\geq k\) and \(l_{p+1}\leq l_{p+2}\leq\ldots\leq l_{k}\). Then the following hold:_ 1. _If_ \(k=z-1\)_, then_ \(R(v_{i})=1\) _for all_ \(1\leq i\leq k\)_._ 2. _Let_ \(k>z-1\)_. If_ \(k-p=z-1\)_, then for all_ \(1\leq i\leq p\)_,_ \(R(v_{i})=0\) _and for all_ \(p+1\leq i\leq k,R(v_{i})=1\)_._ 3. _Let_ \(k>z-1\) _and_ \(k-p>z-1\)_. Then for all_ \(1\leq i\leq p\)_,_ \(R(v_{i})=0\) _and for all_ \(p+1\leq i\leq k-z+1,R(v_{i})=0\)_._ 4. _Let_ \(k>z-1\) _and_ \(k-p>z-1\)_. Also, let_ \(k-z+2\leq i\leq k\)_. If for all_ \(j\)_,_ \(k-z+2\leq j\leq i\)_,_ \(l_{j-1}\geq j-(k-z+1)\) _then_ \(R(v_{i})=0\)_._ 5. _Let_ \(k>z-1\)_,_ \(k-p>z-1\) _and_ \(k-z+2\leq i\leq k\)_. If there exists_ \(j\) _in_ \(k-z+2\leq j\leq i\) _such that_ \(l_{j-1}<j-(k-z+1)\) _or then_ \(R(v_{i})=1\)_._ Proof.: Note that \(k-p\geq z-1\) as \(st(x,T^{x})=z\) and \(deg(x)=k\). \((a)\) Since \(st(x,T^{x})=z\), by the Lemma 5, there exists a strong transitive partition of \(T\), say \(\pi=\{V_{1},V_{2},\ldots,V_{z}\}\), such that \(x\in V_{z}\). In that case, all the vertices in \(\{v_{1},v_{2},\ldots,v_{k}\}\) must be in \(V_{1},V_{2},\ldots,V_{z-1}\) and each set \(V_{i}\) contains at least one of these vertices. Since \(k=z-1\), each set \(V_{i}\) contains exactly one vertex from \(\{v_{1},v_{2},\ldots,v_{k}\}\). Therefore, if we remove any \(v_{i}\) from the tree, the strong transitive number of \(x\) will decrease by \(1\). Hence, every \(v_{i}\) is required, that is, \(R(v_{i})=1\) for all \(1\leq i\leq k\). \((b)\) Since \(st(x,T^{x})=z\), by the Lemma 5, there exists a strong transitive partition of \(T^{x}\), say \(\pi=\{V_{1},V_{2},\ldots,V_{z}\}\), such that \(x\in V_{z}\). Since the vertices from \(\{v_{1},v_{2},\ldots,v_{p}\}\) have degree less than \(x\), they are not use to strong dominate \(x\). Therefore, if we remove any \(v_{i}\) (\(1\leq i\leq p\)) from the tree, then the strong transitive number of \(x\) will be unchanged. Hence, for each \(1\leq i\leq p\), \(v_{i}\) is not required, that is, \(R(v_{i})=0\) for all \(1\leq i\leq p\). So, all the vertices in \(\{v_{p+1},v_{p+2},\ldots,v_{k}\}\) must be in \(V_{1},V_{2},\ldots,V_{z-1}\) and each set \(V_{i}\) contains at least one of these vertices. Since \(k-p=z-1\), each set \(V_{i}\) contains exactly one vertex from \(\{v_{p+1},v_{p+2},\ldots,v_{k}\}\). Therefore, removing any \(v_{i}\) (\(p+1\leq i\leq k\)) from the tree will decrease the strong transitive number of \(x\) by \(1\). Hence, every \(v_{i}\) is required, that is, \(R(v_{i})=1\) for all \(p+1\leq i\leq k\). \((c)\) Let \(\pi=\{V_{1},V_{2},\ldots,V_{z}\}\) be a strong transitive partition of \(T^{x}\) such that \(x\in V_{z}\). As before, the vertices from \(\{v_{1},v_{2},\ldots,v_{p}\}\) are not required. 
Hence, \(R(v_{i})=0\) for all \(1\leq i\leq p\). Now, in this partition, at least one vertex from \(\{v_{p+1},v_{p+2},\ldots,v_{k}\}\) must be in each \(V_{i}\) for \(1\leq i\leq z-1\). As the vertices are arranged in increasing order of their modified rooted strong transitive number, without loss of generality, we can assume that \(\{v_{p+1},v_{p+2},\ldots,v_{k-z+1}\}\subset V_{1}\) and \(v_{i}\in V_{I_{i}}\) for each \(k-z+2\leq i\leq k\), where \(I_{i}=i-(k-z+1)\). Clearly, if we remove any \(v_{i}\) (\(p+1\leq i\leq k-z+1\)) from the tree, then the strong transitive number of \(x\) will be unchanged. Hence, for each \(p+1\leq i\leq k-z+1\), \(v_{i}\) is not required, that is, \(R(v_{i})=0\) for all \(p+1\leq i\leq k-z+1\). (\(d\)) Let us consider the same strong transitive partition \(\pi\) of \(T^{x}\) as in case (\(c\)). Let for some \(k-z+2\leq i\leq k\), \(v_{i}\) be a vertex such that \(l_{j-1}\geq l_{j}\) for all \(k-z+2\leq j\leq i\), where \(I_{j}=j-(k-z+1)\). We can modify \(\pi\) to get a strong transitive partition of \(T^{x}\setminus\{v_{i}\}\) of size \(z\). The modification is as follows: for each \(j\in\{k-z+2,k-z+3,\ldots,i\}\), we put \(v_{i-1}\in V_{I_{i}}\) and remove the vertices that are not in \(T^{x}\setminus\{v_{i}\}\). Therefore, \(v_{i}\) is not required and \(R(v_{i})=0\) for such vertices. (\(e\)) Let for some \(i,k-z+2\leq i\leq k\), \(v_{i}\) be a vertex such that \(l_{j-1}<I_{j}\) for some \(k-z+2\leq j\leq i\), where \(I_{j}=j-(k-z+1)\). Since the modified rooted strong transitive numbers are arranged in increasing order, \(l_{q}<I_{j}\) for all \(p+1\leq q\leq j-1\). Suppose, after deleting the vertex \(v_{i}\), let the strong transitive number of \(x\) in \(T^{x}\setminus\{v_{i}\}\) remain \(z\). Let \(\pi^{\prime}=\{V_{1},V_{2},\ldots,V_{z}\}\) be a strong transitive partition of \(T^{x}\setminus\{v_{i}\}\) of size \(z\) such that \(x\in V_{z}\). Since \(l_{q}<I_{j}\) for all \(p+1\leq q\leq j-1\), none of the vertices of \(\{v_{p+1},v_{p+2},\ldots,v_{j-1}\}\) can be in the sets \(V_{I_{j}},V_{I_{j+1}},\ldots,V_{I_{k}}\). On the other hand, the sets \(V_{I_{j}},V_{I_{j+1}},\ldots,V_{I_{k}}\) must contain at least \((k-j+1)\) vertices from \(\{v_{p+1},v_{p+2},\ldots,v_{i-1},v_{i+1},\ldots,v_{k}\}\), as \(\pi^{\prime}\) is a strong transitive partition of \(T^{x}\setminus\{v_{i}\}\). Therefore, \(V_{I_{j}},V_{I_{j+1}},\ldots,V_{I_{k}}\) contains at least \((k-j+1)\) vertices from \(\{v_{j},v_{j+1},\ldots,v_{i-1},v_{i+1},\ldots,v_{k}\}\). But there are only \((k-j)\) many vertices available. Hence, the strong transitive number of \(x\) in \(T^{x}\setminus\{v_{i}\}\) cannot be \(z\). Therefore, \(v_{i}\) is required and \(R(v_{i})=1\) for such vertices. Note that the condition in case (\(e\)) is such that if \(R(v_{i})=1\) for some \(i\), then \(R(v_{j})=1\) for all \(i+1\leq j\leq k\). Based on the lemma, we have the function Strong_Mark_Required(). **Input:** A rooted tree \(T^{x}\), rooted at a vertex \(x\) and modified rooted strong transitive number of the children of \(x\), such that \(deg(v_{i})<k\) for \(1\leq i\leq p\) and \(deg(v_{i})\geq k\), for all \(i\geq p+1\) also \(mst^{r}(v_{p+1},T^{c})\leq\ldots\leq mst^{r}(v_{k},T^{c})\). **Output:**\(R(v)\) value of \(v\), \(v\) is a child of \(x\) in \(T^{x}\). ``` 1:if the number of children is \(z-1\), that is \(k=z-1\)then 2:\(R(v)=1\), for all children \(v\) of \(x\) in \(T^{x}\). 
3:endif 4:if\(k>z-1\)then 5: For all \(1\leq i\leq k-z+1\), \(R(v_{i})=0\) 6:endif 7:for all \(i\gets k-z+2\) to \(k\)do 8:if\(l_{i-1}\geq i-(k-z+1)\)then 9:\(R(v_{i})=0\) 10:else 11:\(R(v_{i})=1\) 12:break 13:endif 14:endfor 15:For all \(j>i\), \(R(v_{j})=1\) ``` **Algorithm 3**Strong_Mark_Required(\(st(x,T),mst^{r}(v_{1},T^{x}),\ldots,mst^{r}(v_{k},T^{x})\)) #### 3.1.3 Complexity Analysis: In the function Strong_Transitive_Number(), we find the strong transitive number of vertex \(x\) based on the modified rooted strong transitive number of its children. We have assumed that the children are sorted according to their modified rooted strong transitive number. Since the for loop in line \(2-6\) of Strong_Transitive_Number() runs for every child of \(x\), this function takes \(O(deg(x))\) time. Similarly, Strong_Mark_Required() takes a strong transitive number of a vertex \(x\) and modified rooted strong transitive number of its children in \(T^{x}\) as input and marks the status of whether a child, say \(v\), is required or not to achieve the strong transitive number of \(x\). Here also, we have assumed that the children are sorted according to their modified rooted strong transitive number. Clearly, line \(1-2\) of Strong_Mark_Required() takes \(O(deg(x))\). In line \(3-4\), we mark the status for a few children without any checking and for each of the remaining vertices, we mark the required status by checking condition in \(O(1)\) time. Therefore, Strong_Mark_Required() also takes \(O(deg(x))\) time. In the main algorithm Strong_Transitivity(T), the vertex order mentioned in line \(1\) can be found in linear time. Then, in a bottom-up approach, we calculate every vertex's modified rooted strong transitive numbers. For that we are spending \(O(deg(c_{i}))\) for every \(c_{i}\in\sigma\). Note that we must pass the children of \(c_{i}\) in a sorted order to Strong_Transitive_Number(). But as discussed in [1] (algorithm for finding Grundy number of a tree), we do not need to sort all the children based on their modified rooted strong transitive numbers; sorting the children whose modified rooted strong transitive number is less than \(deg(c_{i})\), is sufficient. We can argue that this can be done in \(O(deg(c_{i}))\) as shown in [1]. Hence, the loop in line \(1-6\) takes linear time. Similarly, we conclude that line \(8-14\) takes linear time. Therefore, we have the following theorem: **Theorem 8**.: _The MSTP can be solved in linear time for trees._ ### Transitivity in split graphs A graph \(G=(V,E)\) is said to be a _split graph_ if \(V\) can be partitioned into an independent set \(S\) and a clique \(K\). In this subsection, we prove that the transitivity of a split graph \(G\) is \(\omega(G)\), where \(\omega(G)\) is the size of a maximum clique in \(G\). First, we prove that \(Tr_{st}(G)\geq\omega(G)\). **Lemma 9**.: _Let \(G=(S\cup K,E)\) be a split graph, where \(S\) and \(K\) are an independent set and a clique of \(G\), respectively. Also, assume that \(K\) is the maximum clique of \(G\), that is, \(\omega(G)=|K|\). Then \(Tr_{st}(G)\geq\omega(G)\)._ Proof.: Let \(\omega(G)=t\) and \(\{v_{1},v_{2},\ldots,v_{t}\}\) be the vertices of a maximum clique. Also, assume \(deg(v_{1})\geq deg(v_{2})\geq\ldots\geq deg(v_{t})\). Consider a vertex partition \(\pi=\{V_{1},V_{2},\ldots,V_{t}\}\) of size \(\omega(G)\) by considering each \(V_{i}=\{v_{i}\}\), for \(i\geq 2\) and \(V_{1}=V\setminus\{v_{2},v_{3},\ldots,v_{t}\}\). 
Since the vertices \(\{v_{1},v_{2},\ldots,v_{t}\}\) form a clique and \(deg(v_{1})\geq deg(v_{2})\geq\ldots\geq deg(v_{t})\), \(V_{i}\) strongly dominates \(V_{j}\) for all \(1\leq i<j\leq t\). Therefore, \(\pi\) forms a strong transitive partition of \(G\) with size \(t\). Hence, \(Tr_{st}(G)\geq t=\omega(G)\). Next, in the following lemma, we show that \(Tr_{st}(G)=\omega(G)\). **Lemma 10**.: _Let \(G=(S\cup K,E)\) be a split graph, where \(S\) and \(K\) are an independent set and a clique of \(G\), respectively. Also, assume that \(K\) is the maximum clique of \(G\), that is, \(\omega(G)=|K|\). Then \(Tr_{st}(G)=\omega(G)\)._ Proof.: From [11], we know that \(Tr(G)=\omega(G)+1\) if and only if every vertex of \(K\) has a neighbour in \(S\). Also, we know that \(Tr(G)\geq Tr_{st}(G)\). Now we divide our proof into the following two cases: **Case 1.**_A vertex \(x\in K\) exists, such that \(x\) has no neighbour in \(S\)._ In this case, form [11], we know that \(Tr(G)=\omega(G)\). As \(Tr_{st}(G)\leq Tr(G)\), so \(Tr_{st}(G)\leq\omega(G)\). Again, by the Lemma 9, we have \(\omega(G)\leq Tr_{st}(G)\). Therefore, \(Tr_{st}(G)=\omega(G)\). **Case 2.**_Every vertex of \(K\) has a neighbour in \(S\)._ In this case we have \(Tr(G)=\omega(G)+1\)[11]. So, \(Tr_{st}(G)\leq Tr(G)=\omega(G)+1\). Suppose \(Tr_{st}(G)=\omega(G)+1\) and \(\pi=\{V_{1},V_{2},\ldots,V_{\omega(G)+1}\}\) be a strong transitive partition of \(G\) with size \(\omega(G)+1\). Since \(|K|=\omega(G)\), a set in \(\pi\) contains only vertices from \(S\). Also, note that for any \(s\in S\), \(deg(s)<deg(x)\), for all \(x\in K\) as \(K\), is the maximum clique of \(G\), and every vertex of \(K\) has a neighbour in \(S\). Let \(s\in S\) and \(s\in V_{\omega(G)+1}\). Then \(deg(s)\) is at least \(\omega(G)\), which is impossible. So, no vertices from \(S\) in \(V_{\omega(G)+1}\). Let \(x\in K\) and \(x\in V_{\omega(G)+1}\). Also, assume \(V_{i}\subseteq S\). Since \(\pi\) is a strong transitive partition, \(V_{i}\) strongly dominates \(V_{\omega(G)+1}\). That implies \(deg(s)\geq deg(x)\) for some \(s\in S\). We have a contradiction as \(deg(s)<deg(x)\), for all \(s\in S\) and \(x\in K\). Therefore, \(Tr_{st}(G)\) cannot be \(\omega(G)+1\). Hence, \(Tr_{st}(G)<\omega(G)+1\). Again, by the Lemma 9, we have \(\omega(G)\leq Tr_{st}(G)\). Therefore, \(Tr_{st}(G)=\omega(G)\). From Lemma 10, it follows that computing the strong transitivity of a split graph is the same as computing the maximum clique. Note that finding a vertex partition of \(V\) into \(S\) and \(K\), where \(S\) and \(K\) are an independent set and a clique of \(G\), respectively, and \(\omega(G)=|K|\) can be computed in linear time [11]. Hence, we have the following theorem: **Theorem 11**.: _The MSTP can be solved in linear time for split graphs._ Conclusion In this paper, we have introduced the notion of strong transitivity in graphs, which is a variation of transitivity. We have shown that the decision version of this problem is NP-complete for chordal graphs. On the positive side, we have proved that this problem can be solved in linear time for trees and split graphs. It would be interesting to investigate the complexity status of this problem in other graph classes. Designing an approximation algorithm for this problem would be another challenging open problem. ## Acknowledgements: Subhabrata Paul was supported by the SERB MATRICS Research Grant (No. MTR/2019/000528). 
The work of Kamal Santra is supported by the Department of Science and Technology (DST) (INSPIRE Fellowship, Ref No: DST/INSPIRE/ 03/2016/000291), Govt. of India.
2310.03319
Hamiltonian Encoding for Quantum Approximate Time Evolution of Kinetic Energy Operator
The time evolution operator plays a crucial role in the precise computation of chemical experiments on quantum computers and holds immense promise for advancing the fields of physical and computer sciences, with applications spanning quantum simulation and machine learning. However, the construction of large-scale quantum computers poses significant challenges, prompting the need for innovative and resource-efficient strategies. Traditional methods like phase estimation or variational algorithms come with certain limitations such as the use of classical optimization or complex quantum circuitry. One successful method is the Trotterization technique used for quantum simulation, specifically in atomic structure problems with a gate complexity of approximately O(n^2) for an n-qubit realization. In this work, we have proposed a new encoding method, namely quantum approximate time evolution (QATE) for the quantum implementation of the kinetic energy operator as a diagonal unitary operator considering the first quantization level. The theoretical foundations of our approach are discussed, and experimental results are obtained on an IBM quantum machine. Our proposed method offers gate complexity in sub-quadratic polynomial with qubit size $n$ which is an improvement over previous work. Further, the fidelity improvement for the time evolution of the Gaussian wave packet has also been demonstrated.
Mostafizur Rahaman Laskar, Kalyan Dasgputa, Amit Kumar Dutta, Atanu Bhattacharya
2023-10-05T05:25:38Z
http://arxiv.org/abs/2310.03319v1
# Hamiltonian Encoding for Quantum Approximate Time Evolution of Kinetic Energy Operator ###### Abstract The time evolution operator plays a crucial role in the precise computation of chemical experiments on quantum computers and holds immense promise for advancing the fields of physical and computer sciences, with applications spanning quantum simulation and machine learning. However, the construction of large-scale quantum computers poses significant challenges, prompting the need for innovative and resource-efficient strategies. Traditional methods like phase estimation or variational algorithms come with certain limitations such as the use of classical optimization or complex quantum circuitry. One successful method is the Trotterization technique used for quantum simulation, specifically in atomic structure problems with a gate complexity of approximately \(\mathcal{O}(n^{2})\) for an \(n\)-qubit realization. In this work, we have proposed a new encoding method, namely quantum approximate time evolution (QATE) for the quantum implementation of the kinetic energy operator as a diagonal unitary operator considering the first quantization level. The theoretical foundations of our approach are discussed, and experimental results are obtained on an IBM quantum machine. Our proposed method offers gate complexity in subquadratic polynomial with qubit size \(n\) which is an improvement over previous work. Further, the fidelity improvement for the time evolution of the Gaussian wave packet has also been demonstrated. Quantum time evolution, Hamiltonian encoding, quantum chemistry ## I Introduction Quantum mechanics, a fundamental theory in physics, provides a powerful framework for understanding the behaviour of particles at the atomic and subatomic levels. In the realm of chemistry, where the properties and interactions of atoms and molecules are of paramount importance, quantum mechanics plays a crucial role in elucidating their behaviour. One key concept in quantum mechanics is the time evolution of quantum states, governed by the unitary operator known as the time evolution operator, denoted as \(\mathbf{U}(t)\). This operator describes how a quantum state changes over time, encapsulating the dynamics of a quantum system and allowing for the calculation of various physical observables. In the context of chemistry, the time evolution operator is particularly significant as it enables the simulation and prediction of chemical reactions, the study of energy transfer processes, and the understanding of electronic and vibrational spectra. By leveraging the time evolution operator, chemists can explore intricate details of chemical reactions, including bond breaking and formation, energy transfer, and excited state formation, which are inherently quantum mechanical phenomena. Furthermore, the time evolution operator plays a pivotal role in the field of quantum computing, where it facilitates the simulation and exploration of complex chemical systems, offering promising solutions to computationally demanding problems such as simulating large molecules and optimizing chemical reactions. ### _Background_ Quantum simulation of electronic structure in quantum chemistry has been a prominent research area, aiming to understand the time evolution of wave functions using the kinetic and potential energy operators in the Hamiltonian [1, 2, 3, 4]. 
However, the dynamics of chemical reactions, especially in complex systems, pose challenges that cannot be efficiently addressed by classical computation, necessitating the use of quantum algorithms [5, 6, 7]. The complexity is further amplified by interactions between particles and quantum tunnelling effects, which perturb the Hamiltonian operator [8, 9, 10, 6, 11, 12, 13, 14, 15, 16, 17, 18]. Despite the demand, the realization of time-evolving states remains challenging due to the high gate complexity of existing quantum simulation algorithms on limited physical resources [11, 12, 13, 14, 4]. Quantum Hamiltonian simulations (QHS) involve approximating a unitary operator corresponding to a given Hamiltonian matrix [15, 16]. Various frameworks, such as the Trotter-Suzuki product formula [17], truncated Taylor series [18], quhitization method [16], quantum walk [15], and optimal quantum signal processing algorithm [19], have been proposed in QHS, each with its own advantages and challenges. There are several methods to solve the energy structure problem and their time dynamics on a quantum computer, such as using quantum phase estimation [20], adiabatic algorithm [21], variational approach [22] etc. However, a central challenge remains with the quantum simulation of the underlying Hamiltonian. From a practical implementation standpoint, the Trotter-Suzuki-based approach, known as the "Trotterization" technique, is commonly employed on quantum computers for applications like atomic structure problems [3, 4, 23]. The \(2^{nd}\)-order Trotter-Suzuki approximation for the time evolution of the Hamiltonian \(\mathbf{H}\) is given by \[e^{-i\mathbf{H}\Delta t} =e^{-i(\mathbf{K}+\mathbf{V})\Delta t}\] \[=e^{-i\frac{\mathbf{v}}{2}\Delta t}e^{-i\mathbf{K}\Delta t}e^{-i \frac{\mathbf{v}}{2}\Delta t}+\mathcal{O}(\Delta t^{3}), \tag{1}\] where the terms \(\mathbf{K}\), and \(\mathbf{V}\) represent the Hamiltonian for kinetic energy and potential energy operator respectively. In recent literature [7, 24], quantum simulation of the imaginary time evolution for the Hamiltonian operator \(\mathbf{H}\) has been considered as a powerful tool for studying quantum systems. In order to implement the time-evolution operator, authors in [7] showed a variational approach to find the ground state energy of a multi-particle system (e.g., lithium hydride). However, this approach is a hybrid approach which considers a classical optimizer in addition to the quantum circuit to prepare a variational ansatz. An interesting approach known as the inexact quantum imaginary time evolution (QITE) algorithm as shown in [24] showed that a unitary operator can be created to a domain \(\mathcal{D}\) smaller than that induced by correlations for the resource-limited quantum computation. For studying the dynamics of free particles in a finite potential well, the Trotterization approach is shown promising, especially the implementation on a quantum machine [4]. However, for the complex Hamiltonian dynamics, circuit optimization has not been explored well, which can pose significant challenges for higher-dimensional configurations [2]. ### _Contributions:_ Given the above background, our approach is conceptually novel for designing a time evolution operator with the following contributions. * We exploit the bi-symmetric diagonal structure of the kinetic energy operator and propose a quantum pyramid architecture using the ladder of CNOT gates. 
It uses half of the samples to be encoded using the circuitry and the other half (about the plane of symmetry) is generated by the reflection operator. We have observed that the ladder of CNOT gates acts as a reflection operator. Using this phenomenon, we design a quantum pyramid architecture for encoding the kinetic energy (which is a bi-symmetric diagonal) operator. * We propose new encoding techniques for the kinetic energy operator. One method is quantum approximate time evolution (QATE) encoding for simulating the Hamiltonian with a high accuracy. Here, the \(1\) qubit gates requirement is less than the state of the art, however, \(2\) qubit gates are similar in number. The other method is called quantum windowing encoding (QWE) which is inspired by the window technique used in signal processing literature. * We implement our proposed algorithms on an IBM quantum machine, and show the results for a Gaussian wave packet evolving in time steps. Further, we show the fidelity result and gate counts for various qubit sizes. We demonstrate new concepts for low-complex simulation of kinetic energy operators, which can have novel applications in the near future. ## II The Time Evolution Operator Given a Hamiltonian \(\mathbf{H}=\sum_{i=1}^{l}\mathbf{H}_{i}\), with \(l\)-local terms, the time evolution of a wave function \(\psi\) for time-step \(\Delta t\) can be written following Schrodinger's equation as \[\ket{\psi_{t+\Delta t}}=\mathbf{U}(\Delta t)\ket{\psi_{t}}, \tag{2}\] where \(\mathbf{U}(\Delta t)=e^{-i\mathbf{H}\Delta t}\) denotes the time-evolution operator that transforms the state of the system from \(\ket{\psi(t)}\) to \(\ket{\psi(t+\Delta t)}\). Various mathematical techniques and numerical methods have been developed to compute the Time Evolution Operator efficiently and accurately as discussed in the background. These methods are crucial for simulating quantum systems, studying quantum dynamics, and exploring the behaviour of complex quantum phenomena. Here, we describe a fermionic Hamiltonian system and discuss some special cases with efficient algorithms for implementation on a quantum machine. Our approach is focused on the structural aspects of the underlying Hamiltonian operator to find optimal gate complexity, thereby reducing the total gate cost as well as the noise level in realistic experimentation on a quantum computer. In a non-relativistic case, the behaviour of Hamiltonian considers that particles (such as electrons) interact in the external potential of another particle (positively-charged nuclei) described within the Born-Oppenheimer approximation, given by \[\mathbf{H}=-\sum_{i}\frac{\nabla_{i}^{2}}{2}-\sum_{i,j}\frac{q_{j}}{\ket{R_{j }-r_{i}}}+\sum_{i<j}\frac{1}{r_{i}-r_{j}}+\sum_{i<j}\frac{q_{i}q_{j}}{R_{i}-R _{j}}. \tag{3}\] Here, \(\mathbf{H}_{1}=-\sum_{i}\frac{\nabla_{i}^{2}}{2}\) denotes the kinetic energy term, \(\mathbf{H}_{2}=-\sum_{i,j}\frac{q_{j}}{\ket{R_{j}-r_{i}}}\) represents the potential energy where \(q_{j}\) are charges of the nuclei, \(R_{j}\) and \(r_{i}\) are positions of the nuclei and electrons respectively; the \(\mathbf{H}_{3}=\sum_{i<j}\frac{1}{r_{i}-r_{j}}\) denotes the electron-electron repulsion potential term, and \(\mathbf{H}_{4}=\sum_{i<j}\frac{q_{i}q_{j}}{R_{i}-R_{j}}\) is some constant term. 
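Before discretizing, here is a small dense-matrix sketch (plain linear algebra rather than a quantum circuit; all variable names are ours and the operators are random stand-ins) of the time evolution in Eq. (2) and of the second-order splitting of Eq. (1):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, dt = 8, 0.05
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
K = (A + A.conj().T) / 2                 # Hermitian stand-in for the kinetic term
V = np.diag(rng.standard_normal(N))      # diagonal stand-in for the potential term

psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
psi /= np.linalg.norm(psi)               # initial state |psi(t)>

U_exact = expm(-1j * (K + V) * dt)                                              # Eq. (2)
U_trot2 = expm(-1j * V * dt / 2) @ expm(-1j * K * dt) @ expm(-1j * V * dt / 2)  # Eq. (1)
print(np.linalg.norm(U_exact @ psi - U_trot2 @ psi))   # O(dt^3) splitting error
```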
Discretization techniques are employed to convert the differential form (3) to a practical computational problem [25, 14], and the Hamiltonian is simplified as \[\mathbf{H}=\mathbf{K}(\hat{p})+\mathbf{V}(\hat{x}), \tag{4}\] where \(\mathbf{K}(\hat{p})\) denotes the discretized kinetic energy operator in momentum domain (\(\hat{p}\)), and the \(\mathbf{V}(\hat{x})\) represents the potential energy term expressed in coordinate domain (\(\hat{x}\)). The effects due to other terms in (3) i.e., \(\mathbf{H}_{3}\), and \(\mathbf{H}_{4}\) can either be ignored for simplification or can be absorbed in the corresponding kinetic or potential energy term as per their representation either in position or momentum basis. Note that, we will consider the first quantization level expression for encoding the energy operators in the quantum circuit. ### _Potential Energy_ The potential energy operator denoted as \(e^{-i\mathbf{V}\Delta t}\), holds immense importance in quantum mechanics as a key component of the Hamiltonian [26, 27]. It characterizes the potential energy associated with a quantum system, exerting a profound influence on its behaviour and properties. In specific scenarios, the potential energy operator manifests in diverse forms contingent upon the characteristics of the potential energy itself. A noteworthy example is the finite step potential, which features abrupt shifts in potential energy at distinct positions within the system. This phenomenon is relevant in numerous contexts, ranging from quantum wells to barrier structures, where the potential energy undergoes sudden changes at specific locations. The step potential operator for evolution time \(\Delta t/r\) (with \(r=2\) for \(2^{nd}\)-order Trotterization) can be written as \[\mathbf{U}_{V}(\Delta t/r)=e^{-i\mathbf{V}(x)\Delta t/r}. \tag{5}\] For a single-step potential, the potential energy abruptly changes at a particular position, given as \[\mathbf{V}(x)=\begin{cases}V_{1}&\text{for }x<x_{0}\\ V_{2}&\text{for }x\geq x_{0},\end{cases}\] where \(V_{1}\) and \(V_{2}\) represent the potential energy values on either side of the step at \(x_{0}\). The potential energy operator for the single-step potential can be realized by applying the appropriate phase shift based on the potential energy values. Similarly, for a double-step potential, there are two abrupt changes in the potential energy at different positions. Mathematically, this can be represented as \[\mathbf{V}(x)=\begin{cases}V_{1}&\text{for }x<x_{1}\\ V_{2}&\text{for }x_{1}\leq x<x_{2}\\ V_{3}&\text{for }x\geq x_{2},\end{cases}\] where \(V_{1}\), \(V_{2}\), and \(V_{3}\) are the potential energy values in different regions separated by the step positions \(x_{1}\) and \(x_{2}\). The potential energy operator for the double-step potential involves applying the respective phase shifts corresponding to each region. In the case of multiple-step potentials, the potential energy exhibits multiple abrupt changes at different positions. The mathematical description becomes more complex, involving multiple regions with different potential energy values and corresponding phase shifts. To implement the potential energy on a digital computer, we need to perform discretization on \(x\)-space (\(-d<x<d\)), where each sample can be represented on a grid (with each smallest grid of \(\Delta x=\frac{2d}{N}\) for \(N\) samples) as \[x_{k}=-d+\left(k+\frac{1}{2}\right)\Delta x. 
\tag{6}\] The single, double, and multiple well potentials (with equal potential barriers) can be implemented as a quantum circuit using elementary quantum gates as follows \[\mathbf{U}_{V}^{s} =e^{-i\eta Z\Delta t/r}\otimes I\otimes I\cdots\otimes I,\] \[\mathbf{U}_{V}^{d} =I\otimes e^{-i\eta Z\Delta t/r}\otimes I\cdots\otimes I\text{ and},\] \[\mathbf{U}_{V}^{m} =I\otimes I\ldots I\otimes e^{-i\eta Z\Delta t/r}\otimes I\cdots \otimes I, \tag{7}\] where \(\eta\) denotes magnitude of the potential barrier, and \(Z\) is the Pauli-\(Z\) operator. Note that, by changing the position of the Pauli-\(Z\) operator with respect to the identity operators (a single qubit operation) we can create any choice of the step potentials. For a \(N\times N\), potential energy operator, here we need a Pauli-\(Z\) operator. Hence, for an \(n\) input qubit circuit, the overall number of elementary quantum gates required for the quantum gate implementation of the potential energy operator is given by \(\tilde{\mathcal{O}}(n)\) with \(N=2^{n}\). ### _Kinetic Energy_ The unitary form of the kinetic energy operator is expressed as \(\mathbf{U}=\mathbf{e}^{-i\mathbf{K}t}\), which is a diagonal matrix representing the time evolution operator. Mathematically, we can represent the kinetic energy operator as \(\mathbf{K}=\frac{\mathbf{p}^{2}}{2m}\), where \(\mathbf{p}\) is the momentum operator and \(m\) is the mass. For the implementation of the Kinetic energy operator on a digital computer, we need the discretized representation of the system, where the position is represented by discrete points or samples. In this case, we can represent the momentum variable as \(p=n\Delta p\), where \(n\) is an integer representing the sample index and \(\Delta p\) is the spacing between samples. Assuming a simple configuration (e.g., the motion of a free particle in a step potential), the kinetic energy operator shows a parabolic function of the momentum variable and exhibits a plane of reflection in about half of the samples (refer to Fig. 3.\(a\)). We consider a one-dimensional grid in the \(p\)-space, taking a finite range \(-d<p<d\) with \(N\) uniformly spaced grid-points representing the samples of the continuous variable \(p\) given by \[p_{j}:=\frac{\pi}{d}\left(j+\frac{1}{2}-\frac{N}{2}\right)\quad\text{for }j=0,\ 1,\ \ldots,N-1. \tag{8}\] As the kinetic energy operator is a parabolic function of \(p\), it is an even function with respect to the sample index \(p_{j}\), and its diagonal elements exhibit symmetry about the plane of reflection. It shows that the Hamiltonian operator corresponding to kinetic energy has a bi-symmetric pattern due to its parabolic (or even-symmetric) nature. To encode the kinetic energy function in the Hamiltonian operator, we define another variable \(\mathbf{D}_{\theta}:=\mathbf{K}(\tilde{p})\Delta t\) where \(\mathbf{D}_{\theta}=\mathbf{diag}(\theta_{0},\ \theta_{1},\ldots,\theta_{l},\theta_{l},\ldots, \theta_{N-1})\) with its plane of reflection about the coordinate \((l,l)\) (here, \(l=\frac{N}{2}\)). 
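As a quick sanity check of this symmetry, the sketch below evaluates the kinetic-energy phases \(\mathbf{K}(p_{j})\Delta t\) on the grid of (8) and verifies that they are mirror-symmetric about the midpoint. The range \(d\), the time step, and the qubit count are illustrative assumptions.

```python
# Sanity check: the kinetic-energy phases on the grid of Eq. (8) are mirror-symmetric,
# so only the first N/2 values need to be encoded explicitly (the rest follow by reflection).
import numpy as np

n, dt, d = 5, 0.1, 10.0            # illustrative values
N = 2**n
j = np.arange(N)
p = (np.pi / d) * (j + 0.5 - N / 2)      # momentum samples, Eq. (8)
phi = p**2 * dt / 2                      # K(p_j) * dt; the theta_j of the text differ only in sign

assert np.allclose(phi, phi[::-1])       # plane of reflection about the midpoint (l = N/2)
print(np.round(phi[: N // 2], 3))        # the half that the pyramid architecture encodes directly
```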
Using this definition, we can construct the time evolution operator \(\mathbf{U}_{K}\) as a diagonal matrix whose diagonal elements are the phase factors \(e^{-i\theta_{j}}\), as follows \[\mathbf{U}_{K}(\Delta t) =e^{-i\mathbf{D}_{\theta}}\] \[=e^{-i\,\mathbf{diag}(\theta_{0},\ \theta_{1},\ldots,\theta_{l},\theta_{l},\ldots,\theta_{N-1})}\] \[=\begin{pmatrix}\mathbf{B}&\mathbf{0}\\ \mathbf{0}&\mathbf{B}_{ref}\end{pmatrix}, \tag{9}\] where \(\mathbf{B}=\mathbf{diag}(\mathbf{b})\), and \(\mathbf{B}_{ref}\) is the reflection of \(\mathbf{B}\), with \(\mathbf{b}=[e^{-i\theta_{0}},\ e^{-i\theta_{1}},\ldots,\ e^{-i\theta_{l}}]\). Thus, the unitary \(e^{-i\mathbf{K}t}\) obtained through Hamiltonian simulation captures the time evolution of the system under the influence of the kinetic energy operator, with a parabolic momentum dependence and a plane of reflection symmetry about half of the samples. The direct implementation of the operator \(\mathbf{U}_{K}(\Delta t)\) on a superconducting qubit-based quantum machine using the Trotterization method is addressed in [4, 11]. ## III Encoding Kinetic Energy Evolution Operator on a Quantum Circuit In this research work, we exploit the parabolic nature of the kinetic energy, which generates the bi-symmetric pattern in the unitary operator \(\mathbf{U}_{K}\). Here, we define the parameter \(\theta_{j}=\frac{-p_{j}^{2}\Delta t}{2m}\) (for simplicity, we take \(m=1\)) for \(j=0,\ldots,N-1\). As a consequence, the \((j,j)^{th}\) coordinate of \(\mathbf{U}_{K}\) is the element \(e^{-i\theta_{j}}\). A conceptually novel quantum architecture is shown here which can efficiently simulate (in an approximate sense) the operator \(\mathbf{U}_{K}\) on quantum hardware by exploiting its structure. The lemmas below demonstrate the motivation behind our approach for the bi-symmetric kinetic energy operator. **Lemma 1**.: _Let \(\mathbf{C}\) be a CNOT operator, and let \(\mathbf{P}=\mathbf{I}\otimes\mathbf{P}_{1}\) be another operator, with \(\mathbf{I}\) the identity operator and \(\mathbf{P}_{1}\) some phase gate. Then \(\mathbf{R}=\mathbf{C}\mathbf{P}\mathbf{C}^{\dagger}\) is a bi-symmetric diagonal quantum operator._ Proof.: The proof is given in Appendix-VIII-A. **Lemma 2**.: _Given a list of phase gates \(\mathbf{P}_{1},~{}\ldots,~{}\mathbf{P}_{n}\) with every \(\mathbf{P}_{j}=diag([1~{}e^{i\theta_{j}}])\) placed at the \(j^{th}\) qubit, from \(q[1]\) (second qubit) to \(q[n-1]\) (last qubit), with \(n=\log_{2}N\), the product of the operators \(\mathbf{F}_{1},~{}\ldots,~{}\mathbf{F}_{n}\) is a diagonal matrix of dimension \(N\times N\) with the first \(2^{n-1}\) elements repeated in order along the main diagonal, where every \(\mathbf{F}_{j}\) is obtained by placing the \(\mathbf{P}_{j}\) phase gate at the \(j^{th}\) qubit in the absence of any other gates._ Proof.: The proof is given in Appendix-VIII-B. Often, the kinetic energy of a fermion has the form of a quadratic (parabolic) function, which can be represented as a bi-symmetric and diagonal unitary operator. Based on Lemma 1 and Lemma 2, we give the below proposition for the bi-symmetric diagonal operator. 
**Proposition 1**.: _The operator \(\mathbf{A}=\mathbf{I}\otimes\cdots\otimes(\left|\mathbf{0}\right\rangle\left\langle \mathbf{0}\right|\times\mathbf{I}~{}+~{}\left|\mathbf{1}\right\rangle\left\langle \mathbf{1}\right|\times\mathbf{X})\otimes\mathbf{I}\otimes\cdots\otimes \mathbf{I}\) is a row-exchange operator for a given matrix \(\mathbf{F}\) (assuming compatible with \(\mathbf{A}\)) and a Pauli-operator \(\mathbf{X}\), if it is multiplied as \(\mathbf{A}\mathbf{F}\), and \(\mathbf{A}^{\dagger}=\mathbf{A}\) is a column-exchange operator when it is post-multiplied as \(\mathbf{F}\mathbf{A}^{\dagger}\). The product \(\mathbf{A}\mathbf{F}\mathbf{A}^{\dagger}\) has a symmetry about the mid-point along the main diagonal when \(\mathbf{F}=\mathbf{F}_{1},\ldots,\mathbf{F}_{n}\) following lemma-2._ Proof.: The proof is given in Appendix-VIII-C. ### _Proposed Algorithm_ The Trotterization algorithm is employed for the overall time-evolution algorithm design. The pseudo-code for the \(2^{nd}\) order Trotter-Suzuki method is given in Algorithm-1. Note that, in the Trotterization method we employ the quantum Fourier transform (denoted as \(\mathbf{U}_{QFT}\)) to transform the Kinetic energy from momentum basis (\(\hat{p}\)) to space basis (\(\hat{x}\)). The implementation of potential energy operator term \(\mathbf{U}_{P}=e^{-i\mathbf{V}t/2}\) can be implemented in linear gate complexity with input qubit size as discussed earlier. Employing the Trotterization technique to implement the kinetic energy term \(\mathbf{U}_{K}=e^{-i\mathbf{K}t}\) on a quantum machine requires significant quantum resources. Here, we propose a new quantum architecture, namely quantum pyramid architecture (QPA) which helps us design a bi-symmetric operator, which is often the case for kinetic energy operators. A natural operator representation of the kinetic energy as a function of momentum has a plane of reflection about the skew-diagonal as shown in (9). Exploiting this structure, our proposed quantum architecture is shown in Algorithm-2. ``` 1:procedureTrotterization(\(\mathbf{K}\), \(\mathbf{V}\), \(\Delta t\)) 2:\(\mathbf{U}\leftarrow\) Identity Matrix 3:\(t\gets t_{0}\) 4:\(\left|\psi_{t}\right\rangle\leftarrow\psi_{t_{0}}\)\(\triangleright\) Initialization 5:for\(t\gets t+\Delta t\)do\(\triangleright\) Apply Trotter-Suzuki approximation 6:\(\mathbf{U}\gets e^{-i\mathbf{V}t/2}\cdot\mathbf{U}_{QFT}e^{-i\mathbf{K}t} \mathbf{U}_{IQFT}\cdot e^{-i\mathbf{V}t/2}\).\(\triangleright\) Apply Trotterization 7:\(\left|\psi_{t+\Delta t}\right\rangle\leftarrow\mathbf{U}\left|\psi_{t}\right\rangle\) 8:endfor 9:return\(\mathbf{U}\), \(\left|\psi_{t+\Delta t}\right\rangle\) 10:endprocedure ``` **Algorithm 1** Trotterization-based Time Evolution Operator _Note on Algorithm-2_: Here, the inputs are number of registers (\(n\)), and the phase vector \(\boldsymbol{\theta}=\theta_{0}~{},\ldots,~{}\theta_{l}\). The quantum pyramid architecture can be designed in a ladder-cascaded form as shown in Fig. 1 following the QPA pseudo-code as described. Here, \(QR,~{}CR\), and \(QC\) denote the number of quantum registers, classical registers and the quantum circuit respectively. Here, \(Cx\) denotes the controlled-NOT gate, which creates entangled quantum states in the circuit. We demonstrate two methods for the encoding of the Hamiltonian. The first encoding method approximates the kinetic energy operator with \(n-1\) phase gates, and \({}^{n-1}\mathcal{C}_{2}\) controlled-phase gates. 
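Before detailing the two encodings, the reflection mechanism that Lemma 1 and Proposition 1 rely on can be checked numerically in a few lines. The two-qubit example, the phase value, and the convention of taking the first qubit as control are assumptions for illustration, not the paper's exact construction.

```python
# Numerical illustration of Lemma 1 on a 2-qubit example: conjugating a phase gate
# on the target qubit by a CNOT yields a diagonal operator whose entries are
# mirror-symmetric (bi-symmetric) about the midpoint.
import numpy as np

theta = 0.7                                        # arbitrary illustrative phase
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)     # control = first qubit (assumed convention)
P = np.kron(np.eye(2), np.diag([1, np.exp(1j * theta)]))   # P = I (x) P1
R = CNOT @ P @ CNOT.conj().T

diag = np.diag(R)
print(np.round(diag, 3))                           # [1, e^{i theta}, e^{i theta}, 1]
assert np.allclose(diag, diag[::-1])               # bi-symmetric diagonal, as in Lemma 1
```

This is, in miniature, the reflection that the ladder of CNOT gates in Fig. 1 is described as performing on the full register.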
The complexity can be further reduced to \(\mathcal{O}(n)\) for certain experiments where \(n\)-level energies are studied instead of \(n^{2}\) available bands, with a proposed approach called quantum windowing encoding (QWE). ### _Encoding Method_ The QPA algorithm helps us to encode the Hamiltonian for \(\frac{N}{2}\) sampling points, instead of \(N\) samples. The \(\frac{N}{2}\) phase samples stored in the vector \(\boldsymbol{\theta}\) needs to be encoded in the matrix exponential as \(e^{-i\boldsymbol{\theta}}\) along the diagonal of the operator \(\mathbf{U}_{K}\). In the encoding procedure, we will be using \(1\)-qubit phase gates (denoted as \(p\)) and \(2\)-qubit controlled-phase gates (denoted as \(Cp\)). Here, we propose two encoding techniques as follows. #### Iii-B1 Quantum Approximate Time Evolution (QATE) The quantum approximate time evolution (QATE) encoding method uses \(n-1\) number of phase gates, and \({}^{n-1}\mathcal{C}_{2}\) controlled phase gates to approximate the matrix exponential for all \(2^{n-1}\) samples of phases. The pseudo-code of the QATE algorithm is given below. ``` 1:procedureQATE(\(n\)) 2:\(\boldsymbol{\theta}_{P}\leftarrow\) Primary angles 3:\(\boldsymbol{\theta}_{C}\leftarrow\) Composite angles 4:for\(i\gets 0:n-1\)do 5:\(QC.P(-\boldsymbol{\theta}_{P}[i],QR[i+1])\)\(\triangleright\) Phase gate encoding 6:endfor 7:for\(i,x\) in enumerate(sequence)do 8:\(index=i\) 9:\(QC.Cp(-\boldsymbol{\theta}_{C}[index],QR[x[0]],QR[x[1]])\)\(\triangleright\) Controlled-phase gate encoding 10:endfor 11:return\(\mathbf{U}_{K}\) 12:endprocedure ``` **Algorithm 3** Proposed QATE Encoding _Note on Algorithm-3:_ In the QATE encoding method, we first allocate the \(\theta_{0}\) sample as a global phase in the initialization of the algorithm, which does not require any additional resources. The working principle of the QATE algorithm is discussed as follows. * We will consider a \(n\) qubit circuit, to encode the kinetic energy function \(\mathbf{K}\) in the diagonal unitary matrix form as an evolution operator. Here, the parameters \(\theta_{j}=-\frac{p_{j}^{2}\Delta t}{2}\) are the samples of the Kinetic energy samples multiplied by the evolution time. * Using a QPA architecture, one can see some indices of the unitary matrix can be uniquely prepared by placing phase gates in the quantum circuits. The other indices of the unitary diagonal matrix are composed of the linear combinations of those phase angles. Based on these observations, we divide the set of angles \(\{\theta[j]\}\) into two sets. One set is called primary angles, which is stored in an array \(\boldsymbol{\theta}_{P}\). Note that, we will subtract the global phase here, in case the first phase component (i.e., \(\theta[0]\)) is encoded as a global phase. * We have observed that if we assign phase gates from the second qubit onward till the last qubit (from up to down approach), the positions (or indices along the diagonal) which are uniquely represented with those phase angles ( while other indices are linear combinations of the phase angles due to the tensor product representation of all gates) can be found as follows: \[\begin{cases}2^{k},&\text{for}\ \ k=0:n-1\\ 2^{k}-1,&\text{for}\ \ k=n.\end{cases}\] (10) For example, if \(n=3\), the unique functional values in the diagonal of the overall unitary operator can be found at indices \(2,4,7\). 
* Now, suppose one is interested in encoding the vector in the principal diagonal of the unitary matrix in one-to-one correspondence with the angles \(\theta_{j}\), with the functional value \(e^{-i\theta_{j}}\) for \(j\) varying from \(0\) to \(2^{n}-1\). The first functional value is already encoded via the global phase \(\theta_{0}\), which eliminates any gate requirement for encoding the first functional point \(e^{-i\theta_{0}}\). As a consequence, all other functional values are biased by this angle, which needs to be subtracted in the subsequent encoding. To perform this manipulation, we define dummy variables \(\{a_{i}\}\) in relation to the phase angles \(\{\theta_{j}\}\). * Our first approach is to encode the primary angles, which are the \(\{\theta_{j}\}\) where \(j\) follows (10). They can uniquely encode functional values. Accordingly, we define the dummy variables \(\{a_{i}\}\) (with suffixes given by the unique indices) by adjusting for the global phase. For the \(n=5\) qubit system, one can follow (11). Here, we choose \(\theta_{2},\theta_{4},\theta_{8},\theta_{15}\) to encode their corresponding functional values in the diagonal unitary operator. Note that the QPA architecture encodes half of the samples (here, \(16\) samples are considered instead of \(32\)) and obtains the other half by reflection, exploiting the bi-symmetric structure of the operator. * Now, the other indices in the principal diagonal of the unitary matrix are already impacted by the primary angles. We have observed that for \(n=5\), the positions \(6,7,10,11,12,13\) of the diagonal are impacted, and they are in fact related to combinations of the primary angles. One can easily find this relation for an \(n\)-qubit system by examining several values of \(n\). Note that, as these positions are impacted, we can manipulate them with entangled controlled-phase gates and retain the actual functional values. For example, for \(n=5\), one can see in (11) that we have encoded \(\theta_{6},\theta_{7},\theta_{10},\theta_{11},\theta_{12},\theta_{13}\) with their functional correspondence in the diagonal of the unitary matrix. We have used the dummy variables \(a_{6},\ldots,a_{13}\) here for adjusting the primary angles and the global phase. We call these angles the composite angles. * All the primary angles are stored in an array \(\boldsymbol{\theta}_{P}\), and all the composite angles need to be stored in another array, namely \(\boldsymbol{\theta}_{C}\), which are used in the QATE algorithm. Note that, with the QPA architecture, the QATE encoding requires \(n-1\) phase gates, and there will be \({}^{n-1}C_{2}\) possible combinations of controlled-phase gates. Hence, the size of \(\boldsymbol{\theta}_{P}\) is \(n-1\), and it is \({}^{n-1}C_{2}\) for \(\boldsymbol{\theta}_{C}\). One can find (by parity checking of the binary representation of the indices where the qubits are placed) that there is a relation between the primary angles and the composite angles, related to the placement of phase gates on certain qubits. We have found that the phase angles for the \(Cp\) gates can be obtained as a function of the sequence generated from the primary angles. For example, in case \(n=3\), the phase of the \(Cp\) gate will be \(\boldsymbol{\theta}_{C}[1]=\theta[1]-(\boldsymbol{\theta}_{P}[1]+\boldsymbol{ \theta}_{P}[2])-\theta[0]\), where \(\theta[0]\) is the global phase and \(\boldsymbol{\theta}_{P}[1],\boldsymbol{\theta}_{P}[2]\) are primary angles. A minimal circuit-assembly sketch is given right after this list.

Fig. 1: Proposed quantum pyramid architecture for bi-symmetric Hamiltonian operator. 
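The following Qiskit sketch shows how these ingredients could be assembled: a global phase for \(\theta_{0}\), phase gates carrying the primary angles, controlled-phase gates carrying the composite angles, and a CNOT ladder playing the role of the reflection. The qubit placements, the endianness convention, and the placeholder angle values are assumptions for illustration; the actual angle bookkeeping follows (10) and (11). The final assertion only checks the bi-symmetric shape of the resulting diagonal, not agreement with a specific kinetic-energy profile.

```python
# Structural sketch (not the paper's exact circuit) of a QATE-style encoding:
# a CNOT ladder controlled on the top qubit reflects the half encoded by the
# phase / controlled-phase block, giving a bi-symmetric diagonal operator.
from itertools import combinations
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

n = 4
theta_P = 0.1 * np.arange(1, n)                       # placeholder primary angles
pairs = list(combinations(range(n - 1), 2))
theta_C = 0.01 * np.arange(1, len(pairs) + 1)         # placeholder composite angles

qc = QuantumCircuit(n)
qc.global_phase = -0.05                               # theta_0 absorbed as a global phase

for q in range(n - 1):                                # CNOT ladder (first half of the pyramid)
    qc.cx(n - 1, q)
for q in range(n - 1):                                # primary angles on individual qubits
    qc.p(-theta_P[q], q)
for (a, b), th in zip(pairs, theta_C):                # composite angles on qubit pairs
    qc.cp(-th, a, b)
for q in range(n - 1):                                # CNOT ladder (second half of the pyramid)
    qc.cx(n - 1, q)

diag = np.diag(Operator(qc).data)
assert np.allclose(diag, diag[::-1])                  # bi-symmetric diagonal, as intended
```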
QATE algorithm is further explained in the result section. #### Iii-C2 Quantum Windowing Evolution (QWE) One may find interest in the study of the time evolution operator for certain regions of interest in the momentum domain. It can be either estimating the portion of the wave function where the probability amplitudes are at their peak or at locations where the amplitudes are low. We may often seek to know in certain regions of the lattice in the momentum domain and the corresponding kinetic energy (KE) time-evolution operator that accurately gives only those portions. In such a scenario, we can reduce the quantum gate complexity to a linear scale, i.e., \(\mathcal{O}(n)\), by windowing the region of interest to certain lattice points for a close approximation of the evolution operator. The pseudo-code for the proposed quantum windowing evolution (QWE) encoding is described as follows. It is to be noted that the windowing operation is being done in the momentum domain, where the KE operator operates. _Note on Algorithm-4_: ``` 1:procedureQWE(\(n\),\(\boldsymbol{\theta}_{W}\), \(K\)) 2:if\(k\in K_{1}<K\) is in \(\boldsymbol{\theta}_{W}\)then 3: Sort \(\boldsymbol{\theta}_{K_{1}}\) 4:for each\(k\in K_{1}\)do 5: perform \(QC.P(\boldsymbol{\theta}_{K_{1}}[k],QR[k])\) 6:endfor 7:else 8: Sort \(\boldsymbol{\theta}_{m}\) for \(m=K-K_{1}\) 9:for\(k=1\) to \(n\)do 10: perform \(QC.Cp(\boldsymbol{\theta}_{m}[k],QR[1],\boldsymbol{\theta}[m-1])\) 11:endfor 12:endif 13:return\(\mathbf{U}_{K}\) 14:endprocedure ``` **Algorithm 4** Proposed QWE Encoding In the QWE encoding, we decide a window of samples having length \(K\in\tilde{\mathcal{O}}(n)\), denoted as \(\boldsymbol{\theta}_{W}\subset\boldsymbol{\theta}\). We find the angles \(\theta_{k}\) (\(k=1,\ldots,K_{1}<K\)) for which \(e^{-i\theta_{k}}\) is an element in the diagonal of \(\mathbf{D}_{\theta}\), and we sort all such angles in \(\boldsymbol{\theta}_{K_{1}}\) which can be created by the phase gates. Similarly, we find the remaining \(m=K-K_{1}\) positions which can be created as entangled positions by the CNOT gates and sort them as \(\boldsymbol{\theta}_{m}\). The windowing encoding method follows similar embedding as in QATE encoding to prepare the quantum circuit. However, the total number of gates required in this procedure is kept within \(\mathcal{O}(n)\) to realize \(n\)-amplitudes instead of the \(n^{2}\) found in the entire lattice. However, the QWE encoding may not be used for the time evolution of a wave packet in the displacement domain. The QWE algorithm can be a low-cost version of the QATE algorithm, where one can play with how many \(Cp\) gates need to be placed in the circuit thereby trade-off between complexity and accuracy. ## IV Results and Discussions In this result section, we demonstrate numerical simulation results performed on an IBM quantum machine and quantum simulator. In the below subsections, we show the implementation of the kinetic energy operator for the \(5\) qubit system as an example using the QATE and QWE encoding method. Further, we portray the time evolution of a Gaussian wave function following (2) with the proposed QPA and QATE encoding method (compared with classical simulation). The performance of the proposed framework of the time evolution operator is measured with fidelity and complexity as the key parameter indices (KPIs) and also compared with the state-of-the-art method. In the below Table-I, we have shown the choice of parameters taken in the simulation environment. 
\begin{table} \begin{tabular}{|c|c|} \hline Quantum Simulator & Statevector Simulator, Qasm Simulator \\ \hline Number of qubits (\(n\)) & \(3-10\) \\ \hline Number of shots & \(1000-10000\) \\ \hline Evolution time (\(\Delta t\)) & \(0.1\) second \\ \hline Range of space coordinate (\(x\)) & \([-10,\,10]\) Å (Angstrom) \\ \hline Sampling interval (\(dx\)) & \(0.625\) \\ \hline Wave packet encoding method & amplitude encoding \\ \hline \end{tabular} \end{table} TABLE I: Simulation parameters

### _Proposed Experimental Procedure for Kinetic Energy Operator Design_ Algorithm 2 shows how a pyramid-like architecture of CNOT gates helps to design a bi-symmetric operator. In our case, the kinetic energy operator has a plane of reflection about the skew-diagonal. As a consequence, the first \(\frac{N}{2}\) elements of the diagonal matrix are the reflection of the second \(\frac{N}{2}\) elements of the matrix. However, constructing the matrix with the desired functional values in the \(a_{i,i}\) positions for \(i\in[N]\) requires the proper choice of phases in the one-qubit phase gates (\(P\)) and two-qubit controlled-phase gates (\(Cp\)) placed on particular qubits in the circuit. Here, we demonstrate the experimental set-up for \(5\) qubits as an example. The discrete kinetic energy values in vector form \(\mathbf{k}=[KE(p_{0}),\ KE(p_{1}),\ \ldots,KE(p_{31})]\) can be encoded in angles \(\boldsymbol{\theta}=[\theta_{0},\theta_{1},\ldots,\theta_{31}]\) for time segment \(\Delta t\) as \(\boldsymbol{\theta}=\mathbf{k}\Delta t\). As a consequence, the unitary operator for the Hamiltonian operator \(\mathbf{k}\), denoted as \(\mathbf{U}_{k}=e^{-i\mathbf{diag}(\mathbf{k})\Delta t}\), becomes a function of \(\boldsymbol{\theta}\), expressed as \(\mathbf{U}_{k}:=\exp\left(-i\,\mathbf{diag}(\boldsymbol{\theta})\right)\). As \(\mathbf{U}_{k}\) is a bi-symmetric operator about its skew-diagonal, we design a quantum circuit for the first half of the samples (i.e., \(\theta_{0},\ldots,\theta_{15}\)) using the QPA algorithm. With QPA, we have implemented the kinetic energy operator using a combination of phase gates and controlled-phase gates. For the \(5\)-qubit quantum circuit, our choice of the phase angles (after adjustment with global phases and employing the QATE encoding method) is given in (11). Using these angles and following the QATE procedure, we simulate the kinetic energy operator for the \(5\)-qubit quantum circuit on an IBM machine using the 'Statevector' quantum simulator, as shown in Fig. 2. Here, we have used four phase gates and six \(Cp\) gates to simulate the operator \(\mathbf{U}_{K}\in\mathbf{C}^{32\times 32}\). One can simulate the Hamiltonian \(\mathbf{U}_{k}\) using a classical procedure with \(2^{n}\) samples of angles (note that here angle refers to the quantity "kinetic energy \(\times\) time" in the expression \(e^{-i\mathbf{K}t}\)). The discretized kinetic energy as a function of momentum is shown in Fig. 3.\(a\). We show the plot of the diagonal array of the quantum-simulated unitary matrix \(\mathbf{U}_{k}\) designed for \(5\) qubits in Fig. 3.\(b\). Note that the QPA algorithm with the QATE encoding technique simulates the kinetic energy operator arbitrarily close to the classically simulated operator; the two overlap in the figure. However, the QWE encoding technique can only simulate the part of the operator within our region of interest, with a smaller number of quantum gates. In Fig. 3.\(c\), we show the mid-windowing method. 
Here, we estimate \(4\) amplitudes in the mid-region of the evolution operator. For the mid-windowing evolution, our choice of angles (with adjustments following the QWE algorithm) is as follows: \(a_{0}=\theta[0],\ a_{15}=\theta[15]-a_{0},\ a_{11}=\theta[11]-a_{15}-a_{0},\ a_{12}=\theta[12]-a_{11}-a_{0},\ a_{13}=\theta[13]-a_{15}-a_{0},\) and \(a_{14}=\theta[14]-a_{11}-a_{12}-a_{13}-a_{0}\). Here, phase gates are placed (following QWE encoding) from the second to the fifth qubit register with phases \(-a_{13},-a_{11},-a_{12},-a_{15}\) respectively, and a controlled-phase gate is applied between the second and third qubit registers to create an entanglement with phase \(-a_{14}\). Similarly, we show side-window encoding, when we are interested in the terminal region of the evolution operator (e.g., near the valence energy states), in Fig. 3.\(d\). Here, we encode the side-windowing evolution operator with the angles \(a_{0}=\theta[0],\ a_{2}=\theta[2]-a_{0},\ a_{4}=\theta[4]-a_{0},\) and \(a_{1}=\theta[1]-a_{2}-a_{3}-a_{4}-a_{0}\). Here, we place the phase gates from the second qubit to the fourth qubit with angles \(-a_{2},\ -a_{4},\ -a_{3}\) respectively, and we place a controlled-phase gate with angle \(-a_{1}\) between the second and third qubit registers. Note that here we have used \(\theta[k]\) and \(\theta_{k}\) interchangeably to denote the phase sample at the \(k^{th}\) index. A Gaussian wave packet of the form \(\boldsymbol{\psi}_{0}=e^{-\frac{x^{2}}{4\sigma^{2}}}e^{ik_{0}x}\) is considered for the study of its time evolution dynamics, as is often chosen for initialization [28, 29]. The wave function is normalized and embedded as the initial quantum state in the qubit registers using the amplitude encoding method. The kinetic energy operator \(\mathbf{U}_{k}(\hat{p})\) designed with the proposed QPA method and QATE encoding is applied to the initial state to get the final state \(\boldsymbol{\psi}_{t}\). As the kinetic energy is a function of the momentum (\(p\)), and the wave function is defined in terms of the space coordinate (\(x\)), we employ the quantum Fourier transform (QFT) and its inverse (IQFT) to represent the overall operator in the displacement domain (space coordinate) as \[\boldsymbol{\psi}_{t}=\mathbf{U}_{QFT}\mathbf{U}_{k}(\hat{p})\mathbf{U}_{ IQFT}\boldsymbol{\psi}_{0}. \tag{12}\] In Fig. 4, we show the time evolution of the Gaussian wave packet performed on the IBM 'Qasm simulator' with \(5\) qubit registers. We have considered an evolution time of \(\Delta t=0.1\), and varying Trotterization steps (\(10-50\)). The simulation is performed for \(10000\) quantum shots to obtain the probability histogram over the measurement bases (\(00000\) to \(11111\)). With \(5\) qubit registers, the quantum-evolved state approaches the classically evolved state with a fidelity of approximately \(0.73\). However, by increasing the qubit size the fidelity can be further improved, as discussed in the next subsection. Here, we have seen that the quantum-evolved state obtained with our proposed quantum framework is very close to the classically evolved state, which has potential uses for the study of the dynamics of various wave functions in physics and chemistry. Note that the units of time (\(\Delta t\)) and space (\(dx\)) for the study of atomic or orbital energy levels may be atomic units (a.u.). _Note:_ For the time evolution of a wave packet in the coordinate domain, we need to perform the QFT (to transform the kinetic energy from the momentum domain to the coordinate domain). 
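A purely classical sketch of the single evolution step (12) is given below, with the discrete Fourier transform standing in for the QFT and an assumed Gaussian initial state. The grid, the packet width \(\sigma\), the wave number \(k_{0}\), and the time step are illustrative, and the momentum grid is taken directly from (8) without reordering to the FFT frequency convention, so this is a structural illustration rather than a spectrally exact propagator.

```python
# Classical sketch of one evolution step of Eq. (12): DFT as a stand-in for the QFT,
# U_K the diagonal kinetic phase in the momentum basis.  All parameters are assumed.
import numpy as np

n, d, dt = 5, 10.0, 0.1
N = 2**n
x = -d + (np.arange(N) + 0.5) * (2 * d / N)
p = (np.pi / d) * (np.arange(N) + 0.5 - N / 2)

sigma, k0 = 1.0, 1.0
psi0 = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi0 /= np.linalg.norm(psi0)                       # amplitude-encoded initial state

F = np.fft.fft(np.eye(N), norm="ortho")            # DFT matrix standing in for U_QFT
U_K = np.diag(np.exp(-1j * p**2 * dt / 2))         # diagonal kinetic evolution, cf. Eq. (9)

psi_t = F @ U_K @ F.conj().T @ psi0                # Eq. (12): QFT . U_K . IQFT . psi_0
print(np.round(np.abs(psi_t[:8])**2, 4))           # probabilities of the first few basis states
```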
For this, we rely on the QATE encoding procedure, which encodes the kinetic energy for the entire domain of interest. However, the QWE encoding technique is limited to the momentum domain, for applications within small regions of interest. Applying the QFT in the quantum circuit with QWE encoding may not provide perfect time evolution. Also, applying the time evolution operator directly in the momentum domain is limited at present. ### _Fidelity comparison_ We perform a quantum fidelity test using the quantum swap circuit to measure the accuracy in terms of the inner product \(\langle\psi|\phi\rangle\), where \(\psi\) is the actual (or target) state and \(\phi\) denotes the estimated (or output) state. The swap test circuit takes two input states \(\psi_{t}\) and \(\phi_{t}\) and outputs a probability in the computational basis \(|0\rangle\) as \[\Pr(\text{first qubit}=0)=\frac{1}{2}+\frac{1}{2}\left|\left\langle\psi_{t}|\phi_{t} \right\rangle\right|^{2}. \tag{13}\] In Fig. 5, we show a schematic of the swap-test circuit used to measure the distance between the quantum-evolved state (\(\phi_{t}\)) and the target state (\(\psi_{t}\)). The target state is a quantum state obtained by amplitude encoding of the classically evolved state. The output state \(\phi_{t}\) is the quantum-evolved state obtained by applying the proposed quantum time evolution operator (\(\mathbf{U}(\Delta t)\)) to the initial quantum state (\(\psi_{0}\)). The swap-test circuit is composed of Hadamard gates, a control qubit, the input states, and the swap gate, as shown in Fig. 5. We have performed our experiments for varying qubit sizes (\(n\)) to test the fidelity of the proposed quantum circuit, and we also compared our result with the method of Shokri et al. [4]. For the fidelity experiments, we have kept the evolution time \(\Delta t=0.1\) (with varying Trotter step size), and various numbers of measurement shots (depending on the qubit size) are performed on the IBM Qasm simulator. It is observed that our proposed quantum circuit possesses a fidelity of \(0.73\) with \(3\) qubits, and it reaches a fidelity of \(0.99\) with \(9\) or more qubits. Compared to the state of the art (approximately \(0.89\) with \(9\) qubits), the proposed quantum circuit shows a significant improvement in fidelity (reaching \(0.99\)). ## V Computational complexity and error analysis The computational gate complexity of the Trotter-Suzuki method for Hamiltonian simulation is \(\Theta(n^{2})\) for a qubit size of \(n\). In recent literature, Shokri et al. [4] have shown an implementation (up to \(5\) qubits) which takes a total of \(3n+{}^{n}\mathcal{C}_{2}\) gates. Our proposed QPA-based algorithm with the QATE encoding method further reduces the complexity, as given in the lemma below. **Lemma 3**.: _Given an \(n\)-qubit quantum circuit, the QATE algorithm requires \(\mathcal{O}(n)\) \(1\)-qubit gates, and \({}^{n-1}\mathcal{C}_{2}+2(n-2)\) \(2\)-qubit gates to design a bi-symmetric diagonal evolution operator._ Proof.: The number of \(Cx\) gates required for the exchange operator discussed in Proposition 1 to prepare the bi-symmetric pattern in the diagonal matrix is given by \(2(n-1)\). The prime locations for an \(n\)-qubit quantum circuit are created with \(n-1\) phase gates. 
The number of combinations of controlled-phase gates that can be placed in the quantum circuit for generating the bi-symmetric pattern following the QATE algorithm is given by \({}^{n-1}\mathcal{C}_{2}\). Hence, the total numbers of single-qubit and \(2\)-qubit quantum gates are given by \(\mathcal{O}(n)\) and \({}^{n-1}\mathcal{C}_{2}+2(n-2)\), respectively. The gate complexity as compared to [4] has been improved for \(1\)-qubit quantum gates. The overall fidelity of the quantum circuit is slightly increased in the proposed algorithm with the QATE encoding method. In problems where one is interested in a specific region of the evolution operator, the QWE technique is preferred. To realize \(n\) states, one can restrict the gate complexity to \(\mathcal{O}(n)\) with the proper choice of phase and controlled-phase gates. In fact, a least-squares approach may also be adopted to find an approximation of the kinetic energy operator by revising our QWE approach with modified phase angles, which offers a trade-off between approximation accuracy and complexity (between \(\mathcal{O}(n)\) and \(\mathcal{O}(n^{2})\)).

Fig. 2: Kinetic energy evolution operator designed for \(5\) qubit system with proposed QPA and QATE encoding using QISKIT script on IBM 'Statevector' quantum machine.

### _Gate counts_ The method by Shokri et al. [4] has shown the implementation of the time evolution operator on a real quantum machine for a \(4\)-qubit register, which takes a total of \(18\) quantum gates. We compare our proposed quantum algorithm with the QATE encoding method (which approximates the kinetic energy operator for all samples) against that of [4] (which requires \(3n+{}^{n}\mathcal{C}_{2}\) gates for an \(n\)-qubit circuit). For a \(4\)-qubit quantum register circuit, our quantum circuit requires a total of \(12\) quantum gates, with a fidelity of approximately \(0.72\). With the QATE encoding method, we reduce the single-qubit gate count to \(n\). The number of \(2\)-qubit gates is similar to the existing method. QWE encoding can take fewer \(2\)-qubit gates, depending on the choice of window size. ### _Circuit depth_ We have performed a circuit depth analysis, which is an important measure of gate-resource usage. We have implemented our proposed algorithm and the existing method [4] in Qiskit, and compare the circuit depths as shown in Table II. The circuit depth is reduced in the proposed QATE encoding method as compared to the existing approach. \begin{table} \begin{tabular}{|c|c|c|} \hline **Qubit Size (n)** & **Existing approach [4]** & **Proposed QATE algorithm** \\ \hline 3 & 16 & 9 \\ \hline 4 & 24 & 18 \\ \hline 5 & 32 & 22 \\ \hline 6 & 40 & 36 \\ \hline \end{tabular} \end{table} TABLE II: Circuit depth

Fig. 3: **Realization of the QATE and QWE algorithm with \(5\) qubit register:** **a.** Plot of kinetic energy (\(\mathbf{K}\)) as a function of momentum (\(p\)), **b.** Simulation of the kinetic energy operator with the QATE algorithm. Here, four phase gates and five \(Cp\) gates are used to simulate \(\mathbf{U}_{KE}\). Note that the quantum simulation is arbitrarily close to the classical simulation result using the QATE procedure. **c.** Simulation with the windowing encoding procedure is performed in the mid-region of the kinetic energy function. Here, we realize four amplitudes near the mid-windowed evolution \(\mathbf{U}_{KE}\) using four phase gates and one \(Cp\) gate. **d.** Quantum simulation is performed for the side window of the wave function using three phase gates and one \(Cp\) gate. 
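For reference, the gate-count expressions quoted above can be tabulated directly. The small script below only evaluates those formulas (it does not transpile circuits, so it does not reproduce the depths of Table II); for the CNOT count it uses the \(2(n-1)\) figure from the proof of Lemma 3 rather than the \(2(n-2)\) in the lemma statement, since the former matches the \(12\)-gate total quoted above for \(n=4\).

```python
# Tabulating the quoted gate-count formulas (no transpilation; Table II depths are not reproduced).
from math import comb

for n in range(3, 9):
    existing_total = 3 * n + comb(n, 2)                 # 3n + nC2, as quoted for the method of [4]
    qate_1q = n - 1                                     # phase gates (theta_0 absorbed as a global phase)
    qate_2q = comb(n - 1, 2) + 2 * (n - 1)              # Cp gates + CNOT ladder (proof of Lemma 3)
    print(f"n={n}: existing={existing_total}, QATE 1-qubit={qate_1q}, "
          f"QATE 2-qubit={qate_2q}, QATE total={qate_1q + qate_2q}")
```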
### _Error analysis_ There are several sources of errors in practical circuit simulation on a quantum machine, such as gate-level errors, cross-talk, readout and coupling errors, and simulation errors. We have seen that the gate-level error is significant for the \(2\)-qubit gates (example: CNOT). In the QATE encoding method, we have used \(1\)-qubit and \(2\)-qubit gates for approximating the kinetic energy function. In this approximation, we incur a residual error of \(\mathcal{O}\left(h^{3}\right)\), where \(h\) is the distance between two successive samples (also called the step size). For example, if \(10\) qubits are taken to encode the potential energy within a region \([0,1]\), then \(h\) is approximately \(1/2^{10}\approx 9.7\times 10^{-4}\). The overall approximate error bound for the diagonal unitary encoding of the function \(f(x)\), using the proposed polynomial encoding procedure of order \(r\) on \(n\) qubit registers and evolved for time \(\Delta t\), has the following form: \[\left\|\epsilon_{s}\right\|\approx\mathcal{O}\left(h^{3}\right)+L_{2}\sigma_{g }^{2}+\mathcal{O}\left(1+\Delta t\left(\frac{T_{1}+T_{2}}{T_{1}T_{2}}\right) \right)+\sigma_{cr}^{2}, \tag{14}\] where \(\mathcal{O}\left(h^{3}\right)\) is the residual error for step size \(h\), \(L_{2}\) is the total number of CNOT gates, each with error variance \(\sigma_{g}^{2}\), \(T_{1}\) and \(T_{2}\) are decoherence time constants (here, we have taken a first-order approximation of the decoherence term), \(\Delta t\) is the evolution time, and \(\sigma_{cr}^{2}\) denotes the total readout error variance.

Fig. 4: Time evolution of a Gaussian wave packet in the presence of the kinetic energy operator in a unit step potential well: Time evolution is shown for an evolution time of \(\Delta t\), and varying Trotterization step size \(Nt\). The figure denotes in **a**. time evolution with \(Nt=10\) steps, **b**. time evolution with \(Nt=20\) steps, **c**. time evolution with \(Nt=40\) steps, and **d**. time evolution with \(Nt=50\) Trotterization steps. The quantum-evolved state obtained with the proposed QPA and QATE encoding for \(5\) qubits approaches the classically evolved state. Here, the unit of distance is the Angstrom.

Fig. 5: Quantum swap-test circuit.

## VI Conclusion In this research article, we have studied quantum time evolution operator design on a quantum machine considering practical constraints. Time evolution plays a vital role in diverse disciplines for studying dynamics, especially in atomic chemistry. Considering the first quantization level, we have proposed a Hamiltonian encoding method for the kinetic energy operator. It improves the total gate count and fidelity for the time evolution of Gaussian wave packets. Further, we have proposed a quantum architecture, namely the quantum pyramid architecture, to efficiently simulate the kinetic energy operator on a quantum computer from half of the sampled values, by exploiting its structural aspects. The underlying mathematical propositions are given with examples in the appendix. While the proposed QATE encoding reproduces the time evolution with high accuracy, the full range of applications of the proposed QWE is unknown at the moment. The QWE method can be a future direction of research, as it exploits the complexity benefit in the momentum domain. We show the complexity analysis of the proposed quantum algorithm with gate counts for \(1\)-qubit and \(2\)-qubit gates. 
Experimental results are shown on the IBM quantum simulator, and the fidelity is compared with the state of the art. There are several future directions for this research, including the study of dynamics in chemical experiments, free-particle systems, multi-body systems, etc. ## VII Acknowledgement We acknowledge Rajiv Sangle, MTech in Quantum Technology at the Indian Institute of Science, for his support with the swap-test circuit. We acknowledge Anupama Ray, Dhiraj Madan, and SheshaShayee K Raghunathan of IBM Research Bangalore for their valuable suggestions for improving our work.
2308.01744
Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning
Multitask learning is a powerful framework that enables one to simultaneously learn multiple related tasks by sharing information between them. Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning. In this work, we provide novel multitask confidence intervals in the challenging agnostic setting, i.e., when neither the similarity between tasks nor the tasks' features are available to the learner. The obtained intervals do not require i.i.d. data and can be directly applied to bound the regret in online learning. Through a refined analysis of the multitask information gain, we obtain new regret guarantees that, depending on a task similarity parameter, can significantly improve over treating tasks independently. We further propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance, i.e., automatically adapting to task similarity. As a second key application of our results, we introduce a novel multitask active learning setup where several tasks must be simultaneously optimized, but only one of them can be queried for feedback by the learner at each round. For this problem, we design a no-regret algorithm that uses our confidence intervals to decide which task should be queried. Finally, we empirically validate our bounds and algorithms on synthetic and real-world (drug discovery) data.
Pier Giuseppe Sessa, Pierre Laforgue, Nicolò Cesa-Bianchi, Andreas Krause
2023-08-03T13:08:09Z
http://arxiv.org/abs/2308.01744v1
# Multitask Learning with No Regret: ###### Abstract Multitask learning is a powerful framework that enables one to simultaneously learn multiple related tasks by sharing information between them. Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning. In this work, we provide novel multitask confidence intervals in the challenging agnostic setting, i.e., when neither the similarity between tasks nor the tasks' features are available to the learner. The obtained intervals do not require i.i.d. data and can be directly applied to bound the regret in online learning. Through a refined analysis of the multitask information gain, we obtain new regret guarantees that, depending on a task similarity parameter, can significantly improve over treating tasks independently. We further propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance, i.e., automatically adapting to task similarity. As a second key application of our results, we introduce a novel multitask active learning setup where several tasks must be simultaneously optimized, but only one of them can be queried for feedback by the learner at each round. For this problem, we design a no-regret algorithm that uses our confidence intervals to decide which task should be queried. Finally, we empirically validate our bounds and algorithms on synthetic and real-world (drug discovery) data. ## 1 Introduction In many real-world applications, one often faces multiple related tasks to be solved sequentially or simultaneously. The goal of multitask learning (MTL) [4] is to leverage the similarities across the tasks to obtain more accurate and robust models. Indeed, by jointly learning multiple tasks, MTL can exploit their statistical dependencies, yielding better generalization and faster learning than treating each task independently. MTL has gained significant attention in recent years, as it has been shown to be effective in a wide range of applications, including natural language processing, computer vision, federated learning, and drug discovery, see e.g., [11; 17; 14; 24]. A very natural model for learning across multiple tasks is the agnostic multitask (MT) regression approach of [13]. This utilizes a multitask kernel that can interpolate between running \(N\) (number of tasks) independent regressions, and regressing all tasks to their common average, depending on a tunable parameter. Notably, such a kernel does not require any knowledge neither about tasks' features nor about their similarity, thus finding good application in several domains. For instance, Cavallanti et al. [5] study it for online classification, and Cesa-Bianchi et al. [6] for online convex optimization. However, it is much less understood how to _quantify the uncertainty_ of such MT regression, i.e., assessing confidence in the estimated tasks. In particular, as also outlined by [13] as an open problem, it is important to assess their generalization error as a function of the kernel parameter. Appropriately characterizing these confidence intervals is indeed of crucial importance for a whole set of downstream applications. More concretely, multitask confidence intervals are used in online learning to inform the next decision to be made [6]. In active learning--as we show next--these intervals are pivotal to deciding the most informative task to query. 
In this work, we study the agnostic MT regression setup of [13], and provide _new multitask confidence intervals_ (see Figure 1 for a visualization) for the full range of the kernel parameter. Our intervals hold in the so-called adaptive setting, i.e., without requiring i.i.d. data, and are _tighter up to a \(\sqrt{N}\) factor_ than the naive ones employed in [6]. Moreover, we provide the first bounds for the information gain of MT regression and utilize them--together with the derived intervals--to obtain _tighter online learning guarantees_. The latter depend on a task similarity parameter and can significantly improve over treating tasks independently. Additionally, we propose an adaptive no-regret algorithm that exploits task similarity without knowing this parameter in advance. Finally, we consider a novel multitask _active learning_ setup, where tasks should be simultaneously optimized but only one of them can be queried at each round. We show that the newly derived intervals are also crucial in such a setting, and provide a new algorithm that ensures sublinear regret. We demonstrate the superiority of the derived intervals over previously proposed algorithms on synthetic as well as real-world drug discovery tasks. **Related work.** The agnostic MT regression approach of [13] reduces the learning of \(N\) tasks to a single regression problem, as a function of the MT kernel parameter. When combined with support vector machines, it was shown effective in a series of classification problems [13, 21], and since then was studied in various further settings. Cavallanti et al. [5], e.g., analyze mistake bounds for online MT classification algorithms as a function of the kernel parameter. Cesa-Bianchi et al. [6], instead, utilize the MT kernel to prove regret bounds in online MT learning with bandit feedback. Inspired by this, [10] focuses on learning more general kernel structures from data. An important question not addressed by previous work, though, is how to properly quantify the uncertainty of the obtained task estimates. This problem is well-understood in single-task learning (e.g., [22, 2, 8]) but remains largely unexplored in MT domains. As shown in [6], MT confidence intervals can in principle be obtained by a naive application of the single-task guarantees of [2]. However, as we show in Section 2, the so-obtained intervals are extremely conservative and--as a result--can hamper the MT learning performance. Our intervals are tighter by a factor up to \(\sqrt{N}\) w.r.t. the naive ones from [6], yielding novel online learning regret guarantees which can provably improve over treating tasks independently. Compared to MT online learning [5, 6], where a single task is revealed to the learner at each round, a series of works have considered learning multiple tasks _simultaneously_, i.e., taking a decision for each one of them. Dekel et al. [12], e.g., propose the use of a shared loss function to account for tasks' relatedness, Lugosi et al. [18] studies the computational tractability of taking multiple actions with joint constraints, while Cavallanti et al. [5] propose a matrix-based extension of the multitask Perceptron algorithm. In all of these works, however, the learner receives feedback from _all_ the Figure 1: Independent vs. Multitask (MT) regression. MT regression leverages data coming from multiple related tasks and can yield more accurate and more confident estimates. 
In this work, we show naive confidence intervals are overly conservative and provide improved ones (shaded in red). tasks. In Section 4, instead, we focus on the challenging setup where only one task can be queried by the learner at each round. Hence, in addition to choosing good actions, the learner faces the _active learning_ task of assessing the most informative feedback, in order to achieve sublinear regret. Perhaps most related to ours, is the offline contextual Bayesian optimization setup of [7; 16] where the goal is to compute the best strategy for each context (task) with minimal function interactions. However, unlike us, [7; 16] do not guarantee sublinear regret but provide only sample-complexity results. Finally, we note that MT confidence intervals and regret guarantees were also recently derived by [9], albeit in a different setup and regression model. Indeed, the authors of [9] focus on multi-objective optimization where they ought to learn _multi-output_ functions (each output corresponding to a task) using matrix-valued kernels. Although their setup can be related to ours, it crucially requires all tasks to be observed at each round, leading to different challenges than ours, see Appendix A.1 for details. **Notation.** We use \([N]\coloneqq\{1,\ldots,N\}\), \(1_{N}\) for the vector in \(\mathbb{R}^{N}\) full of ones. Norms of functions are always taken w.r.t. the natural RKHS norm, so that we drop the subscript for simplicity of writing. ## 2 Improved Confidence Intervals for Multitask Kernel Regression In this section, we introduce the MT kernel regression setting, and prove our refined confidence intervals. Of independent interest, these results are then leveraged in Sections 3 and 4 to derive novel regret bounds for online and active multitask learning. All proofs are deferred to the Appendix. ### Multitask Kernel Regression Given an input space \(\mathcal{X}\), equipped with a (single task) scalar kernel \(k_{\mathcal{X}}\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), the goal of MT kernel regression is to jointly learn \(N\) different functions \(f_{1},\ldots,f_{N}\) from \(\mathcal{X}\) to \(\mathbb{R}\), all belonging to \(\mathcal{H}_{k,\mathcal{X}}\), the RKHS associated to \(k_{\mathcal{X}}\). To do so, the learner is given a set of triplets \(\{(i_{s},x_{s}),y_{s}\}_{s=1}^{t}\) consisting of a measured task index \(i_{s}\in[N]\), a measured point \(x_{s}\in\mathcal{X}\), and a noisy measurement \(y_{s}=f_{i_{s}}(x_{s})+\xi_{s}\), where \(\xi_{s}\) is an independent random variable to be specified later. We can further define the multitask function \(f^{\text{mt}}\colon[N]\times\mathcal{X}\to\mathbb{R}\) such that \(f^{\text{mt}}(i,\cdot)=f_{i}\), and the multitask kernel \[k\big{(}(i,x),(i^{\prime},x^{\prime})\big{)}=k_{\mathcal{T}}(i,i^{\prime}) \cdot k_{\mathcal{X}}(x,x^{\prime})\,, \tag{1}\] where \(k_{\mathcal{T}}\) is a kernel on the tasks. In certain cases, the latter might be given as input to the learner, either under the form of the task Gram matrix, or via task features and an assumed (e.g., linear) similarity [15]. However, in practice such information is usually not accessible to the learner. 
In such a case, a standard _agnostic_ approach to MT regression [5; 6; 10; 13; 21] then consists in leveraging a parameterized task kernel of the form \[k_{\mathcal{T}}(i,i^{\prime})=\big{[}K_{\text{task}}(b)\big{]}_{ii^{\prime}} \,,\quad\text{with}\quad K_{\text{task}}(b)=\frac{1}{1+b}I_{N}+\frac{b}{1+b} \frac{1_{N}\mathbbm{1}_{N}^{\top}}{N}\in\mathbb{R}^{N\times N}\,. \tag{2}\] Intuitively, parameter \(b\geq 0\) governs how similar the tasks are thought to be. When \(b=0\), we have \(K_{\text{task}}(b)=I_{N}\), such that \(k_{\mathcal{T}}(i,i^{\prime})=\delta_{ii^{\prime}}\), and the tasks are considered to be independent. When \(b\) goes to \(+\infty\), we have \(K_{\text{task}}(b)=1_{N}\mathbbm{1}_{N}^{\top}/N\), and all tasks are considered to be one single common task. Any choice of \(b\in(0,+\infty)\) corresponds to a tradeoff between these two regimes. We make this intuition explicit in Proposition 2 (Appendix A.1). Note that all quantities depending on the kernel do by definition depend on \(b\). We use the notation \(\lvert b\) to make this dependence explicit when relevant. Given a history of measurements \(\{(i_{s},x_{s}),y_{s}\}_{s=1}^{t}\), one may then estimate \(f^{\text{mt}}\), or equivalently the \(\{f_{i}\}_{i=1}^{N}\), by standard kernel Ridge regression using the MT kernel \(k\). One obtains the estimates \[\mu_{t}(i,x\,\lvert\,b) =\mathbf{k}_{t}(i,x)^{\top}\big{(}K_{t}+\lambda I_{t}\big{)}^{-1}\bm {y}_{1:t}\,, \tag{3}\] \[\sigma_{t}^{2}(i,x\,\lvert\,b) =k\big{(}(i,x),(i,x)\big{)}-\mathbf{k}_{t}(i,x)^{\top}\big{(}K_{t}+ \lambda I_{t}\big{)}^{-1}\mathbf{k}_{t}(i,x)\,, \tag{4}\] where \(\mathbf{k}_{t}(i,x)=\big{[}k\big{(}(i_{s},x_{s}),(i,x)\big{)}\big{]}_{s=1}^{t}\), \(K_{t}=\big{[}k\big{(}(i_{s},x_{s}),(i_{s^{\prime}},x_{s^{\prime}})\big{)} \big{]}_{s,s^{\prime}=1}^{t}\), \(\mathbf{y}_{1:t}=\big{[}y_{s}\big{]}_{s=1}^{t}\), and \(\lambda>0\) is some regularization parameter. Functions \(\mu_{t}\) and \(\sigma_{t}^{2}\) can be interpreted as the posterior mean and variance of a corresponding Gaussian Process model, see [22; 8]. In the next section, we will utilize \(\mu_{t}\) and \(\sigma_{t}^{2}\) to construct high-probability confidence intervals for the multitask function \(f^{\text{mt}}\). Information gain.An important quantity when analyzing (multitask) kernel regression is the so-called _(multitask) information gain_: \[\gamma_{T}^{\text{mt}}(b)=\frac{1}{2}\ln\big{|}I_{T}+\lambda^{-1}K_{T}\big{|}\,.\] It can be interpreted as the reduction in uncertainty about \(f^{\text{mt}}\) after having observed a given set of \(T\) datapoints. Similarly to single-task setups [22; 8], we use \(\gamma_{T}^{\text{mt}}\) in the next sections to characterize our confidence intervals and regret bounds. Note that \(\gamma_{T}^{\text{mt}}\) depends on the multitask kernel through \(K_{T}\), and hence on \(b\). In Section 3, we exploit the properties of our multitask kernel to obtain a sharper control over \(\gamma_{T}^{\text{mt}}\), which is then fundamental to derive improved regret bounds. ### Improved Confidence Intervals In this section, we utilize the regression estimates obtained in Equations (3) and (4) to construct high probability confidence intervals around the unknown multitask function \(f^{\text{mt}}\). First, we assume that \(\|f_{i}\|\leq B\) for all \(i\in[N]\), as it is standard in single-task regression. Moreover, let \(f_{\text{avg}}=(1/N)\sum_{i=1}^{N}f_{i}\) be the average task function, and define \[\epsilon=\max_{i}\ \|f_{i}-f_{\text{avg}}\|/B\,. 
\tag{5}\] Note that by definition \(\epsilon\in[0,2]\). Quantity \(\epsilon\) measures how much individual tasks deviate from the average task \(f_{\text{avg}}\). The smaller \(\epsilon\), the more similar the tasks are, the limit case being that all tasks are equal, attained at \(\epsilon=0\). At the other extreme, when \(\epsilon\gg 0\) tasks are highly distant and ought to be learned independently. The deviation \(\epsilon\) plays a crucial role in the subsequent analysis. A naive confidence interval.As discussed in [6], it is possible to construct the multitask feature map \(\widetilde{\psi}\) associated to \(k\). One may then rewrite \(f^{\text{mt}}(i,x)=\langle\widetilde{f},\widetilde{\psi}(i,x)\rangle\), where \(\widetilde{f}\) is a transformed version of \(f^{\text{mt}}\) which satisfies \(\big{\|}\widetilde{f}\big{\|}\leq B\sqrt{N(1+b\epsilon^{2})}\), see Appendices A.1 and A.2 for details. MT regression thus boils down to single-task regression, over the modified features \(\widetilde{\psi}(i,x)\), and with target function \(\widetilde{f}\). One can then employ well-known linear regression results to obtain confidence intervals for \(f^{\text{mt}}\). Using [1, Theorem 3.11, Remark 3.13] and the definition of \(\gamma_{t}^{\text{mt}}(b)\), with probability \(1-\delta\) we have that for all \(t\in\mathbb{N}\), \(i\in[N]\), and \(x\in\mathcal{X}\) it holds \(\big{|}\mu_{t}(i,x\,|\,b)-f^{\text{mt}}(i,x)\big{|}\leq\beta_{t}^{\text{naive }}(b)\cdot\sigma_{t}(i,x\,|\,b)\), where \[\beta_{t}^{\text{naive}}(b)=B\sqrt{N(1+b\epsilon^{2})}+\lambda^{-1/2}\sqrt{2 \big{(}\gamma_{t}^{\text{mt}}(b)+\ln(1/\delta)\big{)}}\,.\] Note that the above confidence interval was already established in [6; Theorem 1]. As expected, it depends on \(B\), \(N\), \(b\), and in a decreasing fashion with respect to \(\epsilon\). However, we argue that the above naive choice can be _extremely conservative_. Indeed, when \(b=0\), MT regression treats tasks independently, see Proposition 2. Hence, a valid confidence width from [2; 1; 8] is \(\mathcal{O}\big{(}B+\sqrt{\gamma_{t}^{\text{st}}}\big{)}\), where \(\gamma_{t}^{\text{st}}\) is the single-task maximum information gain. Instead, noting that \(\gamma_{t}^{\text{mt}}(0)=\mathcal{O}\big{(}N\gamma_{t}^{\text{st}}\big{)}\), see Proposition 1, the naive choice provides \(\beta_{t}^{\text{naive}}(0)=\sqrt{N}\cdot\mathcal{O}\big{(}B+\sqrt{\gamma_{ t}^{\text{st}}}\big{)}\), which is larger by a factor \(\sqrt{N}\). A similar suboptimality gap of \(\sqrt{N}\) can also be proven when \(b\) tends to \(+\infty\). Motivated by the above observation, we derive a novel confidence width that is less conservative than \(\beta_{t}^{\text{naive}}(b)\) for the whole range of possible kernel parameters \(b\). **Theorem 1** (Multitask confidence intervals).: _Let \(f^{\text{mt}}\colon[N]\times\mathcal{X}\to\mathbb{R}\) such that for all \(i\in[N]\), \(f_{i}\coloneqq f^{\text{mt}}(i,.)\) belongs to the RKHS associated to \(k_{\mathcal{X}}\) and \(\|f_{i}\|\leq B\). Moreover, let \(\mu_{t}\) and \(\sigma_{t}\) be the regression estimates of Equations (3) and (4) with task kernel \(k_{\mathcal{T}}(i,j)=[K_{\text{task}}(b)]_{ij}\), parameter \(\lambda\in[1/(1+b),1]\), and noise \(\{\xi_{\tau}\}_{\tau=1}^{\tau}\) i.i.d. \(1\)-sub-Gaussian. 
Then, with probability at least \(1-2\delta\),_ \[\big{|}\mu_{t}(i,x\,|\,b)-f^{\text{mt}}(i,x)\big{|}\leq\beta_{t}^{\text{new}}(b)\cdot\sigma_{t}(i,x\,|\,b),\qquad\forall\,t\in\mathbb{N},i\in[N],x\in\mathcal{X}\,,\] \[\text{where}\quad\beta_{t}^{\text{new}}(b)=\min\left\{\beta_{t}^{\text{naive}}(b),\,\beta_{t}^{\text{small-b}}(b),\,\beta_{t}^{\text{large-b}}(b)\right\},\] \[\beta_{t}^{\text{small-b}}(b)=B(1+b\epsilon)\sqrt{\frac{1+bN}{1+b}}+\lambda^{-1/2}\sqrt{2(1+bN)\big{(}\gamma_{t}^{\text{st}}+\ln(N/\delta)\big{)}}\,,\] \[\beta_{t}^{\text{large-b}}(b)=B\sqrt{\frac{(1+b\epsilon)^{2}}{1+b}+\frac{2bN}{1+b}+\frac{2b(1+b\epsilon)^{2}}{N\lambda^{2}(1+b)^{3}}\,t^{2}}+\lambda^{-1/2}\sqrt{2\big{(}\gamma_{t}^{\text{mt}}(b)+\ln(1/\delta)\big{)}}\,.\] The obtained improved confidence width \(\beta_{t}^{\text{new}}(b)\) is the minimum between three confidence widths, see Figure 2. The first one is the naive one \(\beta_{t}^{\text{naive}}(b)\), obtained by standard arguments as outlined above, while \(\beta_{t}^{\text{small-b}}(b)\) and \(\beta_{t}^{\text{large-b}}(b)\) (dashed and dotted lines in Figure 2) are novel and useful for small and large values of \(b\), respectively. Indeed, note that we have \(\beta_{t}^{\text{small-b}}(b)\xrightarrow{b\to 0}\mathcal{O}\big{(}B+\sqrt{\gamma_{t}^{\text{st}}}\big{)}\), which is the expected single-task confidence width and \(\sqrt{N}\) smaller than \(\beta_{t}^{\text{naive}}(0)\). Similarly, as \(b\) goes to \(+\infty\) we have \(\beta_{t}^{\text{large-b}}(b)\xrightarrow{b\to+\infty}\mathcal{O}\big{(}B\sqrt{b\epsilon^{2}+2N+2\epsilon^{2}t^{2}/N}\big{)}\xrightarrow{b\to+\infty}\mathcal{O}\big{(}\epsilon B\sqrt{b}\big{)}\), while \(\beta_{t}^{\text{naive}}(b)\xrightarrow{b\to+\infty}\mathcal{O}\big{(}\epsilon B\sqrt{Nb}\big{)}\). The obtained confidence width is therefore always smaller than the naive one, but also tighter by a factor \(\sqrt{N}\) for the extreme choices \(b=0\) and \(b=+\infty\). From a technical viewpoint, \(\beta_{t}^{\text{small-b}}\) and \(\beta_{t}^{\text{large-b}}\) are obtained by viewing MT regression as a single-task regression over the inflated features \(\widetilde{\psi}(i,x)\), as also done in [6]. However, unlike [6], we explicitly leverage the expressions of \(\widetilde{\psi}(i,x\,|\,b)\) and \(K_{\text{task}}(b)\) as functions of \(b\), see in particular Lemma 2, where the structure of \(K_{\text{task}}\) is critical. Moreover, we note that refined widths can be obtained if one has access to task-specific constants \(B_{i}\) and \(\epsilon_{i}\). For simplicity of exposition, we focus on uniform (over tasks) \(B\) and \(\epsilon\). Also, a tighter data-dependent \(\beta_{t}^{\text{large-b}}\) can be utilized as outlined in Appendix A.2. Finally, we remark that the obtained multitask intervals do not require i.i.d. data and thus apply to the _adaptive design_ setting where data are, e.g., sequentially acquired by the learner, as shown in the next section. ## 3 New Guarantees for Multitask Online Learning In this section, we show how the improved confidence interval established in Theorem 1 can be used to derive sharp regret guarantees for multitask online learning. To do so, we also prove novel bounds for the multitask information gain \(\gamma_{T}^{\text{mt}}(b)\).
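Since the algorithms below repeatedly evaluate the agnostic MT regression estimates, the following self-contained numerical sketch (ours, not the authors' code) illustrates Eqs. (2)-(4): it builds \(K_{\text{task}}(b)\), forms the product-kernel Gram matrix over a history of (task, point) pairs, and evaluates the posterior mean and variance at a query pair. The RBF input kernel, all names, and all numerical values are illustrative assumptions.

```python
import numpy as np

def k_task_matrix(b, N):
    """Agnostic task kernel matrix K_task(b) of Eq. (2)."""
    return np.eye(N) / (1.0 + b) + (b / (1.0 + b)) * np.ones((N, N)) / N

def k_input_matrix(X1, X2, lengthscale=1.0):
    """RBF input kernel matrix (an illustrative choice for k_X)."""
    d2 = np.sum(X1 ** 2, 1)[:, None] + np.sum(X2 ** 2, 1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def mt_posterior(task_idx, X, y, b, N, lam, i_query, x_query):
    """Posterior mean and variance of Eqs. (3)-(4) at the query pair (i_query, x_query)."""
    Kt = k_task_matrix(b, N)
    K = Kt[np.ix_(task_idx, task_idx)] * k_input_matrix(X, X)                     # Gram matrix K_t
    k_vec = Kt[task_idx, i_query] * k_input_matrix(X, x_query[None, :]).ravel()   # vector k_t(i, x)
    A = np.linalg.solve(K + lam * np.eye(len(y)), np.column_stack([y, k_vec]))
    mu = k_vec @ A[:, 0]
    var = Kt[i_query, i_query] - k_vec @ A[:, 1]   # k((i,x),(i,x)) = k_T(i,i), since k_X(x,x) = 1
    return mu, var

# Toy usage: N = 3 tasks, t = 20 noisy observations of related functions in R^2.
rng = np.random.default_rng(1)
N, t = 3, 20
task_idx = rng.integers(0, N, size=t)
X = rng.normal(size=(t, 2))
y = np.sin(X[:, 0]) + 0.1 * task_idx + 0.1 * rng.normal(size=t)
print(mt_posterior(task_idx, X, y, b=1.0, N=N, lam=0.5, i_query=0, x_query=np.zeros(2)))
```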
For \(t=1,2,\ldots\) the learning protocol is as follows: nature reveals task index \(i_{t}\in[N]\); the learner chooses strategy \(x_{t}\in\mathcal{X}\) and pays \(f^{\text{mt}}(i_{t},x_{t})\); the learner observes the noisy feedback \(y_{t}=f^{\text{mt}}(i_{t},x_{t})+\xi_{t}\). The goal is to minimize for any horizon \(T\) the multitask regret \[R^{\text{mt}}(T)=\sum_{t=1}^{T}\max_{x\in\mathcal{X}}f^{\text{mt}}(i_{t},x)- \sum_{t=1}^{T}f^{\text{mt}}(i_{t},x_{t})\,. \tag{6}\] In the next subsection, we provide a generic algorithm to minimize (6). In particular, we show that naive choices of parameters allow to recover previous approaches with their guarantees, while using the refined confidence width \(\beta_{t}^{\text{new}}(b)\) derived in Theorem 1 yields significant improvements. ### Algorithm and regret guarantees In line with the online learning literature, our approach is based on the multitask Upper Confidence Bound, defined for any \(t\in\mathbb{N}\) as \[\text{ucb}_{t}(i,x\,|\,b)=\mu_{t}\big{(}i,x\,|\,b\big{)}+\beta_{t}(b)\cdot \sigma_{t}\big{(}i,x\,|\,b\big{)}\,. \tag{7}\] Here \(\beta_{t}\colon\mathbb{R}_{+}\to\mathbb{R}_{+}\) is a function which assigns a confidence width \(\beta_{t}(b)\) to each kernel parameter \(b\). We consider the general strategy MT-UCB (see Algorithm 1) which, at each round \(t\) selects \(x_{t}=\arg\max_{x\in\mathcal{X}}\text{ucb}_{t-1}(i_{t},x\,|\,b)\). As summarized in Table 1, both the strategy that runs \(N\) independent instances of IGP-UCB (one for each task), and GoB.Lin from [6] are particular cases of MT-UCB. Importantly, whenever \(\beta_{t}(b)\) is set such that \([\mu_{t}(\cdot,\cdot\,|\,b)\pm\beta_{t}(b)\cdot\sigma_{t}(\cdot,\cdot\,|\,b)]\) is a valid confidence interval for \(f^{\text{mt}}(\cdot,\cdot)\), the regret of MT-UCB can be controlled through the following lemma. **Lemma 1**.: _Suppose that \(\lambda\geq(N+b)/(N+bN)\), and that for all tasks \(i\), point \(x\), and time \(t\), we have \(f^{\text{mt}}(i,x\,|\,b)\in[\,\mu_{t}(i,x\,|\,b)\pm\beta_{t}(b)\cdot\sigma_{t} (i,x\,|\,b)\,]\). Then, the multitask regret of_ MT-UCB _satisfies_ \[R^{\text{mt}}(T)\leq 4\,\beta_{T}(b)\sqrt{\lambda\,T\gamma_{T}^{\text{mt}}(b)}\,.\] The main novelty of Lemma 1 is that the right-hand side scales with \(\lambda^{1/2}\), which might be chosen smaller than \(1\). This improvement is due to the fact that multitask posterior variances are smaller than \((N+b)/(N+bN)\leq 1\). The right-hand side also depends on the multitask information gain \(\gamma_{T}^{\text{mt}}(b)\), which is nontrivial to compute or upper bound. In the next proposition, we provide practical upper bounds of \(\gamma_{T}^{\text{mt}}(b)\), in terms of the kernel parameter \(b\) and the single-task information gain \(\gamma_{T}^{\text{st}}\). **Proposition 1**.: _Let \(\lambda\leq 1\), \(N\geq 2\), and \(T_{i}\geq 1\) for all \(i\in[N]\). Then, for any \(b\geq 0\), we have_ \[\gamma_{T}^{\text{mt}}(b)\leq N\gamma_{T}^{\text{st}}+\frac{b}{2}\left(T- \frac{N}{4}\right)-\frac{T}{2}\ln(1+b)\qquad\text{ and }\qquad\gamma_{T}^{\text{mt}}(b)\leq\gamma_{T}^{\text{st}}+\frac{T}{ \lambda b}\,.\] We can now combine Theorem 1, Lemma 1, and Proposition 1 to obtain our main result: a bound on the multitask regret of MT-UCB run with the confidence width \(\beta_{t}^{\text{new}}\) from Theorem 1 and a specific \(\lambda\). **Theorem 2**.: _Assume that \(B\geq 1\), and that \(\text{MT-UCB}\) is run with \(\beta_{t}=\beta_{t}^{\text{new}}\) from Theorem 1, and \(\lambda=(N+b)/(N+bN)\). 
Let \(b=N/\epsilon^{2}\) if \(T\leq N\), \(b=1/\epsilon^{2}\) if \(T\geq N\) and \(\epsilon\leq N^{-1/4}T^{-1/2}\), and \(b=0\) otherwise. Let \(R^{\text{st}}(T)=B\sqrt{T\gamma_{T}^{\text{st}}}+\sqrt{T\gamma_{T}^{\text{st}}(\gamma_{T}^{\text{st}}+\ln(1/\delta))}\) be the single task regret bound achieved by_ IGP-UCB _(up to constant factors). Then, there exists a universal constant \(C\) such that with probability \(1-2\delta\) we have (up to \(\log N\) factors)_ \[R^{\text{mt}}(T)\leq C\min\left\{\sqrt{N}R^{\text{st}}(T)\,,\ \ R^{\text{st}}(T)+\epsilon BT^{3/2}\Big{(}\sqrt{\gamma_{T}^{\text{st}}+\ln(1/\delta)}+\epsilon\sqrt{T}\Big{)}\,,\ \ldots\right\}.\] In contrast with the independent bound, which does not exploit the task structure, the last two bounds show that multitask learning is always beneficial when the horizon \(T\) (and thus the additional \(\epsilon\)-related term) is small. As expected, this is particularly true when the number of tasks \(N\) is large: while the independent bound increases, the second bound _does not depend on \(N\)_. On the other hand, one can note that the condition on \(\epsilon\) to improve over independent becomes more constraining as the horizon \(T\) increases. This suggests that the benefit of multitask learning may vanish as the number of available points per task grows, an observation which is well known by practitioners. As far as we know, this work is the first one to provide theoretical evidence of such a phenomenon. We conclude this section by comparing Theorem 2 to existing results. As already mentioned in the above discussion, independent IGP-UCB is a particular case of MT-UCB, so that we cannot be worse than the independent approach. We incidentally recover its regret bound as the first bound in the minimum of Theorem 2. Regarding GoB.Lin, since it is also a specific instance of MT-UCB (for \(\beta_{t}=\beta_{t}^{\text{naive}}\) and \(\lambda=1\)), Lemma 1 allows us to recover its regret bound [6, Theorem 1]. **Corollary 1** (Regret of GoB.Lin [6]).: _For any \(b\), the multitask regret of GoB.Lin using parameter \(b\) satisfies with probability \(1-\delta\)_ \[R^{\text{mt}}(T)\leq 4\beta_{T}^{\text{naive}}(b)\sqrt{T\gamma_{T}^{\text{mt}}(b)}\leq 6\left(B\sqrt{N(1+b\epsilon^{2})}+\sqrt{\gamma_{T}^{\text{mt}}(b)+\ln(1/\delta)}\right)\sqrt{T\gamma_{T}^{\text{mt}}(b)}. \tag{8}\] If tasks are similar, i.e., when \(\epsilon\ll 1\), bound (8) suggests choosing \(b>0\); this does not impact the first term too much, but makes \(\gamma_{T}^{\text{mt}}(b)\) smaller. However, we recall that the above bound instantiated with \(b=0\) does not recover the independent bound. It is instead \(\sqrt{N}\) bigger, since \(\beta_{t}^{\text{naive}}\) is not tight at \(b=0\). Hence, the GoB.Lin analysis is not sufficient to show that multitask learning improves over independent learning. Our refined analysis, which uses instead \(\beta_{t}^{\text{new}}\), closes this gap. ### Adapting to unknown task similarity In this section, we consider the case where parameter \(\epsilon\) (i.e., a bound on the task deviation from the average, see (5)) is a priori unknown. Despite this challenge, we show that the regret bound of Theorem 2 can be approximately attained using an adaptive procedure, AdaMT-UCB (Algorithm 3), relegated to Appendix B.4 due to space limitations. The proposed approach is inspired by the model selection scheme of [19, Section 7] with a few important modifications that we will outline at the end of this section.
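Before describing AdaMT-UCB in detail, the following short numerical sketch (ours, with purely illustrative constants) evaluates the three widths of Theorem 1 as functions of \(b\), using the \(\lambda\) prescribed by Theorem 2; it mirrors the comparison with the GoB.Lin width made above and in Figure 2. The information-gain terms are treated as given constants here, whereas in practice they (or bounds such as Proposition 1) would be computed from the data.

```python
import numpy as np

def confidence_widths(b, B=1.0, N=5, eps=0.3, t=100, delta=0.05,
                      gamma_st=5.0, gamma_mt=10.0):
    """Evaluate the three widths of Theorem 1 for a given b (illustrative constants)."""
    lam = (N + b) / (N + b * N)                          # the lambda prescribed by Theorem 2
    slack_mt = np.sqrt(2.0 * (gamma_mt + np.log(1.0 / delta)))
    naive = B * np.sqrt(N * (1.0 + b * eps ** 2)) + slack_mt / np.sqrt(lam)
    small_b = (B * (1.0 + b * eps) * np.sqrt((1.0 + b * N) / (1.0 + b))
               + np.sqrt(2.0 * (1.0 + b * N) * (gamma_st + np.log(N / delta))) / np.sqrt(lam))
    large_b = (B * np.sqrt((1.0 + b * eps) ** 2 / (1.0 + b)
                           + 2.0 * b * N / (1.0 + b)
                           + 2.0 * b * (1.0 + b * eps) ** 2 * t ** 2 / (N * lam ** 2 * (1.0 + b) ** 3))
               + slack_mt / np.sqrt(lam))
    return naive, small_b, large_b, min(naive, small_b, large_b)

for b in (0.0, 0.5, 2.0, 10.0, 1e4):
    print(b, confidence_widths(b))
```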
AdaMT-UCB considers a plausible set of parameters \(\mathcal{E}=\{e_{1},\ldots,e_{|\mathcal{E}|}\}\subset(0,2]\) and, for each \(e\in\mathcal{E}\), initializes an instance of the MT-UCB algorithm with parameters set according to Theorem 2 assuming \(\epsilon=e\). We denote such an instance as MT-UCB\((e)\). Moreover, we use the notation \(\operatorname{ucb}_{t}^{e}\) to denote the upper confidence bounds constructed by MT-UCB\((e)\). We assume the existence of some \(e\in\mathcal{E}\) such that \(e\geq\epsilon\), so that at least one of the learners is _well-specified_ (i.e., its confidence bounds contain \(f^{\text{mt}}\) with high probability). Our goal is to incur a regret which grows as the regret of the learner with the smallest \(e\) such that \(e\geq\epsilon\), since the smaller the \(e\), the smaller the regret bound (see Theorem 2), as long as \(e\) is a valid upper bound for \(\epsilon\). We denote such a learner by \(e^{\star}\). At each round \(t\), AdaMT-UCB uses learner \(e_{t}=\min\mathcal{E}\) and plays the action \(x_{t}\) suggested by it, i.e., the maximizer of \(\operatorname{ucb}_{t}^{e_{t}}(i_{t},\cdot)\). Then, all MT-UCB\((e)\) learners are updated based on the observed reward. In the meantime, a _misspecification test_ is carried out to check whether learner \(e_{t}\) is well-specified. It compares the cumulative reward and the believed regret of learner \(e_{t}\) with a lower confidence estimate on such reward according to the other learners. If the test triggers, learner \(e_{t}\) is misspecified with high probability and gets removed from \(\mathcal{E}\). A _new epoch_ starts with the new set \(\mathcal{E}\). Let \(\overline{R^{\text{mt}}_{\star}}(T)\) denote the regret bound (Theorem 2) of learner \(e^{\star}\) had it been chosen from round \(0\). We can state the following. **Theorem 3**.: _Assume that there exists \(e\in\mathcal{E}\) such that \(e\geq\epsilon\), and let \(M\) be the number of learners \(e\in\mathcal{E}\) such that \(e<\epsilon\) (i.e., the number of misspecified learners in \(\mathcal{E}\)). The regret of AdaMT-UCB satisfies with high probability \(R^{\text{mt}}(T)=\tilde{\mathcal{O}}\big{(}\sqrt{M+1}\cdot\overline{R^{\text{mt}}_{\star}}(T)\big{)}\)._ Clearly, the number \(M\) of misspecified learners is not known in advance but is always less than \(|\mathcal{E}|\). Note that when \(\epsilon=0\), we have \(M=0\) and we recover the single task regret bound. Moreover, given \(\rho\leq 1\), we show in Appendix B.4 that one can attain a multiplicative accuracy \(\rho\) over \(\epsilon\), assuming that \(\epsilon\geq\epsilon_{\text{min}}>0\), through an exponential grid with \(M\) being polylogarithmic in \(1/\rho\) and \(1/\epsilon_{\text{min}}\). **Relation with the approach of [19].** Compared to [19, Section 7], where the goal is to adapt to an unknown feature dimension, the learners considered in AdaMT-UCB all share _the same dimension \(d\)_. This allows us to exploit the following two novelties with respect to [19]: (1) _all_ learners are updated from the data gathered by learner \(e_{t}\) (Line 6 in Algorithm 3), and (2) the lower confidence bounds \(L^{e}\) in the misspecification test (Line 8) are all computed using action \(x_{t}\) (i.e., the action recommended by learner \(e_{t}\)), as opposed to using the actions recommended by each learner \(e\). Both these points are only applicable to our setting, leading to a simpler regret analysis.
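To make the selection rule of Eq. (7) concrete, here is a minimal sketch (ours, with illustrative names and toy surrogates standing in for the posterior of Eqs. (3)-(4)) of the generic MT-UCB step over a finite candidate action set; both MT-UCB and each MT-UCB\((e)\) learner inside AdaMT-UCB perform a step of this form, differing only in how the width \(\beta\) is set.

```python
import numpy as np

def mt_ucb_step(i_t, actions, mu_fn, sigma_fn, beta):
    """One MT-UCB round: maximize ucb(i_t, x) = mu + beta * sigma (Eq. (7)) over a finite action set."""
    scores = np.array([mu_fn(i_t, x) + beta * sigma_fn(i_t, x) for x in actions])
    best = int(np.argmax(scores))
    return actions[best], float(scores[best])

# Toy usage: the posterior surrogates below are purely illustrative stand-ins.
actions = [np.array([a]) for a in np.linspace(-1.0, 1.0, 21)]
mu_fn = lambda i, x: -float((x[0] - 0.3 * i) ** 2)
sigma_fn = lambda i, x: 0.1
x_t, ucb_val = mt_ucb_step(i_t=1, actions=actions, mu_fn=mu_fn, sigma_fn=sigma_fn, beta=2.0)
print(x_t, ucb_val)
```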
## 4 Multitask Active Learning The goal of the online learning setup of Section 3 is to optimize the tasks sequentially revealed by nature. In some situations (e.g., in [18] or the drug discovery problem considered in Section 5), however, we care about the performance of multiple tasks _simultaneously_, to eventually learn the best strategy for each one of them. Moreover, we ought to do so with minimal interactions \(T\), i.e., minimizing the queries of the function \(f^{\text{mt}}\). We capture this by the following _active learning_ protocol. **Learning protocol and regret.** At each round \(t\), the learner: chooses a strategy \(\{x_{t}^{i},i\in[N]\}\)_for each task_, chooses _which task_\(i_{t}\in[N]\) to query, and observes the noisy feedback \(y_{t}=f^{\text{mt}}(i_{t},x_{t})+\xi_{t}\). The learner's goal is to minimize the _active learning_ regret: \[R^{\text{mt}}_{\text{AL}}(T)=\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\max_{x \in\mathcal{X}}f^{\text{mt}}(i,x)-\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}f^{ \text{mt}}(i,x_{t}^{i})\,.\] Compared to the online learning regret of Equation (6), the learner's performance at each round is here measured by the average reward coming from _each_ task (as opposed to just the task presented by nature). Moreover, compared to online learning, the learner faces the additional challenge of choosing--at each round--from which task information should be gathered. Intuitively, more difficult (or informative) tasks should be queried more often to ensure \(R^{\text{mt}}_{\text{AL}}(T)\) grows sublinearly. To the best of our knowledge, the above protocol and regret notion are novel in the multitask literature. ``` for t=1,...,T do \(x_{t}^{i}=\arg\max_{x\in\mathcal{X}}\text{ucb}_{t-1}(i,x),\,\forall i\in[N]\) \(i_{t}=\arg\max_{i\in[N]}\beta_{t-1}^{i}\sigma_{t-1}(i,x_{t}^{i})\) Observe: \(y_{t}=f^{\text{mt}}(i_{t},x_{t}^{i_{t}})+\xi_{t}\) Update \(\text{ucb}_{t}(\cdot,\cdot)\) and \(\sigma_{t}(\cdot,\cdot)\) based on observations. ``` **Algorithm 2**MT-AL In Algorithm 2 we present MT-AL, an efficient strategy that ensures sublinear active learning regret. Like in MT-UCB, MT-AL constructs confidence intervals around \(f^{\text{mt}}\) and, at each round, select strategy \(x_{t}^{i}=\arg\max_{x\in\mathcal{X}}\text{ucb}_{t-1}(i,x)\) for each task \(i\in[N]\). When it comes to selecting which task to query, MT-AL selects \(i_{t}\in\arg\max_{i\in[N]}\beta_{t-1}^{i}\sigma_{t-1}(i,x_{t}^{i})\), i.e., the task for which the believed optimizer \(x_{t}^{i}\) is subject to maximal uncertainty (we use generic task-dependent widths \(\beta_{t}^{i}\) for completeness). This rule, also known as _uncertainty sampling_ in the literature [20], intuitively makes sure the learner can control the regrets for the tasks not queried and leads to the following theorem. **Theorem 4**.: _Suppose that for all tasks \(i\), point \(x\), and time \(t\), we have that \(f^{\text{mt}}(i,x)\in[\,\mu_{t}(i,x)\pm\beta_{t}^{i}\cdot\sigma_{t}(i,x)\,]\). 
Then, the_ MT-AL _algorithm ensures the active learning regret is bounded by_ \[R^{\text{mt}}_{\text{AL}}\leq 2\sum_{t=1}^{T}\beta_{t}^{i_{t}}\sigma_{t}(i_{t},x _{t}^{i_{t}})\,,\] _where \(\{i_{t}\}\) is the sequence of queried tasks and \(\{x_{t}^{i_{t}}\}\) the strategies selected for each of them._ The above bound only relies on MT-AL utilizing valid intervals around \(f^{\text{mt}}\) and thus applies more broadly than our agnostic MT regression, e.g., when such intervals are constructed using a known multitask kernel \(k\big{(}(i,x),(i^{\prime},x^{\prime})\big{)}\). However, Theorem 4 shows the active learning regret heavily depends on the constructed intervals, similar to online learning. In MT-AL, these are additionally utilized for deciding which task to query at each round. When specialized to our agnostic MT kernel and improved confidence, we obtain the following. **Corollary 2**.: _Let_ MT-AL _utilize the MT regression estimates of Eq. (3)-(4) with parameters set according to Theorem 2. Moreover, let \(\overline{R^{\text{mt}}}(T)\) be the bound on the online learning regret obtained in Theorem 2. Then, with high probability, we have \(R^{\text{mt}}_{\text{AL}}(T)\leq\overline{R^{\text{mt}}}(T)\)._ Thus, MT-AL ensures the active learning regret is always bounded by its online learning counterpart. Moreover, the same considerations as in Theorem 2 apply also here, regarding the benefit of multitask learning over independent single-task regression for instance. ## 5 Experiments The goal of our experiments is to evaluate the effectiveness of the studied MT regression, and in particular of the improved confidence intervals obtained in Section 2, both in online learning and active learning setups. We utilize the following synthetic and real-world data. _Synthetic data:_ We generate tasks of the form \(f_{i}=(1-\delta)\cdot\bar{f}+\delta\cdot f_{\text{dev}}^{i},i\in[N]\), where \(\bar{f},f_{\text{dev}}^{i}\) are random unit vectors representing a common model and individual deviations, respectively. Moreover, actions consist of \(10^{4}\) vectors \(x\in\mathbb{R}^{d}\) from the sphere of radius 10. Observation noise is unit normal. _Drug discovery MHC-I data [24]:_ The goal is to discover the peptides with maximal binding affinity to each Major Histocompatibility Complex class-I (MHC-I) allele. The dataset from [24] contains the standardized binding affinities (IC\({}_{50}\) values) of different peptide candidates to the MHC-I alleles (tasks). For each allele, the dataset contains \(\sim 1000\) peptides represented as \(x\in\mathbb{R}^{45}\) feature vectors. For our experiments, we utilize the \(5\) alleles A-\(\{0201,0202,0203,2301,2402\}\), since they were shown in [24] to share binding similarity. Note that such a problem falls into our multitask active learning setup, since we would like to retrieve the best peptide for each allele minimizing the number of interactions (i.e., lab experiments). Nevertheless, we also consider its online learning analog where we care about finding the best peptides for each revealed allele. 
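The synthetic setup described above can be reproduced with a few lines; the sketch below is ours, with the reward taken to be linear in the action (one natural reading of the vector-valued tasks) and all seeds and helper names being illustrative assumptions.

```python
import numpy as np

def unit_vector(rng, d):
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def make_synthetic(N=5, d=4, delta=0.4, n_actions=10_000, radius=10.0, seed=0):
    """Tasks f_i = (1 - delta) * f_bar + delta * f_dev_i; actions on the sphere of radius 10."""
    rng = np.random.default_rng(seed)
    f_bar = unit_vector(rng, d)
    tasks = np.stack([(1.0 - delta) * f_bar + delta * unit_vector(rng, d) for _ in range(N)])
    actions = np.stack([radius * unit_vector(rng, d) for _ in range(n_actions)])
    return tasks, actions

def noisy_reward(tasks, actions, i, a_idx, rng):
    """Observed feedback for task i at the chosen action, with unit-normal observation noise."""
    return float(tasks[i] @ actions[a_idx]) + rng.normal()

tasks, actions = make_synthetic()
rng = np.random.default_rng(1)
print(noisy_reward(tasks, actions, i=0, a_idx=42, rng=rng))
```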
**Online learning.** At each round \(t\), a random task \(i_{t}\in[N]\) is observed and point \(x_{t}\) is selected according to the following baselines: (1) _Independent_, which runs \(N\) independent IGP-UCB [8] algorithms (corresponding to MT-UCB with \(b=0\)), (2) _Single_, which treats all tasks as identical and runs a unique single-task IGP-UCB (corresponding to MT-UCB with \(b=+\infty\)), (3) MT-UCB, which uses an appropriate parameter \(0<b<\infty\) as well as a bound \(\epsilon\) on the task similarity (for synthetic data this can be exactly computed, while for MHC-I data we use \(\epsilon=0.3\)) and utilizes the _naive_ (i.e., GoB.Lin) or _improved_ confidence bounds, and (4) AdaMT-UCB, which is run with the same \(b\) but uses the set of plausible deviations \(\mathcal{E}=\{.1,.2,\ldots,1\}\) instead of knowing the true \(\epsilon\). For choosing \(b\), we sweep over possible values and select the best-performing one, keeping it fixed for all the baselines. **Active learning.** We follow the multitask active learning setup of Section 4. All baselines utilize confidence intervals from the agnostic MT regression of Section 2, where \(\epsilon\) and \(b\) are chosen as for online learning. Moreover, they all utilize the improved confidence intervals, unless otherwise specified. We compare: (1) _Unif._, which chooses the task \(i_{t}\) to be queried uniformly at random (but still selects \(x_{t}^{i}\in\operatorname*{arg\,max}_{x}\text{ucb}_{t}(i,x)\)) and employs the _naive_ or the _improved_ confidence intervals, the offline contextual Bayesian optimization baselines (2) MTS [7] and (3) AE-LSVI [16], and (4) MT-AL, which utilizes the _naive_ or the _improved_ confidence intervals. We report the cumulative regret (online and active learning, respectively) of the considered baselines in Figure 3, averaged over 5 runs. Figure 3: Online and active learning regrets on synthetic and drug discovery MHC-I data, respectively. When utilizing the improved confidence intervals, MT-UCB and MT-AL outperform the other baselines. For the synthetic data, we report results for \(d=4,N=5,\delta=0.4\), but provide a full set of experiments for different parameters in Appendix D. In Figure 3 (a), both MT-UCB and AdaMT-UCB lead to superior performance compared to the _Independent_ and _Single_ baselines, demonstrating the benefits of MT regression. In addition, the improved confidence intervals significantly outperform the naive ones. Moreover, we observe AdaMT-UCB achieves comparable (sometimes even better, see Appendix D) performance to MT-UCB. Indeed, instead of using a conservative choice of \(\epsilon\), the misspecification test (Line 8 of Algorithm 3) of AdaMT-UCB allows the algorithm to use a smaller \(\epsilon\) and only increase it when there is evidence that the constructed intervals do not contain the true tasks. In active learning (Figure 3 (b)), we observe MT-AL has a significant advantage over the uniform sampling baselines and MTS, while performing comparably to AE-LSVI (both methods are similar, as discussed in Appendix C.3). Moreover, its regret is bounded by the online learning regret of MT-UCB, conforming with Theorem 4. Importantly, the improved confidence intervals also play a crucial role here and enable a drastic performance improvement compared to the naive ones. ## 6 Future Directions We believe this paper opens up several future research directions.
The derived confidence intervals, as well as our analysis of the multitask information gain, heavily exploit the structure of the task Gram matrix \(K_{\text{task}}(b)\), see Equation (2). However, it remains unclear whether these can be extended to more general kernels. According to the graph perspective of [13], \(K_{\text{task}}(b)\) can be seen as \(K_{\text{task}}(b)=I_{N}+L(b)\), where \(L(b)\in\mathbb{R}^{N\times N}\) is the Laplacian matrix of a _clique_ graph with vertices \([N]\) and edge weight \(b\). Hence, it would be interesting to extend our results to different graph structures. Furthermore, we believe the proposed multitask confidence intervals hold potential for various related domains, e.g., to assess uncertainty in safety-critical systems [3], or to balance exploration-exploitation in multitask reinforcement learning [23]. In such applications, the introduced notion of active learning regret can serve as a measure of the overall sample efficiency. ## Acknowledgements This work was partially supported by ELSA (European Lighthouse on Secure and Safe AI) funded by the European Union under grant agreement No. 101070617, the ELISE (European Learning and Intelligent Systems Excellence) project EU Horizon 2020 ICT-48 research and innovation action under grant agreement No. 951847, and the FAIR (Future Artificial Intelligence Research) project, funded by the NextGenerationEU program within the PNRR-PE-AI scheme.
2310.14665
Read Disturbance in High Bandwidth Memory: A Detailed Experimental Study on HBM2 DRAM Chips
We experimentally demonstrate the effects of read disturbance (RowHammer and RowPress) and uncover the inner workings of undocumented read disturbance defense mechanisms in High Bandwidth Memory (HBM). Detailed characterization of six real HBM2 DRAM chips in two different FPGA boards shows that (1) the read disturbance vulnerability significantly varies between different HBM2 chips and between different components (e.g., 3D-stacked channels) inside a chip, (2) DRAM rows at the end and in the middle of a bank are more resilient to read disturbance, (3) fewer additional activations are sufficient to induce more read disturbance bitflips in a DRAM row if the row exhibits the first bitflip at a relatively high activation count, (4) a modern HBM2 chip implements undocumented read disturbance defenses that track potential aggressor rows based on how many times they are activated. We describe how our findings could be leveraged to develop more powerful read disturbance attacks and more efficient defense mechanisms. We open source all our code and data to facilitate future research at https://github.com/CMU-SAFARI/HBM-Read-Disturbance.
Ataberk Olgun, Majd Osseiran, Abdullah Giray Yaglikci, Yahya Can Tugrul, Haocong Luo, Steve Rhyner, Behzad Salami, Juan Gomez Luna, Onur Mutlu
2023-10-23T08:01:48Z
http://arxiv.org/abs/2310.14665v3
# Understanding Read Disturbance in High Bandwidth Memory: ###### Abstract DRAM read disturbance is a significant and worsening safety, security, and reliability issue of modern DRAM chips that can be exploited to break memory isolation. Therefore, it is important to understand real DRAM chips' read disturbance characteristics. Two prominent examples of read-disturb phenomena are RowHammer and RowPress. Many existing DRAM modules of various form factors (e.g., DDR4) are vulnerable to RowHammer and RowPress. Unfortunately, no prior work extensively studies RowHammer and RowPress in modern high-bandwidth memory (HBM) chips, which are commonly used in modern GPUs and FPGAs. In this work, we experimentally demonstrate the effects of read disturbance and uncover the inner workings of undocumented in-DRAM read disturbance mitigation mechanisms in High Bandwidth Memory (HBM). Our detailed characterization of six real HBM2 DRAM chips shows that (1) the number of read disturbance errors (i.e., bitflips) and the number of row activations needed to induce the first read disturbance bitflip significantly varies between different HBM2 chips and different 3D-stacked channels, pseudo channels, banks, and rows inside an HBM2 chip. We observe that the variation in the average number of bitflips per DRAM row is more prominent across channels in some HBM2 chips than across all channels in all HBM2 chips. (2) The DRAM rows at the end and in the middle of a DRAM bank exhibit significantly fewer read disturbance bitflips than the rest of the rows. (3) It takes fewer additional activations to induce more read disturbance bitflips in a DRAM row if the row exhibits the first bitflip already at a relatively high activation count. (4) HBM2 chips exhibit read disturbance bitflips with only two row activations when rows are kept active for an extremely long time. We show that a modern HBM2 DRAM chip implements undocumented read disturbance defenses that can track potential aggressor rows based on how many times they are activated, and refresh their victim rows with every 17 periodic refresh operations. We draw key takeaways from our observations and discuss their implications for future read disturbance attacks and defenses. We explain how our findings could be leveraged to develop both i) more powerful read disturbance attacks and ii) more efficient read disturbance defense mechanisms. ## 1 Introduction Modern DRAM chips suffer from read disturbance [1, 2, 3, 4] issues that can be exploited to break memory isolation, threatening the safety, security, and reliability of modern DRAM-based computing systems. RowHammer [1] and RowPress [4] are two prominent examples of read disturbance. Repeatedly opening/activating and closing a DRAM row (i.e., aggressor row) _many times_ (e.g., tens of thousands) induces _RowHammer bitflips_ in physically nearby rows (i.e., victim rows). Keeping the aggressor row open for a long period of time (i.e., a large aggressor row on time, \(t_{AggON}\)) amplifies the effects of read disturbance and induces _RowPress bitflips_, _without many_ repeated aggressor row activations [4]. 
Numerous studies demonstrate that a malicious attacker can reliably cause read disturbance bitflips in a targeted manner to compromise system integrity, confidentiality, and availability [5, 6, 7, 1, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67]. Read disturbance worsens in new DRAM chips with smaller technology nodes, where RowHammer bitflips 1) happen with fewer row activations, e.g., \(10\times\) reduction in less than a decade [68] and 2) appear in more DRAM cells, compared to old DRAM chips [3, 38, 68, 69, 70, 71, 27, 35, 3, 27, 3, 32, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67]. To meet the high-bandwidth requirements of modern data-intensive applications (e.g., GPU workloads [72, 73]), DRAM designers develop High Bandwidth Memory (HBM) [74] DRAM chips, which contain multiple layers of 3D-stacked DRAM dies, using cutting-edge technology nodes.1 It is important to understand read disturbance in HBM DRAM chips that have new architectural characteristics (e.g., multiple layers of DRAM dies, area- and energy-intensive through-silicon vias), which might affect the chip's read disturbance vulnerability in currently-unknown ways. Such understanding can help identify potential read-disturbance-induced security and reliability issues in HBM-based systems and allow for effective and efficient defense mechanisms. Footnote 1: We use “chip” to refer to an _HBM2 stack_. An HBM2 stack contains one or multiple DRAM layers. We refer to each such layer using “DRAM die”. **Our goal** in this work is to experimentally analyze how vulnerable HBM DRAM chips are to read disturbance. To this end, we provide the first detailed experimental characterization of the RowHammer and the RowPress vulnerability in six modern HBM2 DRAM chips. We provide four main analyses in our study. First, we analyze the spatial variation in RowHammer vulnerability based on the physical location of victim rows in terms of two metrics: the fraction of DRAM cells that experience a bitflip in a DRAM row (\(BER\)) and the minimum hammer count necessary to cause a RowHammer bitflip (\(HC_{first}\)) (Section 4). Second, we analyze the number of aggressor row activations (i.e., hammer count) necessary to induce the first 10 bitflips in a DRAM row (Section 5). We demonstrate how many additional hammer count over \(HC_{first}\) is needed to induce
2303.05888
A Distributionally Robust Random Utility Model
This paper introduces the distributionally robust random utility model (DRO-RUM), which allows the preference shock (unobserved heterogeneity) distribution to be misspecified or unknown. We make three contributions using tools from the literature on robust optimization. First, by exploiting the notion of distributionally robust social surplus function, we show that the DRO-RUM endogenously generates a shock distribution that incorporates a correlation between the utilities of the different alternatives. Second, we show that the gradient of the distributionally robust social surplus yields the choice probability vector. This result generalizes the celebrated Williams-Daly-Zachary theorem to environments where the shock distribution is unknown. Third, we show how the DRO-RUM allows us to nonparametrically identify the mean utility vector associated with choice market data. This result extends the demand inversion approach to environments where the shock distribution is unknown or misspecified. We carry out several numerical experiments comparing the performance of the DRO-RUM with the traditional multinomial logit and probit models.
David Müller, Emerson Melo, Ruben Schlotter
2023-03-10T12:46:34Z
http://arxiv.org/abs/2303.05888v1
# A Distributionally Robust Random Utility Model ###### Abstract This paper introduces the distributionally robust random utility model (DRO-RUM), which allows the preference shock (unobserved heterogeneity) distribution to be misspecified or unknown. We make three contributions using tools from the literature on robust optimization. First, by exploiting the notion of distributionally robust social surplus function, we show that the DRO-RUM endogenously generates a shock distribution that incorporates a correlation between the utilities of the different alternatives. Second, we show that the gradient of the distributionally robust social surplus yields the choice probability vector. This result generalizes the celebrated Williams-Daly-Zachary theorem to environments where the shock distribution is unknown. Third, we show how the DRO-RUM allows us to nonparametrically identify the mean utility vector associated with choice market data. This result extends the demand inversion approach to environments where the shock distribution is unknown or misspecified. We carry out several numerical experiments comparing the performance of the DRO-RUM with the traditional multinomial logit and probit models. **Keywords:** Discrete choice, Random utility, Convex analysis, Distributionally robust optimization. _JEL classification: C35, C61, D90._ ## 1 Introduction The random utility model (RUM) introduced by Marschak (1959), Block and Marschak (1959), and Becker et al. (1963) has become the standard approach to model stochastic choice problems. The fundamental work of McFadden (1978a,c, 1981) makes the RUM an empirically tractable approach suitable for applications in many areas of applied microeconometrics, including labor markets, industrial organization, health economics, transportation, and operations management. In particular, McFadden provides an economic foundation and econometric framework which connects observable to stochastic choice behavior. This latter feature makes the RUM suitable to deal with complex choice environments and welfare analysis (McFadden (2001) and Train (2009)). In a RUM a decision maker (DM) faces a discrete choice set of alternatives in which each option is associated with a _random_ utility. Then the DM chooses a particular option with probability equal to the probability of the event that this alternative yields the highest utility among all available alternatives. Most of the applied literature models the random utility associated with each alternative as the sum of an _observable_ and _deterministic_ component and a _random preference shock_. Under this additive specification, different distributional assumptions on the random preference shock will generate different stochastic choice rules. Thus, the effort goes into providing conditions on the distribution of the preference shock such that the choice probabilities are consistent with the random utility maximization hypothesis (McFadden (1981)). More importantly, assuming that the shock distribution is known to the analyst, we can estimate the parameters describing the deterministic utility associated with each alternative, carry out counterfactual welfare analysis, and predict future choice behavior. From a modeling standpoint, this assumption means that the analyst can _correctly_ specify the shock distribution that describes the _unobserved_ heterogeneity in the DM's behavior. In this paper, we develop a RUM framework that allows for the possibility that the analyst (or the DM) does not know the true shock distribution.
In doing so, we propose a distributionally robust framework that relaxes the assumption that the shock distribution is known in advance. In particular, we develop a RUM framework that allows for misspecification in the shock distribution. To model the uncertainty regarding the true distribution, we follow the distributionally robust optimization literature and consider an environment where the analyst has access to a reference distribution \(F\). This distribution corresponds to an approximation of the true statistical law generating the realizations of preference shocks. We refer to \(F\) as the nominal distribution. Accordingly, we model distributional uncertainty in terms of an _uncertainty set_, which consists of all probability distributions that are _close_ to \(F\). We rely on the concept of _statistical_ divergences to measure the distance between probability distributions. More precisely, we use the notion of \(\phi\)-divergences (Csiszar (1967); Liese and Vajda (1987)). Examples of \(\phi\)-divergences include the Kullback-Leibler, Renyi, and Cressie-Read distances, among many others. Thus, the uncertainty set contains the nominal \(F\) and all feasible distributions within a certain radius as measured by the \(\phi\)-divergence. Based on the uncertainty set, we introduce the robust social surplus function, corresponding to the maximum social surplus achievable over all feasible distributions. Like the traditional RUM, the robust social surplus is a convex function that contains all the relevant information to study and understand our distributionally robust RUM (DRO-RUM). ### Contributions We make three contributions. First, we show that the analysis of the DRO-RUM corresponds to the study of the properties of a strictly convex finite dimensional stochastic optimization program. This characterization directly implies that the _endogenous_ robust distribution associated with the DRO-RUM introduces correlation between the preference shocks, even when the nominal \(F\) may assume independence. Second, we show that the gradient of the robust social surplus function yields the choice probability vector. The latter result is a nontrivial generalization of the celebrated Williams-Daly-Zachary (WDZ) theorem to environments where the true shock distribution is unknown. Furthermore, we show that the DRO-RUM preserves the convex structure of the traditional RUM. In particular, we derive a robust Fenchel duality framework that connects the robust social surplus and its convex conjugate. In our third contribution, we characterize the empirical content of the DRO-RUM. Formally, we show that for an observed choice probability vector, there exists a _unique_ mean utility vector that rationalizes the observed data in terms of a DRO-RUM. In particular, we show that the mean utility vector corresponds to the gradient of the convex conjugate of the robust social surplus function. The economic content of our result comes from the fact that the DRO-RUM can rationalize observed behavior (a choice probability vector) in terms of a unique mean utility vector, which corresponds to the unique solution of a strictly convex stochastic programming problem. To conclude our theoretical contributions, we carry out several numerical simulations discussing the properties of our framework. In particular, we compare the choice behavior of the DRO-RUM with the multinomial logit (MNL) and multinomial probit (MNP) models.
We mainly focus on the impact of the so-called _robustness parameter_, which determines the size of the feasible set, impacting the choice probabilities and the surplus function. ### Related literature Our paper is related to several strands of literature. First, our paper relates to the literature on RUMs and convex analysis. The closest articles to ours are the works by Chiong et al. (2016), Galichon and Salanie (2021), and Fosgerau et al. (2021). Similar to us, these papers exploit the convex structure of the RUM to study the nonparametric identification of the mean utility vector when aggregate market data is available (observed choice probabilities). Our paper and results differ substantially from their work by allowing a more flexible framework regarding distributional assumptions. Second, our paper relates to the semiparametric choice model (SCM) literature. The work by Natarajan et al. (2009) introduces the SCM in an environment where the true _joint_ distribution is unknown, but the analyst has access to the set of marginal distributions associated with each alternative. This particular instance of the SCM is known as the marginal distribution model (MDM). Mishra et al. (2014) studies the MDM approach's theoretical and empirical performance. Mishra et al. (2012) study a second instance of the SCM, which exploits cross moments constraints. In particular, they assume that the true distribution is unknown but the analyst has access to the true variance-covariance matrix that captures the correlation structure across the set of discrete alternatives. At first glance, our approach is similar to SCM. As discussed by Feng et al. (2017), the latter are generally defined by a supremum over a set of distributions. We adapt to this definition by introducing the DRO-RUM, where the true distribution is unknown, and the analyst, therefore, considers all distributions in an uncertainty set. Despite the similarity between the general SCM and our approach, both frameworks have important differences. First, our approach requires no assumption on the marginals or variance-covariance matrix. Instead, our model only requires knowledge of a nominal distribution. Second, using the concept of \(\phi\)-divergence enables the researcher to incorporate robustness, where she can control the uncertainty concerning the shock distribution by selecting the _robustness parameter_. Hence, the feasible set is not determined explicitly by fixing some moments or marginal distributions but is rather implicitly constructed by choosing the nominal distribution and the magnitude of the _robustness parameter_. Moreover, our approach can generate different models by allowing the choice of several \(\phi\)-divergence functions and different nominal distributions. Third, we show that the DRO-RUM preserves the convex structure (and duality) of the traditional RUM approach. In particular, we generalize the WDZ and provide a robust Fenchel duality analysis. Fourth, we identify the mean utility vector nonparametrically by exploiting our robust convex duality results. This result allows us to rationalize aggregate market data (choice probabilities) in terms of a DRO-RUM. In particular, our identification result corresponds to a robust demand inversion method. Our paper is also related to the literature on robustness in macroeconomics (Hansen and Sargent (2001, 2008)). However, this literature focuses on recursive problems using the Kullback-Leibler distance. 
The recent paper by Christensen and Connault (2023) introduces robustness ideas to analyze the sensitivity of counterfactuals to parametric assumptions about the distribution of latent variables in structural models. Their focus is different from the problem we study in this paper. Finally, our paper is closely related to the literature on distributionally robust optimization. Shapiro (2017a) and Kuhn et al. (2019) provide an up-to-date treatment of the subject. Applications vary from inventory management to regularization in machine learning.1 However, to our knowledge, this literature has not studied the problem of the distributional robustness of the RUM. Footnote 1: In Economics, one of the first papers studying robust optimization problems is Scarf (1958). The rest of the paper is organized as follows. Section 2 reviews the traditional RUM approach and introduces the problem of robustness. Section 3 presents the DRO-RUM model and discusses its main properties. The empirical content of the DRO-RUM approach is discussed in Section 4. Section 5 contains several numerical experiments comparing the outcome of the DRO-RUM with respect to MNL and MNP. Section 6 concludes the paper by providing an overview of possible extensions. **Notation**. Throughout the paper we use the following notation and definitions. Let us denote \(\bar{\mathbb{R}}=\mathbb{R}\cup\{-\infty,+\infty\}\) and consider extended real-valued functions \[f:\mathbb{V}\to\bar{\mathbb{R}},\] where \(\mathbb{V}\) is a finite dimensional real vector space. Consequently, we denote by \(\mathbb{V}^{*}\) its dual space consisting of all linear functionals. In particular, we often work with subspaces of \(\mathbb{R}^{n}\). The set defined by \[\text{dom}f=\{x\in\mathbb{V}:f(x)<+\infty\}\] is called the _(effective) domain_ of \(f\). A function is said to be _proper_ if it takes nowhere the value \(-\infty\) and \(\text{dom}f\neq\emptyset\). For a proper function \(f:\mathbb{V}\to\bar{\mathbb{R}}\) the set \(\partial f(x)\) represents its _subdifferential_ at \(x\in\text{dom}f\), i.e. \[\partial f(x)=\{g\in\mathbb{V}^{*}:f(y)\geq f(x)+\langle g,y-x\rangle,\quad \text{for all}\quad y\in\mathbb{R}^{n}\}\,,\] where \(g\in\mathbb{V}^{*}\) is said to be a _subgradient_. If the subdifferential set is a singleton, i.e., the subgradient \(g\) is unique, we denote by \(\nabla f(x)\) the _gradient_ of the function \(f\) at \(x\in\text{int}\,(\text{dom}f)\). The _convex conjugate_ of a proper function \(f:\mathbb{V}\to\bar{\mathbb{R}}\) is \[f^{*}(g)=\sup_{x\in\mathbb{V}}\left\{\langle x,g\rangle-f(x)\right\},\quad g \in\mathbb{V}^{*}.\] \(\mathbb{E}_{F}(\cdot)\) denotes the expectation operator with respect to a distribution \(F\). ## 2 The Random Utility Model Consider a decision maker (DM) making a utility-maximizing discrete choice among alternatives \(j\in\mathcal{J}=\{0,1,\ldots,J\}\). The utility of option \(j\) is \[\tilde{u}_{j}=u_{j}+\varepsilon_{j}, \tag{1}\] where \(u=(u_{0},u_{1},\ldots,u_{J})^{T}\) is deterministic and \(\varepsilon=(\varepsilon_{0},\varepsilon_{1},\ldots,\varepsilon_{J})^{T}\) is a vector of random utility shocks. The alternative \(0\) has the interpretation of an outside option. Following the discrete choice literature, we set \(u_{0}=0\). Following McFadden (1978a, 1981), the previous description corresponds to the classic additive random utility model (RUM). Our presentation of the RUM framework here will emphasize convex-analytic properties.
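As an illustration of the additive structure in (1), the following short Monte Carlo sketch (ours, with illustrative utilities) draws the shock vector \(\varepsilon\) from a chosen distribution and estimates each choice probability as the frequency with which the corresponding alternative attains the maximum of \(u_{j}+\varepsilon_{j}\); with i.i.d. standard Gumbel shocks the estimates approach the familiar logit formula.

```python
import numpy as np

def simulate_choice_probs(u, n_draws=200_000, seed=0, shock="gumbel"):
    """Monte Carlo estimate of p_j(u) = P(u_j + eps_j attains the maximum)."""
    rng = np.random.default_rng(seed)
    n_alt = len(u)                          # number of alternatives, including the outside option
    if shock == "gumbel":
        eps = rng.gumbel(size=(n_draws, n_alt))
    else:                                   # e.g., independent standard normal shocks
        eps = rng.normal(size=(n_draws, n_alt))
    winners = np.argmax(u[None, :] + eps, axis=1)
    return np.bincount(winners, minlength=n_alt) / n_draws

u = np.array([0.0, 0.5, 1.0])               # u_0 = 0 is the outside option
print(simulate_choice_probs(u))
print(np.exp(u) / np.exp(u).sum())           # closed-form logit probabilities for comparison
```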
**Assumption 1**: _The random vector \(\varepsilon\) follows a distribution \(F\) that is absolutely continuous with finite means, independent of \(u\), and fully supported on \(\mathbb{R}^{J+1}\)._ Assumption 1 leaves the distribution of \(\varepsilon\) unspecified, thus allowing for a wide range of choice probability systems far beyond the often-used logit model. The assumption allows arbitrary correlation between the \(\varepsilon_{j}\)'s may be important in applications. As a direct consequence of Assumption 1, the DM's choice probabilities correspond to: \[p_{j}(u)\equiv\mathbb{P}\left(u_{j}+\varepsilon_{j}=\max_{j^{\prime}\in \mathcal{J}}\left\{u_{j^{\prime}}+\varepsilon_{j^{\prime}}\right\}\right), \quad j=0,1,\ldots,J.\] An important object in the RUM framework is the _surplus function_ of the discrete choice model (so named by McFadden (1981)). It is given by \[W\left(u\right)=\mathbb{E}_{F}\left[\max_{j\in\mathcal{J}}\left\{u_{j}+ \varepsilon_{j}\right\}\right]. \tag{2}\] Under Assumption 1, \(W\) is convex and differentiable and the choice probability vector \(p(u)\) coincides with the gradient of \(W\)2: Footnote 2: The convexity of \(W\) follows from the convexity of the max function. Differentiability follows from the absolute continuity of \(\varepsilon\). \[\frac{\partial}{\partial u_{k}}W\left(u\right)=p_{k}(u)\,\,\,\text{for}\,\,k= 0,1,\ldots,J\] or, using vector notation, \(p\left(u\right)=\nabla W(u)\). The previous result is the celebrated Williams-Daly-Zachary (henceforth, WDZ) theorem, famous in the discrete choice literature (McFadden (1978a, 1981)). One of the most widely used RUMs is the multinomial logit (MNL) model, which assumes that the entries of \(\left(\varepsilon_{0},\varepsilon_{1},\ldots,\varepsilon_{J}\right)^{T}\) follow iid Gumbel distributions with scale parameter \(\eta\). Given this assumption, we can write the social surplus function in closed form: \[W(u)=\eta\log\left(\sum_{j=0}^{J}e^{u_{j}/\eta}\right)+\eta\gamma, \tag{3}\] where \(\gamma\) is the Euler-Mascheroni constant. It follows from (3) that the WDZ theorem implies that \(p_{j}(u)\) is given by: \[\frac{\partial W\left(u\right)}{\partial u_{j}}=\frac{e^{u_{j}/\eta}}{\sum_{ l=0}^{J}e^{u_{l}/\eta}}\quad\text{for}\,\,j\in\mathcal{J}. \tag{4}\] The MNL model belongs to a broader class of RUM models called generalized extreme value (GEV) models introduced by McFadden (1978b). This class of models is defined via a generating function \(G:\mathbb{R}_{+}^{J+1}\rightarrow\mathbb{R}_{+}\), which has to satisfy the following properties: 1. \(G\) is homogeneous of degree \(\frac{1}{\eta}>0\). 2. \(G\left(x_{0},x_{1},\ldots,x_{j},\ldots,x_{J}\right)\rightarrow\infty\) as \(x_{j}\rightarrow\infty\), \(j=0,1,\ldots,J\). 3. For the partial derivatives of \(G\) w.r.t. 
\(k\) distinct variables it holds: \[\frac{\partial^{k+1}G\left(x_{0},\ldots,x_{J}\right)}{\partial x_{j_{0}}, \partial x_{j_{1}}\cdots\partial x_{j_{k}}}\geq 0\text{ if }k+1\text{ is odd, }\quad\frac{\partial^{k+1}G\left(x_{0},\ldots,x_{J}\right)}{ \partial x_{j_{0}},\partial x_{j_{1}}\cdots\partial x_{j_{k}}}\leq 0\text{ if }k+1\text{ is even.}\] McFadden (1978b, 1981) show that a function \(G\) satisfying conditions (G1)-(G3) implies that the joint distribution of the random vector \(\varepsilon\) corresponds to the following probability density function: \[f_{\epsilon}\left(y_{0},y_{1},\ldots,y_{J}\right)=\frac{\partial^{J+1}\exp \left(-G\left(e^{-y_{0}},\ldots,e^{-y_{J}}\right)\right)}{\partial y_{0} \cdots\partial y_{J}},\] An essential property of the GEV class is that the social surplus function corresponds to (McFadden, 1978b) \[W(u)=\eta\ln G\left(e^{u}\right)+\eta\gamma,\] where \(\gamma\) is the Euler-Mascheroni constant. From the WDZ theorem it follows that the choice probability of the \(j\)-th alternative corresponds to: \[p_{j}(u)=\frac{\partial W(u)}{\partial u_{j}}=\eta\frac{\partial G\left(e^{u }\right)}{\partial e^{u_{j}}}\cdot\frac{e^{u_{j}}}{G\left(e^{u}\right)}\quad \forall j\in\mathcal{J}.\] It is easy to see that the generating function \[G(e^{u})=\sum_{j=0}^{J}e^{u_{j}/\eta}=1+\sum_{j=1}^{J}e^{u_{j}/\eta}\] leads to the MNL model. The main advantage of the GEV class is its flexibility to capture complex patterns correlation across the random variables \(\varepsilon_{j}\)'s. Examples of this are the Nested Logit (NL), the Paired Combinatorial Logit (PCL), the Ordered GEV (OGEV), and the Generalized Nested Logit (GNL) model, which are particular instances of the GEV family. ### A robust framework for the RUM A fundamental assumption in the RUM is that the shock distribution is known to the researcher (and the DM). This means that the distribution of \(\varepsilon\) is correctly specified. Our main goal in this paper is to relax this condition by allowing the distribution of \(\varepsilon\) to be unknown. Instead, the distribution of \(\varepsilon\) is an argument in an optimization problem that corresponds to the definition of the social surplus function. We formalize this idea by replacing expression (2) with the _robust social surplus_ function: \[W^{RO}(u)=\sup_{G\in\mathcal{M}(F)}\mathbb{E}_{G}\left[\max_{j\in\mathcal{J}} \left\{u_{j}+\varepsilon_{j}\right\}\right], \tag{5}\] where \(\mathcal{M}(F)\) is a set of probability distributions that are close to a predetermined distribution \(F\) which satisfies Assumption 1. This distribution \(F\) can be seen as a best guess or prior knowledge of the analyst regarding the joint distribution of error terms. We will refer to \(F\) as nominal distribution. In order to be more robust against misspecification the analyst takes into account all possible distributions that are close to the nominal distribution. A key aspect of our approach is related to the structure of the set \(\mathcal{M}(F)\). In Section 3 we specify the \(\mathcal{M}(F)\) in terms of \(\phi\)-divergence functions, which enables us to use the notion of statistical divergences between probability distributions (Csiszar (1967), Liese and Vajda (1987), and Pardo (2005)). Hence, we will refer to this as the _distributionally robust_-RUM (DRO-RUM). As we shall see, by doing this we are able to characterize the resulting DRO-RUM surplus function in terms of a convex finite dimensional optimization program. 
This characterization is key in studying the properties of the DRO-RUM approach. Let \(G^{\star}\) denote the distribution (or a limit of a sequence of distributions) that attains the optimal value in (5). The choice probability for alternative \(j\) under this model is given by (provided that it is well defined): \[p_{j}^{RO}(u)=\mathbb{P}_{G^{\star}}\left(j=\arg\max_{j^{\prime}\in\mathcal{J} }\{u_{j^{\prime}}+\varepsilon_{j^{\prime}}\}\right) \tag{6}\] From an economic standpoint, we can interpret the program (5) in two alternative ways. First, the _robust_-RUM considers a situation where a DM faces preference shocks but has some flexibility concerning the distribution generating those errors. Second, an analyst might not be sure about the distribution of the random vector \(\varepsilon\) but might consider a set of possible distributions instead. Thus, it is reasonable for the analyst to assume that the DM is rational and the shock distribution generating the social surplus corresponds to one of the elements in \(\mathcal{M}\). ### Connection with the semiparametric choice model It is worth pointing out that the definition of the RO-RUM is similar to the _semiparametric choice model_ (SCM), which has been recently introduced in the operation research literature (Natarajan et al. (2009)). The surplus functions are defined as the supremum over distributions in both model classes. By doing so, the SCM can capture complex substitution patterns and correlation between the different alternatives in the choice set \(\mathcal{J}\). Feng et al. (2017) provide a detailed overview of several discrete choice models, where the authors refer to SCM as a supremum over a general set of distributions. Thus, the _robust-RUM_ could be seen as an instance of a semi-parametric choice model. There are some existing instances of SCM in the literature. In their original paper, Natarajan et al. (2009) restrict the feasible set to joint distributions with given information on the marginal distributions. This particular instance of the SCM is known as the _marginal distribution model_ (MDM).3 A second class of SCMs exploits cross-moment constraints. In particular, Mishra et al. (2012) study the _cross-moment model_ (CMM), which considers the set \(\mathcal{M}\) to be the set of distributions consistent with a _known_ variance-covariance matrix.4 Footnote 3: In MDM, the marginal distributions of the random vector \(\varepsilon\) are fixed. Formally, we write \(\varepsilon_{i}\sim F_{i}\), where \(F_{i}\) is the marginal distribution function of the \(i\)-th error, \(i=1,\ldots,J\). In this case, we define \(\mathcal{M}\triangleq\textsc{Mar}=\{F:\varepsilon_{i}\sim F_{i}\quad\forall i \in\mathcal{J}\}\). Footnote 4: Formally, the CMM considers the set of distributions \(\mathcal{M}(0,\Sigma)=\{G:\mathbb{E}_{F}(\varepsilon)=0,\ \mathbb{E}_{G}( \varepsilon\varepsilon^{\top})=\Sigma\}\). In the definition of \(\mathcal{M}(0,\Sigma)\), the variance-covariance matrix \(\Sigma\) is assumed to be known. Despite the apparent similarity, our approach differs from the existing SCM in several aspects. As we shall see in the rest of the paper, our framework differs from the SCM in the specification of the set of distributions. In particular, in existing SCMs the analyst needs to construct a feasible set explicitly, for instance, by fixing the marginal distributions. 
In contrast, in our robust approach, the analyst specifies the feasible set implicitly by determining the nominal distribution \(F\) and by upper bounding the distance of other distributions to \(F\). Hence, the DRO-RUM approach does not require knowledge of the marginals of either variance-covariance matrix. In Section 3, we see that in the DRO-RUM, the researcher controls the distance by selecting the magnitude of a robustness parameter. Thus, our approach follows a rather different principle than existing SCMs. Additionally, we show that the DRO-RUM corresponds to the solution of a convex finite dimensional optimization problem. This latter fact allows us to extend the WDZ theorem to environments where the shock distribution is misspecified. Finally, Section 4 shows how the DRO-RUM enables us to recover the mean utility vector \(u\). ## 3 A Distributionally Robust - RUM model In this section, we formally introduce the DRO-RUM approach. Following the distributionally robust optimization literature, we consider an environment where the researcher (or the DM) has access to a reference distribution \(F\), which may be an approximation (or estimate) of the true statistical law governing the realizations of \(\varepsilon\). We refer to \(F\) as the _nominal_ distribution. Then, we define a set of probability distributions that are _close_ to \(F\). We rely on statistical distances to formalize the notion of distance between probability distributions. ### \(\phi\)-divergences We measure the distance between two probability distributions by the so-called \(\phi\)-divergence. Let \(\phi:\mathbb{R}\rightarrow(-\infty,+\infty]\) be a proper closed convex function such that \(\mathrm{dom}\ \phi\) is an interval with endpoints \(\alpha<\beta\), so, \(\mathrm{int}\left(\mathrm{dom}\ \phi\right)=(\alpha,\beta)\). Since \(\phi\) is closed, we have \(\lim_{t\rightarrow\alpha_{+}}\phi(t)=\phi(\alpha)\), if \(\alpha\) is finite and \(\lim_{t\rightarrow\beta_{-}}\phi(t)=\phi(\beta)\), if \(\beta\) is finite. Throughout the paper we assume that \(\phi\) is nonnegative and attains its minimum at the point \(1\in\operatorname{int}\left(\operatorname{dom}\,\phi\right)\), i.e. \(\phi(1)=0\). The class of such functions is denoted by \(\Phi\). **Definition 1**: _Given \(\phi\in\Phi\), the \(\phi\)-divergence of the probability measure \(G\) with respect to \(F\) is_ \[D_{\phi}(G\|F)=\left\{\begin{array}{cl}\int_{\mathbb{R}^{J+1}}\phi\left( \frac{g(\varepsilon)}{f(\varepsilon)}\right)f(\varepsilon)d\varepsilon&\text{ if }\mathrm{G}\ll\mathrm{F}\\ +\infty&\text{otherwise}\end{array}\right. \tag{7}\] _where \(f\) and \(g\) are the associated densities of \(F\) and \(G\) respectively._ To avoid pathological cases, throughout the paper, we assume the following: \[\phi(0)<\infty,0\cdot\phi\left(\frac{0}{0}\right)\equiv 0,0\cdot\phi\left( \frac{s}{0}\right)=\lim_{\varepsilon\to 0}\varepsilon\cdot\phi\left(\frac{s}{ \varepsilon}\right)=s\lim_{t\to\infty}\frac{\phi(t)}{t},\quad s>0. \tag{8}\] If the measure \(G\) is absolutely continuous w.r.t. \(F\), i. e. \(G\ll F\), the \(\phi\)-divergence can be conveniently written as: \[D_{\phi}(G\|F)=\mathbb{E}_{F}\left(\phi(L(\varepsilon))\right), \tag{9}\] where \(L(\varepsilon)\triangleq g(\varepsilon)/f(\varepsilon)\) is the likelihood ratio between the densities \(g\) and \(f\), also known as Radon-Nikodym derivative of the two measures. 
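As a numerical illustration of Definition 1 and of the likelihood-ratio form (9), the following sketch (illustrative only; the univariate normal densities standing in for \(f\) and \(g\) are arbitrary choices) estimates \(D_{\phi}(G\|F)\) by Monte Carlo under the nominal distribution for three common members of \(\Phi\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Three phi functions from the class Phi (convex, nonnegative, phi(1) = 0).
phi = {
    "Kullback-Leibler":         lambda t: t * np.log(t),
    "reverse Kullback-Leibler": lambda t: -np.log(t),
    "Hellinger":                lambda t: (np.sqrt(t) - 1.0) ** 2,
}

# Illustrative nominal density f = N(0,1) and alternative g = N(0.3, 1.2^2).
def f_pdf(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def g_pdf(x, mu=0.3, sigma=1.2):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# D_phi(G||F) = E_F[ phi(L(eps)) ] with likelihood ratio L = g/f, cf. (9),
# estimated by Monte Carlo with draws from the nominal distribution F.
x = rng.standard_normal(500_000)
L = g_pdf(x) / f_pdf(x)
for name, ph in phi.items():
    print(f"{name:>24s}: {ph(L).mean():.4f}")
```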
Using the expression (9) combined with the convexity of \(\phi\), Jensen's inequality implies that \[D_{\phi}(G\|F)\geq\phi\left(\mathbb{E}_{F}\left(L(\varepsilon)\right)\right)= \phi(1)=0 \tag{10}\] with equality if \(G=F\), so that \(D_{\phi}(G\|F)\) is a measure of distance of \(G\) from \(F\).5 Furthermore, the \(\phi\)-divergence functional is convex in both of its arguments. The following proposition summarizes these key properties. Footnote 5: We recall that \(1\in\operatorname{int}\,\operatorname{dom}\,\phi\) is the point where \(\phi\) attains its minimum \(0\). **Proposition 1**: _The \(\phi\)-divergence functional (7) is well-defined and nonnegative. It is equal to zero if and only if \(f_{1}(t)=f_{2}(t)\) a.e. Furthermore, \(D_{\phi}\) is convex on each of its arguments._ Proof.: The proof that \(D_{\phi}(G\|F)\) is well defined and nonnegative follows from (Ben-Tal and Teboulle, 1987, Prop. 1). The convexity of \(D_{\phi}\) follows from (Ben-Tal and Teboulle, 1987, Prop. 2). In our analysis, a key element will be the convex conjugate of \(\phi\). For \(\phi\in\Phi\) its conjugate denoted by \(\phi^{*}\) is: \[\phi^{*}(s)=\sup_{t\in\mathbb{R}}\{st-\phi(t)\}=\sup_{t\in\operatorname{dom} \,\phi}\{st-\phi(t)\}=\sup_{t\in\operatorname{int}\,\operatorname{dom}\, \phi}\{st-\phi(t)\}, \tag{11}\] where the last equality follows from (Rockafellar, 1970, Cor. 12.2.2). The conjugate \(\phi^{*}\) is a closed proper convex function, with \(\operatorname{int}\,\operatorname{dom}\,\phi^{*}=(a,b)\), where \[a=\lim_{t\to-\infty}t^{-1}\phi(t)\in[-\infty,+\infty);b=\lim_{t\to+\infty}t^{ -1}\phi(t)\in(-\infty,+\infty].\] Moreover, since \(\phi\) is convex and closed, we have for its bi-conjugate \(\phi^{**}=\phi\), (Rockafellar (1970)). It is worth noting that using the fact that \(1\) is the minimizer of \(\phi\) and it is in the interior of its domain, so \(\phi^{\prime}(1)=0\) holds. In addition, using the property that \(\phi\) is convex and closed, we have by Fenchel equality \(y=\phi^{\prime}(x)\) iff \(x=\phi^{*^{\prime}}(y)\). Applying this latter observation to \(x=1\) and \(y=0\) we obtain \(\phi^{*\prime}(0)=1\). ### The DRO-RUM framework The main idea is to consider an environment where the analyst (or a DM) does not know the true distribution governing realizations of the shock vector \(\varepsilon\). In this environment, the role of \(F\) is an approximation or some best guess of the "true" unknown distribution. Recognizing this ambiguity or potential misspecification of the distribution \(F\), we make use of the \(\phi\)-divergence to define the _uncertainty set_\(\mathcal{M}_{\phi}(F)\) as: \[\mathcal{M}_{\phi}(F)=\{G\ll F:D_{\phi}(G||F)\leq\rho\}, \tag{12}\] Formally, \(\mathcal{M}_{\phi}(F)\) is the set of all probability measures \(G\) that are absolutely continuous w.r.t \(F\), whose distance from \(F\), as measured by the \(\phi\)-divergence, is at most \(\rho\). The hyperparameter \(\rho\) is the radius of \(\mathcal{M}_{\phi}(F)\), which reflects how uncertain is the researcher (or the DM) about the plausibility of \(F\) being correct. Let us further elaborate on this interpretation. Following Hansen and Sargent (2001, 2008), Shapiro (2017b), and Kuhn et al. (2019), we interpret the set (12) as an environment in which the analyst (or the DM) has some best guess \(F\) of the true _unknown_ probability distribution, but does not fully trust it. For instance, the researcher may consider that the nominal distribution \(F\) corresponds to the Gumbel distribution. 
In this case, \(\mathcal{M}_{\phi}(F)\) accounts for many other probability distributions \(G\) to be feasible, where \(\rho\) determines the size of the feasible set. Endowed with the set \(\mathcal{M}_{\phi}(F)\), we can modify expression (5) to obtain a _distributionally robust_ surplus function. Thus, the surplus function of the DRO-RUM corresponds to the following optimization problem: \[W^{DRO}(u)=\sup_{G\in\mathcal{M}_{\phi}(\mathcal{F})}\left\{\mathbb{E}_{G} \left[\max_{j\in\mathcal{J}}\left\{u_{j}+\varepsilon_{j}\right\}\right]\right\} \tag{13}\] Some remarks are in order. First, a fundamental aspect of program (13) is the role of the parameter \(\rho\) which controls the size of \(\mathcal{M}_{\phi}(F)\). Because of this, we can interpret \(\rho\) as an _index of robustness_. More precisely, when \(\rho=0\) we get \(\mathcal{M}_{\phi}(F)=\{F\}\), which means that we recover the RUM under the distribution \(F\).6 On the other hand, when \(\rho\longrightarrow\infty\) the uncertainty set \(\mathcal{M}_{\phi}(F)\) admits a much larger set of possible distributions, including those that may not satisfy Assumption 1.7 The DRO-RUM aims to set \(\rho\) to reflect the perceived uncertainty that the researcher (or a DM) experiences about the distributional assumption for \(\varepsilon\). Footnote 7: To see this, we note that when \(\rho\longrightarrow\infty\) the \(\phi\)-divergence is unbounded. This latter fact implies that the set \(\mathcal{M}_{\phi}(F)\) consists of all distributions which are absolute continuous w.r.t. to \(F\). As \(F\) is fully supported, this only implies that the distributions in \(\mathcal{M}_{\phi}(F)\) must be continuous but certainly not fully supported on \(\mathbb{R}^{J+1}\). In fact, \(\mathcal{M}_{\phi}(F)\) may consist of distributions that are absolutely continuous w.r.t Lebesgue measure but without finite means. For instance, the Pareto distribution with shape parameter \(\alpha=1\) is absolutely continuous but fails to have a finite mean. The following lemma establishes some elementary properties of \(W^{DRO}(u)\). **Lemma 1**: _For the DRO-RUM the surplus function \(W^{DRO}(u)\) satisfies:_ 1. \(W^{DRO}(u+c\cdot e)=W^{DRO}(u)+c\) _for all_ \(c\in\mathbb{R},u\in\mathbb{R}^{J}\)_._ 2. \(W^{DRO}(u)\geq W^{DRO}(v)\) _for all_ \(u,v\in\mathbb{R}^{J}\) _with_ \(u\geq v\)_._ 3. \(W^{DRO}(u)\geq\max_{j\in\mathcal{J}}u_{i}+\min_{j\in\mathcal{J}}\mathbb{E}_{F }\left[\varepsilon_{i}\right]\)_._ 4. \(W^{DRO}(u)\) _is convex in_ \(u\)_._ _Proof._ 1. The definition provides \[W^{DRO}(u+c\cdot e)=\sup_{G\in\mathcal{M}_{\phi}(F)}\left\{\mathbb{E}_{G} \left[\max_{j\in\mathcal{J}}\left\{u_{j}+\varepsilon_{j}+c\right\}\right]\right\}\] Due to the linearity of the expectation, it holds \[c+\sup_{G\in\mathcal{M}_{\phi}(F)}\mathbb{E}_{G}\left[\max_{j\in\mathcal{J}} \left\{u_{j}+\varepsilon_{j}\right\}\right]=c+W^{DRO}(u).\] 2. Take any \(u,v\in\mathbb{R}^{J}\) with \(u\geq v\). First we note that for any arbitrary feasible distribution \(G\in\mathcal{M}_{\phi}(F)\) it holds \[W^{DRO}(u)\geq\mathbb{E}_{G}\left[\max_{j\in\mathcal{J}}\left\{u_{j}+ \varepsilon_{j}\right\}\right]\stackrel{{(*)}}{{\geq}}\mathbb{E}_ {G}\left[\max_{j\in\mathcal{J}}\left\{v_{j}+\varepsilon_{j}\right\}\right],\] where \((*)\) holds due to the monotonicity of the expectation. Taking the supremum on the right-hand side, we conclude that \(W^{DRO}(u)\geq W^{DRO}(v)\). 3. 
We deduce that for any \(i\in\mathcal{J}\) \[W^{DRO}(u)\geq\mathbb{E}_{F}\left[\max_{j\in\mathcal{J}}\left\{u_{j}+ \varepsilon_{j}\right\}\right]\geq\mathbb{E}_{F}\left[u_{i}+\varepsilon_{i} \right]\geq u_{i}+\min_{j\in\mathcal{J}}\mathbb{E}_{F}\left[\varepsilon_{j} \right],\] which is finite due to Assumption 1. 4. Let \(\alpha\in[0,1]\) and let \(u\) and \(v\) two deterministic utility vectors. For a fix distribution \(G\in\mathcal{M}_{\phi}(F)\), Then, due to the convexity of the \(\max\{\cdot\}\) operator, \[W^{DRO}(\alpha u+(1-\alpha)v)\leq\alpha\mathbb{E}_{G}\left(\max_{j\in\mathcal{J }}\{u_{j}+\varepsilon_{j}\}\right)+(1-\alpha)\mathbb{E}_{G}\left(\max_{j\in \mathcal{J}}\{v_{j}+\varepsilon_{j}\}\right).\] In the right-hand side, taking the supremum with respect to \(G\) over \(\mathcal{M}_{\phi}(F)\), we get \[W^{DRO}(\alpha u+(1-\alpha)v)\leq\alpha W^{DRO}(u)+(1-\alpha)W^{DRO}(v).\] Then the convexity of \(W^{DRO}(u)\) follows. The following result characterizes \(W^{DRO}(u)\). **Proposition 2**: _Let Assumption 1 hold and define the random variable \(H(u,\varepsilon)\triangleq\max_{j\in\mathcal{J}}\{u_{j}+\varepsilon_{j}\}\). Then, problem (13) is equivalent to solving the following finite-dimensional convex program:_ \[W^{DRO}(u)=\inf_{\lambda\geq 0,\mu\in\mathbb{R}}\left\{\lambda\rho+\mu+ \lambda\mathbb{E}_{F}\left[\phi^{*}\left(\frac{H(u,\varepsilon)-\mu}{\lambda} \right)\right]\right\}, \tag{14}\] _where \(\lambda\) is the Lagrange multiplier associated to the uncertainty set \(\mathcal{M}_{\phi}(F)\) and \(\mu\) the multiplier associated to \(G\) being a probabality measure. Furthermore, the program (14) is convex in \(\mu\) and \(\lambda\)._ _Proof._ This result follows from a direct application of (Ruszczynski and Shapiro, 2021, Prop. 7.9). For completeness, we provide the details of the argument. First, we note that for a fixed utility vector \(u\) and using the likelihood ratio \(L(\varepsilon)\triangleq dG(\varepsilon)/dF(\varepsilon)\), the DRO-RUM in (13) can be expressed as: \[W^{DRO}(u) = \sup_{G}\{\mathbb{E}_{G}(H(u,\varepsilon)):G\in\mathcal{M}_{\phi }(F)\} \tag{15}\] \[= \sup_{L\geq 0}\left\{\mathbb{E}_{F}[L(\varepsilon)H(u;\varepsilon) ]\mid\mathbb{E}_{F}[\phi(L(\varepsilon))]\leq\rho,\mathbb{E}_{F}[L(\varepsilon )]=1\right\}\] where the supremum is over a set of measurable functions. The Lagrangian of problem (15) is : \[\mathcal{L}(L,\lambda,\mu)=\int_{\mathbb{R}^{J+1}}[L(\varepsilon)H(u, \varepsilon)-\lambda\phi(L(\varepsilon))-\mu L(\varepsilon)]dF(\varepsilon)+ \lambda\rho+\mu. \tag{16}\] The Lagrangian dual of problem (16) is the problem \[\inf_{\lambda\geq 0,\mu\in\mathbb{R}}\sup_{L\geq 0}\mathcal{L}(L,\lambda,\mu) \tag{17}\] Since Slater condition holds for problem (16)8, there is no duality gap between (16) and its dual problem (17). Moreover, the dual problem has a nonempty and bounded set of optimal solutions. Footnote 8: For instance, we can take \(L(\varepsilon)=1\) for all \(\varepsilon\in\mathbb{R}^{J+1}\). By the interchangeability principle ((Rockafellar, 1976, Thm. 
3A)), the maximum in (17) can be taken inside the integral, that is \[\sup_{L\geq 0} \int_{\mathbb{R}^{J+1}}[L(\varepsilon)H(u,\varepsilon)-\mu L( \varepsilon)-\lambda\phi(L(\varepsilon))]dF(\varepsilon)\] \[=\int_{\mathbb{R}^{J+1}}\sup_{t\geq 0}\{t(H(u,\varepsilon)-\mu)- \lambda\phi(t)\}dF(\varepsilon),\] Noting that \((\lambda\phi)^{*}(H(u,\varepsilon)-\mu)=\sup_{t\geq 0}\{t(H(u,\varepsilon)- \mu)-\lambda\phi(t)\}\), then it follows that \[W^{DRO}(u)=\inf_{\lambda\geq 0,\mu\in\mathbb{R}}\left\{\lambda\rho+\mu+ \mathbb{E}_{F}\left[(\lambda\phi)^{*}(H(u,\varepsilon)-\mu)\right]\right\}. \tag{18}\] To show the convexity with respect to \(\lambda\) and \(\mu\) we note that it suffices in (17) and (18) to take the \(\inf\) with respect to \(\lambda>0\) rather than \(\lambda\geq 0\), and that \((\lambda\phi)^{*}(y)=\lambda\phi^{*}(y/\lambda)\) for \(\lambda>0\). Therefore \(W^{DRO}(u)\) is given by the optimal value of the following problem: \[\inf_{\lambda>0,\mu\in\mathbb{R}}\left\{\lambda\rho+\mu+\lambda \mathbb{E}_{F}\left[\phi^{*}((H(u,\varepsilon)-\mu)/\lambda)\right]\right\} \tag{19}\] Note that \(\phi^{*}(\cdot)\) is convex. Hence, \(\lambda\phi^{*}(y/\lambda)\) is jointly convex in \(y\) and \(\lambda>0\). It follows that the objective function of problem (19) is a convex function of \(\lambda>0\) and \(\mu\in\mathbb{R}\) with \(y=H(u,\varepsilon)-\mu\). Hence (19) is a convex problem. \(\square\) An important implication of Proposition 2 is the fact that we can characterize the function \(W^{DRO}(u)\) as the solution of a _finite-dimensional convex_ optimization problem. The efficiency in solving program (14) strongly depends on expectation w.r.t. the nominal distribution \(F\) and the properties of the convex conjugate \(\phi^{*}\). The next corollary formalizes the connection between \(W(u)\) and \(W^{DRO}(u)\) when \(\rho=0\). **Corollary 1**: _Let Assumption 1 hold. Then for \(\rho=0\) we get \(W^{DRO}(u)=W(u)\)._ _Proof._ Let us look at problem (15). If \(\rho=0\) we get from one constraint that \[\mathbb{E}_{F}[\phi(L(\varepsilon))]\leq 0.\] Due to the definition of \(\phi\), this implies that \(L(\varepsilon)=1\). Hence, the Lagrangian simplifies since the supremum over the densities becomes trivial. Let us plug \(L(\varepsilon)=1\) into Equation (16): \[\mathcal{L}(L,\lambda,\mu)=\int_{\mathbb{R}^{J+1}}H(u,\varepsilon)-\lambda \cdot\underbrace{\phi(1)}_{=0}-\mu\cdot 1]dF(\varepsilon)+\lambda\cdot 0+\mu.\] The latter is equivalent to \[\mathbb{E}_{F}\left[H(u,\varepsilon)-\mu\right]+\mu=\mathbb{E}_{F}\left[H(u, \varepsilon)\right],\] where the last equality holds due to the linearity of expectation. We indeed recover \(W(u)\) for any distribution satisfying Assumption 1. ### A robust WDZ theorem A fundamental aspect of RUMs is the possibility of characterizing choice probabilities under specific distributional assumptions on \(\varepsilon\). Formally, and as a consequence of Assumption 1, the WDZ theorem establishes that the gradient of \(W(u)\) yields the choice probability vector \(p(u)\). In this section, we show that in the DRO-RUM, a similar result holds. In particular, we show that \(\nabla W^{DRO}(u)=p^{\star}(u)\) where \(p^{\star}(u)\) corresponds to the choice probability vector generated by the optimal solution to (14) approach. To establish this result, we need the following assumption. 
**Assumption 2**: \(\phi^{\ast}(s)\) _is strictly convex and differentiable with \(\phi^{\ast\prime}(s)\geq 0\) for all \(s\)._ We point out that many \(\phi\)-divergence functions satisfy Assumption 2. Table 1 overviews three popular \(\phi\)-divergences satisfying this assumption. As a direct implication of the Assumption 2 we can establish the strict convexity and uniqueness of an optimal solution to (14). **Lemma 2**: _Let Assumptions 1 and 2 hold. Then program (14) is strictly convex and has a unique optimal solution \(\lambda^{\star}\) and \(\mu^{\star}\)._ _Proof._ Due to Assumption 2, the function \(\phi^{\ast}\) is strictly convex. Following similar steps as Dacorogna and Marechal (2008), it follows that \(\lambda\cdot\phi^{\ast}(\frac{s}{\lambda})\), \(\lambda>0\), is strictly convex. Further, the sum of a convex and strictly convex is strictly convex. This latter fact immediately implies strict convexity of the objective function in \(\lambda\) and \(\mu\). Given the strict convexity in \(\lambda\) and \(\mu\), it follows that program (14) has a unique solution. \(\Box\) A second important implication of Assumption 2 is the possibility of characterizing the robust density associated to the optimal solution of the program (14). **Lemma 3**: _Let Assumptions 1 and 2 hold. For a fixed \(u\in\mathcal{U}\), let \(\lambda^{\star}>0\) and \(\mu^{\star}\in\mathbb{R}\) be the unique optimal solution to problem (14). Then the unique robust density \(g^{\star}(\varepsilon)\) corresponds to:_ \[g^{\star}(\varepsilon)=\phi^{\ast\prime}\left(\frac{H(u,\varepsilon)-\mu^{ \star}}{\lambda^{\star}}\right)f(\varepsilon)\quad\forall\varepsilon\in \mathbb{R}^{J+1}. \tag{20}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Divergence & \(\phi(t)\) & \(\phi^{\ast}(s)\) & Domain & \(\phi^{\ast^{\prime}}\) & \(\phi^{\ast^{\prime\prime}}\) \\ \hline _Kullback-Leibler_ & \(t\log t\) & \(e^{s-1}\) & \(\mathbb{R}\) & \(e^{s-1}\) & \(e^{s-1}\) \\ \hline _Reverse Kullback-Leibler_ & \(-\log(t)\) & \(-1-\log(-s)\) & \(\mathbb{R}_{--}\) & \(-\frac{1}{s}\) & \(\frac{1}{s^{2}}\) \\ \hline _Hellinger Distance_ & \((\sqrt{t}-1)^{2}\) & \(\frac{s}{1-s}\) & \(s<1\) & \(\frac{1}{(1-s)^{2}}\) & \(-\frac{2}{(s-1)^{3}}\) \\ \hline \end{tabular} \end{table} Table 1: \(\phi\)-divergences with their convex conjugates and first and second derivatives. Proof.: Define \(\Psi(\lambda,\mu):=\lambda\rho+\mu+\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H (u,\varepsilon)-\mu}{\lambda}\right)\right)\). Optimizing \(\Psi(\lambda,\mu)\) w.r.t \(\lambda\) and \(\mu\), the first order conditions combined with Assumption 2 yield that the optimal solution \(\lambda^{\star}\) and \(\mu^{\star}\) must satisfy \[\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)- \mu^{\star}}{\lambda^{\star}}\right)\right) = 1\] \[\int_{\mathbb{R}^{J+1}}\phi^{*\prime}\left(\frac{H(u,\varepsilon)- \mu^{\star}}{\lambda^{\star}}\right)f(\varepsilon)d\varepsilon =1\] Define \(g^{\star}(\varepsilon)\triangleq\phi^{*\prime}\left(\frac{H(u,\varepsilon)- \mu^{\star}}{\lambda^{\star}}\right)f(\varepsilon)\). It follows that \(\int_{\mathbb{R}^{J+1}}g^{\star}(\varepsilon)d\varepsilon=1\). Furthermore, by Assumption 2, it follows that \(g^{\star}(\varepsilon)\geq 0\) for all \(\varepsilon\in\mathbb{R}^{J+1}\). Hence, we conclude that \(g^{\star}(\varepsilon)\) is indeed a probability density, and we call it the robust density associated with the problem (14). Some remarks are in order. 
First, the robust density \(g^{\star}\) depends on the choice of the \(\phi\)-divergence through its conjugate \(\phi^{*}\). Moreover, the robust density depends on the deterministic utility vector via \(H(u,\varepsilon)\), even though the nominal distribution \(F\) does not depend on \(u\) due to Assumption 1. In addition, \(g^{\star}\) allows us to define the robust distribution function \(G^{\star}\), which, as we shall see, plays a key role in providing an explicit form for \(W^{DRO}(u)\). Second, Lemma 3 establishes that the robust density \(g^{\star}(\varepsilon)\) incorporates correlation between the elements of the random vector \(\varepsilon\) through the factor \(\phi^{*\prime}((H(u,\varepsilon)-\mu^{\star})/\lambda^{\star})\). Thus, even though the nominal distribution \(F\) may assume that \(\varepsilon_{0},\varepsilon_{1},\ldots,\varepsilon_{J}\) are independent, the DRO-RUM approach introduces correlation between these terms. **Example 1**: _[KL-Divergence] We now consider the case of the Kullback-Leibler divergence. In doing so, we define \(\phi\) as follows:_ \[\phi(t)\triangleq t\log t,\;t\geq 0 \tag{21}\] _We note that in the previous expression, \(0\log 0=0\). Here_ \[\int_{\mathbb{R}^{J}}\phi(L(\varepsilon))dF(\varepsilon) \tag{22}\] _defines the Kullback-Leibler divergence, denoted \(D_{KL}(G\|F)\). For \(\lambda>0\) the conjugate of \(\lambda\phi\) is \((\lambda\phi)^{*}(y)=\lambda\left(e^{y/\lambda}-1\right)\). From Proposition 2 we know that_ \[W^{DRO}(u)=\inf_{\lambda\geq 0,\mu}\left\{\lambda\rho+\mu+\lambda e^{-\mu/\lambda}\mathbb{E}_{F}\left[e^{H(u,\varepsilon)/\lambda}\right]-\lambda\right\} \tag{23}\] _Minimizing (23) with respect to \(\mu\) yields \(\mu^{\star}=\lambda\ln\mathbb{E}_{F}\left[e^{H(u,\varepsilon)/\lambda}\right]\). Plugging \(\mu^{\star}\) into (23), we obtain \(\lambda^{\star}\) as the solution to_ \[W^{DRO}(u)=\inf_{\lambda>0}\left\{\lambda\rho+\lambda\ln\mathbb{E}_{F}\left[e^{H(u,\varepsilon)/\lambda}\right]\right\}. \tag{24}\] _It is well known that in the case of the KL divergence (e.g., Hu and Hong (2012) and Hansen and Sargent (2001)), the "robust" density is given by:_ \[f^{DRO}(\varepsilon)=f(\varepsilon)\frac{e^{H(u,\varepsilon)/\lambda^{\star}}}{\mathbb{E}_{F}(e^{H(u,\varepsilon)/\lambda^{\star}})} \tag{25}\] _where \(f(\varepsilon)\) is the density associated with the nominal distribution, \(H(u,\varepsilon)=\max_{j\in\mathcal{J}}\{u_{j}+\varepsilon_{j}\}\), and \(\lambda^{\star}\) is the unique optimal solution to (24). To see how the optimal density (25) compares to the nominal density when the nominal is a Gumbel distribution, see Figure 1._ Figure 1: Three instances of \(\rho\). The result in Lemma 3 enables us to characterize the choice probability vector \(p^{\star}(u)\) similarly to the celebrated WDZ theorem. **Theorem 1**: _Let Assumptions 1 and 2 hold. Let \(\lambda^{\star}\) and \(\mu^{\star}\) be the unique optimal solution to program (14), which induces \(g^{\star}\) and \(G^{\star}\) as the optimal density and distribution function, respectively. Then the following statements hold:_ * _The robust social surplus corresponds to the following:_ \[W^{DRO}(u)=\mathbb{E}_{G^{\star}}\left(\max_{j\in\mathcal{J}}\{u_{j}+\varepsilon_{j}\}\right)\] * _The choice probability vector_ \(p^{\star}(u)\) _is_ \[\nabla W^{DRO}(u)=p^{\star}(u).\] Proof.: (i) To show the first part, let us define the function \(\Psi(\cdot)\) as follows \(\Psi(\lambda,\mu)\triangleq\lambda\rho+\mu+\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)\).
Optimizing \(\Psi(\lambda,\mu)\) with respect to \(\lambda\) and \(\mu\) we get \[\frac{\partial\Psi(\lambda,\mu)}{\partial\lambda} = \rho+\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)+\lambda\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\left(\frac{H(u,\varepsilon)-\mu}{-\lambda^{2}}\right)\right)=0,\] \[\frac{\partial\Psi(\lambda,\mu)}{\partial\mu} = 1-\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)=0.\] Rearranging the first equation, we have: \[\lambda\rho+\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)+\mu\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)=\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)H(u,\varepsilon)\right).\] Similarly, from the second equation, we have: \[\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)=1.\] Combining both expressions we find that the optimal \(\lambda^{\star}\) and \(\mu^{\star}\) must satisfy: \[\lambda^{\star}\rho+\lambda^{\star}\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u,\varepsilon)-\mu^{\star}}{\lambda^{\star}}\right)\right)+\mu^{\star}=\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu^{\star}}{\lambda^{\star}}\right)H(u,\varepsilon)\right).\] Using expression (20) in Lemma 3, we obtain: \[\Psi(\lambda^{\star},\mu^{\star})=\mathbb{E}_{F}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu^{\star}}{\lambda^{\star}}\right)H(u,\varepsilon)\right)=\mathbb{E}_{G^{\star}}\left(\max_{j\in\mathcal{J}}\{u_{j}+\varepsilon_{j}\}\right).\] Hence, we conclude that \[W^{DRO}(u)=\mathbb{E}_{G^{\star}}\left(\max_{j\in\mathcal{J}}\{u_{j}+\varepsilon_{j}\}\right).\] (ii) To show that \(\nabla W^{DRO}(u)=p^{\star}(u)\), we note that using the optimized value \(\Psi(\lambda^{\star},\mu^{\star})=\lambda^{\star}\rho+\mu^{\star}+\lambda^{\star}\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u,\varepsilon)-\mu^{\star}}{\lambda^{\star}}\right)\right)\) we get: \[\frac{\partial W^{DRO}(u)}{\partial u_{j}} = \frac{\partial\Psi(\lambda^{\star},\mu^{\star})}{\partial u_{j}} = \int_{\varepsilon\in\mathbb{R}^{J+1}}\left(\phi^{*\prime}\left(\frac{H(u,\varepsilon)-\mu^{\star}}{\lambda^{\star}}\right)\frac{\partial H(u,\varepsilon)}{\partial u_{j}}\right)f(\varepsilon)d\varepsilon = \mathbb{E}_{G^{\star}}\left(\frac{\partial H(u,\varepsilon)}{\partial u_{j}}\right) = p_{j}^{\star}(u).\] As the previous result holds for all \(j\in\mathcal{J}\), we conclude that \(\nabla W^{DRO}(u)=p^{\star}(u)\). Part (i) of the theorem establishes that given the optimal solutions \(\lambda^{\star}\) and \(\mu^{\star}\), the surplus function \(W^{DRO}(u)\) takes the familiar expected maximum form that characterizes the RUM (see Eq. (2)). The main difference between the characterization in part (i) and the surplus functions from RUM is that the expression in part (i) corresponds to the expectation with respect to the robust distribution \(G^{\star}\). Part (ii) shows that the gradient of \(W^{DRO}(u)\) yields the choice probability vector \(p^{\star}(u)\). This latter result generalizes the WDZ theorem to environments where the nominal distribution \(F\) may be misspecified or incorrect. In other words, Theorem 1 shows that the DRO-RUM preserves the expected maximum form and the gradient structure of the popular RUM.
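The following sketch (illustrative only, not taken from the original text; the utility vector, the radius \(\rho\), and the sample size are arbitrary assumptions) puts Proposition 2, Example 1, and Theorem 1 together for the Kullback-Leibler case: the one-dimensional dual (24) is minimized over \(\lambda\) by sample average approximation, and the robust choice probabilities are then recovered as importance-weighted frequencies under the tilted density (25).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Illustrative setting: Kullback-Leibler divergence, iid Gumbel nominal distribution.
u = np.array([0.0, 1.0, 2.0, 2.1])
rho = 0.7
n = 400_000

eps = rng.gumbel(size=(n, u.size))          # draws from the nominal F
H = (u + eps).max(axis=1)                   # H(u, eps) = max_j (u_j + eps_j)
argmax = (u + eps).argmax(axis=1)           # arg max alternative for each draw

def dual_objective(lam):
    # lambda*rho + lambda*log E_F[exp(H/lambda)], cf. (24); log-sum-exp for stability
    m = H.max()
    return lam * rho + m + lam * np.log(np.mean(np.exp((H - m) / lam)))

res = minimize_scalar(dual_objective, bounds=(1e-3, 50.0), method="bounded")
lam_star, W_dro = res.x, res.fun

# Robust choice probabilities: expectation of the arg-max indicator under the
# tilted density (25), i.e. a self-normalized importance-weighted average.
w = np.exp((H - H.max()) / lam_star)
w /= w.sum()
p_robust = np.array([w[argmax == j].sum() for j in range(u.size)])

print("lambda* =", round(lam_star, 4), "  W_DRO(u) =", round(W_dro, 4))
print("robust choice probabilities:", np.round(p_robust, 4))
```

For a small radius the weights are nearly uniform and the output is close to the MNL probabilities, while larger values of \(\rho\) flatten the probabilities, consistent with the discussion of the robustness index above.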
## 4 Empirical content of the DRO-RUM In this section, we discuss the empirical content of the DRO-RUM. In particular, we show how our approach is suitable to recover the mean utility vector allowing for uncertainty about the true distribution generating \(\varepsilon\). To gain some intuition, consider a situation where the choice probability vector \(p\) is observed from market data. Then the analyst's goal is to find a vector \(u\) that rationalizes the observed \(p\). Following Berry (1994), this problem is known as the _demand inversion_. In particular, Berry (1994) shows that in the case of the MNL \(u\) satisfy the following \[p_{j}=\frac{e^{u_{j}}}{1+\sum_{j^{\prime}=1}^{J}e^{u_{j^{\prime}}}}\quad\text {for }j=1,\ldots,J.\] and \[p_{0}=\frac{1}{1+\sum_{j^{\prime}=1}^{J}e^{u_{j^{\prime}}}}\] Then using the previous expressions, we can solve for the mean utility vector \(u\) as a function of \(p\): \[\log(p_{j}/p_{0})=u_{j}\quad\text{for }j=1,\ldots,J.\] In other words, we can express \(u\) in terms of the observed choice probability vector \(p\). We can use a similar argument to find the vector \(u\) in the case of the nested logit, the random coefficient MNL model (Berry (1994); Berry et al. (1995)), and in the case of the inverse product differentiation logit model of Fosgerau et al. (2022). For general RUMs beyond the MNL and its variants, Galichon and Salanie (2021) develops a general approach based on convex duality and mass transportation techniques. They show that for any _fixed_ distribution of \(\varepsilon\) the mean utility vector \(u\) is identified from the observed choice probability \(p\). This section aims to show that the DRO-RUM can be used to study the demand inversion problem in environments where the analyst does not know the true distribution of \(\varepsilon\). Thus, our approach allows us to identify \(u\) under misspecification of the distribution governing the realizations of \(\varepsilon\). ### Robust demand inversion Our main result uses a distributionally robust version of the Fenchel equality for discrete choice models. In order to establish this result, we define \(\mathcal{U}\triangleq\{u\in\mathbb{R}^{J+1}:u_{0}=0\}\). In other words, \(\mathcal{U}\) is the set of mean utility vectors with the normalization \(u_{0}=0\) for the outside option. Our first step is to understand the properties of the convex conjugate of \(W^{DRO}(u)\): \[{W^{(DRO)}}^{*}(p)=\sup_{u\in\mathcal{U}}\left\{\left\langle u,p\right\rangle- W^{DRO}(u)\right\}. \tag{27}\] In particular, we are interested in understanding the behavior of \({W^{(DRO)}}^{*}(p)\) on its effective domain of: \[\operatorname{dom}{W^{(DRO)}}^{*}=\left\{p\in\mathbb{R}^{J+1}\,|\,{W^{DRO}}^{ *}(p)<\infty\right\}.\] The following lemma plays a key role in our analysis. **Lemma 4**: _Let Assumptions 1 and 2 hold. Then \(W^{DRO}(u)\) is strictly convex in \(u\)._ _Proof._ For given \(\lambda\) and \(\mu\), for \(u_{1},u_{2}\) and \(\alpha\in(0,1)\) we have \[\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(\alpha u_{1}+(1- \alpha)\,u_{2},\varepsilon)-\mu}{\lambda}\right)\right)\] \[\stackrel{{(\star)}}{{\leq}}\lambda\mathbb{E}_{F} \left(\phi^{*}\left(\frac{\alpha H(u_{1},\varepsilon)+(1-\alpha)\,H(u_{2}, \varepsilon)-\mu}{\lambda}\right)\right),\] where \((\star)\) holds due to the convexity of \(H\) and the monotonicity of \(\phi^{*}\) due to Assumption 2. 
Exploiting the strict convexity of \(\phi^{*}\) and the linearity and monotonicity of the expectation operator further yields: \[\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{\alpha H(u_{1}, \varepsilon)+(1-\alpha)\,H(u_{2},\varepsilon)-\mu}{\lambda}\right)\right)\] \[< \alpha\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u_{1}, \varepsilon)-\mu}{\lambda}\right)\right)+(1-\alpha)\,\lambda\mathbb{E}_{F} \left(\phi^{*}\left(\frac{H(u_{2},\varepsilon)-\mu}{\lambda}\right)\right).\] Thus it follows that \(W^{DRO}(u)\) is strictly convex in \(u\). \(\square\) The following theorem establishes the continuity and smoothness of \({W^{(DRO)}}^{*}\). **Theorem 2**: _Let Assumptions 1 and 2 hold. The convex conjugate \({W^{(DRO)}}^{*}\) is continuous on its domain \(\operatorname{dom}{W^{(DRO)}}^{*}\) which coincides with the probability simplex \(\Delta_{J+1}\). Furthermore, \({W^{(DRO)}}^{*}\) is continuously differentiable on \(\operatorname{int}\operatorname{dom}{W^{(DRO)}}^{*}\)._ _Proof._ Let us first show that \(\operatorname{dom}{W^{(DRO)}}^{*}\subseteq\Delta_{J+1}\). Fix a utility vector \(\bar{u}\) and take any \(p\in\mathbb{R}^{J+1}\) with \(\langle p,e\rangle\neq 1\). Then, using Lemma1(iii) we have \[{W^{(DRO)}}^{*}(p)\geq\sup_{\gamma\in\mathbb{R}}\langle p,\bar{u}+\gamma\cdot e \rangle-W^{DRO}(\bar{u}+\gamma\cdot e)\] \[\stackrel{{(iii)}}{{=}}\langle p,\bar{u}\rangle-W^{DRO}(\bar{u}) +\sup_{\gamma\in\mathbb{R}}\gamma\left(\langle e,p\rangle-1\right)=\infty.\] Next, we take any vector \(p\in\mathbb{R}^{J+1}\) with \(p_{i}<0\) for some \(i\in\{0,1,\ldots,J\}\). By Lemma 1 (ii), it follows that \[W^{(DRO)}{}^{*}(p)\geq\sup_{\gamma<0}\langle p,\gamma\cdot e_{i}\rangle-W^{DRO} (e_{i})\stackrel{{(ii)}}{{\geq}}\sup_{\gamma<0}\gamma\cdot p_{i},- W^{DRO}(0)=\infty.\] Hence, it remains to prove the reverse implication, i. e. \(\Delta_{J+1}\subseteq\operatorname{dom}W^{(DRO)}{}^{*}\). Therefore, we derive an upper bound for the convex conjugate on the simplex: \[\sup_{p\in\Delta_{J+1}}W^{(DRO)}{}^{*}(p)=\sup_{p\in\Delta_{J+1}} \left(\sup_{u\in\mathcal{U}}\langle p,u\rangle-W^{DRO}(u)\right)\] \[= \sup_{u\in\mathcal{U}}\left(\sup_{p\in\Delta_{J+1}}\langle p,u \rangle-W^{DRO}(u)\right).\] We apply (iii) from Lemma 1 which yields \[\sup_{u\in\mathcal{U}}\left(\sup_{p\in\Delta_{J+1}}\langle p,u\rangle-W^{DRO }(u)\right)=\sup_{u\in\mathcal{U}}\left(\max_{i\in\mathcal{J}}u_{i}-W^{DRO} (u)\right)\leq-\min_{i\in\mathcal{J}}\mathbb{E}_{F}\left[\varepsilon_{i} \right].\] Thus, the domain coincides with the simplex. For the continuity, we first observe that \(W^{(DRO)}{}^{*}\) is convex, and hence it is continuous on the relative interior of its domain. The Gale-Klee-Rockafellar theorem provides upper semi-continuity of \(W^{(DRO)}{}^{*}\) if the domain is polyhedral, which it is (Rockafellar, 1970). Furthermore, convex conjugates are always lower semi-continuous, and hence continuity follows. In order to establish that \(W^{(DRO)}{}^{*}\) is continuously differentiable, we note that Lemma 4 shows that \(W^{(DRO)}{}^{*}\) is strictly convex in \(u\). Then by Hiriart-Urruty and Lemarechal (1993, Thm. 4.1.1) we know that the strict convexity of \(W^{DRO}(u)\) implies that \(W^{(DRO)}{}^{*}(p)\) is continuously differentiable on \(\operatorname{int}\left(\operatorname{dom}W^{(DRO)}{}^{*}\right)\). The previous result is key in our goal of identifying the mean utilities. 
To see this we note that thanks to Theorem 1 we know that for alternative \(j\in\mathcal{J}:\) \[p_{j}=\frac{\partial W^{DRO}(u)}{\partial u_{j}}\] Furthermore, from Theorem 2 we get: \[u_{j}=\frac{\partial W^{(DRO)}{}^{*}(p)}{\partial p_{j}}\] where \(u\) achieves the maximum in (27). Then by Fenchel's duality theorem, we know that these two conditions are equivalent. Then, given the _robust_ distribution \(G^{\star}\), we conclude that \(u\) is identified from \(p\). In other words, we can find a vector \(u\) that rationalizes the observed choice probability vector \(p\). The following result establishes the empirical content of the DRO-RUM. **Theorem 3**: _Let Assumptions 1 and 2 hold. Then, the following statements are equivalent:_ 1. _The choice probability vector_ \(p\in\Delta_{J+1}\) _satisfies:_ \[p=\nabla W^{DRO}(u).\] (28) 2. _The deterministic utility vector_ \(u\in\mathcal{U}\) _satisfies:_ \[u=\nabla W^{(DRO)^{*}}(p).\] (29) 3. \((u^{\star},\lambda^{\star},\mu^{\star})\) _is the unique solution to the strictly convex optimization problem:_ \[-W^{(DRO)^{*}}(p)=\inf_{u\in\mathcal{U},\lambda\in\mathbb{R}_{+},\mu\in \mathbb{R}}\left\{\lambda\rho+\mu+\lambda\mathbb{E}_{F}\left(\phi^{*}\left( \frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)-\langle p,u\rangle\right\}.\] (30) Proof.: The equivalence of parts (i) and (ii) follows from Theorem 2, which allows us to invoke Fenchel equality to conclude the result. To show part (iii), let us look at \(W^{(DRO)^{*}}(p)\). By definition, we know that \[W^{(DRO)^{*}}(p)=\sup_{u\in\mathcal{U}}\{\langle p,u\rangle-W^{DRO}(u)\}.\] Proposition 2 implies that the previous expression corresponds to \[W^{(DRO)^{*}}(p)=\sup_{u\in\mathcal{U}}\left\{\langle p,u\rangle-\inf_{ \lambda>0,\mu\in\mathbb{R}}\left\{\lambda\rho+\mu+\lambda\mathbb{E}_{F}\left( \phi^{*}\left(\frac{H(u,\varepsilon)-\mu}{\lambda}\right)\right)\right\} \right\}.\] Equivalently, we have: \[W^{(DRO)^{*}}(p)=-\inf_{u\in\mathcal{U},\lambda>0,\mu\in\mathbb{R}}\left\{ \lambda\rho+\mu+\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u, \varepsilon)-\mu}{\lambda}\right)\right)-\langle p,u\rangle\right\}.\] Thus, we get: \[-W^{(DRO)^{*}}(p)=\inf_{u\in\mathcal{U},\lambda>0,\mu\in\mathbb{R}}\left\{ \lambda\rho+\mu+\lambda\mathbb{E}_{F}\left(\phi^{*}\left(\frac{H(u, \varepsilon)-\mu}{\lambda}\right)\right)-\langle p,u\rangle\right\}.\] Combining Lemmas 2 and 4 we get that the (30) is strictly convex in \(u,\lambda\), and \(\mu\). As a consequence, there exists a unique solution to the problem (30). As we discussed in the introduction of this section, for a fixed distribution of \(\varepsilon\), parts (i) and (ii) have been established in the Galichon and Salanie (2021). Our result differs from theirs in a fundamental aspect; we achieve the identification of the mean utility vector \(u\), relaxing the assumption that the distribution of \(\varepsilon\) is known. In other words, our result allows for nonparametric identification of \(u\) under (potential) misspecification of the shock distribution. Similarly, our result relates to dynamic discrete choice models' "inversion" approach.9 For instance the papers by Hotz and Miller (1993) and Arcidiacono and Miller (2011) establish that the mean utility vector \(u\) can be recovered as \(\nabla^{-1}W(p)=u\) Their approach only applies to the case of the MNL and GEV models. By exploiting convex optimization techniques, Fosgerau et al. 
(2021) extends Hotz and Miller (1993) and Arcidiacono and Miller (2011)'s inversion approach to models far beyond the GEV class. Similarly, Li (2018) considers a convex minimization algorithm to solve the demand inversion problem. He illustrates his method in the case of both the Berry et al. (1995) random coefficient logit demand model and the Berry and Pakes (2007) pure characteristics model. However, Fosgerau et al. (2021) and Li (2018)'s results only apply under the assumption that the distribution of \(\varepsilon\) is known. In contrast, part (iii) establishes that given a choice probability vector \(p\), we can identify the mean utility vector \(u\) as the unique solution of the strictly convex optimization program (30). This latter characterization captures the role of misspecification through the value of the Lagrange multipliers \(\lambda^{\star}\) and \(\mu^{\star}\). Thus, Theorem 3 provides a distributionally robust nonparametric identification result. ### A robust random coefficient model To see how Theorem 3 can be applied, we analyze the random coefficient model assuming that the \(\phi\)-divergence corresponds to the Kullback-Leibler distance. Following Berry et al. (1995) and Galichon and Salanie (2021), we consider a random coefficient model with \(\varepsilon=Ze+T\eta\), where \(e\) is a random vector on \(\mathbb{R}^{k}\) with distribution \(F_{e}\), \(Z\) is a \(|\mathcal{J}|\times k\) matrix, \(T>0\) is a scalar parameter, and \(\eta\) is a vector of \(|\mathcal{J}|\) Gumbel random variables, whose distribution function is \(F_{\eta}\). Assume that \(e\) and \(\eta\) are statistically independent. Fixing the distributions \(F_{e}\) and \(F_{\eta}\), and using the law of iterated expectations combined with the independence of \(e\) and \(\eta\) (Galichon and Salanie, 2021, Eqs. B.6-B.7), we get that \[W(u) = \mathbb{E}_{F_{e}}\left(\mathbb{E}_{\eta}\left(\max_{j\in\mathcal{J}}\{u_{j}+(Ze)_{j}+T\eta_{j}\}\right)|e\right) = \mathbb{E}_{F_{e}}\left(W(u+Ze)\right),\] where \(W(u+Ze)=\int_{\mathbb{R}^{J+1}}\max_{j\in\mathcal{J}}\{u_{j}+(Ze)_{j}+T\eta_{j}\}f_{\eta}d\eta\). Using the fact that \(\eta\) follows a Gumbel distribution, we find that \[W(u+Ze)=T\log\left(\sum_{j\in\mathcal{J}}e^{\frac{u_{j}+(Ze)_{j}}{T}}\right).\] Let us assume that \(F_{e}\) approximates the true distribution generating \(e\). Then we can define \(W^{DRO}(u)\) as follows: \[W^{DRO}(u)=\sup_{G_{e}\in\mathcal{M}_{\phi}(F_{e})}\mathbb{E}_{G_{e}}\left(T\log\left(\sum_{j\in\mathcal{J}}e^{\frac{u_{j}+(Ze)_{j}}{T}}\right)\right)\] To apply Theorem 3, we note that \(H(u,\varepsilon)=T\log\left(\sum_{j\in\mathcal{J}}e^{\frac{u_{j}+(Ze)_{j}}{T}}\right)\). Then, using the Kullback-Leibler distance, we have that for an observable choice probability vector \(p\), the identified mean utility vector \(u^{\star}\) corresponds to the solution of the following program: \[-W^{(DRO)^{*}}(p)=\inf_{u\in\mathcal{U},\lambda>0}\left\{\rho\lambda+\lambda\ln\mathbb{E}_{F_{e}}\|e^{u+Ze}\|_{T^{-1}}-\langle p,u\rangle\right\}, \tag{31}\] where \(\|e^{u+Ze}\|_{T^{-1}}\triangleq\left(\sum_{j\in\mathcal{J}}e^{\frac{u_{j}+(Ze)_{j}}{T}}\right)^{T}\). The program (31) allows us to identify the mean utility vector while enabling some degree of misspecification in the distribution of \(e\). It is worth remarking that program (31) is fairly tractable, so we can use traditional stochastic programming algorithms to find its unique solution. ## 5 Numerical Experiments In this section, we discuss numerical simulations of our approach.
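Before turning to these comparisons, we sketch how the inversion program of Section 4 can be approximated once the expectation over \(e\) is replaced by simulated draws. The snippet below is illustrative only: the matrix \(Z\), the observed probabilities, the scalar \(T\), the radius \(\rho\), and all sample sizes are arbitrary assumptions, and the Kullback-Leibler divergence of Example 1 is used, so the robust dual reduces to a search over \(u\) and \(\lambda\) along the lines of (30)-(31).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(3)

# Illustrative data: J+1 = 4 alternatives, k = 2 random coefficients.
J, k, T, rho = 3, 2, 1.0, 0.5
Z = rng.normal(size=(J + 1, k))
e = rng.normal(size=(5_000, k))              # simulated draws of e ~ F_e
p_obs = np.array([0.10, 0.20, 0.30, 0.40])   # observed choice probabilities

def objective(theta):
    # theta = (u_1,...,u_J, log lambda); u_0 = 0 is the outside-option normalization
    u = np.concatenate(([0.0], theta[:J]))
    lam = np.exp(theta[J])
    # inner logit surplus H(u,e) = T * log sum_j exp((u_j + (Ze)_j)/T)
    H = T * logsumexp((u + e @ Z.T) / T, axis=1)
    # KL-robust dual objective: lam*rho + lam*log E[exp(H/lam)] - <p,u>
    w_dro = lam * rho + lam * logsumexp(H / lam - np.log(len(H)))
    return w_dro - p_obs @ u

res = minimize(objective, x0=np.zeros(J + 1), method="Nelder-Mead",
               options={"maxiter": 20_000, "xatol": 1e-8, "fatol": 1e-10})
u_hat = np.concatenate(([0.0], res.x[:J]))
print("recovered mean utilities:", np.round(u_hat, 3),
      " lambda* =", round(np.exp(res.x[J]), 3))
```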
We compare the DRO-RUM with the MNL and MNP models.10 Footnote 10: We recall that the MNP assumes that the error terms follow a normal distribution with a specific variance-covariance matrix. Our main goal is to analyze the effect of the robustness index \(\rho\) on the choice probabilities. We consider a scenario with four alternatives where \(\mathcal{J}=\left\{0,1,2,3\right\}\). Our first parametrization of the utility vector \(u\) is \(u=\left(0,1,2,2.1\right)^{T}\). Based on this specification, we proceed to calculate the choice probabilities. In the case of the MNL, the choice probabilities are computed via Eq. (4), where the scale parameter equals one (\(\eta=1\)). In addition, the location parameter of each Gumbel error is set to zero. For the MNP, we consider two different parametrizations for the variance-covariance matrix of the random error vectors: \(\mathcal{N}(0,\Sigma_{1})\) and \(\mathcal{N}(0,\Sigma_{2})\), where \[\Sigma_{1}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right),\qquad\Sigma_{2}=\left(\begin{array}{cccc}2&-0.5&0.5&1.3\\ -0.5&2&0&0.15\\ 0.5&0&2&1\\ 1.3&0.15&1&2\end{array}\right).\] We call the latter model MNP-dep and the former MNP-indep, as the random errors \(\varepsilon^{(i)}\), \(i=1,\ldots,4\), are independent in the former model. To simulate the choice probabilities, we use 10,000,000 draws from the error vectors to stabilize the simulations. For the DRO-RUMs, we choose the Kullback-Leibler divergence case presented in Example 1. We assume that the error terms of the nominal distribution are iid Gumbel distributed with location parameter zero and scale parameter one. This yields a way to examine the behavior and numerical stability of the DRO-RUM, and the impact of \(\rho\) on the choice probabilities. The robust choice probabilities are simulated similarly to the MNP models. However, for the case of the DRO-RUM we have to generate samples from the distribution defined by the density (25). First, the optimal \(\lambda^{*}\) in (24) as well as \(\mathbb{E}_{F}\left(e^{H(u,\varepsilon)/\lambda^{*}}\right)\) are estimated using \(50,000,000\) simulations from 4 iid Gumbel distributions. Based on the optimized parameters, a higher-dimensional acceptance-rejection algorithm provides an efficient sampling method. For performance, the code was written in Julia.11 Footnote 11: The code can be found on Github under [https://github.com/rubsc/rejection_DRO_RUM](https://github.com/rubsc/rejection_DRO_RUM). We present the results in Table 2. In this table, the first row displays the choice probabilities for the MNL. The second and third rows show the choice probabilities for the MNP-indep and MNP-dep. The fourth row shows the behavior of the DRO-RUM when \(\rho=0.1\). For this parametrization, the DRO-RUM yields choice probabilities that are similar (not equal) to the ones displayed by the MNP-dep. Similar behavior is observed for the case of \(\rho=0.7\). Rows six to eight show the behavior of the DRO-RUM as we increase \(\rho\). As expected, as the value of \(\rho\) increases, the choice probabilities approach the uniform choice between alternatives. In particular, for the case of \(\rho=4.3\) we note that the DRO-RUM assigns probabilities similar to the uniform case. Intuitively, a large \(\rho\) represents a situation where the analyst is highly uncertain about the true distribution. Thus, her behavior is overly cautious and considers a large set of possible (and feasible) distributions.
Hence, when \(\rho\longrightarrow\infty\), the analyst's best choice is to guess uniform probabilities. Similarly, from the DM's perspective, large values of \(\rho\) indicate a cautious and flexible choice of the error term. Consequently, the random error term might follow a distribution that completely counteracts the deterministic utilities' effects and guarantees the same overall random utility for every alternative. Indeed, the robust surplus function (23) increases strongly with the index of robustness, as shown in Figure 2, where we plot the surplus function evaluated at \(u=\left(0,1,2,2.1\right)^{T}\) for different values of \(\rho\). \begin{table} \begin{tabular}{c|c|c|c|c} & Alternative 1 & Alternative 2 & Alternative 3 & Alternative 4 \\ \hline MNL & 5.1885\% & 14.1037\% & 38.3379\% & 42.3699\% \\ \hline MNP-indep & 1.6243\% & 10.2996\% & 41.443\% & 46.6331\% \\ \hline MNP-dep & 1.7877\% & 19.359\% & 37.9993\% & 40.854\% \\ \hline \(\rho=0.1\) & 9.2391\% & 18.4783\% & 33.695\% & 38.587\% \\ \hline \(\rho=0.7\) & 13.2132\% & 19.7823\% & 31.6066\% & 35.3979\% \\ \hline \(\rho=1.3\) & 15.4725\% & 21.5501\% & 30.5269\% & 32.4505\% \\ \hline \(\rho=2.2\) & 18.1501\% & 21.46\% & 29.7536\% & 30.6363\% \\ \hline \(\rho=4.3\) & 21.5843\% & 23.2006\% & 27.1235\% & 28.0916\% \\ \end{tabular} \end{table} Table 2: Choice probabilities for utility vector \(u=\left(0,1,2,2.1\right)^{T}\). A well-known pitfall of the MNL model is that it satisfies the independence of irrelevant alternatives (IIA) property. The IIA property establishes that the ratio between the probabilities of any two alternatives only depends on the differences between the utilities of these two alternatives. This property follows directly via formula (4). A direct implication of this fact is that when the deterministic utility of one alternative changes, the choice probabilities change proportionally so that the probability ratio between alternatives remains constant. In contrast, the DRO-RUM incorporates some dependence structure into the MNL.12 Hence, it is interesting to simulate choice probabilities for a slight change in the deterministic utility vector. In Table 3, we summarize the choice probabilities for the alternatives with utility vector \(\tilde{u}=\left(0,1,2,2.2\right)^{T}\). Figure 2: Robust surplus \(W^{DRO}\) for the utility vector \(u\) and different values of robustness index \(\rho\). The violation of IIA is visualized in Figure 3. Note that in the MNL, the decrease in the probability of choosing alternative 4 is redistributed proportionally to the alternatives \(1-3\), indicated by the dotted line. At the same time, the substitution patterns for the robust models are considerably more flexible. Figure 3: Relative change in probabilities if the deterministic utility vector changes from \(\tilde{u}\) to \(u\). ## 6 Final remarks In this paper, we have introduced the DRO-RUM, which allows the shock distribution to be unknown or misspecified. We have shown that the DRO-RUM preserves the tractability and convex structure of the traditional RUM. Furthermore, we characterized the empirical content of the DRO-RUM, establishing that for an observed choice probability vector, there exists a unique mean utility vector that rationalizes the observed behavior in terms of a DRO-RUM. Finally, we showed the stability and numerical properties of our approach. Several extensions are possible. First, a natural question concerns the econometric performance of the DRO-RUM using market data.
This is particularly interesting, as our approach provides the analyst with a rich class of models. In fact, different models can be created by simply choosing a different \(\phi\)-divergence and/or nominal distribution. In this context, it is also interesting to examine the impact of the robustness parameter \(\rho\). Second, existing results for RUMs could be analyzed within the robust framework. For instance, the results in this paper could help study two-sided matching markets with transferable utility. Similarly, our results can help study robust identification in dynamic discrete choice models. Third, the algorithmic aspects of the DRO-RUM could be analyzed. For example, a new family of prox-functions on the probability simplex based on discrete choice models has recently been introduced by Muller et al. (2022); hence, it would be interesting to see whether prox-functions can be generated from the DRO-RUM. Finally, theoretical extensions of the distributionally robust approach are conceivable. A natural way to do this is to rely on different statistical distance concepts, e.g. the Wasserstein distance, and analyze their tractability. Moreover, the properties of such other robust models could be compared with the DRO-RUM.
2302.03108
Reduction for asynchronous Boolean networks: elimination of negatively autoregulated components
To simplify the analysis of Boolean networks, a reduction in the number of components is often considered. A popular reduction method consists in eliminating components that are not autoregulated, using variable substitution. In this work, we show how this method can be extended, for asynchronous dynamics of Boolean networks, to the elimination of vertices that have a negative autoregulation, and study the effects on the dynamics and interaction structure. For elimination of non-autoregulated variables, the preservation of attractors is in general guaranteed only for fixed points. Here we give sufficient conditions for the preservation of complex attractors. The removal of so called mediator nodes (i.e. vertices with indegree and outdegree one) is often considered, and frequently does not affect the attractor landscape. We clarify that this is not always the case, and in some situations even subtle changes in the interaction structure can lead to a different asymptotic behaviour. Finally, we use properties of the more general elimination method introduced here to give an alternative proof for a bound on the number of attractors of asynchronous Boolean networks in terms of the cardinality of positive feedback vertex sets of the interaction graph.
Robert Schwieger, Elisa Tonello
2023-02-06T20:16:01Z
http://arxiv.org/abs/2302.03108v2
# Reduction for asynchronous Boolean networks: elimination of negatively autoregulated components ###### Abstract To simplify the analysis of Boolean networks, a reduction in the number of components is often considered. A popular reduction method consists in eliminating components that are not autoregulated, using variable substitution. In this work, we show how this method can be extended, for asynchronous dynamics of Boolean networks, to the elimination of vertices that have a negative autoregulation, and study the effects on the dynamics and interaction structure. For elimination of non-autoregulated variables, the preservation of attractors is in general guaranteed only for fixed points. Here we give sufficient conditions for the preservation of complex attractors. The removal of so-called mediator nodes (i.e. vertices with indegree and outdegree one) is often considered, and frequently does not affect the attractor landscape. We clarify that this is not always the case, and in some situations even subtle changes in the interaction structure can lead to a different asymptotic behaviour. Finally, we use properties of the more general elimination method introduced here to give an alternative proof for a bound on the number of attractors of asynchronous Boolean networks in terms of the cardinality of positive feedback vertex sets of the interaction graph. ## 1 Introduction With increasingly powerful technologies in molecular genetics it is possible to obtain large amounts of data, which lead to increasingly large models of complex regulatory networks. This poses problems and limitations on the analysis of such models. While this applies especially to quantitative models (e.g., differential or stochastic models [7, 16, 23]), qualitative models are also increasingly affected. Among the latter, logical models are widely used [1, 22, 2, 9]. Despite their simplicity, the combinatorial explosion with the increasing number of components makes the rigorous analysis of many models unattainable. An approach to deal with this problem consists in reducing the size of the original network. There are mainly two strategies in use. The first one relies on trap spaces, i.e. invariant subspaces of the state space [8]. The second approach, on which we will focus here, relies on the assumption that some of the updates, i.e. changes in the components, happen faster than others. This idea has been developed for the Boolean as well as for the more general multi-valued case [10, 11, 24]. The method makes it possible, in the Boolean formalism, to substitute a variable with the expression defining its update rule. This approach is only possible if the variable is not autoregulated. In terms of asynchronous state transition graphs, the absence of autoregulation guarantees that, for each pair of neighbour states that differ in the variable being eliminated, exactly one of the two states is the source of a transition in the direction of the variable being eliminated. The other state is therefore the target of this transition, and can be selected as the "representative" state; all transitions from representative states are preserved by the elimination. In this setting, we observe that there is a natural way of extending the elimination to variables that are negatively autoregulated. In the presence of a possible negative autoregulation, a pair of neighbour states that differ in the variable being eliminated can be connected by transitions in both directions.
In this case it is not necessary to choose a representative, and since the two states are part of the same strongly connected component, transitions from any of the two states can be preserved in the reduction. The elimination method introduced in this work implements this idea. We show that this extended method affects the interaction graph in a similar way to the original reduction method, with some differences that can concern the introduction of loops. While the preservation of fixed points needs to be refined to account for attractors consisting of two states that can collapse to one, we prove that the total number of attractors cannot decrease with the reduction, as for the original method. Using these properties, we give an alternative proof for a result, due to Richard [17], that establishes a bound on the number of attractors of asynchronous Boolean networks in terms of the cardinality of positive feedback vertex sets of the interaction graphs. The reduced networks of the method introduced in [10, 11, 24] can be computed quite easily, making the approach applicable to very large networks. While fixed points are always preserved by the elimination of variables that are not autoregulating, in some cases this reduction approach can change the dynamics of the networks significantly. Therefore, some effort has been invested in finding conditions on the structure of the network for which it can be guaranteed that a reduction not only preserves fixed points, but all attractors. In [19, 20], the authors suggested the merging of vertices which have in- and outdegree one, so-called simple mediator nodes [20] (also called linear variables in [12]). Here we take a detailed look at these assumptions and show that there are unfortunately still certain cases where attractors are not preserved, despite the claim in [20]. This result does not impact the usefulness of the method suggested in [19, 20] since such counterexamples can be quite artificial in nature. In Section 2 we set the required notation and give a brief summary of some properties of the reduction method described in [10, 11, 24]. We then introduce a generalisation of this reduction method that can be applied to variables with negative autoregulation, and use it to derive a simple proof for a bound on the number of attractors of Boolean networks (Section 3). In Section 4 we discuss the preservation of cyclic attractors under elimination of intermediate components, with or without negative autoregulation. ## 2 Background and notation A Boolean network is a map \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\), where \(\mathbb{B}=\{0,1\}\). We call \(V=\{1,\ldots,n\}\) the set of components of the Boolean network, and \(\mathbb{B}^{n}\) the set of states. Given \(x\in\mathbb{B}^{n}\) and \(I\subseteq V\), we denote by \(\bar{x}^{I}\) the element of \(\mathbb{B}^{n}\) such that \(\bar{x}^{I}_{i}=1-x_{i}\) for \(i\in I\) and \(\bar{x}^{I}_{i}=x_{i}\) for \(i\notin I\). We write \(\bar{x}\) for \(\bar{x}^{V}\), and, given \(i\in V\), we write \(x^{i}\) for \(x^{\{i\}}\). In addition, given \(a\in\mathbb{B}\), \(x^{i=a}\) denotes the element of \(\mathbb{B}^{n}\) obtained from \(x\) by setting the \(i^{th}\) component to \(a\). This work deals with elimination of variables. From a Boolean network \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) with \(n\) components we will define a Boolean network \(\tilde{f}\colon\mathbb{B}^{n-1}\to\mathbb{B}^{n-1}\) with \(n-1\) components. 
To simplify the notation, after removing variable \(v\) we will use the indices \(\tilde{V}=\{1,\ldots,v-1,v+1,\ldots,n\}\) to identify components of the Boolean network \(\tilde{f}\) and of states in \(\mathbb{B}^{n-1}\). We will write \(\pi\colon\mathbb{B}^{n}\to\mathbb{B}^{n-1}\) for the projection onto the components \(\tilde{V}\). The _asynchronous dynamics_ or _asynchronous state transition graph_ \(AD(f)\) associated to a Boolean network \(f\) with set of components \(V\) is a directed graph with set of vertices or _states_ \(\mathbb{B}^{n}\), and set of edges or _transitions_ defined by \(\{(x,\bar{x}^{i})\mid i\in V,f_{i}(x)\neq x_{i}\}\). The asynchronous dynamics is frequently considered when modelling gene regulatory networks [22, 9, 1]. Given \(x\in\mathbb{B}^{n}\), the _(local) interaction graph_ \(G(f)(x)\) of \(f\) at \(x\) is the signed directed graph with set of vertices \(V\) and admitting an edge from \(j\) to \(i\) of sign \(s\in\{-1,1\}\) if and only if \(s=\big{(}f_{i}(\bar{x}^{j})-f_{i}(x)\big{)}(\bar{x}^{j}_{j}-x_{j})\). The _(global) interaction graph_ \(G(f)\) of \(f\) is the union of the local interaction graphs, i.e. the signed multidirected graph with set of vertices \(V\) and set of edges given by the union of the edges in \(G(f)(x)\) for all \(x\in\mathbb{B}^{n}\). If \(G\) has an edge from \(j\) to \(i\), then \(j\) is said to be a _regulator_ of \(i\). A loop in \(G\), that is, an edge of the form \((i,i)\), is also called an _autoregulation_ of the variable \(i\). The interaction graph is used to summarize the relationships between variables. Its features can often be related to properties of state transition graphs (see e.g. [14, 4, 18]). Edges in state transition graphs and interaction graphs will be denoted with arrows (e.g., \(x\to y\) for the edge \((x,y)\)). A path in a directed graph \(G\) is defined by a sequence of edges \(x^{1}\to x^{2}\to\cdots\to x^{k-1}\to x^{k}\). We call the number of edges defining the path the _length_ of the path, and the vertices in the path the _support_ of the path. If the edges are signed, we define the sign of the path as the product of the signs of its edges. If all vertices in the path are distinct, with the possible exception of the first and the last vertices, we say that the path is _elementary_. If the first and the last vertices in an elementary path coincide, we call the path a _cycle_. If a path is a cycle of length one, it will also be called a _loop_. A _trap set_ is a subset \(T\) of \(\mathbb{B}^{n}\) such that, for any \(x\in T\) and any transition \(x\to y\) in \(AD(f)\), \(y\) is in \(T\). The minimal trap sets are called the _attractors_ of \(AD(f)\). Attractors are called _fixed points_ if they contain only one state, and _cyclic attractors_ otherwise.

### Elimination of non-autoregulated components

A reduction method has been introduced for Boolean and more general discrete networks [10, 11, 24], which allows the elimination of components that do not admit loops in the interaction graph. The method has been extensively applied [3, 19, 21, 6, 13, 15, 25, 5]. In the first part of this work we investigate the elimination of components that admit negative autoregulation, in the Boolean case. The approach provides an extension of the original method, and opens new avenues for application. Before introducing our extension, in this section we summarize some properties of the method introduced in [10, 11] and in the Boolean case in [24], to ease the introduction of the new approach. 
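The basic objects introduced above translate directly into a short computational sketch, which we include purely for illustration: the code below is not part of the original development, it assumes the availability of the `networkx` package, and all function names are our own. It enumerates the asynchronous state transition graph of a Boolean network given as a function on tuples in \(\{0,1\}^{n}\), and computes the attractors as the terminal strongly connected components of this graph, which for finite graphs are exactly the minimal trap sets.

```python
from itertools import product

import networkx as nx


def async_stg(f, n):
    """Asynchronous state transition graph AD(f) of a Boolean network on B^n.

    `f` maps a state (a tuple of n values in {0, 1}) to its image, also a
    tuple of length n.  The graph has an edge x -> x^i whenever f_i(x) != x_i.
    """
    g = nx.DiGraph()
    states = list(product((0, 1), repeat=n))
    g.add_nodes_from(states)
    for x in states:
        fx = f(x)
        for i in range(n):
            if fx[i] != x[i]:
                y = x[:i] + (1 - x[i],) + x[i + 1:]
                g.add_edge(x, y)
    return g


def attractors(f, n):
    """Attractors of AD(f): the minimal trap sets, i.e. the terminal strongly
    connected components of the asynchronous state transition graph."""
    cond = nx.condensation(async_stg(f, n))
    return [frozenset(cond.nodes[c]["members"])
            for c in cond.nodes if cond.out_degree(c) == 0]


# Small sanity check: f(x1, x2) = (x2, x1) has the two fixed points 00 and 11.
print(attractors(lambda x: (x[1], x[0]), 2))
```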
Consider a Boolean network \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) and a vertex \(v\in V\) such that there is no loop at \(v\) in \(G(f)\), that is, \(f_{v}(x)=f_{v}(\bar{x}^{v})\) for all \(x\in\mathbb{B}^{n}\). Define the map

\[\mathcal{R}\colon\mathbb{B}^{n}\to\mathbb{B}^{n},\qquad x\mapsto(x_{1},\ldots,x_{v-1},f_{v}(x),x_{v+1},\ldots,x_{n}).\]

We say that \(\mathcal{R}\) maps each state to its _representative state_ in \(\mathbb{B}^{n}\). The absence of loops at \(v\) in \(G(f)\) implies that \(\mathcal{R}(x)=\mathcal{R}(\bar{x}^{v})\) for each \(x\in\mathbb{B}^{n}\), and consequently there are exactly \(2^{n-1}\) representative states. For simplicity, denote by \(\pi\colon\mathbb{B}^{n}\to\mathbb{B}^{n-1}\) the projection onto the variables \(V\setminus\{v\}\). Since \(\mathcal{R}(x)=\mathcal{R}(y)\) for all \(x,y\in\mathbb{B}^{n}\) for which \(\pi(x)=\pi(y)\) holds, there is a unique map \(\mathcal{S}\colon\mathbb{B}^{n-1}\to\mathbb{B}^{n}\) that satisfies \(\mathcal{S}\circ\pi=\mathcal{R}\) (see Fig. 1 left). We can then define the reduced Boolean network \(\tilde{f}\colon\mathbb{B}^{n-1}\to\mathbb{B}^{n-1}\) as follows (see Fig. 1 right):

\[\tilde{f}=\pi\circ f\circ\mathcal{S}. \tag{1}\]

Figure 1: Commutative diagrams that illustrate the definition of the reduction method described in [10, 11, 24].

The effect of the elimination on the asynchronous dynamics is represented in Fig. 3 (left). For convenience, as mentioned in the background, we use the set \(V\setminus\{v\}\) to index the components of \(\tilde{f}\) and of states in \(\mathbb{B}^{n-1}\). We give a small example for illustration.

**Example 2.1**.: Consider the Boolean network \(f\) defined as

\[f\colon\mathbb{B}^{3}\to\mathbb{B}^{3},\qquad x\mapsto((\bar{x}_{2}\wedge x_{3})\vee(x_{2}\wedge\bar{x}_{3}),(x_{1}\wedge x_{3})\vee(\bar{x}_{1}\wedge\bar{x}_{3}),(\bar{x}_{1}\wedge\bar{x}_{2})\vee(x_{2}\wedge x_{3})).\]

Its state transition graph is depicted in Fig. 2 left. We remove variable \(x_{2}\). Thus, in the above terminology \(\pi\) is the projection onto the first and third component, \(\mathcal{S}\) is given by \(\mathcal{S}\colon\mathbb{B}^{2}\to\mathbb{B}^{3},(x_{1},x_{3})\mapsto(x_{1},(x_{1}\wedge x_{3})\vee(\bar{x}_{1}\wedge\bar{x}_{3}),x_{3})\) and \(\mathcal{R}\) maps \((x_{1},x_{2},x_{3})\) to \((x_{1},(x_{1}\wedge x_{3})\vee(\bar{x}_{1}\wedge\bar{x}_{3}),x_{3})\). The representative states \(001\), \(010\), \(100\) and \(111\) are represented in boxes in Fig. 2. To remove \(x_{2}\) we substitute \(x_{2}\) with \((x_{1}\wedge x_{3})\vee(\bar{x}_{1}\wedge\bar{x}_{3})\) in \(f_{1}\) and \(f_{3}\). We obtain:

\[\tilde{f}\colon\mathbb{B}^{2}\to\mathbb{B}^{2},\qquad x\mapsto(\bar{x}_{1},x_{3}).\]

The state transition graph of the reduced network is represented in Fig. 2 right.

Figure 2: Illustration of the reduction method described in [10]. Representative states are shown in boxes. When the second variable is eliminated, transitions starting from representative states are preserved. The asynchronous dynamics on \(\mathbb{B}^{3}\) on the left reduces to the asynchronous dynamics on \(\mathbb{B}^{2}\) on the right.

In the above example we see that some edges "disappear" during the reduction. For example there is an edge from \(101\) to \(100\) in the state transition graph of the original Boolean network while there is no edge from \(\pi(101)=11\) to \(\pi(100)=10\) in the reduced one. On the other hand, the outgoing edges from the representative states can be found also in the reduced network. The following results can be found, with slightly different statements, in [10, 11]. We will prove generalizations of these results in the next section.

**Proposition 2.2**.: _Consider a Boolean network \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) such that there is no loop at \(v\) in \(G(f)\)._ 1. 
_For each_ \(x\in\mathbb{B}^{n}\)_, if_ \(x\neq\mathcal{R}(x)\) _there is a transition in_ \(AD(f)\) _from_ \(x\) _to_ \(\mathcal{R}(x)\)_._ 2. _For all_ \(x,y\in\mathbb{B}^{n}\)_, if_ \(\mathcal{R}(x)\to y\) _is a transition in_ \(AD(f)\)_, then_ \(\pi(x)=\pi(\mathcal{R}(x))\to\pi(y)\) _is a transition in_ \(AD(\tilde{f})\)_._ 3. _For each_ \(x\in\mathbb{B}^{n}\) _and_ \(i\in V\setminus\{v\}\) _such that there is no edge_ \(v\to i\) _in_ \(G(f)\)_, if_ \(x\to\bar{x}^{i}\) _is a transition in_ \(AD(f)\)_, then_ \(\mathcal{R}(x)\to\overline{\mathcal{R}(x)}^{i}\) _is a transition in_ \(AD(f)\) _and_ \(\pi(x)\to\pi(\bar{x}^{i})\) _is a transition in_ \(AD(\tilde{f})\)_._

Even though, in general, the number of attractors can change during the reduction, the number of fixed points remains the same.

**Theorem 2.3**.: _Consider a Boolean network \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) such that there is no loop at \(v\) in \(G(f)\)._ 1. _If_ \(x\in\mathbb{B}^{n}\) _is a fixed point for_ \(f\)_, then_ \(x=\mathcal{R}(x)\)_,_ \(\pi(x)\) _is a fixed point for_ \(\tilde{f}\) _and no other fixed point for_ \(f\) _is projected on_ \(\pi(x)\)_._ 2. _If_ \(x\in\mathbb{B}^{n-1}\) _is a fixed point for_ \(\tilde{f}\)_, then_ \(\mathcal{S}(x)\) _is a fixed point for_ \(f\)_._ 3. _If_ \(T\subseteq\mathbb{B}^{n}\) _is a trap set for_ \(f\)_, then_ \(\pi(T)\) _is a trap set for_ \(\tilde{f}\)_._ 4. _If_ \(\tilde{A}\subseteq\mathbb{B}^{n-1}\) _is a cyclic attractor for_ \(\tilde{f}\)_, there exists at most one attractor for_ \(AD(f)\) _intersecting_ \(\pi^{-1}(\tilde{A})\)_._

## 3 Generalisation to vertices with optional negative autoregulation

The goal of this section is to generalize the elimination method from the last section to variables with negative autoregulation. The method summarised in Section 2.1 applies to the elimination of variables which are not autoregulated. In this case the eliminated variable is replaced by its update function. To generalize this idea, we substitute a variable with a more complicated expression derived from its update function. If there is no autoregulation in a state, this expression coincides with the update function of the variable. If there is a negative autoregulation, the variable we want to remove oscillates at some state. In this case, the expression is constructed in such a way that all transitions from the given state and its neighbouring state in the direction of the eliminated variable are preserved in the reduced network.

Fix a Boolean network \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) and a vertex \(v\in V\) such that there is no positive loop at \(v\) in \(G(f)\). Since \(v\) is potentially autoregulated, the definition of representative state of the previous section cannot be applied. Since the value of component \(v\) might oscillate, we have to introduce two new functions, the maps

\[\mathcal{R}^{i}\colon\mathbb{B}^{n}\to\mathbb{B}^{n},\qquad x\mapsto(x_{1},\ldots,x_{v-1},f_{v}(x^{v=i}),x_{v+1},\ldots,x_{n}),\]

for \(i\in\{0,1\}\). For \(x\in\mathbb{B}^{n}\), \(\mathcal{R}^{0}(x)\) and \(\mathcal{R}^{1}(x)\) differ if \(f_{v}(x^{v=0})\neq f_{v}(x^{v=1})\), that is, if component \(v\) is autoregulated at \(x\). 
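Both the substitution-based reduction of Eq. (1) and the maps \(\mathcal{R}^{0},\mathcal{R}^{1}\) just defined are straightforward to prototype. The sketch below is again purely illustrative (the helper names are our own); it implements the elimination of a non-autoregulated component as \(\pi\circ f\circ\mathcal{S}\) and checks the outcome of Example 2.1.

```python
def eliminate(f, v):
    """Substitution-based reduction of Eq. (1) for a component v (0-indexed)
    that is not autoregulated: the reduced network is pi o f o S."""
    def S(y):
        # lift y to B^n and set component v to f_v of the lifted state; since
        # v is not autoregulated, the placeholder used for x_v is irrelevant
        x = y[:v] + (0,) + y[v:]
        return y[:v] + (f(x)[v],) + y[v:]

    def f_reduced(y):
        fx = f(S(y))
        return fx[:v] + fx[v + 1:]   # projection pi onto V \ {v}

    return f_reduced


def R(f, x, v, i):
    """The map R^i of Section 3: replace component v of x by f_v(x^{v=i})."""
    xi = x[:v] + (i,) + x[v + 1:]
    return x[:v] + (f(xi)[v],) + x[v + 1:]


# Example 2.1: eliminating the second component (index 1) gives
# f~(x1, x3) = (not x1, x3).
def f_ex(x):
    x1, x2, x3 = x
    return ((1 - x2) & x3 | x2 & (1 - x3),
            x1 & x3 | (1 - x1) & (1 - x3),
            (1 - x1) & (1 - x2) | x2 & x3)


f_red = eliminate(f_ex, 1)
assert all(f_red((x1, x3)) == (1 - x1, x3) for x1 in (0, 1) for x3 in (0, 1))
```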
**Remark 3.1**.: Observe that, if \(\mathcal{R}^{0}(x)\neq\mathcal{R}^{1}(x)\), since \(v\) is not positively autoregulated we have \(f_{v}(x^{v=0})=1\) and \(f_{v}(x^{v=1})=0\), and therefore \(\mathcal{R}^{0}(x)=x^{v=1}\), \(\mathcal{R}^{1}(x)=x^{v=0}\). Clearly we have \(\pi(x)=\pi(\mathcal{R}^{0}(x))=\pi(\mathcal{R}^{1}(x))\), and, similarly to the case of the previous section, there are two unique maps \(\mathcal{S}^{0},\mathcal{S}^{1}\colon\mathbb{B}^{n-1}\to\mathbb{B}^{n}\) that satisfy \(\mathcal{R}^{0}=\mathcal{S}^{0}\circ\pi\), \(\mathcal{R}^{1}=\mathcal{S}^{1}\circ\pi\). We can now introduce the reduced Boolean network \(\tilde{f}\colon\mathbb{B}^{n-1}\to\mathbb{B}^{n-1}\) defined by \[\tilde{f}_{i}(x)=\begin{cases}f_{i}(\mathcal{S}^{0}(x))\wedge f_{i}(\mathcal{ S}^{1}(x))&\text{ if }x_{i}=1,\\ f_{i}(\mathcal{S}^{0}(x))\lor f_{i}(\mathcal{S}^{1}(x))&\text{ if }x_{i}=0.\end{cases} \tag{2}\] Observe that if \(v\) is not autoregulated the equalities \(\mathcal{R}^{0}=\mathcal{R}^{1}\), \(\mathcal{S}^{0}=\mathcal{S}^{1}\) hold and therefore in this case the definition of \(\tilde{f}\) coincides with the definition of \(\tilde{f}\) in Eq. (1). In other words, the above reduction method is a generalization of the reduction method reviewed in the last section. If \(v\) is instead autoregulated at \(x\) and \(\mathcal{R}^{0}(x)\neq\mathcal{R}^{1}(x)\), the intuition is that all the outgoing edges of both \(\mathcal{R}^{0}(x)\) and \(\mathcal{R}^{1}(x)\) along the components \(V\setminus\{v\}\) are preserved in the reduced network. Fig. 3 (right) gives an illustration of this idea. Figure 3: Illustration of the effect of elimination of one variable (\(v\)) on asynchronous state transition graphs in case of no loops in \(G(f)(x)\) at \(v\) (left) and a negative loop in \(G(f)(x)\) at \(v\) (right). In the first case, only transitions that start at the representative state \(\mathcal{R}(x)\) are preserved. In the second case, transitions out of both \(\mathcal{R}^{0}(x)\) and \(\mathcal{R}^{1}(x)\) are preserved. **Lemma 3.2**.: _Consider a Boolean network \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) such that there is no positive loop at \(v\) in \(G(f)\). Then:_ 1. _For each_ \(x\in\mathbb{B}^{n}\) _such that_ \(x\neq\mathcal{R}^{0}(x)\) _or_ \(x\neq\mathcal{R}^{1}(x)\) _there is a transition in_ \(AD(f)\) _from_ \(x\) _to_ \(\mathcal{R}^{0}(x)\) _or to_ \(\mathcal{R}^{1}(x)\) _and_ \(\{\mathcal{R}^{0}(x),\mathcal{R}^{1}(x)\}\) _is strongly connected._ 2. _For all_ \(x\in\mathbb{B}^{n}\) _and_ \(j=0,1\)_, if_ \(\mathcal{R}^{j}(x)\to y\) _is a transition in_ \(AD(f)\) _in direction_ \(i\in V\setminus\{v\}\)_, then_ \(\pi(x)=\pi(\mathcal{R}^{j}(x))\to\pi(\overline{x}^{i})\) _is a transition in_ \(AD(\tilde{f})\)_._ 3. _For each_ \(x\in\mathbb{B}^{n}\) _and_ \(i\in V\setminus\{v\}\) _such that there is no edge from_ \(v\) _to_ \(i\) _in_ \(G(f)\)_, if_ \(x\to\bar{x}^{i}\) _is a transition in_ \(AD(f)\)_, then_ \(\mathcal{R}^{j}(x)\to\overline{\mathcal{R}^{j}(x)}^{i}\) _is a transition in_ \(AD(f)\) _for_ \(j=0,1\) _and_ \(\pi(x)\to\pi(\bar{x}^{i})\) _is a transition in_ \(AD(\tilde{f})\)_._ 4. _If_ \(x\to\bar{x}^{i}\) _is a transition in_ \(AD(\tilde{f})\)_, then there exists_ \(j\in\{0,1\}\) _such that_ \(\mathcal{S}^{j}(x)\to\overline{\mathcal{S}^{j}(x)}^{i}\) _is a transition in_ \(AD(f)\)_. In addition, from each_ \(y\in\pi^{-1}(x)\) _there exists a path to_ \(\overline{\mathcal{S}^{j}(x)}^{i}\)_._ Proof.: 1. 
If \(\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)\), then \(\bar{x}^{v}=\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)\) and there is a transition in \(AD(f)\) from \(x\) to \(\mathcal{R}^{0}(x)\) and \(\mathcal{R}^{1}(x)\) as a consequence of the definition of asynchronous state transition graph. If \(\mathcal{R}^{0}(x)\neq\mathcal{R}^{1}(x)\), then from Remark 3.1 we have \(f_{v}(x^{v=0})=1\) and \(f_{v}(x^{v=1})=0\), and there is a transition in \(AD(f)\) from \(x^{v=0}\) to \(\overline{x^{v=0}}^{v}=\mathcal{R}^{0}(x)\) and from \(x^{v=1}\) to \(\overline{x^{v=1}}^{1}=\mathcal{R}^{1}(x)\). 2. If \(\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)\), then \(\tilde{f}_{i}(\pi(x))=f_{i}(\mathcal{R}^{0}(x))=f_{i}(\mathcal{R}^{1}(x))\neq \mathcal{R}^{0}(x)_{i}=\mathcal{R}^{1}(x)_{i}=x_{i}\). If \(\mathcal{R}^{0}(x)\neq\mathcal{R}^{1}(x)\), then from Remark 3.1 we have \(\mathcal{R}^{0}(x)=x^{v=1}\), \(\mathcal{R}^{1}(x)=x^{v=0}\), and either \(f_{i}(\mathcal{R}^{0}(x))\neq x_{i}\) or \(f_{i}(\mathcal{R}^{1}(x))\neq x_{i}\). If \(x_{i}=1\), then \(\tilde{f}_{i}(\pi(x))=f_{i}(\mathcal{R}^{0}(x))\wedge f_{i}(\mathcal{R}^{1}(x ))=0\). If \(x_{i}=0\), then \(\tilde{f}_{i}(\pi(x))=f_{i}(\mathcal{R}^{0}(x))\lor f_{i}(\mathcal{R}^{1}(x ))=1\), as required. 3. Since \(i\) does not depend on \(v\), we have \(f_{i}(\mathcal{R}^{0}(x))=f_{i}(\mathcal{R}^{1}(x))=f_{i}(x)\neq x_{i}=\mathcal{ R}^{0}(x)_{i}=\mathcal{R}^{1}(x)_{i}\), which gives the first part. For the second, it is sufficient to observe that \(\tilde{f}_{i}(\pi(x))=f_{i}(\mathcal{R}^{0}(x))=f_{i}(\mathcal{R}^{1}(x))\neq \pi(x)_{i}\). 4. Since \(i\neq v\) we have \(x_{i}=\mathcal{S}^{0}(x)_{i}=\mathcal{S}^{1}(x)_{i}\). Since there is a transition \(x\to\bar{x}^{i}\) in \(AD(\tilde{f})\), by definition of \(\tilde{f}\) either \(f_{i}(\mathcal{S}^{0}(x))\neq x_{i}\) or \(f_{i}(\mathcal{S}^{1}(x))\neq x_{i}\), that is, either \(\mathcal{S}^{0}(x)\to\overline{\mathcal{S}^{0}(x)}^{i}\) or \(\mathcal{S}^{1}(x)\to\overline{\mathcal{S}^{1}(x)}^{i}\) is a transition in \(AD(f)\). The second part follows from point (i). The following result generalizes Theorem 2.3. For Theorem 3.3 (iii) note that if \(v\) is not autoregulated the set \(\{\mathcal{S}^{0}(x),\mathcal{S}^{1}(x)\}\) has cardinality one, hence it is a fixed point and the result generalizes Theorem 2.3 (ii). **Theorem 3.3**.: _Consider a Boolean network \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) such that there is no positive loop at \(v\) in \(G(f)\). Then:_ 1. _if_ \(x\in\mathbb{B}^{n}\) _is a fixed point for_ \(f\)_, then_ \(x=\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)\)_,_ \(\pi(x)\) _is a fixed point for_ \(\tilde{f}\) _and no other fixed point for_ \(f\) _is projected on_ \(\pi(x)\)_._ 2. _if_ \(\{x,\bar{x}^{v}\}\) _is a cyclic attractor for_ \(AD(f)\)_, then_ \(\pi(x)\) _is a fixed point for_ \(\tilde{f}\)_._ 3. _if_ \(x\in\mathbb{B}^{n-1}\) _is a fixed point for_ \(\tilde{f}\)_, then the set_ \(\{\mathcal{S}^{0}(x),\mathcal{S}^{1}(x)\}\) _is an attractor of_ \(AD(f)\)_._ 4. _if_ \(T\subseteq\mathbb{B}^{n}\) _is a trap set for_ \(f\)_, then_ \(\pi(T)\) _is a trap set for_ \(\tilde{f}\)_._ 5. _if_ \(\{x,\bar{x}^{i}\}\) _is a cyclic attractor of_ \(AD(f)\) _for some_ \(x\in\mathbb{B}^{n}\) _and_ \(i\neq v\)_, then the set_ \(\{\pi(x),\pi(\bar{x}^{i})\}\) _is a cyclic attractor of_ \(AD(\tilde{f})\)_._ 6. _if_ \(\tilde{A}\subseteq\mathbb{B}^{n-1}\) _is an attractor for_ \(\tilde{f}\)_, there exists at most one attractor for_ \(AD(f)\) _intersecting_ \(\pi^{-1}(\tilde{A})\) Proof.: 1. 
Since \(x\) is fixed, by Lemma 3.2 (i) we have \(x=\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)\). \(\pi(x)\) is fixed for \(\tilde{f}\) as a consequence of Lemma 3.2 (iv), and the absence of a positive loop at \(v\) gives that \(\bar{x}^{v}\) is not fixed. 2. Consequence of Lemma 3.2 (iv). 3. The set \(\{\mathcal{S}^{0}(x),\mathcal{S}^{1}(x)\}\) consists either of one state if \(\mathcal{S}^{0}(x)=\mathcal{S}^{1}(x)\), or two strongly connected states if \(\mathcal{S}^{0}(x)\neq\mathcal{S}^{1}(x)\) (Lemma 3.2(i)). In addition, the set is a trap set by Lemma 3.2 (ii). 4. If \(x\in\pi(T)\) and \(x\to\bar{x}^{i}\) is a transition in \(AD(\tilde{f})\), by Lemma 3.2 (iv) there is a transition \(\mathcal{S}^{j}(x)\to\overline{\mathcal{S}^{j}(x)}^{i}\) for some \(j\in\{0,1\}\). Take \(y\in T\) such that \(\pi(y)=x\). If \(\mathcal{S}^{0}(x)=\mathcal{S}^{1}(x)\), then either \(y=\mathcal{S}^{0}(x)=\mathcal{S}^{1}(x)\) or, by Lemma 3.2(i), there is a transition from \(y\) to \(\mathcal{S}^{0}(x)=\mathcal{S}^{1}(x)\). If \(\mathcal{S}^{0}(x)\neq\mathcal{S}^{1}(x)\), then \(\mathcal{S}^{0}(x)\) and \(\mathcal{S}^{1}(x)\) are strongly connected by Lemma 3.2(i) and hence belong to \(T\). In both cases \(\overline{\mathcal{S}^{j}(x)}^{i}\) is in \(T\) and \(\bar{x}^{i}=\pi(\overline{\mathcal{S}^{j}(x)}^{i})\in\pi(T)\). 5. Since \(\{x,\bar{x}^{i}\}\) is a cyclic attractor of \(AD(f)\) we have \(f_{v}(x)=f_{v}(\bar{x}^{i})=x_{v}\) and therefore \(\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)=x\), \(\mathcal{R}^{0}(\bar{x}^{i})=\mathcal{R}^{1}(\bar{x}^{i})=\bar{x}^{i}\). Then \(\{\pi(x),\pi(\bar{x}^{i})\}\) is strongly connected as a consequence of Lemma 3.2 (ii). It is a trap set by the previous point. 6. From point \((i)\) of Lemma 3.2, we know that from all states in \(\mathbb{B}^{n}\) there is a transition to \(\mathcal{R}^{0}(\mathbb{B}^{n})\cup\mathcal{R}^{1}(\mathbb{B}^{n})=\mathcal{S}^ {0}(\pi(\mathbb{B}^{n}))\cup\mathcal{S}^{1}(\pi(\mathbb{B}^{n}))\). Hence it is sufficent to show that, for each pair \(x,y\in\mathcal{S}^{0}(\tilde{A})\cup\mathcal{S}^{1}(\tilde{A})\), there exists a path from \(x\) to \(y\) in \(\pi^{-1}(\tilde{A})\). Write \(x=\mathcal{S}^{j}(a)\), \(y=\mathcal{S}^{k}(b)\) for some \(j,k\in\{0,1\}\) and \(a,b\in\tilde{A}\). Since \(\tilde{A}\) is strongly connected, there exists a path from \(a\) to \(b\) in \(\tilde{A}\). Consider a transition \(c\to\bar{c}^{i}\) in this path. By Lemma 3.2(iv), there exists \(h\in\{0,1\}\) such that there exist paths from \(\mathcal{S}^{0}(c)\) and from \(\mathcal{S}^{1}(c)\) to \(\overline{\mathcal{S}^{h}(c)}^{i}\) in \(AD(f)\), with \(\pi(\overline{\mathcal{S}^{h}(c)}^{i})=\bar{c}^{i}\). By point \((i)\) of Lemma 3.2 there exists a path from \(\overline{\mathcal{S}^{h}(c)}^{i}\) to \(\mathcal{R}^{0}(\overline{\mathcal{S}^{h}(c)}^{i})=\mathcal{S}^{0}(\pi( \overline{\mathcal{S}^{h}(c)}^{i}))=\mathcal{S}^{0}(\bar{c}^{i})\) and \(\mathcal{R}^{1}(\overline{\mathcal{S}^{h}(c)}^{i})=\mathcal{S}^{1}(\pi( \overline{\mathcal{S}^{h}(c)}^{i}))=\mathcal{S}^{1}(\bar{c}^{i})\), which concludes. Denote by \(S(f)\) the number of fixed points of \(f\), by \(A(f)\) the number of cyclic attractors of \(f\), and by \(A(f,i)\) the number of cyclic attractors of \(AD(f)\) consisting of two states that differ in component \(i\). **Corollary 3.4**.: _If \(\tilde{f}\) is obtained from \(f\) by eliminating component \(v\), then_ 1. \(S(\tilde{f})=S(f)+A(f,v)\) _and hence_ \(S(f)\leq S(\tilde{f})\)_._ 2. _For all_ \(i\neq v\)_,_ \(A(f,i)\leq A(\tilde{f},i)\)_._ 3. 
\(S(f)+A(f)\leq S(\tilde{f})+A(\tilde{f})\)_._ Proof.: \((i)\) is a corollary of points \((i)\), \((ii)\) and \((iii)\) of Theorem 3.3. \((ii)\) is a consequence of point \((v)\) of Theorem 3.3, and \((iii)\) of part \((vi)\) of Theorem 3.3. The inequalities of the corollary can be strict. For point \((i)\), take \(f(x_{1},x_{2})=(1,\bar{x}_{1}\vee\bar{x}_{2})\). Then \(S(f)=0\), \(A(f)=A(f,2)=1\) and after removing the second component we have \(\tilde{f}(x_{1})=1\), \(S(\tilde{f})=1\), \(A(\tilde{f})=A(\tilde{f},1)=0\). For point \((ii)\), the map \(f(x_{1},x_{2})=(\bar{x}_{2},x_{1})\) after removing variable \(x_{2}\) gives \(\tilde{f}(x_{1})=\bar{x}_{1}\), \(A(f)=A(f,1)=A(f,2)=0\), \(A(\tilde{f})=A(\tilde{f},1)=1\). For the third point, see Example 4.2. ### Interaction graph The following result is a consequence of the properties of the reduction method described in [10]. It states that the classical reduction cannot introduce new paths in the interaction graph: if a path exists in the interaction graph of the reduced Boolean network, a path of the same sign must exist in the interaction graph of the original network. We will prove here a generalized version for the case of the removal of potentially negatively autoregulated components. **Proposition 3.5**.: _If \(G(f)\) has no loops at \(v\), and \(G(\tilde{f})\) has a path from \(j\) to \(i\) of sign \(s\), then \(G(f)\) has a path from \(j\) to \(i\) of sign \(s\)._ The goal of this section is to generalize this result to negatively autoregulated components that are removed. However, as the following example shows we need to be careful here. Indeed, if \(G(f)\) has a negative loop at \(v\), then the conclusion of Proposition 3.5 does not necessarily hold for paths that are negative loops. **Example 3.6**.: The Boolean network \(f(x_{1},x_{2})=(\bar{x}_{2},\bar{x}_{2})\) reduces to \(\tilde{f}(x_{1})=\bar{x}_{1}\) when the second variable is eliminated. The graph \(G(\tilde{f})\) has a negative loop in \(1\), whereas \(G(f)\) has no negative circuit containing \(1\). We first examine the negative loop case, then show that the result in Proposition 3.5 can be extended to the reduction method intruduced in this paper for the case of positive loops and paths of length at least two. **Proposition 3.7**.: _If \(G(\tilde{f})\) has a negative loop at \(i\), then \(G(f)\) has a negative cycle with support contained in \(\{i,v\}\)._ Proof.: Take \(x\) such that \(G(\tilde{f})\) has a negative loop at \(x\) and \(x_{i}=0\), so that \(\tilde{f}_{i}(x)=1\) and \(\tilde{f}_{i}(\bar{x}^{i})=0\). Consider \(y\in\mathbb{B}^{n}\) such that \(\pi(y)=x\), then, by definition of \(\tilde{f}\), either \(f_{i}(\mathcal{R}^{0}(y))\) or \(f_{i}(\mathcal{R}^{1}(y))\) is equal to \(1\), and either \(f_{i}(\mathcal{R}^{0}(\bar{y}^{i}))\) or \(f_{i}(\mathcal{R}^{1}(\bar{y}^{i}))\) is equal to \(0\). Consider two cases. (1) Suppose that there exists \(a\in\{0,1\}\) such that \(f_{i}(\mathcal{R}^{a}(y))=1\) and \(f_{i}(\mathcal{R}^{a}(\bar{y}^{i}))=0\). Then \[(-1)\cdot(\bar{y}^{i}_{i}-y_{i}) =\tilde{f}_{i}(\bar{x}^{i})-\tilde{f}_{i}(x)=f_{i}(\mathcal{R}^{a }(\bar{y}^{i}))-f_{i}(\mathcal{R}^{a}(y))\] \[=f_{i}(\mathcal{R}^{a}(\bar{y}^{i}))-f_{i}(\overline{\mathcal{R}^ {a}(y)}^{i})+f_{i}(\overline{\mathcal{R}^{a}(y)}^{i})-f_{i}(\mathcal{R}^{a}(y)).\] If \(f_{i}(\overline{\mathcal{R}^{a}(y)}^{i})=0\), then there is a negative loop with support \(\{i\}\) at \(\mathcal{R}^{a}(y)\). 
Otherwise, we have \(\mathcal{R}^{a}(\bar{y}^{i})\neq\overline{\mathcal{R}^{a}(y)}^{i}\) and \(f_{v}(\overline{y^{vva}}^{i})\neq f_{v}(y^{v=a})\). Hence \[-1=\frac{f_{i}(\mathcal{R}^{a}(\bar{y}^{i}))-f_{i}(\mathcal{R}^{a}(y))}{\bar{ y}^{i}_{i}-y_{i}}=\frac{f_{i}(\mathcal{R}^{a}(\bar{y}^{i}))-f_{i}(\overline{ \mathcal{R}^{a}(y)}^{i})}{f_{v}(\overline{y^{v=a}}^{i})-f_{v}(y^{v=a})}\frac{f _{v}(\overline{y^{v=a}}^{i})-f_{v}(y^{v=a})}{\bar{y}^{i}_{i}-y_{i}},\] that is, there is a negative cycle in \(G(f)\) with support \(\{i,v\}\). (2) Suppose now that, for \(a=0\) and \(a=1\), if \(f_{i}(\mathcal{R}^{a}(y))=1\) then \(f_{i}(\mathcal{R}^{a}(\bar{y}^{i}))=1\), and if \(f_{i}(\mathcal{R}^{a}(\bar{y}^{i}))=0\) then \(f_{i}(\mathcal{R}^{a}(y))=0\). Then we must have \(f_{i}(\mathcal{R}^{0}(y))\neq f_{i}(\mathcal{R}^{1}(y))\) and \(f_{i}(\mathcal{R}^{0}(\bar{y}^{i}))\neq f_{i}(\mathcal{R}^{1}(\bar{y}^{i}))\). In particular, \(\mathcal{R}^{0}(y)\neq\mathcal{R}^{1}(y)\) and by Remark 3.1 there is a negative loop at \(v\) in \(G(f)\). **Proposition 3.8**.: _If \(G(\tilde{f})\) has a positive loop at \(i\), then \(G(f)\) has either a positive loop at \(i\) or a positive cycle with support \(\{i,v\}\)._ Proof.: Take \(x\) such that \(G(\tilde{f})(x)\) has a positive loop at \(i\) and w.l.o.g. \(x_{i}=0\), so that \(\tilde{f}_{i}(x)=0\) and \(\tilde{f}_{i}(\bar{x}^{i})=1\). Consider \(y\in\mathbb{B}^{n}\) such that \(\pi(y)=x\), then, by definition of \(f\), \(f_{i}(\mathcal{R}^{0}(y))=f_{i}(\mathcal{R}^{1}(y))=0\) and \(f_{i}(\mathcal{R}^{0}(\bar{y}^{i}))=f_{i}(\mathcal{R}^{1}(\bar{y}^{i}))=1\). We can write \[\bar{y}^{i}_{i}-y_{i}=f_{i}(\mathcal{R}^{0}(\bar{y}^{i}))-f_{i}(\mathcal{R}^{0 }(y))=f_{i}(\mathcal{R}^{0}(\bar{y}^{i}))-f_{i}(\overline{\mathcal{R}^{0}(y)}^ {i})+f_{i}(\overline{\mathcal{R}^{0}(y)}^{i})-f_{i}(\mathcal{R}^{0}(y)).\] If \(f_{i}(\overline{\mathcal{R}^{0}(y)}^{i})=1\), then the equality can be simplified to \(\bar{y}^{i}_{i}-y_{i}=f_{i}(\overline{\mathcal{R}^{0}(y)}^{i})-f_{i}(\mathcal{R }^{0}(y))\) and there is a positive loop at \(i\) in \(G(f)(\mathcal{R}^{0}(y))\). Otherwise, we have \(\mathcal{R}^{0}(\bar{y}^{i})\neq\overline{\mathcal{R}^{0}(y)}^{i}\) and \(f_{v}(\overline{y^{v=0}}^{i})\neq f_{v}(y^{v=0})\). Hence \[1=\frac{f_{i}(\mathcal{R}^{0}(\bar{y}^{i}))-f_{i}(\mathcal{R}^{0}(y))}{\bar{y}^{ i}_{i}-y_{i}}=\frac{f_{i}(\mathcal{R}^{0}(\bar{y}^{i}))-f_{i}(\overline{ \mathcal{R}^{0}(y)}^{i})}{f_{v}(\overline{y^{v=0}}^{i})-f_{v}(y^{v=0})}\frac{f_{v }(\overline{y^{v=0}}^{i})-f_{v}(y^{v=0})}{\bar{y}^{i}_{i}-y_{i}},\] that is, there is a positive cycle in \(G(f)\) with support \(\{i,v\}\). **Proposition 3.9**.: _If \(G(\tilde{f})\) has edge from \(j\) to \(i\) of positive (resp. negative) sign and \(i\neq j\), then \(G(f)\) has an edge \(j\to i\) or a path \(j\to v\to i\) of positive (resp. negative) sign._ Proof.: Suppose that \(\tilde{f}_{i}(\bar{x}^{j})\neq\tilde{f}_{i}(x)\). Take \(y\in\mathbb{B}^{n}\) such that \(\pi(y)=x\). From Eq. 
(2) we have \[\tilde{f}_{i}(x)=\begin{cases}f_{i}(\mathcal{R}^{0}(y))\wedge f_{i}(\mathcal{R }^{1}(y))&\text{ if }x_{i}=1,\\ f_{i}(\mathcal{R}^{0}(y))\lor f_{i}(\mathcal{R}^{1}(y))&\text{ if }x_{i}=0,\end{cases}\] and \[\tilde{f}_{i}(\bar{x}^{j})=\begin{cases}f_{i}(\mathcal{R}^{0}(\bar{y}^{j})) \wedge f_{i}(\mathcal{R}^{1}(\bar{y}^{j}))&\text{ if }\bar{x}^{j}_{i}=x_{i}=1,\\ f_{i}(\mathcal{R}^{0}(\bar{y}^{j}))\lor f_{i}(\mathcal{R}^{1}(\bar{y}^{j}))& \text{ if }\bar{x}^{j}_{i}=x_{i}=0.\end{cases}\] Since \(i\neq j\) and \(\tilde{f}_{i}(\bar{x}^{j})\neq\tilde{f}_{i}(x)\), we must have either \(f_{i}(\mathcal{R}^{0}(y))=f_{i}(\mathcal{R}^{1}(y))\) or \(f_{i}(\mathcal{R}^{0}(\bar{y}^{j}))=f_{i}(\mathcal{R}^{1}(\bar{y}^{j}))\). Consider the case where \(f_{i}(\mathcal{R}^{0}(\bar{y}^{j}))=f_{i}(\mathcal{R}^{1}(\bar{y}^{j}))\), the case \(f_{i}(\mathcal{R}^{0}(y))=f_{i}(\mathcal{R}^{1}(y))\) being symmetrical. Then \(\tilde{f}_{i}(\bar{x}^{j})=f_{i}(\mathcal{R}^{0}(\bar{y}^{j}))=f_{i}(\mathcal{ R}^{1}(\bar{y}^{j}))\). Take \(a\in\{0,1\}\) such that \(\tilde{f}_{i}(x)=f_{i}(\mathcal{R}^{a}(y))\). Then we can write \[0\neq s\cdot(\bar{y}^{j}_{j}-y_{j}) =f_{i}(\mathcal{R}^{a}(\bar{y}^{j}))-f_{i}(\mathcal{R}^{a}(y))\] \[=f_{i}(\mathcal{R}^{a}(\bar{y}^{j}))-f_{i}(\overline{\mathcal{R} ^{a}(y)}^{j})+f_{i}(\overline{\mathcal{R}^{a}(y)}^{j})-f_{i}(\mathcal{R}^{a}( y)),\] where \(s\) is the sign of the edge \(j\to i\) in \(G(\tilde{f})\). If \(f_{i}(\overline{\mathcal{R}^{a}(y)}^{j})-f_{i}(\mathcal{R}^{a}(y))=s\cdot( \bar{y}^{j}_{j}-y_{j})\), then there is an edge \(j\to i\) in \(G(f)\) with the required sign. Suppose that \(f_{i}(\overline{\mathcal{R}^{a}(y)}^{j})-f_{i}(\mathcal{R}^{a}(y))=0\), then \(f_{i}(\mathcal{R}^{a}(\bar{y}^{j}))-f_{i}(\overline{\mathcal{R}^{a}(y)}^{j} )=s\cdot(\bar{y}^{j}_{j}-y_{j})\) and \(f_{v}(\overline{y^{v=a}}^{j})\neq f_{v}(y^{v=a})\). Therefore \[\frac{f_{i}(\mathcal{R}^{a}(\bar{y}^{j}))-f_{i}(\overline{\mathcal{R}^{a}(y) }^{j})}{f_{v}(\overline{y^{v=a}}^{j})-f_{v}(y^{v=a})}\cdot\frac{f_{v}( \overline{y^{v=a}}^{j})-f_{v}(y^{v=a})}{\overline{y^{v=a}}^{j}_{j}-y^{v=a}}= \frac{f_{i}(\mathcal{R}^{a}(\bar{y}^{j}))-f_{i}(\mathcal{R}^{a}(y))}{ \overline{y^{j}_{j}-y_{j}}}=s,\] and there is a path \(j\to v\to i\) in \(G(f)\) with the required sign. The following is a corollary of the previous proposition. **Proposition 3.10**.: _If \(G(\tilde{f})\) has an elementary path from \(j\) to \(i\) of sign \(s\) that is not a loop, then \(G(f)\) has an elementary path from \(j\) to \(i\) of sign \(s\)._ **Example 3.11**.: Not all edges in \(G(f)\) are preserved by the reduction. For instance, the map \(f(x_{1},x_{2})=(x_{1}\wedge\bar{x}_{2},x_{1}\wedge\bar{x}_{2})\) reduces, after elimination of the second variable, to the constant function \(\tilde{f}(x_{1})=0\). ### Application: positive feedback vertex sets and bound on the number of attractors Using the properties of the variable elimination method introduced in this paper, we give an alternative proof for a bound on the number of attractors of asynchronous Boolean networks in terms of positive feedback vertex sets of the interaction graph. The result can be found in [17]. Recall that a _positive feedback vertex set_ of a signed directed graph \(G\) is a set of vertices that intersects every positive cycle of \(G\). The idea is to show that the size of the minimum positive feedback vertex set does not increase with the reduction. 
After some reduction steps a network with \(|I|\) components is obtained, giving the upper bound \(2^{|I|}\) on the number of attractors and fixed points. For the proof we use the following lemma.

**Lemma 3.12**.: _Suppose that \(I\) is a positive feedback vertex set for \(f\) that does not contain \(v\). Then \(G(f)\) has no positive loop at \(v\), and \(I\) is a positive feedback vertex set for the network \(\tilde{f}\) obtained by eliminating \(v\)._

Proof.: Take a positive cycle in \(G(\tilde{f})\) with support \(C\). By Propositions 3.8 and 3.10 there exists a positive cycle in \(G(f)\) with support in \(C\cup\{v\}\). Since \(I\) is a positive feedback vertex set and \(v\) is not in \(I\), we have \(I\cap C\neq\emptyset\), as required.

**Theorem 3.13**.: _([17]) Consider \(f\colon\mathbb{B}^{n}\to\mathbb{B}^{n}\) and suppose that \(I\) is a positive feedback vertex set of \(G(f)\). Then \(AD(f)\) has at most \(2^{|I|}\) attractors._

Proof.: We can apply the reduction method described in Eq. (2), eliminating vertices that do not belong to a positive feedback vertex set of minimum size, until a network \(\tilde{f}\colon\mathbb{B}^{m}\to\mathbb{B}^{m}\) is obtained such that all positive feedback vertex sets have size \(m\). Since only variables that do not belong to minimum positive feedback vertex sets are eliminated, by Lemma 3.12 the size of minimum positive feedback vertex sets cannot increase with the reduction. The conclusion follows from the fact that the number of attractors of \(\tilde{f}\) is greater than or equal to the number of attractors of \(f\) (Corollary 3.4 (iii)).

## 4 Preservation of cyclic attractors

We now turn our attention to a different problem. A critical question when using network reduction concerns the preservation or loss of information. The identification of properties that are preserved can help clarify the accuracy of information that can be obtained from the analysis of the reduced network in lieu of the full network. When studying a network model one should also consider that, even if no network reduction is explicitly applied, implicit reduction steps might have been introduced in the construction of the model, for instance when certain components are merged into one, or signaling pathways are simplified. In the analysis of Boolean networks special importance is given to the attractors. A natural question is therefore: under which conditions are attractors _preserved_ by the network reduction?

### Definition and examples

To express and illustrate structural conditions on the interaction graphs, in this section we will adhere to the following conventions. We will write \(i\xrightarrow{s}j\) for an edge with sign \(s\) from \(i\) to \(j\), whereas \(i\to j\) will denote the existence of an edge of any sign. In addition, to represent classes of interaction graphs in compact form, we will summarize subgraphs using subsets of vertices. For instance, given \(X,Y\subseteq V\), \(X\to Y\) will denote an interaction graph consisting of arbitrary signed directed graphs with vertices in \(X\) and \(Y\) respectively, and at least one edge from some variable in \(X\) to some variable in \(Y\). We will also denote the possibility of existence of an edge from a vertex to another using dashed arrows. Thus, for instance, \(X\dashrightarrow Y\) will denote the possible existence of an edge from some variable in \(X\) to some variable in \(Y\). Before we answer the question posed in the introduction of this section, we need to clarify the meaning of the term "preservation". 
In agreement with Definition 2.3 in [20], we consider the following definition.

**Definition 4.1**.: We say that the attractors of \(f\) are _preserved_ by the elimination of \(v\) if the following two conditions are satisfied: 1. for each attractor \(A\) of \(AD(f)\), \(\pi(A)\) is an attractor of \(AD(\tilde{f})\), and 2. for each attractor \(\tilde{A}\) of \(AD(\tilde{f})\), there exists a unique attractor \(A\) of \(AD(f)\) such that \(\pi(A)=\tilde{A}\).

Note that, if attractors are preserved, their number cannot change as a result of the reduction.

**Example 4.2**.: Consider the map defined by

\[f(x_{u},x_{v},x_{w})=(\bar{x}_{u},x_{u},(x_{u}\wedge x_{w})\vee(\bar{x}_{v}\wedge x_{w})\vee(x_{u}\wedge\bar{x}_{v}))\]

with interaction graph given by the edges \(u\xrightarrow{-1}u\), \(u\xrightarrow{1}v\), \(u\xrightarrow{1}w\), \(v\xrightarrow{-1}w\) and \(w\xrightarrow{1}w\). Then \(f\) has only one attractor (the full space), whereas the map obtained by elimination of \(v\) has two attractors. The asynchronous state transition graphs for \(f\) and \(\tilde{f}\) are represented in Fig. 4.

The example shows how the removal of a simple mediator vertex (a component with only one regulator and only one target) can have an impact on the number of attractors. In [20], the authors claim that the removal of a mediator variable \(v\) does not impact the number of attractors if the regulator and target of \(v\) are not regulators of each other. Here we show that, on the contrary, the removal of a single mediator vertex from a chain of mediator variables of arbitrary length can change the number of attractors. In other words, for each \(n\geq 1\), one can construct a map with interaction graph of the form (3) such that the reduced network obtained by eliminating any of the variables \(v_{i}\), \(i=1,\ldots,n+1\), has a different number of cyclic attractors.

Write \(\mathbb{0}\) and \(\mathbb{1}\) for the states with all components equal to \(0\) or \(1\), respectively. The idea for the construction of the map is as follows. We have a chain of \(n+1\) mediator variables \(v_{1},\ldots,v_{n+1}\). Downstream of the chain, the network has \(n+2\) variables \(W\) that regulate each other and have \(v_{n+1}\), the last mediator variable, as their unique regulator outside of \(W\). The variables in \(W\) regulate a variable \(u\) whose unique role is to regulate the first mediator variable \(v_{1}\). We want the initial network to have a unique steady state in \(\mathbb{1}\). We build it so that, starting from the state \(\mathbb{0}\), exactly \(n+1\) mediator variables are required to switch on all the variables in \(W\), so that the state \(\mathbb{1}\) cannot be reached from \(\mathbb{0}\) when one of the mediator variables is removed, and another attractor must exist in the reduced network. 
To impose such behaviour, we build the following dependencies for the variables in \(W\): they can only be updated from \(0\) to \(1\) in order (first \(w_{1}\), then \(w_{2}\), etc.), and the variables in odd positions require, in order to change from \(0\) to \(1\), the condition \(v_{n+1}=1\) for the last mediator variable, whereas the variables in even positions require \(v_{n+1}=0\). Therefore, in order to reach the state \(\mathbb{1}\), we are forced to have alternating values for the mediator variables. The values of the mediator variables are simply propagated from the variable \(u\). The Boolean function \(f_{u}\) is defined so that \(x_{u}\) is forced to be \(1\) when some variables in \(W\) are equal to \(1\), and can otherwise oscillate freely, thus providing the oscillating input to the chain of mediator nodes. The detailed definition of the map is given in the following theorem.

Figure 4: Asynchronous state transition graphs for the map \(f(x_{u},x_{v},x_{w})=(\bar{x}_{u},x_{u},(x_{u}\wedge x_{w})\vee(\bar{x}_{v}\wedge x_{w})\vee(x_{u}\wedge\bar{x}_{v}))\) (left) and the one obtained from \(f\) by eliminating \(v\) (right). The state transition graphs have one cyclic attractor and two cyclic attractors respectively.

**Theorem 4.3**.: _For each \(n\geq 1\) there exists a Boolean network \(f\) with interaction graph \(G(f)\) that admits a path of length \(n\) of variables with indegree one and outdegree one, such that the network \(\tilde{f}\) obtained from \(f\) by removing any of the variables in the path satisfies \(S(\tilde{f})>S(f)\)._

Proof.: We define a Boolean network \(f\) of dimension \(2(n+2)\) with interaction graph of the form given in Eq. (3). Denote by \(u,v_{1},\ldots,v_{n+1},w_{1},\ldots,w_{n+2}\) the variables of the Boolean network. Set \(V=\{v_{1},\ldots,v_{n+1}\}\), \(W=\{w_{1},\ldots,w_{n+2}\}\), and

\[f_{u}(x_{u},x_{V},x_{W})=\left(\bigwedge_{j=1}^{n+2}x_{w_{j}}\right)\vee\left(\bar{x}_{u}\wedge\bigwedge_{j=1}^{n+2}\bar{x}_{w_{j}}\right)\vee\left(x_{u}\wedge\overline{\bigwedge_{j=1}^{n+2}x_{w_{j}}}\wedge\overline{\bigwedge_{j=1}^{n+2}\bar{x}_{w_{j}}}\right),\]
\[f_{v_{1}}(x_{u},x_{V},x_{W})=x_{u},\]
\[f_{v_{i}}(x_{u},x_{V},x_{W})=x_{v_{i-1}}\text{ for }i=2,\ldots,n+1,\]
\[f_{w_{i}}(x_{u},x_{V},x_{W})=\left(\bigwedge_{j=1}^{n+2}x_{w_{j}}\right)\vee\left(x_{v_{n+1}}\wedge\bigwedge_{j=1}^{i-1}x_{w_{j}}\wedge\bigwedge_{j=i}^{n+2}\bar{x}_{w_{j}}\right)\text{ for }i=1,\ldots,n+2\text{ if }i\equiv 1\text{ mod }2,\]
\[f_{w_{i}}(x_{u},x_{V},x_{W})=\left(\bigwedge_{j=1}^{n+2}x_{w_{j}}\right)\vee\left(\bar{x}_{v_{n+1}}\wedge\bigwedge_{j=1}^{i-1}x_{w_{j}}\wedge\bigwedge_{j=i}^{n+2}\bar{x}_{w_{j}}\right)\text{ for }i=1,\ldots,n+2\text{ if }i\equiv 0\text{ mod }2.\]

The variables \(v_{1},\ldots,v_{n+1}\) are a chain of variables with indegree and outdegree equal to one in the interaction graph of \(f\). The first conjunction in each function is to ensure that \(\mathbb{1}\) is a fixed point. The definition of \(f_{u}\) is such that component \(u\) can be changed from \(0\) to \(1\) and vice versa, as long as the variables in \(W\) are all equal to \(0\). The definition of \(f_{w_{i}}\) shows a dependency on the previous variables \(w_{1},\ldots,w_{i-1}\), and is different for variables in odd and even positions in \(W\) in terms of the dependency on the last mediator variable \(v_{n+1}\). We prove that \((a)\) for \(AD(f)\), the fixed point \(\mathbb{1}\) is the unique attractor and \((b)\) \(AD(\tilde{f})\) has an additional cyclic attractor. 
\((a)\) It is easy to see that \(z=\mathbb{1}\) is a fixed point for \(f\). We show that, for each \(y\in\mathbb{B}^{2n+4}\), \(y\neq z\), there exists a path in \(AD(f)\) from \(y\) to \(z\). 1. Case \(y_{W}=\mathbb{1}\): we have \(f_{u}(y)=1\), and the reachability of \(z\) from \(y\) is direct from the definition of \(f\). 2. Case \(y_{W}=\mathbb{0}\): first observe that there is a path from \(y\) to \(y^{1}\) that satisfies \(y^{1}_{u}=1\), \(y^{1}_{v_{i}}=1\) for \(i=1,\ldots,n+1\) and \(y^{1}_{W}=\mathbb{0}\). From \(y^{1}\), one can switch component \(u\) to zero and construct a path to the state \(y^{2}\) defined by \(y^{2}_{u}=0\), \(y^{2}_{v_{i}}=0\) for \(i=1,\ldots,n\), \(y^{2}_{v_{n+1}}=1\) and \(y^{2}_{W}=\mathbb{0}\). Then component \(u\) can be switched back to one, and its value propagated to component \(v_{n-1}\), while keeping component \(v_{n}\) to zero. Continuing with this construction, one can reach a state \(y^{\prime}\) that satisfies \[y^{\prime}_{v_{n+2-i}}\equiv i\text{ mod }2\text{ for }i=1,\ldots,n+1,y^{\prime}_{W}=0.\] Using \(y^{\prime}_{v_{n+1}}=1\) one can then update the value of component \(w_{1}\). After this, to update the value of \(w_{2}\), one needs to first propagate the value of component \(v_{n}\) to change component \(v_{n+1}\) to zero, and so forth. One can therefore reach states \(z^{1},\ldots,z^{n+1}\) that satisfy \[z^{1}_{v_{n+1}}=1,z^{1}_{w_{1}}=1,z^{1}_{w_{2}}=0,\ldots,z^{1}_{w_{n+1}}=0,\] \[z^{2}_{v_{n+1}}=0,z^{2}_{w_{1}}=1,z^{2}_{w_{2}}=1,z^{2}_{w_{3}}=0, \ldots,z^{2}_{w_{n+1}}=0,\] \[...\] \[z^{n+1}_{v_{n+1}}\equiv n+1\text{ mod }2,z^{n+1}_{W}=\mathbb{1}.\] Hence, we have a path from \(y\) to \(z\) by point \((i)\). 3. In the remaining cases, it is easy to see that there is a path from \(y\) to a state \(y^{\prime}\) with \(y^{\prime}_{W}=\mathbb{0}\). We conclude using point \((ii)\). \((b)\) Consider now the asynchronous state transition graph for the network \(\tilde{f}\colon\mathbb{B}^{2n+3}\to\mathbb{B}^{2n+3}\) obtained from \(f\) by eliminating one of the variables \(v_{1},\ldots,v_{n}\). Without loss of generality, we can consider the case where \(v_{1}\) is eliminated. Consider the set of states \(A\) reachable from \(\mathbb{0}\in\mathbb{B}^{2n+3}\) in \(AD(\tilde{f})\), and define \(\alpha=\max_{y\in A}\sum_{j=1}^{n+1}y_{w_{j}}\). We show that \(\alpha\leq n\), and therefore \(\mathbb{1}\notin A\), and \(AD(\tilde{f})\) admits at least one attractor distinct from \(\mathbb{1}\). Take any \(y\in A\) and a path from \(\mathbb{0}\) to \(y\), and call \(y^{\prime}\) the last state in the path that satisfies \(y^{\prime}_{W}=\mathbb{0}\). Denote by \(p\) the path from \(y^{\prime}\) to \(y\). Note that component \(u\) does not change in \(p\). In addition, \(\alpha-1\) is bounded by the number of times component \(v_{n+1}\) changes in \(p\). Since component \(u\) is fixed in \(p\), \(\alpha-1\) is bounded by the cardinality of \(\{i\in\{2,\ldots,n\}\mid y^{\prime}_{v_{i}}\neq y^{\prime}_{v_{i+1}}\}\). Hence \(\alpha\) is bounded by \(n\), which concludes. The result demonstrates how a very modest change in the interaction graph can have a significant impact on the asymptotic behaviour of asynchronous dynamics. In the next section we show that attractors are preserved if cycles containing an intermediate variable are not allowed, and the regulators of the intermediate variables do not directly regulate the targets of the intermediate variables. 
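Before that, we note that both the generalised elimination of Eq. (2) and the loss of attractor preservation in Example 4.2 can be checked computationally. The following sketch is illustrative only; it reuses the `attractors` helper from the earlier listing, and all names are our own. It implements Eq. (2) and confirms that the network of Example 4.2 has a single attractor while its reduction has two.

```python
def eliminate_general(f, v):
    """Generalised elimination of Eq. (2) for a component v (0-indexed) with
    no positive autoregulation: on each remaining component take the AND of
    the two lifted images if the component is currently 1, the OR otherwise."""
    def S(y, i):
        x = y[:v] + (i,) + y[v:]            # the state with x_v = i
        return y[:v] + (f(x)[v],) + y[v:]   # S^i(y): replace x_v by f_v(x^{v=i})

    def f_reduced(y):
        f0, f1 = f(S(y, 0)), f(S(y, 1))
        idx = [i for i in range(len(y) + 1) if i != v]
        return tuple((f0[i] & f1[i]) if y[j] else (f0[i] | f1[i])
                     for j, i in enumerate(idx))

    return f_reduced


# Example 4.2 (the mediator v is not autoregulated, so Eq. (2) coincides with
# Eq. (1) here): AD(f) has a single attractor, its reduction has two.
def f42(x):
    u, xv, w = x
    return (1 - u, u, u & w | (1 - xv) & w | u & (1 - xv))


f42_red = eliminate_general(f42, 1)
print(len(attractors(f42, 3)), len(attractors(f42_red, 2)))  # expected: 1 2
```

Consistently with Corollary 3.4 (iii), the total number of attractors does not decrease in this computation, but preservation in the sense of Definition 4.1 fails.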
### Sufficient conditions We now give sufficient conditions on the interaction graph of a Boolean network for the preservation of attractor to hold. For the proof we will use the following lemma. **Lemma 4.4**.: _Consider a Boolean network \(f\) with set of components \(V\). Take \(W\subset V\) and \(I\subseteq V\setminus W\), \(I\neq\emptyset\) and suppose that for all \(j\in W\) there is no path from \(j\) to \(I\) in \(G(f)\). If there exists a path in \(AD(f)\) from a state \(x\) to a state \(y\) with \(y_{I}=\bar{x}_{I}\), then there exists a path in \(AD(f)\) from \(x\) to a state \(z\) such that \(z_{I}=\bar{x}_{I}\) and \(z_{W}=x_{W}\)._ Proof.: Write \(W^{\prime}\) for the vertices that are reachable from \(W\) in \(G(f)\). Observe that, if \(u\to\bar{u}^{i}\) is a transition in \(AD(f)\) and \(i\) is not reachable from \(W\) in \(G(f)\), then, for any subset \(W^{\prime\prime}\) of \(W^{\prime}\), the transition \(\bar{u}^{W^{\prime\prime}}\to\bar{u}^{W^{\prime\prime}\cup\{i\}}\) exists in \(AD(f)\). Now consider a path \(x=x^{1}\to\cdots\to x^{m}=y\) from \(x\) to \(y\), and write \(i_{1},\ldots,i_{m}\) for the sequence of components being updated along the path from \(x\) to \(x^{m}\). Consider the subsequence obtained from \(i_{1},\ldots,i_{m}\) by removing all indices in \(W^{\prime}\). Then, by the previous observation, the subsequence defines a trajectory in \(AD(f)\) from \(x\) to a state \(z\) that satisfies \(z_{I}=y_{I}=\bar{x}_{I}\) and \(z_{W}=x_{W}\). **Theorem 4.5**.: _Suppose that the interaction graph of \(f\) is of the form_ (4) _for some \(U_{1},U_{2},W\subset V\), \(v\in V\). Then the attractors of \(f\) are preserved by the elimination of \(v\)._ Proof.: Write \(\tilde{f}\) for the network obtained by elimination of \(v\), and set \(U=U_{1}\cup U_{2}\). Start by observing that, by Proposition 3.10, the interaction graph \(G(\tilde{f})\) takes the form Without loss of generality, we can write a state \(x\) in \(\mathbb{B}^{n}\) as \(x=(x_{U},x_{v},x_{W})=(x_{U_{1}},x_{U_{2}},x_{v},x_{W})\). In the proof, we use the notation \(x\rightsquigarrow y\) to indicate the existence of a path from \(x\) to \(y\). 1. Consider point \((i)\) of Definition 4.1. If \(A\) is an attractor for \(AD(f)\), by Theorem 3.3\((iv)\)\(\pi(A)\) is a trap set for \(AD(\tilde{f})\). It remains to show that \(\pi(A)\) is strongly connected. It is sufficient to show that for each transition \(x\to\bar{x}^{i}\) in \(AD(f)\) with \(x\in A\) there is a path from \(\pi(x)\) to \(\pi(\bar{x}^{i})\) in \(AD(\tilde{f})\). By Lemma 3.2 we only have to consider the case of \(i\in W\) such that \(v\to i\) is an edge in \(G(f)\), and \(x\neq\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)\), that is, \(f_{i}(x)\neq x_{i}\) and \(x_{v}\neq f_{v}(x)\). In this case, we are not directly guaranteed a transition from \(\pi(x)\) to \(\pi(\bar{x}^{i})\) in \(AD(\tilde{f})\), and we have to construct an alternative path. The idea is to create a path to a state where component \(i\) changes and that is a "representative state", so that the transition involving variable \(i\) is preserved with the elimination of \(v\). 
Since \(x_{v}\in f_{v}(A)\), there exists a path in \(AD(f)\) from \(x\) to a state \(z\in A\) such that \(z_{v}\neq f_{v}(z)=x_{v}\neq f_{v}(x)\): \[x=(x_{U_{1}},x_{U_{2}},x_{v},x_{W})\rightsquigarrow z=(z_{U_{1}},z_{U_{2}},z_{v}, z_{W}),\ \ x_{v}=f_{v}(z)\neq z_{v}.\] We now apply Lemma 4.4 twice: * component \(v\) depends only on \(U_{2}\cup\{v\}\), and there is no path from \(W\) to \(U_{2}\cup\{v\}\) in \(G(f)\), hence we can assume that \(z_{W}=x_{W}\), that is \[x=(x_{U_{1}},x_{U_{2}},x_{v},x_{W})\thicksim z=(z_{U_{1}},z_{U_{2}},z_{v},x_{W}),\ \ x_{v}=f_{v}(z)\neq z_{v}.\] * \(x\) and \(z\) are in the attractor, hence there is a path from \(z\) to \(x\) in \(AD(f)\). In particular, there is a path from \(z\) to a state \(z^{\prime}\) with \(z^{\prime}_{U_{1}}=x_{U_{1}}\): \[z=(z_{U_{1}},z_{U_{2}},z_{v},x_{W})\thicksim z^{\prime}=(x_{U_{1}},z^{\prime}_ {U_{2}},z^{\prime}_{v},z^{\prime}_{W}).\] Since there is no path from \(U_{2}\cup\{v\}\cup W\) to \(U_{1}\) in \(G(f)\), we can assume \(z_{U_{2}\cup\{v\}\cup W}=z^{\prime}_{U_{2}\cup\{v\}\cup W}\). In particular, we can assume that \(z\) satisfies \(z_{W}=x_{W}\) and \(z_{U_{1}}=x_{U_{1}}\), obtaining a path \[x=(x_{U_{1}},x_{U_{2}},x_{v},x_{W})\thicksim z=(x_{U_{1}},z_{U_{2}},z_{v},x_{W} )\to y=(x_{U_{1}},z_{U_{2}},\bar{z}^{v}_{v},x_{W}),\] where we defined \(y=\bar{z}^{v}\) and the transition \(z\to y\) derives from the definition of \(z\). We now look at deriving a path in \(AD(\tilde{f})\) from this path. Since there is no path from \(v\) to \(U\) in \(G(f)\), by Lemma 3.2 (iii) there is a path from \(\pi(x)=(x_{U_{1}},x_{U_{2}},x_{W})\) to \(\pi(z)=(x_{U_{1}},z_{U_{2}},x_{W})\) in \(AD(\tilde{f})\). In addition, since \(i\) depends only on \(U_{1}\), \(v\) and \(W\), we have \(f_{i}(y)=f_{i}(x_{U_{1}},z_{U_{2}},x_{v},x_{W})=f_{i}(x)\neq x_{i}=y_{i}\), and there is a transition from \(y\) to \(\bar{y}^{i}\). It is easy to see that Lemma 3.2 applies and there is a transition from \(\pi(z)=\pi(y)\) to \(\pi(\bar{y}^{i})\) in \(AD(\tilde{f})\). In summary, we obtained a path \[\pi(x)=(x_{U_{1}},x_{U_{2}},x_{W})\thicksim\pi(z)=\pi(y)=(x_{U_{1}},z_{U_{2}}, x_{W})\to\pi(\bar{y}^{i})=(x_{U_{1}},z_{U_{2}},\bar{x}^{i}_{W}).\] Since there is a path from \(\bar{y}^{i}\) to \(\bar{x}^{i}\) in \(A\), and the variables in \(U\) do not depend on \(v\), we can again apply Lemma 3.2 (iii) and find that there is a path from \(\pi(\bar{y}^{i})\) to \(\pi(\bar{x}^{i})\) in \(AD(\tilde{f})\), obtaining a path from \(\pi(x)\) to \(\pi(\bar{x}^{i})\) as needed. 2. Consider now point \((ii)\) of Definition 4.1. Given an attractor \(\tilde{A}\) for \(AD(\tilde{f})\), by Theorem 3.3\((vi)\) there is at most one attractor intersecting \(\pi^{-1}(\tilde{A})\). It remains to show that \(\pi^{-1}(\tilde{A})\) contains a trap set for \(f\). To this end, we show that \(B=\{x\in\pi^{-1}(\tilde{A})\mid x_{v}\in f_{v}(\pi^{-1}(\tilde{A}))\}\) is a trap set. Take \(x\in B\) and \(\bar{x}^{i}\) successor for \(x\) in \(AD(f)\). We have to show that \(\bar{x}^{i}\) is in \(B\). If \(i=v\), then \(f_{v}(x)\neq x_{v}\) and since \(x\) is in \(\pi^{-1}(\tilde{A})\), \(f_{v}(x)\) is in \(f_{v}(\pi^{-1}(\tilde{A}))\) and the successor is in \(B\). For \(i\neq v\), since \(\pi(x)\) is in \(\tilde{A}\), it is sufficient to show that there is a path in \(AD(\tilde{f})\) from \(\pi(x)\) to \(\pi(\bar{x}^{i})\), since this implies that \(\pi(\bar{x}^{i})\) is in \(\tilde{A}\) and therefore \(\bar{x}^{i}\) is in \(B\). 
As for the first part of the proof, by Lemma 3.2, we only have to consider the case where there is an edge \(v\to i\) in \(G(f)\) and \(x\neq\mathcal{R}^{0}(x)=\mathcal{R}^{1}(x)\), that is, \(f_{i}(x)\neq x_{i}\) and \(f_{v}(x)\neq x_{v}\). By definition of \(B\), there exists \((z_{U},z_{W})\in\tilde{A}\) and \(z_{v}\in\{0,1\}\) such that \(f_{v}(z)=x_{v}\) with \(z=(z_{U},z_{v},z_{W})\). If \(z_{v}=x_{v}\), then since there is no positive loop at \(v\) in \(G(f)\) we must have \(f_{v}(\bar{z}^{v})=z_{v}\) and \(z\) is a successor for \(\bar{z}^{v}\). If instead \(z_{v}\neq x_{v}\), then \(\bar{z}^{v}\) is a successor for \(z\) with \(\bar{z}^{v}_{v}=x_{v}\) In the first case take \(y=z\), in the second define \(y=\bar{z}^{v}\). In both cases we have \(y_{v}=x_{v}\), \(y_{U}=z_{U}\), \(y_{W}=z_{W}\). By hypothesis, there exists a path in \(AD(\tilde{f})\) from \(\pi(x)=(x_{U},x_{W})\) to \(\pi(y)=(y_{U},y_{W})\). We apply again Lemma 4.4 twice: * since there is no path from \(W\) to \(U\) in \(G(\tilde{f})\), we can assume that \(y_{W}=x_{W}\); * since there is a path from \(\pi(y)\) to \(\pi(x)\) and no path from \(U_{2}\cup W\) to \(U_{1}\) in \(G(\tilde{f})\), we can assume \(y_{U_{1}}=x_{U_{1}}\). We therefore obtained a path \[\pi(x)=(x_{U_{1}},x_{U_{2}},x_{W})\thicksim\pi(y)=(x_{U_{1}},y_{U_{2}},x_{W}).\] Since \(i\) does not depend on \(U\), we have \(f_{i}(y)=f_{i}(x_{U_{1}},y_{U_{2}},x_{v},x_{W})=f_{i}(x)\neq x_{i}=y_{i}\) and again it is easy to see that point (ii) of Lemma 3.2 applies and there is a transition from \(\pi(y)=(y_{U},x_{W})\) to \(\pi(\bar{y}^{i})=(y_{U},\bar{x}_{W}^{i})\). Finally, there is a path in \(AD(\tilde{f})\) from \(\pi(\bar{y}^{i})=(y_{U},\bar{x}_{W}^{i})\) to \(\pi(x)=(x_{U},x_{W})\), and since there is no path from \(i\) to \(U\) in \(G(\tilde{f})\), by Lemma 4.4 there is a path in \(AD(\tilde{f})\) from \(\pi(\bar{y}^{i})=(y_{U},\bar{x}_{W}^{i})\) to \(\pi(\bar{x}^{i})=(x_{U},\bar{x}_{W}^{i})\), which concludes. ## 5 Conclusion Boolean networks are frequently used as modelling tools, with associated dynamics often defined under asynchronous updating. Elimination of variables can be considered to simplify the computational burden [10, 3, 19, 21, 20, 13]. While preservation of fixed points and of some reachability properties can be shown [10], the asymptotic behaviour of the full and reduced networks can differ. In this work we gave conditions on the interaction graph that ensure that cyclic attractors are preserved (Theorem 4.5), and presented examples showing the differences in asymptotic behaviour that can arise when these conditions are not satisfied. In particular, we showed that Boolean networks with very similar interaction graphs, differing only in a single intermediate in a chain of intermediate variables, can have different asymptotic behaviours (Theorem 4.3). We also illustrated how the reduction method can be extended to variables that are negatively autoregulated, and discussed the effects of this elimination on the attractors (Lemma 3.2 and Theorem 3.3). We showed that a known bound on the number of attractors of asynchronous dynamics [17] is a corollary of these properties (Theorem 3.13). The approach presented here broadens the applicability of variable elimination in the investigation of Boolean network dynamics. Further extensions of this elimination approach to discrete systems with more than two levels will be considered in the future. 
## Acknowledgements Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
2302.00401
Deterministic equivalent and error universality of deep random features learning
This manuscript considers the problem of learning a random Gaussian network function using a fully connected network with frozen intermediate layers and trainable readout layer. This problem can be seen as a natural generalization of the widely studied random features model to deeper architectures. First, we prove Gaussian universality of the test error in a ridge regression setting where the learner and target networks share the same intermediate layers, and provide a sharp asymptotic formula for it. Establishing this result requires proving a deterministic equivalent for traces of the deep random features sample covariance matrices which can be of independent interest. Second, we conjecture the asymptotic Gaussian universality of the test error in the more general setting of arbitrary convex losses and generic learner/target architectures. We provide extensive numerical evidence for this conjecture, which requires the derivation of closed-form expressions for the layer-wise post-activation population covariances. In light of our results, we investigate the interplay between architecture design and implicit regularization.
Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro
2023-02-01T12:37:10Z
http://arxiv.org/abs/2302.00401v1
# Deterministic equivalent and error universality ###### Abstract This manuscript considers the problem of learning a random Gaussian network function using a fully connected network with frozen intermediate layers and trainable readout layer. This problem can be seen as a natural generalization of the widely studied random features model to deeper architectures. First, we prove Gaussian universality of the test error in a ridge regression setting where the learner and target networks share the same intermediate layers, and provide a sharp asymptotic formula for it. Establishing this result requires proving a deterministic equivalent for traces of the deep random features sample covariance matrices which can be of independent interest. Second, we conjecture the asymptotic Gaussian universality of the test error in the more general setting of arbitrary convex losses and generic learner/target architectures. We provide extensive numerical evidence for this conjecture. In light of our results, we investigate the interplay between architecture design and implicit regularization. ## 1 Introduction Despite the incredible practical progress in the applications of deep neural networks to almost all fields of knowledge, our current theoretical understanding thereof is still to a large extent incomplete. Recent progress on the theoretical front stemmed from the investigation of simplified settings, which despite their limitations are often able to capture some of the key properties of "real life" neural networks. A notable example is the recent stream of works on random features (RFs), originally introduced by [1] as a computationally efficient approximation technique for kernel methods, but more recently studied as a surrogate model for two-layers neural networks in the lazy regime [2, 3, 4, 5]. RFs are a particular instance of random neural networks, whose statistical properties have been investigated in a sizeable body of works [6, 7, 8, 9, 10]. The problem of training the readout layer of such networks has been addressed in the shallow (one hidden layer) case by [4, 5], who provide sharp asymptotic characterizations for the test error. A similar study in the generic deep case is, however, still missing. In this manuscript, we bridge this gap by considering the problem of learning the last layer of a deep, fully-connected random neural network, hereafter referred to as the _deep random features_ (dRF) model. More precisely, our **main contributions** in this manuscript are: * In Section 3, we state Theorem 3.3, which proves an asymptotic deterministic equivalent for the traces of the product of deterministic matrices with both conjugate kernel and sample covariance matrix of the layer-wise post-activations. * As a consequence of Thm. 3.3, in Section 4 we derive a sharp asymptotic formula for the test error of the dRF model in the particular case where the target and learner networks share the same intermediate layers, and when the readout layer is trained with the squared loss. This result establishes the Gaussian equivalence of the test error for ridge regression in this setting. * Finally, we conjecture (and provide strong numerical evidence for) the Gaussian universality of the dRF model for general convex losses, and generic target/learner network architectures. 
More specifically, we provide exact asymptotic formulas for the test error that leverage recent progress in high-dimensional statistics [11] and a closed-form formula for the population covariance of network activations appearing in [12]. These formulas show that in terms of second-order statistics, the dRF is equivalent to a linear network with noisy layers. We discuss how this effective noise translates into a depth-induced implicit regularization in Section 5. A GitHub repository with the code employed in the present work can be found here. ### Related work _Random features_ were first introduced by [1]. The asymptotic spectral density of the single-layer conjugate kernel was characterized in [3, 13, 14]. Sharp asymptotics for the test error of the RF model appeared in [4, 15] for ridge regression, [5, 16] for general convex losses and [17, 18] for other penalties. The implicit regularization of RFs was discussed in [19]. The RFs model has been studied in many different contexts as a proxy for understanding overparametrisation, e.g. in uncertainty quantification [20], ensembling [21, 22], the training dynamics [23, 24], but also to highlight the limitations of lazy training [25, 26, 27, 28]; _Deep random networks_ were shown to converge to Gaussian processes in [6, 7]. They were also studied in the context of inference in [29, 30], and as generative priors to inverse problems in [31, 32, 33]. The distribution of outputs of deep random nets was characterized in [9, 10]. Close to our work is [8], which provide exact formulas for the asymptotic spectral density and Stieltjes transform of the NTK and conjugate kernel in the proportional limit. Our formulas for the sample and population covariance are complementary to theirs. The test error of linear-width deep networks has been recently studied in [34, 35] through the lens of Bayesian learning; _Gaussian universality_ of the test error for the RFs model was shown in [4], conjectured to hold for general losses in [5] and was proven in [36, 37]. Gaussian universality has also been shown to hold for other classes of features, such as two-layer NTK [38], kernel features [19, 24, 39, 40]. [11] provided numerical evidence for Gaussian universality of more general feature maps, including pre-trained deep features. _Deterministic equivalents_ of sample covariance matrices have first been established in [41, 42] for separable covariances, generalizing the seminal work [43] on the free convolution of spectra in an anisotropic sense. More recently these results have been extended to non-separable covariances, first in tracial [44], and then also in anisotropic sense [45, 46]. ## 2 Setting & preliminaries Let \((\mathrm{x}^{\mu},y^{\mu})\in\mathbb{R}^{d}\times\mathcal{Y}\), \(\mu\in[n]\coloneqq\{1,\cdots,n\}\), denote some training data, with \(\mathrm{x}^{\mu}\sim\mathcal{N}(0_{d},\Omega_{0})\) independently and \(y^{\mu}=f_{\star}(\mathrm{x}^{\mu})\) a (potentially random) target function. This work is concerned with characterising the learning performance of generalised linear estimation: \[\hat{y}=\sigma\left(\frac{\theta^{\top}\varphi(\mathrm{x})}{\sqrt{k}}\right), \tag{1}\] with _deep random features_ (dRF): \[\varphi(\mathrm{x})\coloneqq\underbrace{(\varphi_{L}\circ\varphi_{L-1}\circ \cdots\circ\varphi_{2}\circ\varphi_{1})}_{L}(\mathrm{x}), \tag{2}\] where the post-activations are given by: \[\varphi_{\ell}(h)=\sigma_{\ell}\left(\frac{1}{\sqrt{k_{\ell-1}}}W_{\ell} \cdot h\right),\quad\ell\in[L]. 
\tag{3}\] The weights \(\{W_{\ell}\in\mathbb{R}^{k_{\ell}\times k_{\ell-1}}\}_{\ell\in[L]}\) are assumed to be independently drawn Gaussian matrices with i.i.d. entries \((W_{\ell})_{ij}\sim\mathcal{N}(0,\Delta_{\ell})\;\;\forall 1\leq i\leq k_{\ell}\), \(1\leq j\leq k_{\ell-1}\). To alleviate notation, sometimes it will be convenient to denote \(k_{L}=k\). Only the readout weights \(\theta\in\mathbb{R}^{k}\) in (1) are trained according to the usual regularized _empirical risk minimization_ procedure: \[\hat{\theta}=\operatorname*{argmin}_{\theta\in\mathbb{R}^{k}}\left[\sum_{\mu=1}^{n}\ell(y^{\mu},\theta^{\top}\varphi(\mathrm{x}^{\mu}))+\frac{\lambda}{2}||\theta||^{2}\right], \tag{4}\] where \(\ell:\mathcal{Y}\times\mathbb{R}\to\mathbb{R}_{+}\) is a loss function, which we assume convex, and \(\lambda>0\) sets the regularization strength. To assess the training and test performances of the empirical risk minimizer (4), we let \(g:\mathcal{Y}\times\mathbb{R}\to\mathbb{R}_{+}\) be any performance metric (e.g. the loss function itself or, in the case of classification, the probability of misclassifying), and define the test error: \[\epsilon_{g}(\hat{\theta})\coloneqq\mathbf{E}\left[g(y,\hat{\theta}^{\top}\varphi(\mathrm{x}))\right] \tag{5}\] Our main goal in this work is to provide a sharp characterization of (5) in the proportional asymptotic regime \(n,d,k_{\ell}\to\infty\) at fixed \(\mathcal{O}(1)\) ratios \(\alpha\coloneqq\nicefrac{{n}}{{d}}\) and \(\gamma_{\ell}\coloneqq\nicefrac{{k_{\ell}}}{{d}}\) for all layer indices \(\ell\in[L]\). This requires a precise characterization of the _sample and population covariances_ and the _Gram_ matrices of the post-activations. ### Background on sample covariance matrices Marchenko-Pastur and free probability: We briefly introduce basic nomenclature on sample covariance matrices. For a random vector \(x\in\mathbb{R}^{d}\) with mean zero \(\mathbf{E}\,x=0\) and covariance \(\Sigma:=\mathbf{E}\,xx^{\top}\in\mathbb{R}^{d\times d}\), we call the matrix \(\widehat{\Sigma}:=\mathcal{X}\mathcal{X}^{\top}/n\in\mathbb{R}^{d\times d}\) obtained from \(n\) independent copies \(x_{1},\ldots,x_{n}\) of \(x\) written in matrix form as \(\mathcal{X}:=(x_{1},\ldots,x_{n})\) the _sample covariance matrix_ corresponding to the _population covariance matrix_ \(\Sigma\). The _Gram matrix_ \(\widetilde{\Sigma}:=\mathcal{X}^{\top}\mathcal{X}/n\in\mathbb{R}^{n\times n}\) has the same non-zero eigenvalues as the sample covariance matrix but unrelated eigenvectors. The systematic mathematical study of sample covariance and Gram matrices has a long history dating back to [47]. While in the "classical" statistical limit \(n\to\infty\) with \(d\) being fixed the sample covariance matrix converges to the population covariance matrix, \(\widehat{\Sigma}\to\Sigma\), in the proportional regime \(d\sim n\gg 1\) the non-trivial asymptotic relationship between the spectra of \(\widehat{\Sigma}\) and \(\Sigma\) was first obtained in the seminal paper [43]: the empirical spectral density \(\mu(\widehat{\Sigma}):=d^{-1}\sum_{\lambda\in\mathrm{Spec}(\widehat{\Sigma})}\delta_{\lambda}\) of \(\widehat{\Sigma}\) is approximately equal to the _free multiplicative convolution_ of \(\mu(\Sigma)\) and a Marchenko-Pastur distribution \(\mu_{\mathrm{MP}}^{c}\) of aspect ratio \(c=d/n\), \[\mu(\widehat{\Sigma})\approx\mu(\Sigma)\boxtimes\mu_{\mathrm{MP}}^{d/n}.
\tag{6}\] Here the free multiplicative convolution \(\mu\boxtimes\mu_{\mathrm{MP}}^{\epsilon}\) may be defined as the unique distribution \(\nu\) whose Stieltjes transform \(m=m_{\nu}(z):=\int(x-z)^{-1}\,\mathrm{d}\nu(x)\) satisfies the scalar _self-consistent equation_ \[zm=\frac{z}{1-c-czm}m_{\mu}\left(\frac{z}{1-c-czm}\right). \tag{7}\] The spectral asymptotics (6) originally were obtained in the case of Gaussian \(\mathcal{X}\) or, more generally, for separable correlations \(\mathcal{X}=\sqrt{\Sigma}Y\) for some i.i.d. matrix \(Y\in\mathbb{R}^{d\times n}\). These results were later extended [44] to the general case under essentially optimal assumptions on concentrations of quadratic forms \(x^{\top}Ax\) around their expectation \(\mathbb{T}r\,A\Sigma\). Deterministic equivalents:It has only been recognised much later [41, 42] that the relationship (6) between the asymptotic spectra of \(\Sigma\) and \(\widehat{\Sigma},\widetilde{\Sigma}\) actually extends to eigenvectors as well, and that the resolvents \(\widehat{G}(z):=(\widehat{\Sigma}-z)^{-1}\), \(\widetilde{G}(z):=(\widetilde{\Sigma}-z)^{-1}\) are asymptotically equal to _deterministic equivalents_ \[\widehat{M}(z):=-\frac{(\Sigma\widetilde{m}(z)+I_{d})^{-1}}{z},\quad\widetilde {M}(z):=\widetilde{m}(z)I_{n}, \tag{8}\] also in an _anisotropic_ rather than just a tracial sense, highlighting that despite the simple relationship between their averaged traces \[\widehat{m}(z):=m_{\mu(\Sigma)\boxtimes\mu_{\mathrm{MP}}^{\epsilon}}(z),\quad \widetilde{m}(z)=\frac{c-1}{z}+c\widehat{m}(z),\] the sample covariance and Gram matrices carry rather different non-spectral information. The anisoptric concentration of resolvents (or in physics terminology, the self-averaging) has again first been obtained in the Gaussian or separable cases [41, 42]. The extension to general sample covariance matrices was only achieved much more recently [45, 46] under Lipschitz concentration assumptions. In this work we specifically use the deterministic equivalent for sample covariance matrices with general covariance from [46] and extend it to cover Gram matrices. Application to the deep random features model:In this work we apply the general theory of anisotropic deterministic equivalents to the deep random features model. As discussed in Section 4, to prove error universality even for the simple ridge regression case, it is not enough to only consider the spectral convergence of the matrices, and a stronger result is warranted. The application of non-linear activation functions makes the model neither Gaussian nor separable, hence our analysis relies on the deterministic equivalents from [46] and our extension to Gram matrices, which appear naturally in the explicit error derivations. ### Notation We will adopt the following notation: * For \(A\in\mathbb{R}^{n\times n}\) we denote \(\left\langle A\right\rangle:=\nicefrac{{1}}{{n}}\operatorname{tr}A\). * For matrices \(A\in\mathbb{R}^{n\times m}\) we denote the operator norm (with respect to the \(\ell^{2}\)-vector norm) by \(\left\|A\right\|\), the max-norm by \(\left\|A\right\|_{\max}:=\max_{ij}\left|A_{ij}\right|\), and the Frobenius norm by \(\left\|A\right\|_{\mathrm{F}}^{2}:=\sum_{ij}\left|A_{ij}\right|^{2}\). * For any distribution \(\mu\) we denote the push-forward under the map \(\lambda\mapsto a\lambda+b\) by \(a\otimes\mu\oplus b\) in order to avoid confusion with e.g. the convex combination \(a\mu_{1}+(1-a)\mu_{2}\) of measures \(\mu_{1},\mu_{2}\). 
* We say that a sequence of random variables \((X_{n})_{n}\) is _stochastically dominated_ by another sequence \((Y_{n})_{n}\) if for all small \(\epsilon>0\) and large \(D<\infty\) it holds that \(P(X_{n}>n^{\epsilon}Y_{n})\leq n^{-D}\) for large enough \(n\), and in this case write \(X_{n}\prec Y_{n}\). ## 3 Deterministic equivalents Consider the sequence of variances defined by the recursion \[r_{\ell+1}=\Delta_{\ell+1}\operatorname{\mathbf{E}}_{\xi\sim N(0,r_{\ell})}\left[\sigma_{\ell}(\xi)^{2}\right] \tag{9}\] with initial condition \(r_{1}\coloneqq\Delta_{1}\left\langle\Omega_{0}\right\rangle\) and coefficients \[\kappa_{1}^{\ell}=\frac{1}{r_{\ell}}\operatorname{\mathbf{E}}_{\xi\sim N(0,r_{\ell})}\left[\xi\sigma_{\ell}(\xi)\right],\] \[\kappa_{*}^{\ell}=\sqrt{\operatorname{\mathbf{E}}_{\xi\sim N(0,r_{\ell})}\left[\sigma_{\ell}(\xi)^{2}\right]-r_{\ell}\left(\kappa_{1}^{\ell}\right)^{2}}. \tag{10}\] ### Rigorous results on the multi-layer sample covariance and Gram matrices Our main result on the anisotropic deterministic equivalent of dRFs follows from iterating the following proposition. We consider a data matrix \(X_{0}\in\mathbb{R}^{d\times n}\) whose Gram matrix concentrates as \[\left\|\frac{X_{0}^{\top}X_{0}}{d}-r_{1}I\right\|_{\max}\prec\frac{1}{\sqrt{n}},\quad\left\|\frac{X_{0}}{\sqrt{d}}\right\|\prec 1 \tag{11}\] for some positive constant \(r_{1}\). Assumption (11), for instance, is satisfied if the columns \(\mathrm{x}\) of \(X_{0}\) are independent with mean \(\operatorname{\mathbf{E}}x=0\) and covariance \(\operatorname{\mathbf{E}}\mathrm{x}\mathrm{x}^{\top}=\Omega_{0}\in\mathbb{R}^{d\times d}\) (together with some mild assumptions on the fourth moments), in which case \(r_{1}=\left\langle\Omega_{0}\right\rangle\) is the normalised trace of the covariance. We then consider \(X_{1}\coloneqq\sigma_{1}(W_{1}X_{0}/\sqrt{d})\) assuming the entries of \(W_{1}\in\mathbb{R}^{k_{1}\times d}\) are i.i.d. \(N(0,1)\) elements, and \(\sigma_{1}\) satisfies \(\operatorname{\mathbf{E}}_{\xi\sim N(0,1)}\sigma_{1}\big(\sqrt{r_{1}}\xi\big)=0\) in the proportional \(n\sim d\sim k_{1}\) regime. Upon changing \(\sigma_{1}\) there is no loss in generality in assuming \(\Delta_{1}=1\), which we do for notational convenience.
**Proposition 3.1** (Deterministic equivalent for RF).: _For any deterministic \(A\) and Lipschitz-continuous activation function \(\sigma_{1}\), under the assumptions above, we have that, for any \(z\in\mathbf{C}\setminus\mathbb{R}_{+}\)_ \[\left|\left\langle A\Big{[}\Big{(}\frac{X_{1}^{\top}X_{1}}{k_{1}}-z\Big{)}^{-1 }-\widehat{M}(z)\Big{]}\right\rangle\right|\prec\frac{\langle AA^{*}\rangle^ {1/2}}{\delta^{9}\sqrt{n}},\] _and_ \[\left|\left\langle A\Big{(}\frac{X_{1}X_{1}^{\top}}{k_{1}}-z\Big{)}^{-1} \right\rangle-\langle A\rangle\widetilde{m}(z)\right|\prec\frac{\langle AA^{*} \rangle^{1/2}}{\delta^{9}\sqrt{n}},\] _where \(\delta\coloneqq\operatorname{dist}(z,\mathbb{R}_{+})\),_ \[-z\widehat{M}(z) \coloneqq\Big{(}\widetilde{m}(z)\Sigma_{\mathrm{lin}}+I\Big{)}^ {-1}, \tag{12}\] \[\Sigma_{\mathrm{lin}} \coloneqq(\kappa_{1}^{1})^{2}\frac{X_{0}^{\top}X_{0}}{d}+(\kappa _{*}^{1})^{2}I,\] _and_ \[\widehat{m}(z):=m_{\mu(\Sigma_{\mathrm{lin}})\boxtimes\mu_{\mathrm{MP}}^{n/k_{1}} }(z),\quad\widetilde{m}(z)=\frac{n-k_{1}}{nz}+\frac{n}{k_{1}}\widehat{m}(z).\] _Furthermore, Assumption (11) holds true with \(X_{0},r_{1}\) replaced by \(X_{1},r_{2}\), respectively, and we have that \(\mathrm{dist}(-1/\widetilde{m}(z),\mathbb{R}_{+})\geq\mathrm{dist}(z,\mathbb{ R}_{+})\)._ _Remark 3.2_.: The tracial version of Proposition 3.1 has appeared multiple times in the literature, e.g. [44]. It implies that the spectrum \(\widehat{\mu}_{1}\) of \(X_{1}^{\top}X_{1}/k_{1}\) is approximately given by the free multiplicative convolution \[\begin{split}\widehat{\mu}_{1}&\approx\mu\Big{(}( \kappa_{1}^{1})^{2}\frac{X_{0}^{\top}X_{0}}{d}+(\kappa_{*}^{1})^{2}I\Big{)} \boxtimes\mu_{\mathrm{MP}}^{n/k_{1}}\\ &=\Big{(}\mu\Big{(}(\kappa_{1}^{1})^{2}\frac{X_{0}^{\top}X_{0}}{ d}\Big{)}\boxplus\delta_{(\kappa_{1}^{1})^{2}}\Big{)}\boxtimes\mu_{\mathrm{MP}}^{n/k_{1}}. \end{split} \tag{13}\] In case \(c\leq 1\), i.e. when \(\mu_{\mathrm{MP}}^{c}\) has no atom at \(0\), it was shown in [48] that \[\sqrt{\mu\boxtimes\mu_{\mathrm{MP}}^{c}}\boxplus_{c}\sqrt{\mu^{\prime} \boxtimes\mu_{\mathrm{MP}}^{c}}=\sqrt{(\mu\boxplus\mu^{\prime})\boxtimes\mu _{\mathrm{MP}}^{c}} \tag{14}\] which allows to simplify (13). Here \(\boxplus_{c}\) is the _rectangular free convolution_ which models the distribution of singular values of the addition of two free rectangular random matrices, and the square-root is to be understood as the push-forward of the square-root map. Applying (14) to (13) yields \[\sqrt{\widehat{\mu}_{1}}\approx\Big{(}\kappa_{1}^{1}\otimes\sqrt{\widehat{ \mu}_{0}\boxtimes\mu_{\mathrm{MP}}^{n/k_{1}}}\Big{)}\boxplus_{n/k_{1}}\kappa_ {*}^{1}\otimes\sqrt{\mu_{\mathrm{MP}}^{n/k_{1}}}, \tag{15}\] suggesting that the non-zero singular values of \(X_{1}/\sqrt{k}\) can be modeled by the non-zero singular values of the _Gaussian equivalent model_: \[c^{\prime}W^{\prime}X_{0}+c^{\prime\prime}W^{\prime\prime} \tag{16}\] for some suitably chosen constants \(c^{\prime},c^{\prime\prime}\) and independent Gaussian matrices \(W,W^{\prime}\). The last assertion of Proposition 3.1 allows to iterate over an arbitrary (but finite) number of layers. Indeed, after one layer we have \[\begin{split}\Big{(}\frac{X_{1}^{\top}X_{1}}{k_{1}}-z_{1}\Big{)} ^{-1}&\approx\Big{(}-\widetilde{m}(z_{1})z_{1}\Sigma_{\mathrm{ lin}}-z_{1}\Big{)}^{-1}\\ &=c_{1}\Big{(}\frac{X_{0}^{\top}X_{0}}{k_{0}}-z_{0}\Big{)}^{-1}, \end{split} \tag{17}\] using the definitions from Theorem 3.3 for \(c_{1},z_{0}\) below. 
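Before stating the multi-layer result, the single-layer Gaussian equivalence suggested by Remark 3.2 can be checked numerically. The following is a minimal numpy sketch (purely illustrative, not the authors' released code): it builds one layer of random features \(X_{1}=\sigma_{1}(W_{1}X_{0}/\sqrt{d})\), estimates the coefficients \(\kappa_{1}^{1},\kappa_{*}^{1}\) of (10) by Monte Carlo, and compares the spectrum of \(X_{1}^{\top}X_{1}/k_{1}\) with that of the Gaussian equivalent model (16), here taken as \(\kappa_{1}^{1}W_{1}X_{0}/\sqrt{d}+\kappa_{*}^{1}Z\) with \(Z\) an independent i.i.d. Gaussian matrix. All sizes and the choice \(\Omega_{0}=I_{d}\) are illustrative assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k1 = 2000, 1500, 1800          # illustrative sizes in the proportional regime
sigma = np.tanh                       # odd activation, so E[sigma(sqrt(r1) xi)] = 0

# Gaussian data with identity covariance, hence r1 = <Omega_0> = 1
X0 = rng.standard_normal((d, n))
W1 = rng.standard_normal((k1, d))
X1 = sigma(W1 @ X0 / np.sqrt(d))      # single-layer random features, eq. (3), Delta_1 = 1

# coefficients of eq. (10), estimated by Monte Carlo over xi ~ N(0, r1)
r1 = 1.0
xi = np.sqrt(r1) * rng.standard_normal(10**6)
kappa1 = np.mean(xi * sigma(xi)) / r1
kappa_star = np.sqrt(np.mean(sigma(xi) ** 2) - r1 * kappa1**2)

# Gaussian equivalent model, eq. (16): linear signal part plus independent Gaussian noise
Z = rng.standard_normal((k1, n))
X1_lin = kappa1 * (W1 @ X0) / np.sqrt(d) + kappa_star * Z

eig_rf = np.linalg.eigvalsh(X1.T @ X1 / k1)
eig_lin = np.linalg.eigvalsh(X1_lin.T @ X1_lin / k1)

# the two spectra should be close, quantile by quantile, at these sizes
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print("random features :", np.round(np.quantile(eig_rf, qs), 3))
print("Gaussian equiv. :", np.round(np.quantile(eig_lin, qs), 3))
```

The agreement of the quantiles is a spectral statement only; the anisotropic content of Proposition 3.1, which is what the error analysis of Section 4 actually requires, is stronger.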
**Theorem 3.3** (Deterministic equivalent for \(\mathrm{dRF}\)).: _For any deterministic \(A\) and Lipschitz-continuous activation functions \(\sigma_{1},\dots,\sigma_{\ell}\) satisfying \(\mathbf{E}_{\xi\sim\mathcal{N}(0,1)}\,\sigma_{m}(\sqrt{r_{m}}\xi)=0\), under the Assumption (11) above, we have that for any \(z_{\ell}\in\mathbf{C}\setminus\mathbb{R}_{+}\)_ \[\left|\left\langle A\Big{(}\frac{X_{\ell}^{\top}X_{\ell}}{k_{\ell}}-z_{\ell} \Big{)}^{-1}\right\rangle-c_{1}\cdots c_{\ell}\widetilde{m}_{0}\langle A\rangle \right|\prec\frac{\langle AA^{*}\rangle^{1/2}}{\delta^{9}\sqrt{n}}\] _and that_ \[\left|\left\langle A\Big{(}\frac{X_{\ell}X_{\ell}^{\top}}{k_{\ell}}-z_{\ell} \Big{)}^{-1}\right\rangle-\widetilde{m}_{\ell}\langle A\rangle\right|\prec \frac{\langle AA^{*}\rangle^{1/2}}{\delta^{9}\sqrt{n}},\] _where \(\delta:=\mathrm{dist}(z_{\ell},\mathbb{R}_{+})\), and we recursively define_ \[\begin{split}\Sigma_{\mathrm{lin}}^{\ell-1}& \coloneqq(\kappa_{1}^{\ell})^{2}\frac{X_{\ell-1}^{\top}X_{\ell-1}}{k_{\ell-1}}+( \kappa_{*}^{\ell})^{2}I,\\ \widetilde{m}_{\ell}&\coloneqq\frac{n-k_{\ell}}{nz_{ \ell}}+\frac{n}{k_{\ell}}m_{\mu(\Sigma_{\mathrm{lin}}^{\ell-1})\boxtimes\mu _{\mathrm{MP}}^{n/k_{\ell}}}(z_{\ell})\\ -\frac{1}{c_{\ell}}&\coloneqq\widetilde{m}_{\ell}z_ {\ell}(\kappa_{1}^{\ell})^{2},\quad z_{\ell-1}:=c_{\ell}z_{\ell}-\left(\frac{ \kappa_{*}^{\ell}}{\kappa_{1}^{\ell}}\right)^{2}\end{split} \tag{18}\] _for \(\ell\geq 1\) and finally_ \[\widetilde{m}_{0}=\frac{d-n}{nz_{0}}+\frac{d^{2}}{n^{2}}m_{\mu(\Omega_{0}) \boxtimes\mu_{\mathrm{MP}}^{d/n}}\Big{(}\frac{d}{n}z_{0}\Big{)}. \tag{19}\] Proofs of Proposition 3.1 and Theorem 3.3 are given in App. A. _Remark 3.4_.: The same iteration argument has appeared before in [8]. The main difference to our present work is the anisotropic nature of our estimate which allows to test both sample covariance, as well as Gram resolvent against arbitrary deterministic matrices. As we will discuss in the next section, this is crucial in order to provide closed-form asymptotics for the test error of the deep random features model. ### Closed-formed formula for the population covariance In Proposition 3.1 and Theorem 3.3 we iteratively considered \(X_{\ell}^{\top}X_{\ell}/k_{\ell}\) as a sample-covariance matrix with population covariance \[\mathbf{E}_{W_{\ell}}\frac{X_{\ell}^{\top}X_{\ell}}{k_{\ell}}=\mathbf{E}_{w} \,\sigma_{\ell}\Big{(}\frac{X_{\ell-1}^{\top}w}{\sqrt{k_{\ell-1}}}\Big{)} \sigma_{\ell}\Big{(}\frac{w^{\top}X_{\ell-1}}{\sqrt{k_{\ell-1}}}\Big{)}\approx \Sigma_{\mathrm{lin}}^{\ell}\] and from this obtained formulas for the deterministic equivalents for both \(X_{\ell}^{\top}X_{\ell}\) and \(X_{\ell}X_{\ell}^{\top}\). A more natural approach would be to consider \(X_{\ell}X_{\ell}^{\top}/n\) as a sample covariance matrix with population covariance \[\Omega_{\ell}:=\mathbf{E}_{X_{0}}\frac{X_{\ell}X_{\ell}^{\top}}{n}, \tag{20}\] noting that the matrix \(X_{\ell}\) conditioned on \(W_{1},\dots,W_{\ell}\) has independent columns. Theorem A.3 and Proposition A.4 apply also in this setting, but lacking a rigorous expression for \(\Omega_{\ell}\) the resulting deterministic equivalent is less descriptive than the one from Theorem 3.3. A heuristic closed-form formula for the population covariance which is conjectured to be exact was recently derived in [12]. We now discuss this result, and for the sake of completeness provide a derivation in Appendix App. B. 
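Before doing so, we note that evaluating the recursion (18) numerically only requires the Stieltjes transform of a free multiplicative convolution \(\mu\boxtimes\mu_{\mathrm{MP}}^{c}\) for a discrete spectrum \(\mu\). A minimal sketch of such an evaluation, by damped fixed-point iteration of the self-consistent equation (7), is given below; this is an illustrative implementation under our own naming and damping choices, not the scheme of the released code.

```python
import numpy as np

def stieltjes_boxtimes_mp(eigs, c, z, damping=0.5, tol=1e-10, max_iter=10_000):
    """Stieltjes transform m(z) of mu boxtimes MP^c, obtained by iterating the
    self-consistent equation (7); `eigs` is a discrete population spectrum and
    c the aspect ratio (matrix dimension over number of samples). z should lie
    off the positive real axis, e.g. z = -lambda < 0 or Im z > 0."""
    eigs = np.asarray(eigs, dtype=complex)
    m = -1.0 / z                               # large-|z| initialization
    for _ in range(max_iter):
        w = z / (1.0 - c - c * z * m)          # effective argument in eq. (7)
        m_mu = np.mean(1.0 / (eigs - w))       # Stieltjes transform of mu at w
        m_new = w * m_mu / z                   # eq. (7): z m = w m_mu(w)
        if abs(m_new - m) < tol * max(1.0, abs(m)):
            return m_new
        m = damping * m_new + (1.0 - damping) * m
    return m

# sanity check against a plain Marchenko-Pastur spectrum (population Sigma = I, c = 1/2)
rng = np.random.default_rng(1)
d, n = 1000, 2000
X = rng.standard_normal((d, n))
sample_eigs = np.linalg.eigvalsh(X @ X.T / n)
z = -0.3
print(stieltjes_boxtimes_mp(np.ones(d), c=d / n, z=z).real)   # deterministic prediction
print(np.mean(1.0 / (sample_eigs - z)))                        # empirical resolvent trace
```

Plugging such a routine into (18), layer by layer, yields the deterministic quantities \(\widetilde{m}_{\ell}\) that enter the error formulas of Section 4.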
Consider the sequence of matrices \(\{\Omega_{\ell}^{\mathrm{lin}}\}_{\ell}\) defined by the recursion \[\Omega_{\ell+1}^{\mathrm{lin}}=\kappa_{1}^{(\ell+1)2}\frac{W_{\ell+1}\Omega_{ \ell}^{\mathrm{lin}}W_{\ell+1}^{\top}}{k_{\ell}}+\kappa_{*}^{(\ell+1)2}I_{k_{ \ell+1}}. \tag{21}\] with \(\Omega_{0}^{\mathrm{lin}}:=\Omega_{0}\). Informally, \(\Omega_{0}^{\mathrm{lin}}\) provides an asymptotic approximation of \(\Omega_{\ell}\) in the sense that the normalized distance \(||\Omega_{\ell}^{\mathrm{lin}}-\Omega_{\ell}||_{F}/\sqrt{d}\) is of order \(\mathcal{O}(\nicefrac{{1}}{{\sqrt{d}}})\). Besides, the recursion (21) implies that \(\Omega_{\ell}^{\mathrm{lin}}\) can be expressed as a sum of products of Gaussian matrices (and transposes thereof), and affords a straightforward way to derive an analytical expression its asymptotic spectral distribution. This derivation is presented in App. B. It is an interesting question whether an approximate formula for the population covariance matrix like the one in Equation (21) can be obtained indirectly via Theorem 3.3. There is extensive literature on this _inverse problem_, i.e. how to infer spectral properties of the population covariance spectrum from the sample covariance spectrum, e.g. [49] but we leave this avenue to future work. ### Consistency of Theorem 3.3 and the approximate population covariance What we can note, however, is that Equation (21) is _consistent_ with Theorem 3.3. We demonstrate this in case of equal dimensions \(n=d=k_{1}=\dots=k_{\ell}\) to avoid unnecessary technicalities due to the zero eigenvalues. We define \[\widehat{\mu}_{\ell}:=\mu\Big{(}\frac{X_{\ell}^{\top}X_{\ell}}{k_{\ell}}\Big{)} =\widetilde{\mu}_{\ell}:=\mu\Big{(}\frac{X_{\ell}X_{\ell}^{\top}}{n}\Big{)} \tag{22}\] and recall that Proposition 3.1 implies that \[\widehat{\mu}_{\ell}\approx((\kappa_{1}^{l})^{2}\otimes\widehat{\mu}_{l-1} \oplus(\kappa_{*}^{l})^{2})\boxtimes\mu_{\mathrm{MP}}. \tag{23}\] On the other hand (6) applied to the sample covariance matrix \(X_{\ell}X_{\ell}^{\top}/n\) with population covariance \(\Omega_{\ell}\approx\Omega_{\ell}^{\mathrm{lin}}\) implies that \[\begin{split}\widetilde{\mu}_{\ell}&\approx\mu( \Omega_{\ell}^{\mathrm{lin}})\boxtimes\mu_{\mathrm{MP}}\\ &=\mu\Big{(}(\kappa_{1}^{\ell})^{2}\frac{W_{\ell}\Omega_{\ell-1} ^{\mathrm{lin}}W_{\ell}^{\top}}{k_{\ell-1}}+(\kappa_{*}^{\ell})^{2}I_{k_{ \ell}}\Big{)}\boxtimes\mu_{\mathrm{MP}}\\ &\approx\Big{(}(\kappa_{1}^{\ell})\otimes\mu(\Omega_{\ell-1}^{ \mathrm{lin}})\boxtimes\mu_{\mathrm{MP}}\oplus(\kappa_{*}^{\ell})^{2}\Big{)} \boxtimes\mu_{\mathrm{MP}}\\ &\approx\Big{(}(\kappa_{1}^{\ell})\otimes\widetilde{\mu}_{\ell-1 }\oplus(\kappa_{*}^{\ell})^{2}\Big{)}\boxtimes\mu_{\mathrm{MP}},\end{split} \tag{24}\] demonstrating that both approaches lead to the same recursion. Here in the third step we applied (6) to the sample covariance matrix \(\sqrt{\Omega_{\ell-1}^{\mathrm{lin}}}W_{\ell}^{\top}\), and in the fourth step used the first approximation for \(\ell\) replaced by \(\ell-1\). Gaussian universality of the test error In the second part of this work, we discuss how the results on the asymptotic spectrum of the empirical and population covariances of the features can be used to provide sharp expressions for the test and training errors (5) when the labels are generated by a deep random neural network: \[f_{*}(\mathrm{x}^{\mu})=\sigma^{*}\left(\frac{\theta_{*}^{\top}\varphi^{*}( \mathrm{x}^{\mu})}{\sqrt{k^{*}}}\right). 
\tag{25}\] The feature map \(\varphi^{*}\) denotes the composition \(\varphi_{L^{*}}^{\star}\circ...\circ\varphi_{1}^{\star}\) of the \(L^{*}\) hidden layers: \[\varphi_{\ell}^{*}(\mathrm{x})=\sigma_{\ell}^{*}\left(\frac{1}{\sqrt{k_{\ell-1}^{\star}}}W_{\ell}^{*}\cdot\mathrm{x}\right),\] and \(\theta_{*}\in\mathbb{R}^{k^{*}}\) is the vector of last layer weights. To alleviate notation, we denote \(k^{*}:=k_{L}^{*}\). The weight matrices \(\{W_{\ell}^{*}\}_{\ell\in[L^{*}]}\) have i.i.d. Gaussian entries sampled from \(\mathcal{N}(0,\Delta_{\ell}^{*})\). Note that we do not require the sequence of activations \(\{\sigma_{\ell}^{*}\}_{\ell}\) and widths \(\{\gamma_{\ell}^{*}:=\nicefrac{{k_{\ell}^{*}}}{{d}}\}_{\ell}\) to match those of the learner dRF (2). We address in succession * The well-specified case where the target and learner networks share the same intermediate layers (i.e. same architecture, activations and weights) \(\varphi_{\ell}^{*}=\varphi_{\ell}\), \(\ell\in[L]\) with \(L^{*}=L\), and the readout of the dRF is trained using ridge regression. This is equivalent to the interesting setting of ridge regression on a linear target, with features drawn from a non-Gaussian distribution, resulting from the propagation of Gaussian data through several non-linear layers. * The general case where the target and learner possess generically distinct architectures, activations and weights, and a generic convex loss. In both cases, we provide a sharp asymptotic characterization of the test error. Furthermore, we establish the equality of the latter with the test error of an equivalent learning problem on _Gaussian samples_ with matching population covariance, thereby showing the Gaussian universality of the test error. In the well-specified case, our results are rigorous, and make use of the deterministic equivalent provided by Theorem 3.3. In the fully generic case, we formulate a conjecture, which we strongly support with finite-size numerical experiments. ### Well-specified case We first establish the Gaussian universality of the test error of dRFs in the matched setting \(\varphi=\varphi^{*}\), for a readout layer trained using a square loss. This corresponds to \(\mathcal{Y}=\mathbb{R}\), \(\ell(y,\hat{y})=\nicefrac{{1}}{{2}}(y-\hat{y})^{2}\). This case is particularly simple since the empirical risk minimization problem (4) admits the following closed form solution: \[\hat{\theta}=\nicefrac{{1}}{{\sqrt{k}}}(\lambda I_{k}+\nicefrac{{1}}{{k}}X_{L}X_{L}^{\top})^{-1}X_{L}y \tag{26}\] where we remind the reader that \(X_{L}\in\mathbb{R}^{k\times n}\) is the matrix obtained by stacking the last layer features column-wise and \(y\in\mathbb{R}^{n}\) is the vector of labels. For a given target function, computing the test error boils down to a random matrix theory problem depending on variations of the trace of deterministic matrices times the resolvent of the features sample covariance matrices (cf. App. C for a derivation): \[\epsilon_{g}(\hat{\theta})=\Delta\left(\left\langle\Omega_{L}\left(\lambda I_{k}+\nicefrac{{1}}{{k}}X_{L}X_{L}^{\top}\right)^{-1}\right\rangle+1\right)\] \[\qquad\qquad-\lambda(\lambda-\Delta)\partial_{\lambda}\left\langle\Omega_{L}\left(\lambda I_{k}+\nicefrac{{1}}{{k}}X_{L}X_{L}^{\top}\right)^{-1}\right\rangle \tag{27}\] Applying Theorem 3.3 yields the following corollary: **Corollary 4.1** (Ridge universality of matched target).: _Let \(\lambda>0\)._
In the asymptotic limit \(n,d,k_{\ell}\to\infty\) with fixed \(\mathcal{O}(1)\) ratios \(\alpha=\nicefrac{{n}}{{d}}\), \(\gamma_{\ell}:=\nicefrac{{k_{\ell}}}{{d}}\) and under the assumptions of Theorem 3.3, the asymptotic test error of the ridge estimator (26) on the target (25) with \(L=L^{*}\) and \(\varphi_{\ell}^{*}=\varphi_{\ell}\) and additive Gaussian noise with variance \(\Delta>0\) is given by:_ \[\epsilon_{g}(\hat{\theta})\xrightarrow{k\to\infty}\epsilon_{g}^{*} =\Delta\left(\langle\Omega_{L}\rangle\widetilde{m}_{L}(-\lambda)+1\right)\] \[\qquad\qquad-\lambda(\lambda-\Delta)\langle\Omega_{L}\rangle \partial_{\lambda}\widetilde{m}_{L}(-\lambda) \tag{28}\] _where \(\widetilde{m}_{L}\) can be recursively computed from (18) respectively. In particular, this implies Gaussian universality of the asymptotic mean-squared error in this model, since (28) exactly agrees with the asymptotic test error of ridge regression on Gaussian data \(\mathrm{x}\sim\mathcal{N}(0_{d},\Omega_{L})\) derived in (50)._ A detailed derivation of (27) and Corollary 4.1 is given in App. C, together with a discussion of possible extensions to deterministic last-layer weights and general targets. Note that, while it is not needed to establish the Gaussian equivalence of ridge dRF regression in the well-specified case, the trace of the population covariance \(\langle\Omega_{L}\rangle\) can be explicitly computed from the closed-form formula (21). ### General case Despite the major progress stemming from the application of the random matrix theory toolbox to learning problems, the application of the latter has been mostly limited to quadratic problems where a closed-form expression of the estimators, such as (26), are available. Proving universality results akin to Corollary 4.1 beyond quadratic problems is a challenging task, which has recently been the subject of intense investigation. In the context of generalized linear estimation (4), universality of the test error for the \(L=1\) random features model under a generic convex loss function was heuristically studied in [5], where the authors have shown that the asymptotic formula for the test error obtained under the Gaussian design assumption perfectly agreed with finite-size simulations with the true features. This Gaussian universality of the test error was later proven by [37] by combining a Lindeberg interpolation scheme with a generalized central limit theorem. Our goal in the following is to provide an analogous contribution as [5] to the case of multi-layer random features. This result builds on a rigorous, closed-form formula for the asymptotic test error of misspecified generalized linear estimation in the high-dimensional limit considered here, which was derived in [11]. We show that in the high-dimensional limit the asymptotic test error for the model introduced in Section 2 is in the _Gaussian universality class_. 
More precisely, the test error of this model is asymptotically equivalent to the test error of an equivalent Gaussian covariate model (GCM) consisting of doing generalized linear estimation on a dataset \(\hat{\mathcal{D}}=\{v^{\mu},\hat{y}^{\mu}\}_{\mu\in[n]}\) with labels \(\hat{y}^{\mu}=f_{\star}(\nicefrac{{1}}{{\sqrt{\kappa^{2}}\theta_{\star}^{ \top}}}u^{\mu})\) and jointly Gaussian covariates: \[(u,v)\sim\mathcal{N}\left(\begin{array}{cc}\Psi_{L^{\star}}&\Phi_{L^{\star}L }\\ \Phi_{L^{\star}L}^{\top}&\Omega_{L}\end{array}\right) \tag{29}\] where we recall \(\Omega_{L}\) is the variance of the model features (20) and \(\Phi\in\mathbb{R}^{k^{\star}\times k}\) and \(\Psi\in\mathbb{R}^{k^{\star}\times k^{\star}}\) are the covariances between the model and target features and the target variance respectively: \[\Phi_{L^{\star}L}\coloneqq\mathbf{E}\left[\varphi^{\star}(\mathrm{x})\varphi (\mathrm{x})^{\top}\right],\;\Psi_{L^{\star}}\coloneqq\mathbf{E}\left[ \varphi^{\star}(\mathrm{x})\varphi^{\star}(\mathrm{x})^{\top}\right] \tag{30}\] This result adds to a stream of recent universality results in high-dimensional linear estimation [11, 38, 51], and generalizes the random features universality of [15, 36, 37] to \(L>1\). It can be summarized in the following conjecture: **Conjecture 4.2**.: _In the high-dimensional limit \(n,d,k_{\ell}\to\infty\) at fixed \(\mathcal{O}(1)\) ratios \(\alpha\coloneqq\nicefrac{{n}}{{d}}\) and \(\gamma_{\ell}\coloneqq\nicefrac{{k_{\ell}}}{{d}}\), the test error of the empirical risk minimizer (4) trained on \(\mathcal{D}=\{(\mathrm{x}^{\mu},y^{\mu})\}_{\mu\in[n]}\) with covariates \(\mathrm{x}^{\mu}\sim\mathcal{N}(0_{d},\Omega_{0})\) and labels from (25) is equal to the one of a Gaussian covariate model (29) with matching second moments \(\Psi,\Phi,\Omega\) as defined in (20) and (30)._ We go a step further and provide a sharp asymptotic expression for the test error. Construct recursively the sequence Figure 1: Learning curves \(\epsilon_{g}(\alpha)\) for ridge regression (\(\sigma_{\star}=id\), \(\ell(y,z)=\nicefrac{{1}}{{2}}(y-z)^{2}\), and \(g(y,\hat{y})=(y-\hat{y})^{2}\)). Red dots correspond to numerical simulations on the learning model (2) (25), averaged over \(20\) runs. The solid line correspond to sharp asymptotic characterization provided by conjecture 4.3, and detailed in App. D. (left) 2-layers target (\(L^{\star}=1\),\(\sigma_{1}^{\star}=\mathrm{sign}\)), (right) single-layer target (\(L^{\star}=0\)). Both are learnt with a \(2-\)hidden layers RF (2) with \(\sigma_{1,2}(x)=\tanh(2x)\) activation and regularization \(\lambda=0.001\). of matrices \[\Psi_{\ell+1}^{\mathrm{lin}}=\left(\kappa_{1}^{*(\ell+1)}\right)^{2}\frac{W_{\ell+ 1}^{*}\Psi_{\ell}^{\mathrm{lin}}W_{\ell+1}^{*\top}}{k_{\ell}^{*}}+\left(\kappa_ {*}^{*(\ell+1)}\right)^{2}I_{k_{\ell+1}^{*}} \tag{31}\] with the initial condition \(\Omega_{0}^{\mathrm{lin}}=\Psi_{0}^{\mathrm{lin}}\coloneqq\Omega_{0}\). Further define \[\Phi_{L^{*}L}^{\mathrm{lin}}=\left(\prod_{\ell=L^{*}}^{1}\frac{\kappa_{1}^{* \ell}W_{\ell}^{*}}{\sqrt{k_{\ell}^{*}}}\right)\cdot\Omega_{0}\cdot\left(\prod_ {\ell=1}^{L}\frac{\kappa_{1}^{\ell}W_{\ell}^{\top}}{\sqrt{k_{\ell}}}\right). \tag{32}\] The sequence \(\{\kappa_{1}^{*\ell}\kappa_{*}^{*\ell}\}_{\ell=1}^{L^{*}}\) is define by (10) with \(\sigma_{\ell}^{*},\Delta_{\ell}^{*}\). In the special case \(L^{*}=0\), which correspond to a single-index target function, the first product in \(\Phi_{L^{*}L}^{\mathrm{lin}}\) should be replaced by \(I_{d}\). 
This particular target architecture is also known, in the case \(L=1\), as the _hidden manifold model_[52, 5] and affords a stylized model for structured data. The present paper generalizes these studies to arbitrary depths \(L\). One is then equipped to formulate the following, stronger, conjecture: **Conjecture 4.3**.: _In the same limit as in Conjecture 4.2, the test error of the empirical risk minimizer (4) trained on \(\mathcal{D}=\{(\mathrm{x}^{\mu},y^{\mu})\}_{\mu\in[n]}\) with covariates \(\mathrm{x}^{\mu}\sim\mathcal{N}(0_{d},\Omega_{0})\) and labels from (25) is equal to the one of a Gaussian covariate model (29) with the matrices \(\Psi_{L^{*}}^{\mathrm{lin}},\Omega_{L}^{\mathrm{lin}},\Phi_{L^{*}L}^{\mathrm{ lin}}\) (21),(32)._ Conjecture 4.3 allows to give a fully analytical sharp asymptotic characterization of the test error, which we detail in App. D. Importantly, observe that it also affords compact closed-form formulae for the population covariances \(\Omega_{L},\Phi_{L^{*}L},\Psi_{L^{*}}\). In particular the spectrum of \(\Psi_{L^{*}}^{\mathrm{lin}},\Omega_{L}^{\mathrm{lin}}\) can be analytically computed and compares excellently with empirical numerical simulations. We report those results in detail in App. B. Figs. 1 and 2 present the resulting theoretical curve and contrasts them to numerical simulations in dimensions \(d=1000\), revealing an excellent agreement. ## 5 Depth-induced implicit regularization An informal yet extremely insightful takeaway from Conjecture 4.3, and in particular the closed-form expressions (21), is that the activations in a deep non-linear dsf (2) share the same population statistics as the activations in a deep _noisy_ linear network, with layers \[\varphi_{\ell}^{\mathrm{lin}}(\mathrm{x})=\kappa_{1}^{\ell}\frac{W_{\ell}^{ \top}\,\mathrm{x}}{\sqrt{k_{\ell-1}}}+\kappa_{*}^{\ell}\xi_{\ell}, \tag{33}\] where \(\xi_{\ell}\sim\mathcal{N}(0_{k_{\ell}},I_{k_{\ell}})\) is a Gaussian noise term. It is immediate to see that (33) lead to the same recursion as (21). This observation, which was made in the concomitant work [12], essentially allows to equivalently think of the problem of learning using a dsf (2) as one of learning with linear noisy network. Indeed, Conjecture 4.3 essentially suggests that the asymptotic test error depends on the second-order statistics of the last layer activations, shared between the dsf and the equivalent linear network. Finally, it is worthy to stress that, while the learner dsf is deterministic conditional Figure 2: Learning curves \(\epsilon_{g}(\alpha)\) for logistic regression (\(\sigma_{*}=\mathrm{sign}\), \(\ell(y,z)=\ln(1+e^{-yz})\) and metric \(g(y,\hat{y})=1-\Theta(y\hat{y})\)). Red dots correspond to numerical simulations on the learning model (2) (25), averaged over \(20\) runs. The solid line correspond to sharp asymptotic characterization provided by conjecture 4.3, and detailed in App. D. (left) single-layer target (\(L^{*}=0\)), (right) two-layer target (\(L^{*}=1\), \(\sigma_{1}^{*}=\mathrm{erf}\)) (25) hidden sign layer. Both are learnt with a depth \(L=2\) dsf (2) with activation \(\sigma_{1,2}(x)=\tanh(2x)\) and regularization \(\lambda=0.05\) (top) and \(\sigma_{1,2}(x)=\mathrm{erf}(x)\) and \(\lambda=0.1\) (bottom). on the weights \(\{W_{\ell}\}\), the equivalent linear network (33) is intrinsically stochastic in nature due to the effective noise injection \(\xi_{\ell}\) at each layer. 
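A small numerical sketch can make this equivalence concrete. Assuming \(\Omega_{0}=I_{d}\), \(\Delta_{\ell}=1\) and \(\tanh\) activations (illustrative choices of ours), the snippet below propagates Gaussian inputs through both the non-linear dRF layers (2) and the noisy linear layers (33), and compares their empirical population covariances with the closed-form recursion (21). The noisy linear network matches (21) up to Monte Carlo noise, while the dRF matches it up to the \(\mathcal{O}(1/\sqrt{d})\) corrections discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)
d, widths = 100, [120, 140]           # small illustrative sizes (L = 2)
sigma = np.tanh
n_mc = 50_000                          # Monte Carlo samples for the covariances

# frozen weights (Delta_ell = 1) and kappa coefficients from the recursions (9)-(10)
Ws, kappas, r = [], [], 1.0            # Omega_0 = I_d  =>  r_1 = 1
for k_in, k_out in zip([d] + widths[:-1], widths):
    Ws.append(rng.standard_normal((k_out, k_in)))
    xi = np.sqrt(r) * rng.standard_normal(10**6)
    k1 = np.mean(xi * sigma(xi)) / r
    ks = np.sqrt(np.mean(sigma(xi) ** 2) - r * k1**2)
    kappas.append((k1, ks))
    r = np.mean(sigma(xi) ** 2)        # r_{ell+1}, eq. (9)

# closed-form recursion (21) for Omega_ell^lin
Omega_lin = np.eye(d)
for W, (k1, ks) in zip(Ws, kappas):
    Omega_lin = k1**2 * W @ Omega_lin @ W.T / W.shape[1] + ks**2 * np.eye(W.shape[0])

# empirical covariances of (i) the dRF activations (2) and (ii) the noisy linear layers (33)
X = rng.standard_normal((d, n_mc))
H_rf, H_lin = X, X
for W, (k1, ks) in zip(Ws, kappas):
    fan_in = W.shape[1]
    H_rf = sigma(W @ H_rf / np.sqrt(fan_in))
    H_lin = k1 * W @ H_lin / np.sqrt(fan_in) + ks * rng.standard_normal((W.shape[0], n_mc))
Omega_rf = H_rf @ H_rf.T / n_mc
Omega_noisy = H_lin @ H_lin.T / n_mc

def rel_dist(A, B):
    return np.linalg.norm(A - B) / np.linalg.norm(B)

print("noisy linear vs (21):", rel_dist(Omega_noisy, Omega_lin))
print("dRF          vs (21):", rel_dist(Omega_rf, Omega_lin))
```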
Statistical common sense dictates that this effective noise injection has a regularizing effect, by introducing some randomness in the learning, and helps mitigating overfitting. Since the effective noise is a product of the propagation through a non-linear layer, this suggest that _adding random non linear layers induces an implicit regularization_. We explore this intuition in this last section. Observe first that the equivalent noisy linear network (33) reduces to a simple shallow noisy linear model \[\hat{g}_{\theta}^{\mathrm{lin}}(\mathrm{x})=\sigma\left(\frac{1}{\sqrt{k}} \theta^{\top}\left(A_{L}\cdot\mathrm{x}+\xi_{L}\right)\right) \tag{34}\] where the effective weight matrix \(A\) is \[A_{L}:=\prod_{\ell=1}^{L}\left(\kappa_{1}^{\ell}\frac{W_{\ell}}{\sqrt{k_{\ell- 1}}}\right)\] and the effective noise \(\xi_{L}\) is Gaussian with covariance \(C_{\xi}^{L}\) \[C_{\xi}^{L}=\sum_{\ell_{0}=1}^{L-1}(\kappa_{*}^{\ell_{0}})^{2}\Big{(}\prod_{ \ell=\ell_{0}+1}^{L}\frac{\pi\left\{W_{\ell}^{\top}\right\}}{\sqrt{k_{\ell-1 }}}\Big{)}^{\top}\Big{(}\prod_{\ell=\ell_{0}+1}^{L}\frac{\pi\left\{W_{\ell-1 }^{\top}\right\}}{\sqrt{k_{\ell-1}}}\Big{)}+(\kappa_{*}^{L})^{2}I_{k}.\] The signal-plus-noise structure of the equivalent linear features (34) has profound consequences on the level of the learning curves of the model (2): * When \(\alpha=1\), there are as many training samples as the dimension of the data \(d-\) dimensional submanifold \(A_{L}\,\mathrm{x}\), resulting in a standard interpolation peak. The noise part \(\xi_{L}\) induces an implicit regularization which helps mitigate the overfitting. * As \(\alpha=\gamma_{L}\), the number of training samples matches the dimension \(k_{L}\) of the noise, and the _noise_ part is used to interpolate the training samples, resulting in another peak. This second peak is referred to as the non-linear peak by [53]. Therefore, there exists an interplay between the two peaks, with higher noise \(\xi_{L}\) both helping to mitigate the linear peak, and aggravating the non-linear peak. The depth of the network plays a role in that it modulates the amplitudes of the signal part and the noise part, depending on the activation through the recursions (10). We give two illustrations of the regularization effect of depth in Fig. 3. Two activations are considered : \(\sigma_{a}=\tanh\) (for which the noise level, as measure by \(\operatorname{tr}C_{\xi}^{L}\) decreases with depth), and a very weakly non-linear activation Figure 3: Learning curves for ridge regression on a \(1\)-hidden layer target function (\(\gamma_{1}^{*}=2\), \(\sigma_{1}^{*}=\mathrm{sign}\)) using a \(L-\)hidden layers learner with widths \(\gamma_{1}=...=\gamma_{L}=4\) and \(\sigma_{1,...,L}=\tanh\) activation (left) or \(\sigma_{1,...,L}(x)=1.1\times\mathrm{sign}(x)\times\min(2,|x|)\) clipped linear activation (right), for depths \(1\leq L\leq 6\). The regularization is \(\lambda=0.001\). Solid lines represent theoretical curves evaluated from the sharp characterization of conjecture 4.3, while numerical simulations, averaged over \(50\) runs, are indicated by dots. The linear peak can be observed at \(\alpha=1\), while the non-linear peak occurs for \(\alpha=\gamma=4\)[53]. Despite sharing the same architecture, the use of different activations induces different implicit regularizations, leading to the linear (resp. non-linear) peak being further suppressed as the depth increases for the clipped linear activation (resp. tanh activation). 
\(\sigma_{b}(x)=1.1\times\text{sign}(x)\times\min(2,|x|)\), corresponding to a linear function clipped between \(-2.2\) and \(2.2\) (for which tr \(C_{\xi}^{L}\) increases with depth). Note that, because for \(\sigma_{a}\) the effective noise decreases with depth, the linear peak is aggravated for deeper networks, while the non-linear peak is simultaneously suppressed. Conversely, for \(\sigma_{b}\), additional layers introduce more noise and cause a higher non-linear peak, while the induced implicit regularization mitigates the linear peak. Further discussion about the effect of architecture design on the generalization ability of dRFs (2) is provided in App. E. ## 6 Conclusion We study the problem of learning a deep random network target function by training the readout layer of a deep network, with frozen random hidden layers (deep Random Features). We first prove an asymptotic deterministic equivalent for the conjugate kernel and sample covariance of the activations in a deep Gaussian random networks. This result is leveraged to establish a sharp asymptotic characterization of the test error in the specific case where the learner and teacher networks share the same intermediate layers, and the readout is learnt using a ridge loss. This proves the Gaussian universality of the test error of ridge regression on non-linear features corresponding to the last layer activations. In the fully generic case, we conjecture a sharp asymptotic formula for the test error, for fully general target/learner architectures and convex loss. The formulas suggest that the dRF behaves like a linear noisy network, characterized by an implicit regularization. We explore the consequences of this equivalence on the interplay between the architecture of the dRF and its generalization ability. ## Acknowledgements We thank Gabriele Sicuro for discussion during the course of this project. BL acknowledges support from the _Choose France - CNRS AI Rising Talents_ program. DD is supported by ETH AI Center doctoral fellowship. DS is supported by SNSF Ambizione Grant PZ00P2_209089. HC acknowledges support from the ERC under the European Union's Horizon 2020 Research and Innovation Program Grant Agreement 714608-SMiLe.
2308.15731
Measurements of the Hubble constant from combinations of supernovae and radio quasars
In this letter, we propose an improved cosmological model independent method of determining the value of the Hubble constant $H_0$. The method uses unanchored luminosity distances $H_0d_L(z)$ from SN Ia Pantheon data combined with angular diameter distances $d_A(z)$ from a sample of intermediate luminosity radio quasars calibrated as standard rulers. The distance duality relation between $d_L(z)$ and $d_A(z)$, which is robust and independent of any cosmological model, allows to disentangle $H_0$ from such combination. However, the number of redshift matched quasars and SN Ia pairs is small (37 data-points). Hence, we take an advantage from the Artificial Neural Network (ANN) method to recover the $d_A(z)$ relation from a network trained on full 120 radio quasar sample. In this case, the result is unambiguously consistent with values of $H_0$ obtained from local probes by SH0ES and H0LiCOW collaborations. Three statistical summary measures: weighted mean $\widetilde{H}_0=73.51(\pm0.67) {~km~s^{-1}~Mpc^{-1}}$, median $Med(H_0)=74.71(\pm4.08) {~km~s^{-1}~Mpc^{-1}}$ and MCMC simulated posterior distribution $H_0=73.52^{+0.66}_{-0.68} {~km~s^{-1}~Mpc^{-1}}$ are fully consistent with each other and the precision reached $1\%$ level. This is encouraging for the future applications of our method. Because individual measurements of $H_0$ are related to different redshifts spanning the range $z=0.5 - 2.0$, we take advantage of this fact to check if there is any noticeable trend in $H_0$ measurements with redshift of objects used for this purpose. However, our result is that the data we used strongly support the lack of such systematic effects.
Tonghua Liu, Xiyan Yang, Zisheng Zhang, Jieci Wang, Marek Biesiada
2023-08-30T03:15:17Z
http://arxiv.org/abs/2308.15731v1
# Measurements of the Hubble constant from combinations of supernovae and radio quasars ###### Abstract In this letter, we propose an improved cosmological model independent method of determining the value of the Hubble constant \(H_{0}\). The method uses unanchored luminosity distances \(H_{0}d_{L}(z)\) from SN Ia Pantheon data combined with angular diameter distances \(d_{A}(z)\) from a sample of intermediate luminosity radio quasars calibrated as standard rulers. The distance duality relation between \(d_{L}(z)\) and \(d_{A}(z)\), which is robust and independent of any cosmological model, allows to disentangle \(H_{0}\) from such combination. However, the number of redshift matched quasars and SN Ia pairs is small (37 datapoints). Hence, we take an advantage from the Artificial Neural Network (ANN) method to recover the \(d_{A}(z)\) relation from a network trained on full 120 radio quasar sample. In this case, the result is unambiguously consistent with values of \(H_{0}\) obtained from local probes by SH0ES and H0LiCOW collaborations. Three statistical summary measures: weighted mean \(\widetilde{H}_{0}=73.51(\pm 0.67)\ {\rm km\ s^{-1}\ Mpc^{-1}}\), median \(Med(H_{0})=74.71(\pm 4.08)\ {\rm km\ s^{-1}\ Mpc^{-1}}\) and MCMC simulated posterior distribution \(H_{0}=73.52^{+0.66}_{-0.68}\ {\rm km\ s^{-1}\ Mpc^{-1}}\) are fully consistent with each other and the precision reached \(1\%\) level. This is encouraging for the future applications of our method. Because individual measurements of \(H_{0}\) are related to different redshifts spanning the range \(z=0.5-2.0\), we take advantage of this fact to check if there is any noticeable trend in \(H_{0}\) measurements with redshift of objects used for this purpose. However, our result is that the data we used strongly support the lack of such systematic effects. Introduction Over the last decades, one of the most important achievements in observational cosmology were very precise measurements of tiny anisotropies in the cosmic microwave background radiation (CMB) [1; 2; 3]. During the mission of _Planck_ satellite not only temperature fluctuations (T) but also E-modes of CMB polarization pattern have been accurately measured and TT, TE, EE power spectra have been precisely determined up to very high multipole moments \(l\sim 2500\) (see [4] and references therein). The power spectrum of temperature fluctuations revealed the so called acoustic peaks enabling direct, empirical studies of the early Universe and initial conditions for the formation of the large scale structure. CMB data combined with baryonic acoustic oscillations (BAO) measurements ([5; 6; 7; 8; 9; 10; 11] and references therein) in large galaxy catalogs led modern cosmology into an era of precision cosmology. However, with increased precision accuracy issues emerged [12; 13; 14]. One of the most widely known issue is the inconsistency between the values of the Hubble constant \(H_{0}\) (current expansion rate of the Universe) obtained by different techniques. The CMB measurement using the _Planck_ data yielded \(H_{0}=67.4\pm 0.5\ {\rm km\ s^{-1}\ Mpc^{-1}}\) at the \(68\%\) confidence level (CL) [4]. This result is in tension of more than \(4\sigma\) with the value of \(H_{0}=73.2\pm 1.3\ {\rm km\ s^{-1}\ Mpc^{-1}}\) at the \(68\%\) CL reported by SH0ES (_Supernova \(H_{0}\) for the Equation of State_) collaboration using Type Ia supernovae (SN Ia) calibrated by local Cepheid variable stars [15]. 
Such tension has reached 4.4\(\sigma\)-6\(\sigma\) with the accumulation of precise astrophysical observations. It should be emphasized that the Hubble constant obtained from the CMB data requires the assumption of \(\Lambda\)CDM cosmological model. The discrepancy between just two methods necessitates involvement of independent alternative techniques. Excellent and comprehensive review by [16] contains a detailed discussion of such alternative methods and current results obtained within their frameworks. Two of them are worth mentioning here. First is the possibility to directly measure the Hubble constant from time-delays between multiple images in strongly lensed systems [17]. The H0LiCOW (\(H_{0}\) Lenses in COSMOGRAIL's Wellspring) collaboration reported the value of the Hubble constant \(H_{0}=73.3^{+1.7}_{-1.8}\ {\rm km\ s^{-1}\ Mpc^{-1}}\) obtained from the joint analysis of time-delay measurements of six lensed quasars with the assumption of a spatially flat \(\Lambda{\rm CDM}\) model [18]. See refs [19; 20; 21; 22] for more on the strong lensing time-delay method to measure \(H_{0}\). Another alternative method comes from the standard sirens, which is promising in the era of gravitational wave (GW) astronomy [29; 30; 31]. Namely, from the detected waveform of coalescing binary systems one is able to measure the so called chirp mass and luminosity distance [32; 33; 34]. Besides the parallax method, this is the only one possibility in astronomy and cosmology where distance can be directly measured without reference to the distance ladder calibration. The major limitation is that in GW domain redshift of the source is rarely available. The famous event GW170817 whose optical counterpart has been identified provided such measurement [23] yielding \(H_{0}=70.0^{+12.0}_{-8.0}\ {\rm km\ s^{-1}\ Mpc^{-1}}\), which after further correction for peculiar motion has been improved [24] to \(H_{0}=68.3^{+4.6}_{-4.5}\ {\rm km\ s^{-1}\ Mpc^{-1}}\). These results are still far from the precision cosmology standards. Since there is no evidence of considerable systematic uncertainties in either the _Planck_ data [4; 16] and the local measurements [25; 26; 27], one natural way is to seeks the source of this discrepancy in modifications of the cosmological model. Thus, there is a growing interest in alternative cosmological models beyond \(\Lambda{\rm CDM}\), such as the Early Dark Energy models [28; 35; 36], interacting dark energy models [37; 38; 39; 40], modified gravity \(f(R)\)[41; 42; 43] and \(f(T)\)[44; 45] models, to give just a few examples. All this highlights the importance of any alternative, cosmological model independent methods of measuring the Hubble constant. Recently, a cosmological model-independent approach to assess the Hubble constant was proposed by Renzi & Silvestri [46]. Their method relies on the distance duality relation (DDR) [47], which links together luminosity distance \(d_{L}(z)\) and angular diameter distance \(d_{A}(z)\). The DDR is robust, i.e. valid in any metric theory of gravity (not only in Friedmann-Lemaitre-Robertson-Walker (FLRW) model) provided that number of photons in the beam is conserved (i.e. negligible absorption). For this purpose they used un-anchored SN Ia as source of \(d_{L}(z)\), line of sight and transverse BAO data as a source of \(H(z)d_{A}(z)\) and combination and cosmic chronometers as a source of \(H(z)\). 
Technically, the distance reconstruction steps necessary in their methodology were performed using the Gaussian process (GP) regression. The bottleneck of their study is the scarcity of \(d_{A}(z)\) data points: 7 BAO data points [8; 48; 49; 50; 51; 52; 53; 54; 55] and 30 data points for cosmic chronometers [56; 57; 58; 59; 60]. In this letter we extend the work of Renzi & Silvestri [46] in two important aspects. First, we use the sample of 120 intermediate-luminosity radio quasars compiled in Cao et al. [61] as a source of \(d_{A}(z)\). Besides much richer sample, this allows us to follow the methodology of Renzi & Silvestri [46] directly, without need to use Hubble functions \(H(z)\) from cosmic chronometers. Second improvement is that instead of Gaussian processes we use a machine learning method - Artificial Neural Network (ANN) algorithm to perform necessary reconstruction steps. More detailed description of the methodology, the samples used and the ANN method will be given below in Section II. In Section III, we show our results and discussion. We conclude in Section Methodology and observational data ### Methodology of measuring the Hubble constant Modern cosmology is based on the notion of homogeneous and isotropic Universe, well supported by the high degree of CMB isotropy. The geometry of space-time in the largest scales is described by the FLRW metric \[ds^{2}=cdt^{2}-\frac{a(t)^{2}}{1-Kr^{2}}dr^{2}-a(t)^{2}r^{2}d\Omega(\theta,\phi) ^{2}, \tag{1}\] where \(c\) is the speed of light, \(a(t)\) is the scale factor, and \(K\) is dimensionless curvature taking one of three values \(\{-1,0,1\}\) corresponding to closed, flat and open universe, respectively. The cosmic curvature parameter \(\Omega_{\rm K}\) is related to \(K\) and the Hubble constant \(H_{0}\), as \(\Omega_{\rm K}=-c^{2}K/a_{0}^{2}H_{0}^{2}\). It is evident, from the FLRW metric, that spatial lengths (e.g. wavelengths or distances between galaxies) change in time according to the scale factor \(a(t)\). The scale factor is therefore related to the redshift \(z\) of the source by \(a=1/(1+z)\) and the expansion rate at redshift \(z\) is \(H(z)=\dot{a}/a\). In the curved FLRW space-time the issue of distances becomes subtle. Namely, the distance to the object at redshift \(z\) suggested by the FLRW metric is the comoving distance \(d_{C}(z)\) given by the relation \[d_{C}(z)=c\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}. \tag{2}\] Unfortunately the comoving distance is unobservable. Two closely related distances, which can be measured are the luminosity distance \(d_{L}(z)=d_{C}(z)(1+z)\) and the angular diameter distance \(d_{A}(z)=d_{C}(z)/(1+z)\). The luminosity distance is usually inferred from sources with known intrinsic brightness or standardized luminosity, and the angular diameter distance is derived from the angular scale of objects whose intrinsic sizes are known. If the number of photons traveling along the null geodesics between the observer and the source is conserved, then \(d_{L}(z)\) and \(d_{A}(z)\) should satisfy the following relation [47], \[d_{L}(z)=d_{A}(z)(1+z)^{2}. \tag{3}\] This relation is also known as the DDR and is valid in any metric theory of gravity. Therefore, the DDR is completely independent of the cosmological model. Up to now, numerous works [62; 63; 64; 65; 66] focused on testing the DDR with the real data, demonstrated lack of evidence for noticeable departure from the DDR. 
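As a concrete illustration of the distance measures introduced above, the short sketch below evaluates the comoving distance integral of Eq. (2) in a fiducial flat \(\Lambda\)CDM (the parameter values are purely illustrative and play no role in the actual method), forms \(d_{L}\) and \(d_{A}\), and checks that the combination \(H_{0}d_{L}/[d_{A}(1+z)^{2}]\), which is the rewriting of Eq. (3) exploited below, returns the input Hubble constant at every redshift.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458                      # speed of light [km/s]

def distances(z, H0=70.0, Om=0.3):
    """Comoving, luminosity and angular diameter distances [Mpc] in a fiducial
    flat LCDM; H0 and Om are illustrative values only."""
    E = lambda zp: np.sqrt(Om * (1.0 + zp) ** 3 + (1.0 - Om))
    d_C = C_KM_S / H0 * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]   # Eq. (2)
    return d_C, d_C * (1.0 + z), d_C / (1.0 + z)                  # d_C, d_L, d_A

H0_true = 70.0
for z in (0.5, 1.0, 2.0):
    _, d_L, d_A = distances(z, H0=H0_true)
    H0_dL = H0_true * d_L                     # the unanchored quantity SN Ia provide [km/s]
    print(z, H0_dL / (d_A * (1.0 + z) ** 2))  # recovers 70 km/s/Mpc at every redshift
```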
According to the philosophy of Renzi & Silvestri [46] one can rewrite Eq.(3) in the form \[H_{0}=\frac{1}{(1+z)^{2}}\frac{H_{0}d_{L}(z)}{d_{A}(z)}. \tag{4}\] It is now clear that in order to obtain \(H_{0}\) one has to measure the unanchored luminosity distance \(H_{0}d_{L}(z)\) and angular diameter distance at the same redshift \(z\). As it was already mentioned, we take advantage of SN Ia as standard candles and intermediate luminosity radio quasars as standard rulers. In particular the redshift range of these two samples is similar. Below, we will present these samples in more details. ### Unanchored luminosity distance from SN Ia SN Ia observations led to the discovery the accelerating expansion of the Universe [67; 68]. They are incisive probes of cosmology through determining the shape of the Hubble diagram, i.e. the luminosity distance vs. redshift relation. However, the absolute distance \(d_{L}(z)\) is entangled with the combination of the absolute magnitude of SN Ia and the Hubble constant. Therefore, observations of SN Ia can directly provide the unanchored luminosity distance \(H_{0}d_{L}(z)\), which is exactly the quantity we need. The unanchored luminosity distance can be derived from apparent magnitude \[m_{B}=5\log_{10}(H_{0}d_{L}(z))-5a_{B}, \tag{5}\] where we adopt the value \(a_{B}=0.71273\pm 0.00176\) as inferred in work [69]. In a recent paper, Riess et al. [70] proposed an improved treatment of the intercept \(a_{B}\) valid for an arbitrary expansion history in terms of deceleration parameter \(q_{0}\) and jerk \(j_{0}\). We will assume its fixed value as mentioned above. Over the past two decades, many supernova surveys have focused on detecting supernovae within a considerable range of redshifts, including low redshifts (\(0.01<z<0.1\)), e.g. CfA1-CfA4, CSP and LOSS [71; 72; 73], and four main surveys probing the \(z>0.1\) redshift range like ESSENCE, SNLS, SDSS and PS1 [74; 75; 76; 77]. More high redshift observation of supernovae, like SCP, GOODS and CANDELS/CLASH surveys released the high-z (\(z>1.0\)) data [78; 79; 80; 81]. More recently, Scolnic et al. [82] combined the subset of 279 Pan-STARRS1(PS1) (\(0.03<z<0.68\)) supernovae [74; 83] with useful data of SN Ia from SDSS, SNLS, and various low redshift and HST samples to form the largest combined sample of SN Ia consisting of a total of 1048 SNe Ia ranging from \(0.01<z<2.3\), which is known as the "Pantheon Sample". We refer to work [82] for more details about the SN Ia standardization process including the improvements of the PS1 SNe photometry, astrometry and calibration. In order to obtain unanchored luminosity distances given by Eq.(5), we use the Pantheon dataset. The scatter diagram of the apparent magnitude for 1048 observed SN Ia is shown in the left panel of Fig. 1. ### Angular diameter distances from angular size of the compact structure in radio quasars The angular size-distance relation in compact radio quasar for cosmological inference was first proposed by Kellermann [84], who tried to obtain the deceleration parameter with 79 compact radio sources observed by VLBI at 5 GHz. Thereafter, Gurvits [85] extended this method and attempted to investigate the dependence of characteristic size on luminosity and redshift based on 337 Active Galactic Nuclei (AGNs) observed at 2.29 GHz [86]. 
In the subsequent analysis, [85] adopted the visibility modulus \(\Gamma=S_{c}/S_{t}\) to define the angular size of radio sources \(\theta\), which can be expressed as \(\theta=\frac{2\sqrt{-\ln\Gamma\ln 2}}{\pi B}\), where \(B\) is the interferometer baseline measured in multiples of the wavelength, and \(S_{c}\) and \(S_{t}\) are the correlated flux density and total flux density, respectively. Based on a simple geometric relation between the angular size and distance, the angular diameter distance \(d_{A}(z)\) can be written as \[d_{A}(z)=\frac{lL^{\beta}(1+z)^{n}}{\theta(z)}, \tag{6}\] where \(L\) is the intrinsic luminosity of the source, and \(\beta\) and \(n\) are used to quantify the possible "angular size-luminosity" and "angular size-redshift" relations, respectively. The parameter \(l\) represents the linear size scaling factor describing the apparent distribution of radio brightness within the core, and \(\theta(z)\) is the observed angular size measured by the VLBI technique. Our research is based on the sub-sample identified and calibrated in [87; 61; 88], so a brief description of this sample is appropriate. The original source of the data was a well-known 2.29 GHz VLBI survey undertaken by Preston et al. [86] (hereafter called P85). By using a worldwide array of antennas to form an interferometric system, this survey successfully detected interference fringes from 917 radio sources out of 1398 candidates selected mainly from the Parkes survey [89]. Figure 1: _Left panel_: The apparent magnitude vs. redshift for the Pantheon sample of 1048 SN Ia. _Right panel_: Angular diameter distance vs. redshift based on the sample of 120 intermediate luminosity radio quasars. Gray dots with uncertainty bars represent angular diameter distances obtained from observed angular sizes of radio quasars. The ANN reconstructed function \(d_{A}(z)\) is shown as a red dotted line and the corresponding \(1\sigma\) uncertainty band is shown as a blue region. Subsequently, Jackson & Jannetta [90] updated the P85 sample with respect to redshift, to include a total of 613 objects with redshifts \(0.0035\leq z\leq 3.787\). The full listing is available in electronic form at [http://nrl.northumbria.ac.uk/13109/](http://nrl.northumbria.ac.uk/13109/), including source coordinates, redshift, angular size, uncertainty in the latter, and total flux density. In the subsequent analysis, Cao et al. [87; 61] divided Jackson's sample into different sub-samples according to their optical counterparts and luminosity: low, intermediate, and high-luminosity quasars. It was found that 120 quasars with intermediate luminosities (ILQSO) have reliable measurements of the angular size of the compact structure from the updated 2.29 GHz VLBI survey, with a flat spectral index (\(-0.38<\alpha<0.18\)). Moreover, they demonstrated that the linear size scaling factor shows negligible dependence on both redshift and intrinsic luminosity (\(|n|\simeq 10^{-3},|\beta|\simeq 10^{-4}\)) [87; 61]. The sample of 120 intermediate luminosity radio quasars selected in [61] has been extensively used in various cosmological studies [92; 93; 94; 95]. The crucial question is about the value of \(l\). Similar to the absolute magnitudes of SN Ia applied to cosmology, the linear size scaling factor \(l\) should also be optimized along with the cosmological model parameters.
The original calibration performed in [91] through a cosmology-independent Gaussian Process (GP) technique combined with the BAO data resulted in the value of the intrinsic linear size \(l=11.04\pm 0.40\) pc. Hence, in the current analysis, we use this value. The scatter diagram of angular diameter distances obtained from observed angular sizes of 120 intermediate luminosity radio quasars is shown in the right panel of Fig. 1. ### Measuring \(H_{0}\) with recent observations As we describe in more detail in the next section, we assessed \(H_{0}\) using two approaches: first, by directly using redshift-matched QSO - SN Ia pairs, and second, by using the ANN-reconstructed \(d_{A}(z)\) from radio quasars. In the former case Eq.(4) can be rewritten in terms of observable quantities in the following way: \[H_{0}=\frac{\theta 10^{0.2(m_{B}+5a_{B})}}{l(1+z)^{2}}, \tag{7}\] while in the latter Eq.(4) is appropriate. The reason why we use the two methods is that, although the SN Ia and intermediate luminosity radio quasars cover a similar redshift range, the number of matched pairs is small (due to the relatively small QSO sample size and nonuniform coverage of the redshift range in both samples). Thus we also use the Artificial Neural Network (ANN) method trained on the QSO sample to reconstruct the \(d_{A}(z)\) function. The ANN method is a non-parametric approach which, unlike Gaussian processes, does not assume that the random variables follow a Gaussian distribution. It is a fully data-driven approach. The ANN method has been applied to many research fields in astronomy, and has shown excellent performance in cosmological applications [96; 97; 98; 65; 99]. We refer the reader to Wang et al. [98] for more details about ANN reconstruction of the \(H(z)\) function from cosmic chronometer data, where the method performed remarkably well despite the small number of 31 data points. Wang et al. [100] released the ANN code as a Python module called Reconstruct Functions with ANN (ReFANN) [118]. We performed the \(d_{A}(z)\) reconstruction using the sample of 120 intermediate luminosity radio quasars. The final result is shown in the right panel of Fig. 1. One can see that the uncertainty band of the ANN reconstructed \(d_{A}(z)\) function is much narrower than the individual uncertainties and scatter present in the QSO data. ## III Results and Discussions It is clear from the methodology outlined above that, in an idealized setting, the SN Ia and radio quasars used together should have exactly the same redshift. This does not happen in real data comprising two distinct samples of different objects. One can, however, match the objects in narrow redshift bins. In our study we take the matching criterion \(|z_{SN}-z_{QSO}|<0.005\). Such a criterion has also been used in [101; 102]. The matching criterion is very restrictive, hence from the samples of 1048 SN Ia and 120 quasars we are able to select only 37 pairs matched by redshift. Based on these 37 matched pairs, we calculate \(H_{0}\) according to Eq.(7). Uncertainties of individual \(H_{0}\) values are calculated from the standard uncertainty propagation formula, based on the uncorrelated uncertainties of the observable quantities, including the observed angular size uncertainty \(\sigma_{\theta}\) and apparent magnitude uncertainty \(\sigma_{m_{B}}\), as well as additional systematic uncertainties from \(\sigma_{a_{B}}\) in SN Ia and \(\sigma_{l}\) in radio quasars. The results are shown in Fig. 2. In the left panel of Fig. 2 individual measurements of \(H_{0}\) are shown.
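As an illustration of the bookkeeping behind Eq. (7), the sketch below evaluates \(H_{0}\) for a single redshift-matched pair, including the conversion of the linear size \(l\) from pc to Mpc and of the angular size from milliarcseconds to radians so that the result comes out in km s\(^{-1}\) Mpc\(^{-1}\). The input numbers are placeholders, not actual catalogue entries.

```python
import numpy as np

A_B = 0.71273          # SN Ia intercept adopted in the text
L_PC = 11.04           # linear size scaling factor [pc] used in this analysis

def hubble_constant_pair(theta_mas, m_B, z, l_pc=L_PC, a_B=A_B):
    """Eq. (7): H0 = theta * 10^{0.2(m_B + 5 a_B)} / (l (1+z)^2).

    theta_mas : observed angular size of the compact structure [mas]
    m_B       : apparent B-band magnitude of the matched SN Ia
    Returns H0 in km/s/Mpc.
    """
    theta_rad = theta_mas * np.pi / (180.0 * 3600.0 * 1e3)  # mas -> rad
    l_mpc = l_pc * 1e-6                                     # pc  -> Mpc
    H0dL = 10.0**(0.2 * (m_B + 5.0 * a_B))                  # unanchored d_L [km/s]
    d_A = l_mpc / theta_rad                                 # angular diameter distance [Mpc]
    return H0dL / ((1.0 + z)**2 * d_A)

# placeholder matched pair: a 1.5 mas quasar and an m_B = 23.5 SN Ia at z ~ 0.6
print(hubble_constant_pair(theta_mas=1.5, m_B=23.5, z=0.6))
```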
As one can see, they display a considerable scatter and diversity of uncertainties. Let us note that this plot also demonstrates that we are able to assess \(H_{0}\) using probes at different redshifts. This circumstance is important in our further discussion. The right panel of Fig. 2 shows the histogram of \(H_{0}\) values obtained from the sample of 37 matched pairs. Let us outline the approach we adopt regarding summary statistics and how we take advantage of the complete sample of 120 quasars as standardizable rulers (the source of \(d_{A}(z)\)). Regarding the latter, we decide to reconstruct the \(d_{A}(z)\) function using ANN trained on the full sample of 120 radio quasars. The trained network is able to forecast angular diameter distances at redshifts corresponding to SN Ia in the Pantheon sample. Concerning the statistical summary of our results, we take three approaches. The first is the weighted mean [103], which is a standard way to summarize data with non-uniform measurement uncertainties. It is the method most often encountered in the literature, in particular in meta-analyses integrating the results of individual measurements, and it is widely used in the statistical analysis of astronomical data [104; 105]. In our context the weighted mean of \(H_{0}\) reads: \[\widetilde{H}_{0}=\frac{\Sigma_{i}\big{(}H_{0,i}/\sigma_{H_{0,i}}^{2}\big{)}}{ \Sigma_{i}\big{(}1/\sigma_{H_{0,i}}^{2}\big{)}},\ \ \ \ \ \sigma_{\widetilde{H}_{0}}^{2}=\frac{1}{\Sigma_{i}\big{(}1/\sigma_{H_{0,i}}^{2} \big{)}}, \tag{8}\] where \(\sigma_{\widetilde{H}_{0}}\) is the uncertainty of \(\widetilde{H}_{0}\). It is evident that the weighted mean gives more weight to measurements with low uncertainties. However, it is meaningful for Gaussian random variables, thus we also use a non-parametric summary: the median \(Med(H_{0})\). It is the most robust measure of central tendency, insensitive to outliers (extreme values) [106; 107]. An appropriate non-parametric dispersion measure is the median absolute deviation \(MAD(H_{0})=Med(|H_{0,i}-Med(H_{0})|)\). The third approach we use is the Markov Chain Monte Carlo (MCMC) simulation of the posterior distribution of \(H_{0}\). The summary statistics, i.e. the mean and credible intervals, are directly obtained from the posterior. In this MCMC simulation, we adopt a uniform distribution for the prior distribution of \(H_{0}\in[55,100]\,\mathrm{km\ s^{-1}\ Mpc^{-1}}\). Now, getting back to the 37 matched pairs sample, our assessments for the weighted mean and corresponding uncertainty are \(\widetilde{H}_{0}=66.24(\pm 1.64)\ \mathrm{km\ s^{-1}\ Mpc^{-1}}\), which is in perfect agreement with the _Planck_ 2018 results. The histogram plot of the individual measurements is given in the right panel of Fig. 2. From the histogram one may wonder how closely the distribution can be approximated by a Gaussian. Therefore we perform the Kolmogorov-Smirnov test [108] for Gaussianity, which results in a p-value of \(p=0.944\). The much more sensitive Lilliefors test [109] gives \(p=0.758\). Hence, the use of the weighted mean as a summary measure is justified. Within the non-parametric approach, the median value of \(H_{0}\) and corresponding median absolute deviation is \(Med(H_{0})=74.21(\pm 14.77)\ \mathrm{km\ s^{-1}\ Mpc^{-1}}\). The median value is close to the value inferred by the SH0ES collaboration, but the MAD is large. Figure 2: _Left panel_: Individual measurements of \(H_{0}\) based on the 37 pairs of radio quasars and SN Ia matched by redshift. _Right panel_: The histogram plot for the measurements of \(H_{0}\) with the best fit Gaussian distribution overplotted.
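The three summary statistics just introduced are simple to compute; a minimal sketch of Eq. (8) together with the median and MAD, using placeholder numbers rather than the actual 37 measurements, is given below.

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Eq. (8): inverse-variance weighted mean and its uncertainty."""
    w = 1.0 / np.asarray(sigmas)**2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

def median_mad(values):
    """Non-parametric summary: median and median absolute deviation."""
    med = np.median(values)
    return med, np.median(np.abs(np.asarray(values) - med))

# placeholder individual H0 measurements [km/s/Mpc] with uncertainties
H0 = np.array([68.0, 72.5, 61.0, 75.3, 70.1])
sig = np.array([5.0, 8.0, 12.0, 6.5, 9.0])

print(weighted_mean(H0, sig))
print(median_mad(H0))
```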
On one hand this is a well known property of MAD: it is robust, but not very precise [106]. On the other hand it reflects the considerable scatter in this small sample. Finally, the MCMC simulation of the \(H_{0}\) posterior, performed using the Python module _emcee_ [110], gives the result \(H_{0}=64.83^{+1.48}_{-1.58}\,{\rm km\;s^{-1}\;Mpc^{-1}}\), where the credible region covers the range between the \(16^{th}\) and \(84^{th}\) percentiles. This value is consistent with the weighted mean assessment. As already stressed, the sample of 37 matched SN Ia - QSO pairs is small and scatter dominated. Therefore we take advantage of the full sample of 120 radio quasars to train the ANN network and recover the \(d_{A}(z)\) relation supported by quasars. Then one is able to forecast the angular diameter distance of SN Ia having redshifts in the range of \(z\) covered by quasars. This resulted in a sample of 237 SN Ia that fully meet this criterion. The uncertainty of the reconstructed \(d_{A}(z)\) is obtained from the ANN methodology. In principle, \(d_{A}(z)\) from the trained network can be projected onto the full Pantheon sample, but we did not consider this. The reason is that, in any type of reconstruction, projections beyond the range of the data are questionable and burdened with considerable uncertainties. The individual measurements of \(H_{0}\) are shown in the left panel of Fig. 3. The right panel shows the histogram of the measured \(H_{0}\) values. We have also tested it for Gaussianity. The Kolmogorov-Smirnov test has a p-value of \(p=0.523\) and the Lilliefors test yields \(p=0.164\). In both cases one cannot reject the Gaussianity assumption. The difference in the tests between Kolmogorov-Smirnov and Lilliefors, and between 37 vs. 237 data-points, can be understood in terms of the skewness noticeable on the histogram in Fig.3. However, the meaningful use of the weighted mean summary is supported. The results of the different statistical approaches adopted by us are the following. The weighted mean and corresponding uncertainty are \(\widetilde{H}_{0}=73.51(\pm 0.67)\;{\rm km\;s^{-1}\;Mpc^{-1}}\). By using the median value and corresponding median absolute deviation, we obtain \(Med(H_{0})=74.71(\pm 4.08)\ \mathrm{km\ s^{-1}\ Mpc^{-1}}\). Figure 3: _Left panel_: Individual measurements of \(H_{0}\) based on the 237 SN Ia data-points with \(d_{A}(z)\) reconstructed from the full sample of 120 radio quasars using the ANN method. _Right panel_: The histogram plot for the measurements of \(H_{0}\) with the best fit Gaussian distribution overplotted. This time both the weighted mean and the median are consistent with each other. Moreover, the MAD region around the median differs by more than \(5\sigma\) from the _Planck_ value (\(\sigma\) referring to the _Planck_ result). With the sample size increased substantially after using ANN reconstructed angular diameter distances, our result is fully consistent with the \(H_{0}\) values inferred from local (i.e. not early Universe) probes, as reported by the SH0ES and H0LiCOW collaborations. The MCMC assessment of the \(H_{0}\) posterior yields \(H_{0}=73.52^{+0.66}_{-0.68}\,\mathrm{km\ s^{-1}\ Mpc^{-1}}\). It is worth stressing that the technique we proposed, that is the use of data driven reconstruction of angular diameter distances via ANN, achieved a precision of \(1\%\), which is comparable to the result from _Planck_ 2018 TT, TE, EE+lowE+lensing data [4]. This is one of the most important conclusions of our work.
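The posterior simulation described above can be sketched as follows: a flat prior on \(H_{0}\in[55,100]\) km s\(^{-1}\) Mpc\(^{-1}\) combined with a Gaussian likelihood built from individual measurements, sampled with _emcee_. The measurement arrays are placeholders; this is an illustration of the procedure, not the actual analysis script.

```python
import numpy as np
import emcee

# placeholder individual measurements H0_i +/- sigma_i [km/s/Mpc]
H0_i = np.array([68.0, 72.5, 61.0, 75.3, 70.1])
sig_i = np.array([5.0, 8.0, 12.0, 6.5, 9.0])

def log_prob(theta):
    """Uniform prior H0 in [55, 100] + Gaussian likelihood of the measurements."""
    h0 = theta[0]
    if not 55.0 < h0 < 100.0:
        return -np.inf
    return -0.5 * np.sum(((H0_i - h0) / sig_i)**2)

nwalkers, ndim = 32, 1
p0 = 70.0 + np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)[:, 0]

# central value and the 16th-84th percentile credible interval
lo, med, hi = np.percentile(chain, [16, 50, 84])
print(f"H0 = {med:.2f} +{hi - med:.2f} -{med - lo:.2f} km/s/Mpc")
```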
It is necessary to emphasize that our assessment of \(H_{0}\) depends on the calibration of the linear size parameter \(l\) in radio quasars and \(a_{B}\) in SN Ia. Because the physical meaning of the compact structure size in radio quasars and the absolute B-band magnitude of SN Ia (whose value is determined by the host stellar mass) is not very clear, it is hard to determine the linear size parameter \(l\) and the SN Ia parameter \(a_{B}\) precisely. Moreover, calibration of \(l\) and \(a_{B}\) by other astronomical probes would also introduce additional systematic uncertainties. We have checked the influence of the calibration parameters on our conclusions. Namely, Table 1 of [87] presents calibration parameters for different samples of radio quasars using two cosmology-independent methods. Regarding our sample of 120 intermediate-luminosity radio quasars, the linear size parameter most different from the value we used is \(l=10.86\pm 1.58\) pc. We checked that adopting this value changes the central values of \(H_{0}\) by about \(1.63\%\). Similarly, if we use a different value of the SN Ia nuisance parameter, \(a_{B}=0.71719\) as in [69], it affects the measurement accuracy of \(H_{0}\) by about \(1.12\%\). The precision of the \(H_{0}\) assessment is not noticeably affected. Hence, our method offers a new way to measure \(H_{0}\) with high precision. Figure 4: _Left panel_: Posterior probability density of \(H_{0}\) from MCMC simulations based on 37 radio quasar – SN Ia pairs matched by redshift. _Right panel_: Posterior probability density of \(H_{0}\) from MCMC simulations based on 237 SN Ia with \(d_{A}(z)\) reconstructed using the ANN method. Recently, a lot of attention has been paid to whether \(H_{0}\) determined from direct observations of local probes up to high redshifts displays any trace of evolution with redshift [111; 112; 113; 114]. Let us stress that "evolution" of \(H_{0}\) does not make sense: by definition \(H_{0}\) is the present expansion rate, hence it has one exact value. What we mean here is a way to discover possible systematic effects arising from using local probes at different redshifts. Any such effect, if significant evidence of it is found, would most probably indicate that calibrations on nearby objects are biased when applied to high redshift ones, e.g. metallicity effects on the Cepheid calibration. Hence, the term "evolving \(H_{0}\)" should be treated as a useful mental shortcut. For example, Dainotti et al. [115] used the Pantheon sample split into appropriate redshift bins to fit the extracted \(H_{0}\) values with a function mimicking the redshift evolution \(H_{0}(z)=H_{0}/(1+z)^{\alpha}\). They found that \(H_{0}\) evolves with redshift, showing a slowly decreasing trend, with \(\alpha\) coefficients consistent with zero only within the \(1.2-2.0\sigma\) confidence level. However, their assessment of \(H_{0}\) assumed a cosmological model, either \(\Lambda\mathrm{CDM}\) or \(w_{0}w_{a}\)CDM. Inspired by these findings, we decided to check whether our data, i.e. the combined SN Ia and intermediate luminosity radio quasars, support these claims. For this purpose, we take advantage of the already mentioned circumstance that individual \(H_{0}\) values are obtained at different redshifts spanning a considerable range. Moreover, the advantage is that we do not need to specify the cosmological model. Our approach is purely data driven.
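For orientation, fitting the phenomenological form \(H_{0}(z)=H_{0}/(1+z)^{\alpha}\) used in [115] to a set of individual measurements is a few lines with scipy. The sketch below uses placeholder data points, not our actual measurements, and is only meant to show how such a fit is set up.

```python
import numpy as np
from scipy.optimize import curve_fit

def h0_evolution(z, h0, alpha):
    """Phenomenological form H0(z) = H0 / (1+z)^alpha."""
    return h0 / (1.0 + z)**alpha

# placeholder binned H0 measurements [km/s/Mpc] with uncertainties
z = np.array([0.2, 0.5, 0.9, 1.4, 1.9])
H0 = np.array([73.5, 72.8, 73.9, 71.6, 72.4])
sig = np.array([1.5, 2.0, 3.0, 4.5, 6.0])

popt, pcov = curve_fit(h0_evolution, z, H0, p0=[73.0, 0.0],
                       sigma=sig, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"H0 = {popt[0]:.2f} +/- {perr[0]:.2f},  alpha = {popt[1]:.3f} +/- {perr[1]:.3f}")
```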
We propose to test the redshift dependence of \(H_{0}\) by considering the ratio \(\eta_{H_{0}}=H_{0,z_{i}}/H_{0,z_{fid}}\), where \(H_{0,z_{fid}}\) is some fiducial value. From Eq.(4) and Eq.(7) one can see that the calibration parameters \(a_{B}\) and \(l\) cancel and do not enter the test directly. Hence they do not introduce any additional systematics to the determination of \(\eta_{H_{0}}\). It is also important to note that, whatever fiducial value we choose, the possible presence or lack of a redshift dependence of \(H_{0}\) should not be affected. In other words, statistical consistency of our ratio with the value \(\eta_{H_{0}}=1\) would indicate a lack of systematic redshift dependence of the measured \(H_{0}\). In our analysis, we chose the point with the smallest redshift as the fiducial value. The individual values of \(\eta_{H_{0}}\) are shown in Fig. 5. Already by inspection one can see that our results are consistent with \(\eta_{H_{0}}=1\). The weighted mean value and corresponding uncertainty is \(\eta_{\widetilde{H}_{0}}=1.05\pm 0.05\). By using the median value and corresponding median absolute deviation, we obtain \(Med(\eta_{H_{0}})=1.07(\pm 0.08)\). Meanwhile, we also performed the same test choosing the median as the fiducial value. The final results are consistent with \(\eta_{H_{0}}=1\) within the corresponding uncertainty. Thus, adopting the median as the fiducial value does not change our main conclusion; therefore we do not show these results but conclude that the result we obtained is robust. ## IV Conclusion In this letter, we use the data-sets of SN Ia (Pantheon) acting as standard candles together with intermediate luminosity radio quasars acting as standard rulers to determine the Hubble constant according to the idea proposed by Renzi & Silvestri [46]. The idea is based on the distance duality relation between the luminosity distance \(d_{L}(z)\) and the angular diameter distance \(d_{A}(z)\), which is robust and independent of any cosmological model. For this purpose, we recover the \(d_{A}(z)\) relation using an ANN trained on the sample of 120 radio quasars. In order to cover the same redshift range with SN Ia, a sub-sample of 237 supernovae is selected. The result is unambiguously consistent with the values of \(H_{0}\) obtained from local probes by the SH0ES and H0LiCOW collaborations. Three statistical summary measures: the weighted mean, the median and the MCMC simulated posterior distribution are fully consistent with each other, and the precision reached the \(1\%\) level. This is encouraging for future applications of our method. For the sake of illustration, we also considered 37 matched pairs of quasars and supernovae, where \(d_{A}(z)\) was obtained directly, not from the ANN reconstruction. In this case, the weighted mean and posterior distributions are consistent with the _Planck_ results. However, the sample is scatter dominated with large individual uncertainties. The most robust summary statistics, i.e. the median and MAD, turn out to be consistent with both the local and _Planck_ values, although the median itself is consistent with the SH0ES results. Because individual measurements of \(H_{0}\) are related to different redshifts spanning the range from \(z=0.5\) to almost \(z=2.0\), we take advantage of this fact and of our cosmological model independent method to check if there is any noticeable trend in \(H_{0}\) measurements with the redshift of the objects used for this purpose. Figure 5: Scatter plot of the \(\eta_{H_{0}}\) ratio based on the 237 observations of SN Ia with \(d_{A}(z)\) reconstructed from radio quasars by the ANN method.
Our result is that the data alone strongly support the lack of such systematic effects. As a final remark, we also look forward to a large amount of future data, not only from radio quasars, but also from SN Ia, allowing us to further improve the precision of \(H_{0}\) measurements. In the future, multi-frequency VLBI observations will yield more high-quality quasar observations based on better uv-plane coverage [116]. These radio quasars have more compact structures, higher angular resolution, and smaller statistical and systematic uncertainties. On the other hand, the Nancy Grace Roman Space Telescope is a future NASA mission. In a baseline 6-yr mission, including a 2-yr supernova survey strategy, it is expected to discover about \(10^{3}\sim 10^{4}\) SN Ia [117]. Future observations will create a perfect opportunity to apply the method presented here to much larger and better samples. Meanwhile, considering the variety of different machine learning algorithms and the fast progress in this area, we may also be optimistic about measuring \(H_{0}\) with much higher precision. ###### Acknowledgements. The authors are grateful to the referee for constructive comments, which allowed us to improve the paper substantially. Liu T.-H. was supported by the National Natural Science Foundation of China under Grant No. 12203009, the Chutian Scholars Program in Hubei Province, and the Hubei Province Foreign Expert Project (2023DJC040). Wang J.-C. was supported by the National Natural Science Foundation of China under Grant No. 12122504 and No. 12035005.
2310.00610
In-plane Tidal Disruption of Stars in Disks of Active Galactic Nuclei
Stars embedded in active galactic nucleus (AGN) disks or captured by them may scatter onto the supermassive black hole (SMBH), leading to a tidal disruption event (TDE). Using moving-mesh hydrodynamics simulations with {\small AREPO}, we investigate the dependence of debris properties in in-plane TDEs in AGN disks on the disk density and the orientation of stellar orbits relative to the disk gas (pro- and retro-grade). Key findings are: 1) Debris experiences continuous perturbations from the disk gas, which can result in significant and continuous changes in debris energy and angular momentum compared to `naked' TDEs. 2) Above a critical density of a disk around a SMBH with mass $M_{\bullet}$ ($\rho_{\rm crit} \sim 10^{-8}{\rm g~cm^{-3}}(M_{\bullet}/10^{6}{\rm M}_{\odot})^{-2.5}$) for retrograde stars, both bound and unbound debris is fully mixed into the disk. The density threshold for no bound debris return, inhibiting the accretion component of TDEs, is $\rho_{\rm crit,bound} \sim 10^{-9}{\rm g~cm^{-3}}(M_{\bullet}/10^{6}{\rm M}_{\odot})^{-2.5}$. 3) Observationally, AGN-TDEs transition from resembling naked TDEs in the limit of $\rho_{\rm disk}\lesssim 10^{-2}\rho_{\rm crit,bound}$ to fully muffled TDEs with associated inner disk state changes at $\rho_{\rm disk}\gtrsim\rho_{\rm crit,bound}$, with a superposition of AGN+TDE in between. Stellar or remnant passages themselves can significantly perturb the inner disk. This can lead to an immediate X-ray signature and optically detectable inner disk state changes, potentially contributing to the changing-look AGN phenomenon. 4) Debris mixing can enrich the average disk metallicity over time if the star's metallicity exceeds that of the disk gas.
Taeho Ryu, Barry McKernan, Saavik Ford, Matteo Cantiello, Matthew Graham, Daniel Stern, Nathan W. C Leigh
2023-10-01T08:09:19Z
http://arxiv.org/abs/2310.00610v1
# In-plane Tidal Disruption of Stars in Disks of Active Galactic Nuclei ###### Abstract Stars embedded in active galactic nucleus (AGN) disks or captured by them may scatter onto the supermassive black hole (SMBH), leading to a tidal disruption event (TDE). Using moving-mesh hydrodynamics simulations with arepo, we investigate the dependence of debris properties in in-plane TDEs in AGN disks on the disk density and the orientation of stellar orbits relative to the disk gas (pro- and retro-grade). Key findings are: 1) Debris experiences continuous perturbations from the disk gas, which can result in significant and continuous changes in debris energy and angular momentum compared to 'naked' TDEs. 2) Above a critical density of a disk around a SMBH with mass \(M_{\bullet}\) (\(\rho_{\rm crit}\sim 10^{-8}\)g cm\({}^{-3}(M_{\bullet}/10^{6}\)M\({}_{\odot})^{-2.5}\)) for retrograde stars, both bound and unbound debris is fully mixed into the disk. The density threshold for no bound debris return, inhibiting the accretion component of TDEs, is \(\rho_{\rm crit,bound}\sim 10^{-9}\)g cm\({}^{-3}(M_{\bullet}/10^{6}\)M\({}_{\odot})^{-2.5}\). 3) Observationally, AGN-TDEs transition from resembling naked TDEs in the limit of \(\rho_{\rm disk}\lesssim 10^{-2}\rho_{\rm crit,bound}\) to fully muffled TDEs with associated inner disk state changes at \(\rho_{\rm disk}\gtrsim\rho_{\rm crit,bound}\), with a superposition of AGN+TDE in between. Stellar or remnant passages themselves can significantly perturb the inner disk. This can lead to an immediate X-ray signature and optically detectable inner disk state changes, potentially contributing to the changing-look AGN phenomenon. 4) Debris mixing can enrich the average disk metallicity over time if the star's metallicity exceeds that of the disk gas. We point out that signatures of AGN-TDEs may be found in large AGN surveys. keywords: Supermassive black hole - Active galactic Nuclei - Hydrodynamics - Tidal Disruption Events - Galactic Nuclei ## 1 Introduction Active galactic nuclei (AGN) are powered by the accretion of gas disks onto supermassive black holes (SMBH). The accreting SMBH is often also orbited by a nuclear star cluster (Neumayer et al., 2020), which must interact with the gas disk. Depending on the radial size of the gas disk, gas density (\(\rho_{\rm disk}\)), and how long the disk lasts, some fraction of the nuclear star cluster orbiting the SMBH will be captured by the AGN disk (e.g. Artymowicz et al., 1993; Fabj et al., 2020; Nasim et al., 2023; Generozov and Peres, 2023; Wang et al., 2023). Star formation within the AGN disk (e.g. Goodman and Tan, 2004; Levin, 2007) can also add to the embedded stellar population. Thus, we expect a dynamic population of embedded objects (stars and stellar remnants) to live within AGN disks. The initial population of objects within the AGN disk soon after it forms should consist of both prograde and retrograde orbiters, leading to the possibility of dynamically complex and high-speed encounters and scatterings (e.g. Leigh et al., 2018; Wang et al., 2021). Captured orbiters may include stars on retrograde orbits with high eccentricities at small semi-major axes (Wang et al., 2023). In-plane tidal disruption events in AGN (or simply AGN-TDEs hereafter) can result from either in-plane scatterings of stars onto the SMBH, or eccentricity pumping of stars on retrograde orbits (e.g. Secunda et al., 2021; McKernan et al., 2022).
Note that this is _a new source_ of TDEs in addition to standard loss-cone filling scattering yielding TDEs at roughly the same rate (\(\sim 10^{-4}\)yr\({}^{-1}\)) as in any other (quiescent) galactic nucleus (e.g., Stone et al., 2020). The loss-cone TDEs (e.g., Hills, 1988; Rees, 1988) will very likely intersect the AGN disk at an angle and yield a TDE that looks different from a TDE in a vacuum (Chan et al., 2019, 2020, 2021). In-plane AGN TDEs should look more different still. In McKernan et al. (2022) we speculated that AGN-TDEs could look quite different from 'naked' or gas-free TDEs, with observable differences between TDEs that are retrograde or prograde compared to the flow of disk gas. Here we investigate the hydrodynamics of prograde and retrograde AGN-TDEs using a simple disk model, with a view to qualitatively describing key features of AGN-TDEs and potential observables. Throughout we highlight the point that in-plane AGN TDEs test the dynamics of the disk as well as its embedded population. While stars on prograde orbits embedded in AGN disks may experience runaway mass growth (Cantiello et al., 2021; Jermyn et al., 2022), we only consider TDEs of normal main-sequence stars. The paper is organized as follows: we provide descriptions of our numerical methods in detail in §2. We present results of our simulations in §3 and discuss astrophysical implications in §4 and caveats in §5. Finally, we conclude with a summary in §6. ## 2 Simulation details ### 2.1 Numerical methods We perform 3D hydrodynamic simulations of a tidal disruption event of a main-sequence (MS) star on an in-plane parabolic orbit1 around a SMBH surrounded by an AGN disk, using the massively parallel gravity and magnetohydrodynamic moving-mesh code AREPO (Springel, 2010; Pakmor et al., 2016; Weinberger et al., 2020). It employs a second-order finite-volume scheme to solve the hydrodynamic equations on a moving Voronoi mesh, and a tree-particle-mesh method for gravity calculations. By adopting this innovative approach to grid construction and solving the hydrodynamics equations, the code inherits advantages of both commonly used hydrodynamics schemes, Eulerian grid-based methods and Lagrangian smoothed particle methods. The advantages include improved shock capturing without introducing an artificial viscosity, and adaptive adjustment of spatial resolution. Footnote 1: For naked TDEs, it is a good approximation that a star that is tidally disrupted initially had approached on a parabolic orbit. Even in AGNs, if a star approaches the SMBH from a large distance near the influence radius of the SMBH, the stellar orbit can be approximated as a parabolic orbit. The disk gas would exert a drag force on the star, affecting the orbit. But, in this work, we simply assume a parabolic orbit for simplicity. We use the ideal gas equation \(p=(\gamma-1)u\) with \(\gamma=5/3\), where \(p\) is the pressure and \(u\) is the gas internal energy density. ### 2.2 Creation of a star in an AGN disk To make proper initial conditions for the simulations, we follow several steps: 1) creating a disk (§2.2.1), 2) creating a 3D MS star (§2.2.2), and 3) placing the star in the disk with a different mid-plane density on a parabolic orbit around a SMBH (§2.2.3). #### 2.2.1 AGN disk around a supermassive black hole We model the central SMBH using a non-rotating sink particle, which interacts solely through gravitational forces with the gas. We allow the particle to grow in mass via accretion following the same procedure described in Ryu et al.
(2023). However, it's worth noting that the total mass accreted remains significantly smaller than the mass of the SMBH throughout the simulation. Consequently, the change in the gravitational potential due to the mass growth of the SMBH would not significantly impact the results presented in this paper. For the sake of completeness, we will briefly summarize the adopted accretion prescription. At every time step, the accretion rate is estimated as an inward radial mass flux towards the BH averaged over cells with weights within \(10r_{\rm g}\) (denoted by "accretion radius"), and multiplied by the integration area. Here, \(r_{\rm g}=GM_{\bullet}/c^{2}\) represents the gravitational radius, which is approximately \(2\,{\rm R}_{\odot}\) for a SMBH mass of \(M_{\bullet}=10^{6}\,{\rm M}_{\odot}\). The weights are given using an inverse-distance weighted spline kernel (Monaghan & Lattanzio, 1985) (Equation 4 in Springel, 2005). If there are only a few cells within the accretion radius, the accretion rate estimate may be affected by Poisson noise. To ensure a sufficient number of cells in proximity to the black hole, we dynamically adjust cell refinement and derefinement within a region slightly larger than the accretion radius, aiming to maintain more than approximately 100 cells within this radius. Specifically, the code refines cells with a density greater than \(10^{-14}\) g/cm\({}^{3}\) and a mass exceeding \(6\times 10^{22}\) g, provided that the ratio of the cell size to the distance from the black hole is greater than \(\Delta r/r=0.03\). Conversely, the code derefines cells if their mass falls below \(1.5\times 10^{22}\) g or if \(\Delta r/r<0.01\). The thermodynamic profiles of an AGN disk surrounding the SMBH are described by the solution for a gas-pressure dominated disk in Nelson et al. (2013). The mid-plane density and temperature follow a power-law in \(r\), \[\rho_{\rm mid}(r)=\rho_{\rm c}\left(\frac{r}{r_{\rm cusp}}\right)^{p}, \tag{1}\] \[T_{\rm mid}(r)=T_{\rm c}\left(\frac{r}{r_{\rm cusp}}\right)^{q}, \tag{2}\] where \(\rho_{\rm c}\) and \(T_{\rm c}\) are the mid-plane density and temperature near the inner edge of the disk, respectively, and \(r_{\rm cusp}=10^{3}r_{\rm g}\). In this work, we consider a disk surrounding a \(10^{6}\,{\rm M}_{\odot}\) SMBH with the mid-plane density \(\rho_{\rm c}=10^{-7}-10^{-12}\) g cm\({}^{-3}\) at the inner disk edge \(R_{\rm inner}=100r_{\rm g}\) (see Table 2). To match the disk solution by Sirko & Goodman (2003), we adopt the values of \(p\), \[p=\left\{\begin{array}{rl}0&\mbox{for $r<r_{\rm cusp}$}\\ -3&\mbox{for $r>r_{\rm cusp}$,}\end{array}\right. \tag{3}\] and \(q=-3/4\). Figure 1: Average mid-plane density and enclosed mass of our fiducial AGN disk model at four different times: \(t=0\) hours (grey dotted), 25 hours (blue solid), 50 hours (red solid), 100 hours (black solid). The upper \(x-\)axis indicates the time it takes for a star on a parabolic orbit to reach the given distance. The vertical structure of the disk, i.e., the density and angular frequency \(\Omega\), is described by the following equations, \[\rho_{\rm disk}(r,z) =\rho_{\rm mid}(r)\exp\left(\left[\frac{h}{r}\right]^{-2}\left[ \frac{1}{\sqrt{1+(z/r)^{2}}}-1\right]\right), \tag{4}\] \[\Omega_{\rm disk}(r,z) =\Omega_{\rm K}\left[(p+q)\left(\frac{h}{r}\right)^{2}+(1+q)- \frac{q}{\sqrt{1+(z/r)^{2}}}\right]^{1/2}, \tag{5}\] where \(\Omega_{\rm K}=\sqrt{G\,M_{\bullet}/r^{3}}\) is the Keplerian angular frequency and \(h/r\) the aspect ratio.
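A minimal sketch of the disk model defined by Eqs. (1)-(5) is given below. The aspect ratio used in the example call is a placeholder; its actual value follows from Eq. (9) in the next paragraphs, and the power-law index should switch between the flat and outer regions as in Eq. (3).

```python
import numpy as np

G = 6.674e-8                      # [cgs]
M_BH = 1.0e6 * 1.989e33           # 1e6 Msun [g]
R_G = G * M_BH / (2.998e10)**2    # gravitational radius [cm]
R_CUSP = 1.0e3 * R_G

def rho_mid(r, rho_c=1e-8):
    """Eqs. (1), (3): flat inside the cusp, power law (p = -3) outside."""
    r = np.asarray(r, dtype=float)
    return np.where(r < R_CUSP, rho_c, rho_c * (r / R_CUSP)**-3)

def rho_disk(r, z, h_over_r, rho_c=1e-8):
    """Eq. (4): vertical density structure for a given aspect ratio h/r."""
    return rho_mid(r, rho_c) * np.exp(
        h_over_r**-2 * (1.0 / np.sqrt(1.0 + (z / r)**2) - 1.0))

def omega_disk(r, z, h_over_r, p=0.0, q=-0.75):
    """Eq. (5): angular frequency, slightly off-Keplerian (p = 0 inner region)."""
    omega_K = np.sqrt(G * M_BH / r**3)
    return omega_K * np.sqrt((p + q) * h_over_r**2 + (1.0 + q)
                             - q / np.sqrt(1.0 + (z / r)**2))

r = 5.0e2 * R_G
print(rho_disk(r, 0.1 * r, h_over_r=0.05), omega_disk(r, 0.0, h_over_r=0.05))
```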
Note that the temperature has no dependence on the vertical distance orthogonal to the mid-plane \(z\), meaning the temperature (and hence the sound speed) is constant along each vertical column at a given \(r\). The disk is fully described if \(\rho_{\rm c}\), \(T_{\rm c}\) and \(h/r\) at \(R_{\rm inner}\) are determined. For a given \(\rho_{\rm c}\) we estimate the two other disk parameters using the following equations from Sirko & Goodman (2003), \[c_{\rm s}^{2}\Sigma =\frac{\dot{M}^{\prime}\Omega}{3\pi\alpha}, \tag{6}\] \[\Sigma =2\rho_{\rm c}h,\] (7) \[h =\frac{c_{\rm s}}{\Omega}, \tag{8}\] where \(\dot{M}^{\prime}=\dot{M}(1-\sqrt{\frac{5r_{\rm s}}{R_{\rm inner}}})\) and \(\dot{M}\) is the accretion rate. Assuming \(\dot{M}=0.5\dot{M}_{\rm Edd}\), \(\dot{M}^{\prime}\simeq 0.4\dot{M}_{\rm Edd}\) at \(R_{\rm inner}=100r_{\rm g}\). Here, \(\dot{M}_{\rm Edd}=10L_{\rm Edd}/c^{2}\) where \(L_{\rm Edd}\) is the Eddington luminosity and a radiation efficiency of \(\eta=0.1\) is assumed, and \(\alpha=0.01\) is the viscosity parameter. Combining Equations 6, 7, and 8, we find an expression for \(h/r\), \[\frac{h}{r}=\left(\frac{\dot{M}^{\prime}}{6\pi\alpha\rho_{\rm c}\Omega}\right) ^{1/3}r^{-1}. \tag{9}\] Once \(h/r\) is estimated, Equation 8 determines \(T_{\rm c}\) from the assumed ideal equation of state. Using the disk solution, we construct a disk extending out to \(\simeq 10^{4}r_{\rm g}\) for \(M_{\bullet}=10^{6}\,\rm M_{\odot}\), corresponding to \(2\times 10^{4}\,\rm R_{\odot}\), using \(\simeq 10^{7}\) cells. The disk parameters for our models are summarized in Table 1. #### 2.2.2 Stellar model The initial state of the star in our hydrodynamics simulations was taken from stellar models evolved using the stellar evolution code MESA (version r22.05.1) (Paxton et al., 2011, 2013, 2015, 2019; Jermyn et al., 2023). We consider MS stars with three different masses, \(M_{\star}=1\,\rm M_{\odot}\), \(3\,\rm M_{\odot}\), and \(10\,\rm M_{\odot}\), when the core H mass fraction is 0.3. The stellar radii of those stars are \(R_{\star}=0.95\,\rm R_{\odot}\), \(2.5\,\rm R_{\odot}\), and \(4.7\,\rm R_{\odot}\), respectively. Stars can grow in mass via accretion in AGN disks (Cantiello et al., 2021). The rate of accretion significantly influences the internal structure and chemical compositions of stars embedded in the disk. However, for those stars which approach the SMBH on a parabolic orbit from the effective radius of the nuclear cluster, \(\simeq 0.5\) pc \(\simeq 10^{7}r_{\rm g}\) for \(M_{\bullet}=10^{6}\,\rm M_{\odot}\)(Neumayer et al., 2020), and are disrupted at the first pericenter passage, the accretion onto the star would not be significant. Assuming Bondi-Hoyle accretion (Bondi & Hoyle, 1944; Bondi, 1952)2 onto a \(1\,\rm M_{\odot}\) star on a parabolic orbit in our disk with \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\), the accretion rate can be estimated as \(\dot{M}\simeq 10^{-15}\,\rm M_{\odot}\) yr\({}^{-1}(r/10^{7}r_{\rm g})^{-3/2}\). The total accreted mass until the star reaches the SMBH is \(\simeq 10^{-10}\,\rm M_{\odot}\). Going one step further, because the dynamical friction is \(\propto\dot{M}v\)(Lee & Stahler, 2011, 2014) where \(v\) is the speed of the star, the total momentum of disk gas interacting with the star is many orders of magnitude smaller than the angular momentum of the stellar orbit. Footnote 2: The Bondi radius \(\propto 1/[c_{\rm s}^{2}+v^{2}]\propto v^{-2}\) where \(v\) is the speed of the star because \(c_{\rm s}\lesssim v\).
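The disk normalization described above (Eqs. 6-9) can be evaluated in a few lines; a sketch follows. The mean molecular weight used to translate the sound speed into a mid-plane temperature is an assumption of this sketch (the paper does not quote one), and the prefactors are rounded.

```python
import numpy as np

# cgs constants
G, C, K_B, M_P = 6.674e-8, 2.998e10, 1.381e-16, 1.673e-24
MSUN = 1.989e33

M_BH = 1.0e6 * MSUN
R_G = G * M_BH / C**2
R_INNER = 100.0 * R_G
ALPHA = 0.01
MU = 0.62                        # assumed mean molecular weight (illustrative)

L_EDD = 1.26e38 * (M_BH / MSUN)  # Eddington luminosity [erg/s]
MDOT_EDD = 10.0 * L_EDD / C**2   # Mdot_Edd = 10 L_Edd / c^2 (eta = 0.1)
MDOT_PRIME = 0.4 * MDOT_EDD      # ~0.4 Mdot_Edd at R_inner, as quoted in the text

def aspect_ratio(rho_c, r=R_INNER):
    """Eq. (9): h/r = (Mdot' / (6 pi alpha rho_c Omega))^(1/3) / r."""
    omega = np.sqrt(G * M_BH / r**3)
    h = (MDOT_PRIME / (6.0 * np.pi * ALPHA * rho_c * omega))**(1.0 / 3.0)
    return h / r

def midplane_temperature(rho_c, r=R_INNER):
    """Eq. (8) + ideal EOS sketch: c_s = h*Omega, T ~ mu m_p c_s^2 / k_B."""
    omega = np.sqrt(G * M_BH / r**3)
    c_s = aspect_ratio(rho_c, r) * r * omega
    return MU * M_P * c_s**2 / K_B

for rho_c in (1e-7, 1e-8, 1e-9):
    print(rho_c, aspect_ratio(rho_c), midplane_temperature(rho_c))
```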
\begin{table} \begin{tabular}{c c c c c c c c} \hline Model number & \(M_{\bullet}\) [\(\,\rm M_{\odot}\)] & \(M_{\star}\) [\(\,\rm M_{\odot}\)] & \(\rho_{\rm c}\) [\(\,\rm g\ cm^{-3}\)] & Pro. or retro. & \(r_{\rm p}/r_{\rm t}\) & \(r_{\rm p}\) [\(r_{\rm g}\)] & \(t_{\rm p}\) [hours] \\ \hline \hline 1 & \(10^{6}\) & 1 & \(10^{-7}\) & Pro & 0.3 & 13 & 0.07 \\ 2 & \(10^{6}\) & 1 & \(10^{-7}\) & Retro & 0.3 & 13 & 0.07 \\ 3 & \(10^{6}\) & 1 & \(10^{-8}\) & Pro & 0.3 & 13 & 0.07 \\ 4 & \(10^{6}\) & 1 & \(10^{-8}\) & Retro & 0.3 & 13 & 0.07 \\ 5 & \(10^{6}\) & 1 & \(10^{-9}\) & Pro & 0.3 & 13 & 0.07 \\ 6 & \(10^{6}\) & 1 & \(10^{-9}\) & Retro & 0.3 & 13 & 0.07 \\ 7 & \(10^{6}\) & 1 & \(10^{-11}\) & Pro & 0.3 & 13 & 0.07 \\ 8 & \(10^{6}\) & 1 & \(10^{-11}\) & Retro & 0.3 & 13 & 0.07 \\ 9 & \(10^{6}\) & 1 & \(10^{-12}\) & Pro & 0.3 & 13 & 0.07 \\ 10 & \(10^{6}\) & 1 & \(10^{-12}\) & Retro & 0.3 & 13 & 0.07 \\ \hline 11 & \(10^{6}\) & 3 & \(10^{-8}\) & Pro & 0.3 & 24 & 0.15 \\ 12 & \(10^{6}\) & 3 & \(10^{-8}\) & Retro & 0.3 & 24 & 0.15 \\ \hline 13 & \(10^{6}\) & 10 & \(10^{-8}\) & Pro & 0.3 & 30 & 0.22 \\ 14 & \(10^{6}\) & 10 & \(10^{-8}\) & Retro & 0.3 & 30 & 0.22 \\ \hline \end{tabular} \end{table} Table 2: Initial parameters: (left to right) model number, black hole mass \(M_{\bullet}\) [\(\,\rm M_{\odot}\)], stellar mass \(M_{\star}\) [\(\,\rm M_{\odot}\)], maximum mid-plane density \(\rho_{\rm c}\), relative orientation (prograde vs retrograde), pericenter distance \(r_{\rm p}\) measured in units of the tidal radius \(r_{\rm t}\), \(r_{\rm p}\) measured in units of the gravitational radius \(r_{\rm g}\), and the dynamical time \(t_{\rm p}\) at pericenter. We first map the 1D MESA model into a 3D AREPO grid with \(N\simeq 10^{6}\) cells using the mapping routine by Ohlmann et al. (2017). Then we fully relax the 3D single star, which usually takes up to five stellar dynamical times \(\sqrt{R_{\star}^{3}/GM_{\star}}\). Figure 3 depicts the radial density profile of the fully relaxed stars in comparison with the MESA models. The internal profile of the 3D star matches the MESA model to within a few % except near the surface, corresponding to only a few % of the total mass, where the error is greater than 10%. #### 2.2.3 Initial conditions for star \(+\) disk model The relaxed stars are initially placed at \(8r_{\rm t}\) on a parabolic orbit with a pericenter distance \(r_{\rm p}\simeq 0.3r_{\rm t}\), where \(r_{\rm t}\) is the tidal disruption radius, defined as \(r_{\rm t}\equiv(M_{\bullet}/M_{\star})^{1/3}\,R_{\star}\). The pericenter distance was chosen to ensure a complete disruption of the star in our fiducial model while keeping the events from becoming too relativistic. We consider both prograde and retrograde orbits of the star relative to the orbital motion of the disk gas. Our fiducial models assume \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\). In addition, we consider \(\rho_{\rm c}=10^{-7}\) g cm\({}^{-3}\), \(10^{-9}\) g cm\({}^{-3}\), \(10^{-11}\) g cm\({}^{-3}\), and \(10^{-12}\) g cm\({}^{-3}\). For reference, we perform the simulation of a TDE in an extremely low-density medium with \(\rho\simeq 10^{-20}\) g cm\({}^{-3}\), representing a vacuum, sharing the same encounter parameters as our fiducial model. To examine the stellar-mass dependence, we also study the post-disruption properties of the disk and debris for stars with \(M_{\star}=3\,{\rm M}_{\odot}\) and \(10\,{\rm M}_{\odot}\). For these cases, we only consider \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\).
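The encounter scales in Table 2 follow directly from the definitions above; a short sketch that reproduces \(r_{\rm t}\) and \(r_{\rm p}\) in units of \(r_{\rm g}\) for the three stellar models (the stellar masses and radii are those quoted in §2.2.2):

```python
import numpy as np

G, C = 6.674e-8, 2.998e10          # cgs
MSUN, RSUN = 1.989e33, 6.957e10

def encounter_scales(m_star, r_star, m_bh=1.0e6, rp_over_rt=0.3):
    """Tidal radius r_t = (M_bh/M_star)^(1/3) R_star and r_p = 0.3 r_t,
    expressed in units of the gravitational radius r_g = G M_bh / c^2."""
    r_t = (m_bh / m_star)**(1.0 / 3.0) * r_star * RSUN      # [cm]
    r_p = rp_over_rt * r_t
    r_g = G * m_bh * MSUN / C**2
    return r_t / r_g, r_p / r_g

# (M_star [Msun], R_star [Rsun]) for the three MS models
for m, r in [(1.0, 0.95), (3.0, 2.5), (10.0, 4.7)]:
    rt_rg, rp_rg = encounter_scales(m, r)
    print(f"M* = {m:4.1f} Msun: r_t = {rt_rg:5.1f} r_g, r_p = {rp_rg:4.1f} r_g")
```

Running this gives pericenter distances of roughly 13, 24 and 31 \(r_{\rm g}\), consistent with the values listed in Table 2.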
We performed convergence tests for the retrograde version of our fiducial models with different resolutions for the star (\(N_{\star}\)=125000, 250000, 500000, \(10^{6}\) cells) and disk (\(N_{\rm disk}=6\times 10^{6}\) and \(1.2\times 10^{7}\)). By comparing several key quantities (e.g., the debris mass as a function of radius from the black hole, and the average radial mass infall rate towards the black hole), we confirmed that the results have already converged very well with \(N_{\star}=250000\) and \(N_{\rm disk}=6\times 10^{6}\). To ensure the convergence, we chose \(N_{\star}=10^{6}\). We summarize the model parameters in Table 2. Figure 2: Mid-plane density (_top_) and enclosed mass (_bottom_) of fully relaxed disks with a different initial mid-plane density. The grey lines in the _top_ panel showing the initial density profile lie on top of the profiles for the relaxed disks. On the other hand, the grey horizontal lines in the _bottom_ panel indicate the masses of the stars considered. Like Figure 1, the upper \(x-\)axis shows how long it takes for a star on a parabolic orbit to travel the given distance. Figure 3: Radial density (_top_) of fully relaxed 3D stars with mass \(M_{\star}=1\,{\rm M}_{\odot}\) (red solid), \(3\,{\rm M}_{\odot}\) (blue solid), and \(10\,{\rm M}_{\odot}\) (orange solid), over-plotted with lines for the MESA models (grey dashed lines) and the relative errors (_bottom_) between the two density profiles. ## 3 Results ### 3.1 Overview We first provide a qualitative overview of the results of our simulations. More quantitative descriptions will be given in the following sections. Figure 4 shows successive moments in a full disruption of the \(1\,{\rm M}_{\odot}\) star in our fiducial models (\(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\)) with a prograde (_left_) and retrograde (_right_) orbit. For comparison, we show in the _middle_ column the same moments in a naked TDE sharing the same encounter parameters. The pre-disruption orbit and the internal structure of the star are not significantly affected by the disk, given that a negligible amount of disk mass interacts with the star until it reaches pericenter (\(1^{\rm st}\) panels). Upon disruption, the debris starts to expand in size. The increasingly larger cross-section makes the debris more subject to interactions with the disk (\(2^{\rm nd}-3^{\rm rd}\) panels). Depending on whether the orbit is prograde or retrograde, the evolution of the debris can be qualitatively different, meaning potentially different observational signatures. In the prograde case, the outer edges of the debris are gradually mixed into the disk via the Rayleigh-Taylor instability (Taylor, 1950; Rayleigh, 1882). Due to the coherent motion between the debris and the disk, as illustrated in the _left_ panel of Figure 5, the interaction with the disk acts to add angular momentum to the debris. On the other hand, the evolution of debris in the retrograde case is more dramatic due to the significant cancellation of angular momentum. In the retrograde case, like the prograde case, the debris is continuously lost to the disk. But the mixing is more violent, as shown in the 3\({}^{\rm rd}\)_right_ panel of Figure 5. As a result, the initially coherent motion of the debris is significantly perturbed even before any of the bound matter starts to return to the SMBH.
Because of increasingly irregular perturbations caused by the Rayleigh-Taylor instability, the energy distribution and the resulting fallback curve of the debris tend to be bumpier than those for the prograde case. In the case of a sufficiently high mid-plane density (e.g., \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\)), the entire debris can be mixed with the disk in less time than the peak fallback time of debris in a naked TDE with the same disruption parameters. Figure 4: Successive moments in a full disruption event on a prograde (_left_) and retrograde (_right_) orbit in a disk with a mid-plane density of \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\). The _middle_ column shows them in a naked disruption event at the same times. The spatial scale shown in the inset corresponds to roughly 1/20 of the size indicated below the bar on the _left-bottom_ corner. Continuous interactions between the debris and the disk gas result in a significant perturbation of the debris’s orbit and, therefore, its structure. The impact of the disk interaction is greater for the retrograde orbit than the prograde orbit. At later times, the debris is completely disintegrated and mixed into the disk. ### 3.2 Debris mass loss - semi-analytic approach To quantify the mass loss of debris to the disk, we distinguish the debris from the disk gas using a passive scalar. The passive scalar is an artificial scalar quantity initially assigned to each cell. The scalar then evolves via advection. The initial value of the passive scalar for the cells that belong to the stars is one, and for the disk cells, it is zero. Depending on the mass exchange (and thus momentum exchange) between the cells, the passive scalar varies between zero (for disk cells) and one (for cells originally in the stars). Identifying a specific region of gas with a passive scalar has been used in the literature to investigate mixing of gas in various contexts (e.g., McCourt et al., 2015; Gronke and Oh, 2018; Dutta and Sharma, 2019; Kanjilal et al., 2021; Farber et al., 2022; Farber and Gronke, 2022). Our close investigation of the distribution of the scalar suggests that the scalar quantity for the debris in a coherent motion is generally larger than 0.99, meaning it mixes with the disk material by roughly less than 1% in mass. Figure 6 illustrates the fractional mass of debris relative to the initial stellar mass as a function of the distance of the center of mass of the debris from the SMBH. The _left_ (_right_) panel compares the debris mass between models with different mid-plane disk densities (stellar masses). The most noticeable difference is between the prograde and retrograde cases. For the case with the retrograde orbit relative to the disk with the highest density (\(\rho_{\rm c}=10^{-7}\) g cm\({}^{-3}\), red solid line in the _left_ panel), the entire debris gets mixed into the disk near the density cusp at \(r\simeq 10^{3}r_{\rm g}\). On the other hand, for the prograde case with the same disk, the mass loss is less severe: \(\simeq 30\%\) of the debris survives until it reaches \(7000r_{\rm g}\). As the disk mid-plane density decreases or the stellar mass increases, a larger fraction of debris can reach farther out. The impact of the debris mass loss on the disk structure would be insignificant because the enclosed mass of the disk (dotted diagonal lines in Figure 6) is many orders of magnitude greater than the stellar mass by the time the entire debris is dissociated and mixed into the disk.
We can understand the trends of debris mass loss to the disk by comparing how much disk mass interacts to remove the momentum of the debris along the way out. To this end, we build a semi-analytic model for the mass of debris that is mixed into the disk in the retrograde case, which allows us to estimate the maximum distance that the debris can travel through a disk. For the prograde case, the momentum of the disk is added to the debris (see Figure 7), so this semi-analytic model does not apply to the prograde case. We assume that the disruption of the coherent motion of the debris is primarily governed by the amount of mass of the disk flow hitting the debris. In other words, the remaining debris mass \(M_{\rm debris}\) that continues to follow the ballistic orbit the debris would otherwise have is simply the initial debris mass, or stellar mass \(M_{\star}\), minus the mass of the disk flow \(M_{\rm d}\) continuously interacting with the debris, \[M_{\rm debris}(t)=M_{\star}-M_{\rm d}(t). \tag{10}\] When the self-gravity is not important, the slow-down of the debris would naturally lead to mixing into the disk. However, when the self-gravity is strong, the entire debris would slow down instead of mixing into the disk. The former is more relevant for AGN-TDEs. We may be able to estimate \(M_{\rm d}(t)\) as, \[M_{\rm d}(t)\sim\int_{0}^{t}\rho_{\rm disk}(r)v_{\rm disk}(r)A_{\rm debris}(t^{ \prime})dt^{\prime}, \tag{11}\] where \(\rho_{\rm disk}(r)\) is the density at distance \(r\) from the SMBH, \(v_{\rm disk}(r)\simeq\sqrt{GM_{\bullet}/r}\) the flow speed at \(r\), and \(A_{\rm debris}\) the cross-section of the debris whose normal is parallel to the disk flow. Although each part of the debris moves at a different speed, we simply assume that the entire debris continues to follow the original orbit of the star, i.e., a parabolic orbit, so that \(r(t)\) since disruption is expressed as \[r(t)=\left(\frac{9GM_{\bullet}}{2}\right)^{1/3}t^{2/3}, \tag{12}\] and the radial velocity is \[v_{\rm debris}(t)=\frac{dr(t)}{dt}=\frac{2}{3}\left(\frac{9GM_{\bullet}}{2} \right)^{1/3}t^{-1/3}. \tag{13}\] With Equations 12 and 13, \({\rho_{\rm disk}(r)}\) and \(v_{\rm disk}(r)\) are now a function of time. To calculate \(A_{\rm debris}(t)\), we assume that by the time the debris arrives at \(r\simeq r_{\rm t}\) since disruption, the debris extends to \(l_{\rm debris}\simeq\alpha\ R_{\star}\) with a width \(w\simeq R_{\star}\), where \(\alpha\simeq 20-22\) from our simulations (see Figure 4). We further assume that the debris expands in size such that \(l\propto t^{4/3}\) and \(w_{\rm debris}\propto t^{1/3}\) before the most bound debris starts to return (Coughlin et al., 2021; Bonnerot & Stone, 2021). Figure 5: Zoom-in near the head of the debris for the prograde (_left_) and retrograde (_right_) cases with \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\), measured at \(\sim 4\) days after disruption. The white arrows show the direction of motion of the gas. In the prograde case, the disk interactions act to increase the angular momentum of the debris whereas, in the retrograde case, the disk interactions cancel out the angular momentum of the debris.
These assumptions allow us to write an expression for \(l_{\rm debris}\) and \(w_{\rm debris}\), \[l_{\rm debris} \simeq \alpha\ R_{\star}\left(\frac{t}{t(r=r_{\rm t})}\right)^{4/3}, \tag{14}\] \[\simeq \alpha\ R_{\odot}\left(\frac{M_{\star}}{1\,{\rm M}_{\odot}} \right)^{2/3}\left(\frac{R_{\star}}{1\,{\rm R}_{\odot}}\right)^{-1}\left( \frac{t}{0.01{\rm days}}\right)^{4/3},\] and \[w_{\rm debris} \simeq R_{\star}\left(\frac{t}{t(r=r_{\rm t})}\right)^{1/3}, \tag{15}\] \[\simeq \ R_{\odot}\left(\frac{M_{\star}}{1\,{\rm M}_{\odot}}\right)^{1/6}\left(\frac{R_{\star}}{1\,{\rm R}_{\odot}}\right)^{1/2}\left(\frac{t}{0.01{\rm days}}\right)^{1/3},\] where \(t(r=r_{\rm t})\) is estimated using Equation 12. Note that the average density of debris \(\tilde{\rho}_{\rm debris}\propto M_{\star}/[l_{\rm debris}w_{\rm debris}^{2}]\propto t^{-2}\), which we have confirmed from our simulations. It follows that the cross-section \(A_{\rm debris}\) is, \[A_{\rm debris}\simeq l_{\rm debris}w_{\rm debris}=\alpha\ R_{\odot}^{2}\left( \frac{M_{\star}}{1\,{\rm M}_{\odot}}\right)^{5/6}\left(\frac{R_{\star}}{1 \,{\rm R}_{\odot}}\right)^{-1/2}\left(\frac{t}{0.01{\rm days}}\right)^{5/3}. \tag{16}\] Because the disk density profile has two regions, i.e., flat for \(r<r_{\rm cusp}=10^{3}r_{\rm g}\) and power-law for \(r>r_{\rm cusp}\), we will calculate the mass loss due to disk-debris interaction for the two regions separately. 1. _Flat region (\(\rho_{\rm disk}=\rho_{\rm c}\))_: the time required for the debris to reach the cusp is roughly estimated using Equation 12, \[t_{\rm cusp}=t(r=r_{\rm cusp})\simeq 0.85{\rm days}\left(\frac{r_{\rm cusp}}{10^{3}r_{\rm g}}\right)^{3/2}\left(\frac{M_{\bullet}}{10^{6}\,{\rm M}_{\odot}}\right)^{-1/2}. \tag{17}\] So for \(t<t_{\rm cusp}\), the mass loss is, \[M_{\rm d}^{r<r_{\rm cusp}}(t) \simeq \int_{0}^{t}\rho_{\rm c}v_{\rm disk}(r(t^{\prime}))A_{\rm debris}(t^{\prime})dt^{\prime}, \tag{18}\] \[= 0.05\,{\rm M}_{\odot}\left(\frac{\alpha}{22}\right)\left(\frac{M_{\bullet}}{10^{6}\,{\rm M}_{\odot}}\right)^{1/3}\left(\frac{M_{\star}}{1\,{\rm M}_{\odot}}\right)^{5/6}\left(\frac{R_{\star}}{1\,{\rm R}_{\odot}}\right)^{-1/2}\] \[\times \left(\frac{\rho_{\rm c}}{10^{-8}\ {\rm g\ cm}^{-3}}\right)\left(\frac{t}{0.85{\rm days}}\right)^{7/3}.\] As shown in the equation, the fractional mass loss \(M_{\rm d}^{r<r_{\rm cusp}}/M_{\star}\) has a relatively weak dependence on \(M_{\star}\) (\(\propto M_{\star}^{-1/6}\)) and \(M_{\bullet}\) (\(\propto M_{\bullet}^{1/3}\)), but rather strongly depends on \(\rho_{\rm c}\). For example, roughly half the debris mass would be mixed into the disk at \(r\simeq r_{\rm cusp}\) (or \(t\simeq 0.85\) days) when \(\rho_{\rm c}\simeq 10^{-7}\ {\rm g\ cm}^{-3}\), which is illustrated in the _left_ panel (red solid line) of Figure 6. 2.
_Power-law region (\(\rho_{\rm disk}\propto r^{p}\) with \(p=-3\))_: the mass loss at \(t_{\rm cusp}\lesssim t\lesssim t_{0}\) is, \[M_{\rm d}^{r>r_{\rm cusp}}(t) \simeq M_{\rm d}^{r<r_{\rm cusp}}(t=t_{\rm cusp})+\int_{t_{\rm cusp}}^{t}\rho_{\rm c}\left(\frac{r(t^{\prime})}{r_{\rm cusp}}\right)^{-3}v_{\rm disk}(r(t^{\prime}))A_{\rm debris}(t^{\prime})dt^{\prime},\] \[= \left(\frac{\alpha}{22}\right)\left(\frac{M_{\bullet}}{10^{6}\,{\rm M}_{\odot}}\right)^{-2/3}\left(\frac{M_{\star}}{1\,{\rm M}_{\odot}}\right)^{5/6}\left(\frac{R_{\star}}{1\,{\rm R}_{\odot}}\right)^{-1/2}\left(\frac{r_{\rm cusp}}{10^{3}r_{\rm g}}\right)^{3}\left(\frac{\rho_{\rm c}}{10^{-8}\ {\rm g\ cm}^{-3}}\right)\] \[\times \left[-0.3\,{\rm M}_{\odot}\left(\frac{r_{\rm cusp}}{10^{3}r_{\rm g}}\right)^{1/2}\left(\frac{M_{\bullet}}{10^{6}\,{\rm M}_{\odot}}\right)^{-1/6}+0.85\,{\rm M}_{\odot}\left(\frac{t}{15\,{\rm days}}\right)^{1/3}\right].\] Figure 6: Time evolution of the fractional remaining debris mass that has not been mixed into the disk as a function of the distance of the center of mass of the debris from the black hole for different models: (_left_) different disk densities and (_right_) different stellar masses. The solid (dotted) lines represent the prograde (retrograde) cases. The dot-dashed diagonal lines in both panels indicate the prediction from our semi-analytic model for the retrograde cases (§3.2). The dotted diagonal lines show the enclosed mass of the disk: in the _left_ panel, the colors of the lines for the disk enclosed mass match those for the debris mass while in the _right_ panel, the disk mass is for \(\rho_{\rm c}=10^{-8}\ {\rm g\ cm}^{-3}\). The mass loss predicted from the semi-analytic model is in good agreement with the simulation results for the retrograde cases (solid lines), which are depicted in Figure 6 using dot-dashed lines. Our semi-analytic model suggests that in our fiducial model with \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\), roughly 50% (80%) of the debris mass would be mixed into the disk in 15 days (30 days \(\simeq t_{0}\)) while the debris is moving away from the SMBH. Among the rest of the remaining debris (20%), the bound part would have to plow through the disk inwards while returning to the SMBH, such that it is very likely that at least the remaining bound debris would be completely mixed into the disk on the way in. In fact, we do not observe any coherent return of debris to the SMBH in our simulations. This means no TDE-like flare would be generated.
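A rough numerical version of this semi-analytic estimate, integrating Eq. (11) with the parabolic-orbit kinematics of Eqs. (12)-(16) and the piecewise disk density, is sketched below for the fiducial \(1\,{\rm M}_{\odot}\) case. The exact numbers depend on the adopted normalizations (e.g., \(\alpha\), the time at \(r=r_{\rm t}\)), so tens-of-percent differences from the quoted coefficients are expected.

```python
import numpy as np

# cgs constants and fiducial parameters (1 Msun star, 1e6 Msun SMBH)
G, C = 6.674e-8, 2.998e10
MSUN, RSUN, DAY = 1.989e33, 6.957e10, 86400.0
M_BH, M_STAR, R_STAR = 1.0e6 * MSUN, 1.0 * MSUN, 0.95 * RSUN
R_G = G * M_BH / C**2
R_CUSP = 1.0e3 * R_G
ALPHA_LEN = 22.0                        # debris length in R_star at r = r_t

R_T = (M_BH / M_STAR)**(1.0 / 3.0) * R_STAR
T_RT = np.sqrt(2.0 * R_T**3 / (9.0 * G * M_BH))   # time to reach r_t (Eq. 12 inverted)

def debris_mixed_mass(t_end, rho_c=1e-8, n_steps=20000):
    """Numerical Eq. (11) for the retrograde case: M_d = int rho v_disk A dt."""
    t = np.linspace(T_RT, t_end, n_steps)
    r = (9.0 * G * M_BH / 2.0)**(1.0 / 3.0) * t**(2.0 / 3.0)      # Eq. (12)
    rho = np.where(r < R_CUSP, rho_c, rho_c * (r / R_CUSP)**-3)   # disk density
    v_disk = np.sqrt(G * M_BH / r)
    l_deb = ALPHA_LEN * R_STAR * (t / T_RT)**(4.0 / 3.0)          # Eq. (14)
    w_deb = R_STAR * (t / T_RT)**(1.0 / 3.0)                      # Eq. (15)
    integrand = rho * v_disk * l_deb * w_deb                      # Eq. (16) for A
    # trapezoidal integration
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)) / MSUN

for days in (0.85, 15.0, 30.0):
    print(f"t = {days:5.2f} d : M_d ~ {debris_mixed_mass(days * DAY):.2f} Msun")
```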
However, as the debris travels away from the SMBH, the distribution continuously evolves differently over time, depending on the relative orientation of the orbit. For the prograde case (_top_ panels), one of the most noticeable trends is that the angular momentum increases over time. On the other hand, the angular momentum decreases for the retrograde case (_bottom_ panels). This trend is expected based on how the motion of the debris is aligned or anti-aligned with the disk flow (see Figure 5). Additionally, the distribution for the retrograde case is substantially more perturbed by the disk. At around 10 days, most of the debris in the retrograde case is mixed into the disk, and its angular momentum becomes less than 80% of the initial angular momentum. Compared to changes in angular momentum within the debris (\(\lesssim 5\%\)) for naked TDEs (Cheng and Bogdanovic, 2014; Ryu et al., 2020), the subsequent change in the angular momentum due to continuous interactions with the disk is much more substantial. Other cases with different disk mid-plane density and stellar masses reveal qualitatively the same trend.

Figure 7: Distribution of specific energy \(E\) and specific angular momentum \(L\) for a full disruption of a \(1\,{\rm M}_{\odot}\) star on a prograde (_top_) and retrograde (_bottom_) orbit relative to that of a disk with \(\rho_{\rm c}=10^{-8}\) g cm\({}^{-3}\) around a \(10^{6}\,{\rm M}_{\odot}\) SMBH at three different times.

We further present the distribution of \(E\) for \(M_{\star}=1\,{\rm M}_{\odot}\), by integrating \(d^{2}M/dEdL\) over \(L\), in Figure 8. Because the energy distribution for the disk with \(\rho_{\rm c}\lesssim 10^{-11}\) g cm\({}^{-3}\) is almost identical to our vacuum case, we only show the distribution for \(\rho_{\rm c}\gtrsim 10^{-9}\) g cm\({}^{-3}\) (the same holds for the fallback rate in §3.4). For comparison, we depict the energy distribution of debris produced in a full disruption in a vacuum sharing the same encounter parameters (grey line in each panel), measured at 2 days after disruption. For both prograde (_left_ panels) and retrograde (_right_ panels) cases, the energy distribution at \(t=0.2\) days is almost identical to that for the naked TDE, except for the sharp cut-off at the far end of the wing for the unbound debris, indicating that the most unbound debris has already been mixed into the disk. The subsequent interaction of the debris with the disk gas continuously perturbs the debris starting from its head and tails (where the density is the lowest), corresponding to the wings of the distribution. As a result, the distribution becomes narrower. Notice, however, that the rate at which each side of the distribution becomes narrower is different. At early times (\(t\lesssim\) a few days), the unbound debris is lost to the disk at a faster rate than the bound debris. However, the "mixing" or "slowing-down" rate of the unbound debris becomes slower than that of the bound debris at later times. In all cases except for the retrograde case with \(\rho_{\rm c}=10^{-7}\) g cm\({}^{-3}\), the distribution for the unbound debris does not change at \(t\gtrsim 5\) days, while that for the bound debris continues to shrink. This behavior can be understood based on when and how long the debris moves in a denser region of the disk. Upon disruption, the unbound debris advances further out, meaning that it interacts with the disk more at a given time.
At later times, once the unbound debris moves beyond the density cusp, because the disk density continues to decrease, the perturbation of the unbound debris due to the disk material becomes increasingly weaker. However, the bound debris stays for a longer time in denser parts of the disk as it slows down before returning to the SMBH, meaning more interactions with the disk. It is worth noting that the distribution becomes bumpy when irregular debris structure develops due to the Rayleigh-Taylor instability, which is more pronounced for the retrograde case with higher disk densities (see the _right_ panel of Figure 5).

Figure 8: Energy distribution of debris produced in a full disruption of a 1 M\({}_{\odot}\) star on a prograde (_left_) or a retrograde (_right_) orbit relative to an AGN disk with \(\rho_{\rm c}=10^{-7}\) g cm\({}^{-3}\) (_top_), \(10^{-8}\) g cm\({}^{-3}\), and \(10^{-9}\) g cm\({}^{-3}\), around a \(10^{6}\,\mathrm{M}_{\odot}\) SMBH. The grey line in each panel shows TDEs of the same star in vacuum (naked TDEs), which is sitting behind the line for AGN-TDEs at \(t\sim 0\) days. The color bar indicates the time at which the distribution is measured since disruption. Notice the different time scales in the color bars.

The features mentioned above are also found in the cases with different stellar masses. Given the qualitative similarities, we present the distribution for \(M_{\star}=3\,\mathrm{M}_{\odot}\) and \(10\,\mathrm{M}_{\odot}\) in the _upper_ panels of Figure 12 and Figure 12, respectively. Note that at this pericenter distance, the \(3\,\mathrm{M}_{\odot}\) star is only partially disrupted and a remnant survives, which corresponds to the peak at \(E\simeq 0\) in the energy distribution.

Figure 9: Same as Figure 8, but for the fallback rate.

### Fallback rate

Using the energy distribution and assuming a ballistic orbit of the debris, we estimate the mass fallback rate, which is illustrated in Figure 9 for \(M_{\star}=1\,\mathrm{M}_{\odot}\). For completeness, we present the fallback rate for \(M_{\star}=3\,\mathrm{M}_{\odot}\) in Figure A1 and for \(M_{\star}=10\,\mathrm{M}_{\odot}\) in Figure A2. The continuous shrinkage of the energy distribution for the bound debris leads to a decrease in the peak fallback rate and an increase in the peak fallback time. For \(\rho_{\mathrm{c}}=10^{-8}\) g cm\({}^{-3}\) with the prograde orbit (_middle-left_ panels), the peak mass return rate decreases from \(100\dot{M}_{\mathrm{Edd}}\) to \(20\dot{M}_{\mathrm{Edd}}\) in 24 days. Here, \(\dot{M}_{\mathrm{Edd}}=L_{\mathrm{Edd}}/\eta c^{2}\), where \(L_{\mathrm{Edd}}\) is the Eddington luminosity and \(\eta=0.1\) is the radiative efficiency. The debris with a bumpy energy distribution in some of the cases (see Figure 8) reveals irregular patterns in the rate on top of the trend of the peak fallback rate and time. For example, for \(\rho_{\mathrm{c}}=10^{-7}\) g cm\({}^{-3}\) (_top_ panels), the rate curves gradually shift towards the bottom-right corner of the figure while the curves become increasingly bumpy. The bumpiness and the change in the peak fallback rate and time are greater for higher \(\rho_{\mathrm{c}}\) and for a retrograde orbital configuration. One observationally relevant finding is that _the rate at which the bound debris is mixed into the disk is faster than the rate at which the debris returns in all cases shown in Figure 9_ (\(\rho_{\mathrm{c}}\gtrsim 10^{-9}\) g cm\({}^{-3}\)).
In other words, the bound debris is continuously mixed into the disk before it returns to the SMBH in a coherent fashion like it does in a naked TDE. This suggests that the resulting light curves of AGN TDEs in sufficiently dense gas disks would not simply be TDE-like lightcurves on top of AGN lightcurves (see SS4.2). ## 4 Discussion ### Lightcurves #### 4.1.1 Passage of star around the SMBH A close passage of a star can significantly perturb the inner part of the disk, which possibly enhances the accretion rate until the perturbed disk settles again over a time scale comparable to the cooling timescale. To zeroth order, whether a stellar passage can significantly affect the structure of the inner disk can be measured by comparing the swept-up mass of the star during the pericenter passage, \(M_{\mathrm{swept-up}}\simeq\rho_{\mathrm{disk}}\pi R_{\mathrm{c}}^{2}r_{ \mathrm{p}}\), to the disk mass within the pericenter distance, \(M_{\mathrm{disk}}(r<r_{\mathrm{p}})\simeq\rho_{\mathrm{disk}}(h/r)r_{\mathrm{p }}^{3}\), \[\frac{M_{\mathrm{swept-up}}}{M_{\mathrm{disk}}(r<r_{\mathrm{p}})}=0.3\left( \frac{h/r}{0.05}\right)^{-1}\left(\frac{M_{\bullet}}{10\,\mathrm{M}_{\odot}} \right)^{2/3}\left(\frac{M_{\bullet}}{10^{6}\,\mathrm{M}_{\odot}}\right)^{-2/3 }\left(\frac{r_{\mathrm{p}}/r_{\mathrm{r}}}{0.3}\right)^{2}. \tag{20}\] This ratio of order unity suggests that a passage of a main-sequence star, in particular a massive one, can significantly affect the inner disk over a very short time scale comparable to the dynamical time at pericenter. Qualitatively, this may cause a state-change in the inner disk, much like those observed in changing-look AGN exhibiting an increase in luminosity (Graham et al., 2020). The temperature at the innermost stable circular orbit can be parameterized as (McKernan et al., 2022) \[T_{\mathrm{ISCO}} \approx 10^{6}\mathrm{K}\left(\frac{M_{\bullet}}{10^{6}M_{\odot}} \right)^{-1/4}\left(\frac{\dot{M}}{0.1M_{\mathrm{Edd}}}\right)^{1/4}\left( \frac{\eta}{0.1}\right)^{-1/4}\] \[\times \left(\frac{r_{\mathrm{ISCO}}}{6r_{g}}\right)^{-3/4}\left(\frac{f }{2}\right)^{-1/4},\] where \(r_{\mathrm{ISCO}}\) is the location of the innermost stable circular orbit and \(f\) is a numerical factor. By contrast, the temperature of the shock due to the close pass of the star (\(T_{\mathrm{shock}}\sim L_{\mathrm{shock}}/4\pi R_{\mathrm{shock}}^{2}\sigma\)) can be parameterized as (McKernan et al., 2022) \[T_{\mathrm{shock}}\approx 4\times 10^{6}\mathrm{K}\left(\frac{a}{10r_{g}}\right)^{ -3/8}\left(\frac{\rho_{\mathrm{disk}}}{10^{-8}\,\mathrm{g}\,\mathrm{cm}^{-3}} \right)^{1/4}, \tag{22}\] where we assume \(R_{\mathrm{shock}}\sim R_{\bullet}\) and clearly the passage of the star must heat the innermost disk substantially. At such temperatures, prompt X-ray flaring and fast outflows are likely (e.g. Kosec et al., 2023). If the heating of passage translates into a fiducial local aspect ratio increase then the puffed-up inner disk is accreted on a shorter viscous timescale \(t_{\nu}\). Since \(t_{\nu}\) can be parameterized as (e.g. Stern et al., 2018) \[t_{\nu}\sim 6\,\mathrm{yr}\left(\frac{h/r}{0.05}\right)^{-2}\left(\frac{ \alpha}{0.01}\right)^{-1}\left(\frac{M_{\bullet}}{10^{6}M_{\odot}}\right) \left(\frac{R}{100r_{g}}\right)^{3/2}, \tag{23}\] we can see that e.g. 
a doubling of the average disk aspect ratio \(h/r\) due to local heating leads to a significantly shorter (\(1/4\)) accretion timescale, and so there is a temporary enhancement in accretion (and therefore in the \(\eta\dot{M}c^{2}\) luminosity) while the local disk accretes and cools over the approximate thermal timescale (Stern et al., 2018) \[t_{\mathrm{th}}\sim 12\,\mathrm{days}\left(\frac{\alpha}{0.01}\right)^{-1}\left(\frac{M_{\bullet}}{10^{6}M_{\odot}}\right)\left(\frac{R}{100r_{g}}\right)^{3/2}. \tag{24}\] Thus, if debris from the TDE can make it back to the SMBH on these timescales (\(t_{\mathrm{th}}\)), the initial impulse heating is continued and added to, in a single episode. If debris from the TDE takes longer than \(t_{\mathrm{th}}\) to return to the inner disk, then the lightcurve will consist of two separate episodes: the initial perturbation, followed by the debris fallback and accretion. But as explained in the following section, the TDE-like lightcurves from the debris fallback and accretion can only be created when the disk density is sufficiently small.

#### 4.1.2 Full tidal disruption event

The subsequent source of a flare is the stellar debris produced in the tidal disruption. The biggest difference between in-plane AGN TDEs and 'naked' or standard TDEs is the continuous interaction between debris and disk gas, resulting in the time evolution of the debris orbits and debris structure. In naked TDEs, because the debris' orbit is almost ballistic, the post-disruption debris orbit does not change significantly over time until debris returns to the SMBH. This feature allows us to make a prediction for the fallback rate curve (Hills, 1988; Rees, 1988) with the energy distribution of debris upon disruption. However, in the case of AGN-TDEs, because the debris continuously interacts with surrounding disk gas, the shape of the fallback rate curve, such as the peak fallback rate and the slope of the decaying part of the fallback rate curve, depends strongly on the disk density (\(\rho_{\mathrm{disk}}\)). If \(\rho_{\mathrm{disk}}\) is sufficiently high, greater than some critical value (\(\rho_{\mathrm{crit}}\)), the debris is completely mixed with the disk before it ever returns. In this case, no debris returns to the SMBH in a coherent and eccentric fashion as predicted for naked TDEs. Using the semi-analytic model developed in §3.2, we estimate \(\rho_{\mathrm{crit}}\) for a complete dissociation of the debris in a retrograde orbit on a time scale comparable to the peak mass return time (estimated assuming naked TDEs) as a function of \(M_{\star}\) and \(M_{\bullet}\) in the _left_ panel of Figure 10. Although we assume \(r_{\mathrm{cusp}}=10^{3}r_{\mathrm{g}}\) here, \(\rho_{\mathrm{crit}}\) can be easily calculated using our semi-analytic model (see §3.2) with different values of \(r_{\mathrm{cusp}}\). As shown in the _left_ panel, the minimum \(\rho_{\mathrm{crit}}\) has a relatively weak dependence on \(M_{\star}\), while it mostly depends on \(M_{\bullet}\). The reason that \(\rho_{\mathrm{crit}}\) is lower for higher \(M_{\bullet}\) is that each part of the debris travels a longer absolute distance for a given peak mass return time.
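As a rough numerical cross-check of these scalings, the short sketch below evaluates the flat-region mass-loss estimate of Equation 18 together with the fitted parameterization of \(\rho_{\rm crit}\) given in Equation 25 below. It reproduces the statement in §3.2 that roughly half of the debris of a \(1\,{\rm M}_{\odot}\) star is mixed into the disk by \(t_{\rm cusp}\simeq 0.85\) days when \(\rho_{\rm c}\simeq 10^{-7}\,{\rm g\ cm^{-3}}\). This is only a minimal illustration under the quoted normalizations: it ignores the power-law part of the density profile (Equation 19) and the stellar-structure and relativistic corrections entering \(t_{0}\), so it is not the full calculation behind Figure 10.

```python
def m_mixed_flat(t_days, rho_c, m_bh=1e6, m_star=1.0, r_star=1.0, alpha=22.0):
    """Debris mass [M_sun] mixed into the disk inside the density cusp (Eq. 18)."""
    return (0.05 * (alpha / 22.0)
            * (m_bh / 1e6) ** (1.0 / 3.0)
            * m_star ** (5.0 / 6.0)
            * r_star ** -0.5
            * (rho_c / 1e-8)
            * (t_days / 0.85) ** (7.0 / 3.0))

def t_cusp_days(m_bh=1e6, r_cusp_rg=1e3):
    """Time for the debris to reach the density cusp (Eq. 17)."""
    return 0.85 * (r_cusp_rg / 1e3) ** 1.5 * (m_bh / 1e6) ** -0.5

def rho_crit(m_bh):
    """Fitted critical density for complete mixing of retrograde debris (Eq. 25)."""
    return 1e-8 * (m_bh / 1e6) ** -2.5

# Half of a 1 M_sun star's debris is mixed by t_cusp (~0.85 days) for
# rho_c ~ 1e-7 g/cm^3, matching the example quoted after Eq. 18.
print(m_mixed_flat(t_cusp_days(), rho_c=1e-7))   # ~0.5 M_sun

# Scaling of the critical density with SMBH mass (cf. the left panel of Figure 10).
for m_bh in (1e5, 1e6, 1e7):
    print(f"M_bh = {m_bh:.0e} M_sun -> rho_crit ~ {rho_crit(m_bh):.1e} g cm^-3")
```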
We also present in the _right_ panel the ratio of the time for a complete dissociation, \(t_{\rm dis}=t(M_{\rm d}=M_{\star})\), of debris to the peak mass return time \(t_{0}\) for naked TDEs as a function of \(\rho_{\rm crit}\) and \(M_{\star}\), suggesting that in the large parameter space relevant for AGN disks, debris is completely mixed with the disk before returning to the SMBH. Thus, for dense AGN disks, _the resulting lightcurves cannot be described by a simple superposition of the luminosity of naked TDEs on top of that of AGN disks_. Conveniently, we can parameterize \(\rho_{\rm crit}\) at \(t_{\rm dis}\simeq t_{0}\) for the retrograde TDEs, as \[\rho_{\rm crit}\simeq 10^{-8}\,{\rm g~cm^{-3}}\left(\frac{M_{\bullet}}{10^{6}\,{\rm M_{\odot}}}\right)^{-2.5}. \tag{25}\] The value of \(\rho_{\rm crit}\) would be higher for prograde TDEs than for retrograde TDEs. Even for those cases where the debris is partially disintegrated, because the mass fallback rate curve would be significantly different from that for naked TDEs, the resulting luminosity associated with the disruption of a star could be different. Nonetheless, the returned debris can perturb the inner disk, which would boost the luminosity temporarily. However, detailed modeling of the response of the disk near the SMBH is beyond the scope of this paper. In some cases, disruptions end up adding some mass to the disk without generating a TDE-like flare. However, the addition of the mass would have a minimal effect on the disk structure because the disk mass inside the radius at which the debris is completely disintegrated, namely \(r\gtrsim 10^{7}r_{\rm g}\) for \(\rho_{\rm c}\lesssim 10^{-7}\,{\rm g~cm^{-3}}\), is much greater than the mass of the debris (see Figure 6). Quantitatively, in the limit of low \(\rho_{\rm disk}\ll\rho_{\rm c}\) (more specifically \(\rho_{\rm disk}\lesssim\rho_{\rm c}/10^{3}\) based on our simulations), AGN-TDEs should look increasingly like standard 'naked' TDEs. Thus, observations of a TDE-like lightcurve in an AGN should indicate a low density disk with \(\rho_{\rm disk}\ll\rho_{\rm c}\). A low density disk at large radii might be responsible for late-time radio signatures years post-TDE (Cendes et al., 2022), as part of the debris that would otherwise escape interacts with the gas disk and returns later than the main apparently 'naked' TDE. Thus, late-time responses to otherwise 'naked' TDEs could indicate either a weak AGN, or a more distant fuel reservoir, interacting with the debris and driving much later material return.

#### 4.1.3 Partial tidal disruption event

The sources of AGN-TDEs are embedded stars that have either been scattered via dynamical encounters into the AGN loss cone, or are on highly eccentric orbits. In both cases, the probability of a partial tidal disruption event (where the pericenter passage of the star is close, but not too close, to the SMBH) should be higher than that of an actual AGN-TDE. It is worth considering the observational implications of partial AGN-TDEs. As we have seen above, AGN-TDE debris mixing can be significant, particularly in dense AGN disks. This can inhibit and, if the disk is dense enough (\(\rho_{\rm disk}>\rho_{\rm crit}\)), completely prevent the return of TDE material to the SMBH. However, in the case of a partial disruption, some of the outer part of the star is stripped, but the core remains coherent. As long as the orbit is bound (e.g.
if the orbit is highly eccentric), it should return to the SMBH on approximately the orbital period, \(T_{\rm orb}\sim 5\,{\rm day}\,(a/600r_{\rm g})^{3/2}(M_{\bullet}/10^{6}\,{\rm M_{\odot}})\), where \(a\) is the semimajor axis of the remnant's orbit. Repeated passage of a bound remnant will generate similar heating of the inner disk to the first pericenter passage. Such partial disruption perturbations could yield transients like quasi-periodic eruptions (QPEs) as observed in X-rays in some AGNs around low mass SMBHs (e.g. Wevers et al., 2022). The magnitude of any repeating flare depends on the remnant mass and the ratio of the local thermal timescale (\(t_{\rm th}\)) to the returning timescale (\(T_{\rm orb}\) in this case). For example, if the remnant mass is smaller and \(t_{\rm th}>T_{\rm orb}\), the heating flare will appear less prominent against a higher AGN continuum state. In order to explain observed QPE timescales of O(day), this would require highly eccentric retrograde stellar orbits at \(a\sim{\rm few}\times 10^{2}r_{g}\), around smaller mass SMBHs. Such orbits may occur early on in the AGN phase due to disk capture (Wang et al., 2023) or rapid retrograde orbital decay (McKernan et al., 2022).

Figure 10: The minimum critical density \(\rho_{\rm crit}\) (_left_) above which the entire debris is completely mixed into the disk on the peak mass return time \(t_{0}\) estimated right after disruption in a \(M_{\star}-M_{\bullet}\) plane, and the ratio of the complete dissociation time \(t_{\rm dis}\) to the peak mass return time in a \(\rho_{\rm crit}-M_{\star}\) plane. Here, \(t_{0}\) is estimated by including the correction factor for the stellar internal structure and relativistic effects (Ryu et al., 2020, 2020, 2020). The white region near the corner indicates the region of parameter space where stars would be directly captured by the black hole. Similarly, the vertical dashed white line indicates the maximum black hole mass for direct captures.

### Metallicity of AGN disks

AGNs are generally believed to have high metallicity. In particular, the broad line region (BLR) metallicity in AGNs is observed to be substantially super-solar out to high redshift (Hamann & Ferland 1999; Juarez et al. 2009). Quasar host galaxies at \(z<2\) are surrounded by metal-enriched cool gas, believed to originate in AGN outflows (Prochaska et al., 2013). Of course, some of this metallicity enrichment could come from supernovae embedded in AGN at a rate of \(O(10^{-4})\,{\rm yr}^{-1}\) (Juarez et al., 2009), a rate which is very similar to the expected standard TDE rate. It is unclear how stars embedded in AGN disks evolve; but it is possible that they do not undergo supernovae but instead grow in mass and support themselves by inflow of fresh hydrogen from the AGN disk (Cantiello et al., 2021; Jermyn et al., 2023). If this occurs non-negligibly often, then supernovae would be rarer in AGNs than naively expected from standard stellar evolution, making TDEs a plausible means of enriching AGN metallicity. TDEs can also occur around stellar-mass BHs embedded in AGN disks, yielding micro-TDEs (Perets et al., 2016; Yang et al., 2022). Such micro-TDEs can also contribute to metallicity enhancement in the disk.
Assuming TDEs are the sole source of metallicity enhancement through the mixing of stellar debris with the disk, one can estimate, to an order of magnitude, the number of AGN-TDEs, denoted by \(N_{\rm TDE}\), of stars with metallicity \(Z_{\star}>Z_{1}\) required to elevate the metallicity of the AGN disk from \(Z_{0}\) to \(Z_{1}\). The total enclosed mass of the disk within a distance at which a fractional mass \(\xi\) of the debris is mixed after \(N_{\rm TDE}\) TDEs is \(\xi N_{\rm TDE}\,M_{\star}+M_{\rm disk}\), and the total mass of metals after the TDEs is \(Z_{0}M_{\rm disk}+Z_{\star}\xi N_{\rm TDE}\,M_{\star}\). Assuming the total enclosed mass of the disk is conserved, \(N_{\rm TDE}\) can be expressed as, \[N_{\rm TDE}=\left(\frac{Z_{1}-Z_{0}}{Z_{\star}-Z_{1}}\right)\left(\frac{M_{\rm disk}}{\xi\ M_{\star}}\right), \tag{26}\] where \(M_{\rm disk}\) is the enclosed mass of the disk into which debris with a mass of \(\xi\ M_{\star}\) is mixed. As an example, to enhance the metallicity from \(Z_{0}\simeq 0.1Z_{\odot}\) to \(Z_{1}\simeq 0.9Z_{\odot}\) through TDEs of \(1\,{\rm M}_{\odot}\) stars, \(N_{\rm TDE}\simeq 1000\,\xi^{-1}(M_{\rm disk}/10^{2}\,{\rm M}_{\odot})(M_{\star}/\,{\rm M}_{\odot})^{-1}\) when \(Z_{\star}=Z_{\odot}\). If \(Z_{\star}=2Z_{\odot}\), \(N_{\rm TDE}\simeq 100\,\xi^{-1}(M_{\rm disk}/10^{2}\,{\rm M}_{\odot})(M_{\star}/\,{\rm M}_{\odot})^{-1}\). However, it is important to note that many variables, namely \(M_{\star}\), \(Z_{\star}\), \(M_{\rm disk}\), and \(\xi\), are highly uncertain. To determine these quantities accurately, a more detailed modeling of the dynamics in stellar clusters around AGN disks would be necessary.

## 5 Caveats

Although our simulations treat hydrodynamical effects accurately, there are two main caveats. First, no relativistic effects are included. It has been recognized that relativistic effects would play a major role in determining the evolution of debris in TDEs by massive black holes (e.g., Bonnerot & Stone 2021). This would be applicable to in-plane TDEs in AGN disks with low disk densities. However, if the disk density is sufficiently high so that the debris is mixed before its return, relativistic effects on the long-term evolution of debris would be irrelevant. However, because debris would stay a longer time near pericenter, it is possible that the perturbation of the inner disk at the first pericenter passage would be stronger for more relativistic cases (e.g., extremely relativistic TDEs, Ryu et al. 2023). Second, we do not include radiation pressure in our simulations. The standard AGN disk model suggests that the inner part of the disk is radiation pressure-dominated (Shakura & Sunyaev, 1973; Sirko & Goodman, 2003). Because our disk is supported only by the gas pressure, the temperature is significantly higher than in the case where the disk is supported by both radiation and gas pressure. We will investigate the impact of radiation pressure on the disk temperature profile and how this in turn affects the time evolution of the debris in a follow-up project.

## 6 Summary and Conclusions

In this work, we investigated the evolution of debris in tidal disruption events of main-sequence stars by a \(10^{6}\,{\rm M}_{\odot}\) supermassive black hole, surrounded by a gaseous disk, using the moving-mesh hydrodynamics simulation code arepo.
We consider three stellar masses, \(M_{\star}=1\,{\rm M}_{\odot}\), \(3\,{\rm M}_{\odot}\), and \(10\,{\rm M}_{\odot}\), and a range of mid-plane maximum disk densities \(\rho_{\rm c}=10^{-12}-10^{-7}\,{\rm g\ cm}^{-3}\). The results of the simulations can be summarized as follows.

1. Stellar debris produced in an in-plane disruption in an AGN disk is continuously perturbed by the disk gas as it blows through the disk. As a result, the energy and angular momentum of the debris evolve differently over time relative to TDEs in a vacuum. For prograde TDEs (those of stars on a prograde orbit relative to the disk orbit), the debris' angular momentum increases, whereas it decreases for retrograde TDEs.
2. For a sufficiently high disk density, a large fraction of the debris can be disintegrated and mixed into the disk via the interaction with the disk. The mixing is more significant for retrograde TDEs. This gradual mixing is a unique feature that is clearly distinguished from tidal disruption events in a vacuum.
3. For high density AGN disks, it is likely that the debris produced in retrograde TDEs is fully mixed into the disk before any of the debris material returns in a coherent fashion as in naked TDEs. The critical density above which the retrograde debris is completely mixed into the disk on a timescale comparable to the peak mass return time for naked TDEs is \(\rho_{\rm crit}\sim 10^{-8}\,{\rm g\ cm}^{-3}(M_{\bullet}/10^{6}\,{\rm M}_{\odot})^{-2.5}\). Note that this critical density is the minimum density for disintegration of the entire debris. The density above which no bound material returns to the SMBH would be lower, \(\rho_{\rm crit,bound}\sim 10^{-9}\,{\rm g\ cm}^{-3}(M_{\bullet}/10^{6}\,{\rm M}_{\odot})^{-2.5}\). Even for prograde TDEs, no coherent fallback has been found when the maximum disk density is \(\geq 10^{-9}\,{\rm g\ cm}^{-3}\).
4. The mixing of the stellar material into the disk has several astrophysical implications. First, the light curves of in-plane TDEs in an AGN disk, whose density is high enough to cause the disintegration of the debris, could be significantly different from just the superposition of light curves of naked TDEs on top of those of AGN disks. A first burst should originate from the close passage of a star, perturbing and heating up the inner part of the disk, possibly resulting in an enhancement of the accretion rate until the perturbed disk settles on the local thermal timescale (\(t_{\rm th}\)). Thus we expect an observable state-change in the AGN emission from a close passage, similar to that observed in changing-look AGN, with significant X-ray flaring around smaller mass SMBHs. At low density (\(\rho_{\rm disk}\lesssim\rho_{\rm crit,bound}/10^{2}\)), the gas reservoir at moderately large distances from the SMBH (\(>10^{4}\,r_{\rm g}\)) might generate late-time debris return and account for recent _very_ late-time signatures of (otherwise naked) TDEs (e.g. Cendes et al. 2022). At modest disk densities (\(\rho_{\rm crit,bound}/10^{2}<\rho<\rho_{\rm crit,bound}\)), AGN TDE debris is only partially mixed and the rest returns, generating a secondary, longer, inner disk flaring episode via both inner disk perturbation as well as accretion of debris. At high disk densities (\(\rho_{\rm disk}\geq\rho_{\rm crit,bound}\)) no TDE-like light curves are created, past the initial state-change, but a mildly elevated accretion rate (and higher luminosity) should persist for years due to the complete mixing of debris.
5.
For partial disruptions, recurring passage of the stellar remnant will heat the inner disk, creating recurring flares. A population of highly eccentric retrograde orbiters around low mass SMBHs should produce quasi-periodic eruptions (QPEs) in X-rays from partial AGN TDEs. Whether each flare is separate or the disk response is blended depends on the orbital period of the embedded partially disrupted star as well as disk properties. Short-timescale QPEs are therefor a test of both the population of eccentric retrograde orbiters and the AGN disk. * The mixing of the stellar debris in AGN TDEs contributes to the enhancement of the disk metallicity. Supernovae of stars embedded in a disk could be another source of metallicity enhancement. But if stars grow in mass without undergoing supernovae, mixing from AGN TDEs (and micro-TDEs) could be an important mechanism to elevate disk metallicity. Identifying such events in a large sample of AGN can provide a constraint on typical densities of AGN disks and the embedded stellar population while the disk exists which are otherwise are hard to probe. Going forward, it will be imperative to understand the shape of the lightcurves that account for the passage of the star and the evolution of debris as a function of disk density. It will also be important to track returning masses in the case of partial AGN TDEs, to test models of QPEs in AGN around low mass SMBHs. In order to investigate the contribution of AGN TDEs to the evolution of metallicity enhancements in AGN disks, detailed modeling of the dynamics between stellar-mass objects in nuclear star clusters surrounding AGN disks would be required over a cosmological time scale. ## Acknowledgements TR is very grateful to Max Gronke for a useful discussion for the mixing of gas. This research project was conducted using computational resources (and/or scientific computing services) at the Max-Planck Computing & Data Facility. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universitat Erlangen-Nurnberg (FAU) under the NHR project b166ea10. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) - 440719683. In addition, some of the simulations were performed on the national supercomputer Hawk at the High Performance Computing Center Stuttgart (HLRS) under the grant number 44232. BM & KESF are supported by NSF AST-2206096 and NSF AST-1831415 and Simons Foundation Grant 533845, with additional sabbatical support from the Simons Foundation. NWCL gratefully acknowledges the generous support of a Fondecyt General grant 1230082, as well as support from Millenium Nucleus NCN19_058 (TITANs) and funding via the BASAL Centro de Excelencia en Astrofisica y Tecnologia Afines (CATA) grant PFB-06/2007. NWCL also thanks support from ANID BASAL project ACE210002 and ANID BASAL projects ACE210002 and FB210003. ## Data Availability Any data used in this analysis are available on reasonable request from the first author.
2302.10612
Tree-Based Machine Learning Methods For Vehicle Insurance Claims Size Prediction
Vehicle insurance claims size prediction needs methods to efficiently handle these claims. Machine learning (ML) is one of the methods that solve this problem. Tree-based ensemble learning algorithms are highly effective and widely used ML methods. This study considers how vehicle insurance providers incorporate ML methods in their companies and explores how the models can be applied to insurance big data. We utilize various tree-based ML methods, such as bagging, random forest, and gradient boosting, to determine the relative importance of predictors in predicting claims size and to explore the relationships between claims size and predictors. Furthermore, we evaluate and compare these models' performances. The results show that tree-based ensemble methods are better than the classical least square method. Keywords: claims size prediction; machine learning; tree-based ensemble methods; vehicle insurance.
Edossa Merga Terefe
2023-02-21T11:43:01Z
http://arxiv.org/abs/2302.10612v1
# Tree-Based Machine Learning Methods For Vehicle Insurance Claims Size Prediction ## Abstract Vehicle insurance claims size prediction needs methods to efficiently handle these claims. Machine learning (ML) is one of the methods that solve this problem. Tree-based ensemble learning algorithms are highly effective and widely used ML methods. This study considers how vehicle insurance providers incorporate ML methods in their companies and explores how the models can be applied to insurance big data. We utilize various tree-based ML methods, such as bagging, random forest and gradient boosting, to determine the relative importance of predictors in predicting claims size and to explore the relationships between claims size and predictors. Furthermore, we evaluate and compare these models' performances. The results show that tree-based ensemble methods are better than the classical least square method. _Keywords:_ claims size prediction; machine learning; tree-based ensemble methods; vehicle insurance. ## 1 Introduction A key challenge for the insurance industry is to charge each customer an appropriate price for the risk they represent. Risk varies widely from customer to customer, and a deep understanding of different risk factors helps predict the likelihood and cost of insurance claims. Thus, insurance companies must have an insurance premium that is appropriate for each customer. There are two groups in the insurance industry: life insurance and non-life insurance. This study considers nonlife insurance, particularly vehicle insurance. Insurance claims occur when the policyholder creates a formal request to an insurer for coverage or compensation for an unfortunate event of an accident. Policyholders can mitigate the costs involved with coverage for the property (damage or theft to a car) and liability (legal responsibility to others for the medical or property costs). Insurance companies must predict how many claims are going to occur and the severity of these claims to enable insurers to set a fair price for their insurance products accordingly. In other words, claim prediction in the vehicle insurance sector is the cornerstone of premium estimates. Furthermore, it is crucial in the insurance sector to plan the correct insurance policy for each prospective policyholder. Several studies have been done to personalize the premium estimate, such as Guillen et al. (2019) and Roel et al. (2017), they demonstrated the possible benefits of analyzing information from telematics when determining premiums for vehicle insurance. The predictive capacity of covariates obtained from telematics vehicle driving data was investigated by Gao and Wuthrich (2018) and Gao et al. (2019) using the speed-acceleration heat maps suggested by Wuthrich (2017). Prediction accuracy enables the insurance industry to better adjust its premiums, and makes vehicle insurance coverage more affordable for more drivers. Currently, many insurance companies are transitioning to ML techniques to predict claims size. However, selecting a suitable ML predictive model is far from trivial. In this study, we investigate flexible ML techniques to make accurate predictions for claims size by analyzing a large vehicle dataset given by Ethiopian Insurance Company, one of the main car insurance company based in Ethiopia, and we apply the tree-based ML methods to the dataset, such as bagging, random forest, and gradient boosting. We also evaluate and compare the performance of these models. The rest of this paper is organized as follows. 
In Section 2, the dataset is described and some descriptive statistics are provided. In Section 3, we present review of three tree-based ensemble methods is presented. In Section 4, we report the results from application of considered methods. In Section 5 we provide a discussion and conclusion of the study. ## 2 Dataset and Exploratory Analysis ### The data The data used for this analysis were provided from a large database of the Ethiopian Insurance Corporation, one of the biggest insurance companies in Ethiopia. It consists of policy and claim information of vehicle insurance at the individual level. The dataset originally contains \(n=288\), \(763\) unique individual contracts, represented by the observations \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\) where \(X=(X_{1},\ldots,X_{p})\in\mathbb{R}^{p}\) denotes a vector of \(p=10\) predictors, and \(Y\in\mathbb{R}\) denotes the response variable representing the claim size. The data were correspond to the period between July 2011 to June 2018. The different predictors used in the analysis are summarized in Table 2.1. The terms, liability and comprehensive coverage in Table 2.1 are defined as: * Comprehensive coverage: The company covers all the losses which happen to the car whenever the conditions of agreement are satisfied. * Liability or third party coverage: The car can cause a damage to someone or someone's property. If the policy holder already has an agreement for a liability coverage, the insurance \begin{table} \begin{tabular}{||c|l|l|l|l||} \hline \hline S.N & Name & Type & Domain / Levels & Description / representation \\ \hline \hline 1 & Sex & categorical & 0, 1, 2 & 0 = legal entity, 1 = male, 2 = female \\ \hline 2 & Season & categorical & autumn, winter, spring, summer & Beginning of contract. \\ \hline 3 & Insurance type & categorical & 1201, 1202, 1204 & 1201 = private, 1202 = commercial, 1204 = motor trade road risk \\ \hline 4 & Type vehicle & categorical & pick-up, truck, bus,... & Type of vehicle grouped into six categories. \\ \hline 5 & Usage & categorical & fare paying passengers, taxi, general cartage,... & A usual usage of the vehicle grouped into six categories. \\ \hline 6 & Make & categorical & Toyota, Isuzu, Nissan,... & Manufacturer company. \\ \hline 7 & Coverage & categorical & comprehensive, liability & Scope of the insurance. \\ \hline 8 & Production year & Integer & 1960 - 2018 & Vehicle’s production year. \\ \hline 9 & Insured value & continuous & R+ & Vehicle’s price in USD. \\ \hline 10 & Premium & continuous & R+ & Premium amount in USD. \\ \hline \hline \end{tabular} \end{table} Table 2.1: Description of predictors in Ethiopian vehicle insurance dataset. company covers the costs in this case. Liability amount, which a policy holder has to pay as a part of premium is nationally fixed almost every year for each car type across all the insurance companies operating in the country. The liability cases are usually taken to courts or settled by negotiation. However, the affected party should not be either a family member or a relative of the policy holder. Computation of premium is determined as a function of: * Insured value. * Production year: For the first three years, age of the car is not considered, but after three years, age loader computation technique, which takes into account age of the vehicle is applied. 
* No claim discount (NCD): Upon the renewal of the contract after a year, a policy holder gets 20% discount from the previous year's premium and adjusted for inflation if he/she has not applied for a claim in that year. He/she can get up to 60% of discount in the consecutive years but pooled down by the age loader of the vehicle. * Contingency, Plant and Machinery (CPM): Applicable for those cars which are operated on different circumstances. For instance, loaders type vehicles premium can be computed depending on the assumption of being at the engineering (construction) sites. But in case the vehicle cause an accident while being driven on the road, the computation needs an additional consideration and computation mechanism differs. Some predictors such as carrying capacity and seat number are removed from the dataset prior to data analysis and modeling since they are not correctly coded. ### Claims size Variable In our analysis, the claim size is a continuous response variable \(Y\in\mathbb{R}\). It is originally the amount in the Ethiopian currency Birr and it is converted to USD during data analysis. The distribution of the response variable \(Y\) is strongly zero-inflated since for about 92.5% of the contracts there is no claim paid. Thus, instead of visualizing and modeling this distribution directly, we first select the non zero observations. Given that the claim has to be paid for policy holder \(i\), it is determined as \[Y_{i}=\text{Claims Size}=\frac{\text{Insured value}_{i}}{\text{Market value}_{i}}\times\text{Loss}_{i}, \tag{2.1}\] where * Market value is the market price of the car when it was bought. The market price for each car type is collected by the insurer almost every year. The data on the market value are taken either from importers or informally from other institutions. It can be either more or less than the insured value. Knowing market value of the car helps to adjust the claim amount not to be too high in case insured value is not reliable. In most cases, insured value and market value are the same. * Loss: When an accident happens, the damaged vehicle is inspected by the engineer experts who work in the insurance company. These experts known as server decision look at each part of the vehicle, identify the affected parts and propose either to replace or fix the vehicle. Once the affected parts have been identified, its price is determined by the server decision and a bid for repairing the damaged car is published. Including the server decision members, anyone who has a license to do so can usually participate in the bid competition. The loss is determined as follows. In some cases, the amount of claim paid can be higher than either the insured value or market price. It is mandatory to have at least a liability insurance coverage for all vehicles as a country's regulation, even if comprehensive insurance coverage in for a safety of the vehicle's owner. Additionally, a policy holder can also have BSG and PLL insurance coverage, but if only the comprehensive coverage is already secured first. The terms BSG and PLL are defined as: * Bandit, Shifta and Gorilla (BSG): A contract agreement in case the car is robbed or stolen. To the maximum of the insurer's knowledge, it is a vehicle insurance component applicable in Ethiopia only. * Passengers legal liability (PLL): This is applicable for fare paying passengers, in case someone is affected by an accident being in the car. 
Similar to that of liability coverage, its amount is fixed to be paid as part of premium and a maximum of 40,000 birr would be paid by the insurer to a passenger in case an accident happened. Even though both BSG and PLL insurance coverage depend on the interest of policy holder and they are optional, applicable if and only if the comprehensive insurance agreement is to take place or has already taken place. The insured value in (2.1) does not include the values of liability, BSG and PLL, even though they are included in the contract. It contains the value of comprehensive coverage only. Thus, claims size can be higher than the insured value if: * (total) loss + (liability insurance) + (PLL) > insured value, where total loss is an overall loss of the car due to a severe accident and impossible to repair. In that case the insurer company pays exactly the insured value as a claim. ### Exploratory Data Analysis To make assumptions about the data and find a model that fits it best, it is important to carry out an exploratory data analysis (EDA), since it has a significant role to let the data speak for themselves prior to or as part of a formal analysis. It allows the researcher to influence and criticize an intended analysis. Additionally, EDA techniques may reveal additional information that may not be directly related to the research question. For example, EDA could suggest fruitful new lines of research (Maindonald and Braun, 2010). The purpose of statistical graphics is to provide visual representations of quantitative and qualitative information. As a methodological tool, statistical graphics comprise a set of strategies and techniques that provide the researchers with important insights about the data under examination and help guide for the subsequent steps of the research process. The objectives of graphical methods are to explore and summarize the contents of large and complicated data sets, address questions about the variables in an analysis (for example, the distributional shapes, ranges, typical values and unusual observations), reveal structure and pattern in the data, check assumptions in statistical models, and facilitate greater interaction between the researcher and the data. Various graphical methods were examined to visualize data in raw and amalgamated formats. The most widely recognized graphical tool to display and examine the frequency distribution and a density of a single continuous variable is the histogram. Another common tool to visualize the observed distribution of data is by plotting a smoothed histogram commonly referred as empirical density, the blue curve superimposed on the histogram with blue line in Figure 1. The empirical densities overcome some of the disadvantages caused by the arbitrary discrete bins used in the basic histograms. Figure 1: Frequency histogram and superimposed density plot representations of natural logarithm of claim paid distribution. The distribution of the response variable \(Y\) is strongly zero-inflated since for only about 7.5% of the contracts there is non-zero claim payment. Thus in our analysis, instead of visualizing and modeling this distribution directly, we only consider a data of policy-holders who have ever received a positive claim. ### Exploring relationships between covariates and Claim paid Relationships between the predictors and the response variable can be depicted by graphical methods. Side-by-side boxplots are one way of graphical displaying the relationship between qualitative and quantitative variables. 
It is an excellent tool for conveying location and variation information in data sets, particularly for detecting and illustrating location and variation changes between different groups of data. The boxplots of claim paid against the different qualitative predictors are shown in Figure 2. Several predictors seem to have significant heterogeneity across theirs labels. For instance, in the boxplots of log of claim paid against sex, it seems existence of differences in terms of log of claim paid across the three groups of sex. Accordingly, male policy holders appeared to have a higher claim payment than either female counterparts or legal entities. Similarly, differences in claims size are observed in vehicle usage, identifying the vehicles that used for a general cartage to have the highest claim payment followed by a fare paying passengers vehicles. Moreover, vehicles manufactured from Isuzu and Iveco companies cost the insurer more than vehicles that are from any other companies, which is consistent with the insurer's prior identification of risky vehicles. It can also be seen that there are differences across the groups in other covariates such as vehicle type, insurance type and insurance coverage. Boxplots are also a robust measure of the spread of a distribution and more informative than merely computing the variance by group as they can be helpful in identifying the homogeneity of variance between groups of a predictor. Looking at the boxplot of sex covariate again, it can be seen that the claim payment made for male policy holders appears to be more variable than either of the other two categories. And also in vehicle usage covariate, claim payment made for vehicles that are used for a general cartage and fare paying passengers purposes have more variability than any other groups. Similarly, heterogeneity of variance between a groups of insurance coverage, insurance type, manufacturer company and vehicle type covariates was observed. Analogous to boxplots, Scatterplots are an obvious way to visualize a relationship between two quantitative variables that are numerically comparable. They are useful as a preliminary step before performing a regression analysis. Figure 3 shows that scatterplots, a bivariate relationships between claim paid and quantitative predictors. It is difficult to detect clear trends in any of the plots. However, by stratifying the points according to the different groups of usage predictor, we see some differences in claims size across the groups. In addition to the scatterplot matrix seen in Figure 3, we computed correlation coefficients between claims size and insured value, premium and production year as 0.22, 0.33 and 0.11, respectively. Even though none of the coefficients between claims size and the covariates are considered to be strong, but there are some notable associations. For instance, claims size appear to have a moderate positive correlation between insured value, premium and production year, meaning that as vehicles' insured value, premium and production year increase, their claim payment also tends to increase. Correlation coefficient based relationships usually be teased out more clearly when building the (final) model. The _loess curves_ drawn on top of the scatterplots indicates a possibly nonlinear relationship between the two variables. The curves for claims size against insured value and premium are an upside-down U-shape, peaking around the middle of insured value and premium for the most groups of usage predictor. 
This means that the vehicles with moderate insured value and/or moderate premium have larger claims sizes than those with lower and higher insured value and /or premium. Because this trend is non-linear, this finding could not have been inferred from the correlations alone. On the other hand, when we consider the private group of usage predictor, the relationships between claim size against insured value and premium seem to be linear with a positive slope. ## 3 Review of Machine Learning Methods Machine learning is now well established in many areas. In contrast to the statistical modeling approach, ML algorithms do not assume any specific model structure for the data. ML methods capture the underlying structure of data and therefore, they are more efficient in handling large data with arbitrary degree of complexity. One major task of machine learning is to construct good models from data sets. Among ML algorithms, ensemble methods are one of the usual choice to analyze large and complex data. Originally developed to reduce the variance-thereby improving the accuracy of an automated decision-making system, ensemble methods have since been successfully used to address a variety of machine learning problems (Zhang and Ma, 2012), such as predictor selection, class imbalanced data, confidence estimation, error correction, among others. The main idea behind the ensemble methodology is to weigh several individual pattern learners, and combine them in order to obtain a learner that outperforms most of them. In fact, combining the learners outputs does not necessarily lead to a performance that is guaranteed to be better than the best learner in the ensemble. Rather, it reduces likelihood of choosing a learner with a poor performance. Ensemble methodology imitates to seek several opinions before making any crucial decision. The individual opinions are weighted, and combined to reach the final decision (Polikar, 2006). A general principle of ensemble methods is to construct a linear combination of some model fitting method, instead of using a single fit of the method to improve the predictive performance of a given statistical learning or model fitting technique. More precisely, consider an estimation of a real-valued function \(f:\mathbb{R}^{p}\rightarrow\mathbb{R}\) based on data \(\left\{(x_{i},y_{i});i=1,\ldots,n\right\}\) where \(x\) is a \(p\)-dimensional predictor variable and \(y\) a univariate response. We may then generalize to functions \(f(x)\) and other data types. Given some input data, we learn several functions \(\hat{f}_{1}\), \(\hat{f}_{2}\), \(\hat{f}_{3}\), \(\ldots\), \(\hat{f}_{B}\), called learners, by changing the input data based on different reweighting. We can then construct an ensemble-based function estimate \(\hat{f}_{ens}(x)\) by taking linear combinations of the individual learners as an additive expansion of the learners (Elish, 2009)\(\hat{f}_{i}(x)\): \[\hat{f}_{ens}(x)=\sum_{i=1}^{B}w_{i}\hat{f}_{i}(x), \tag{3.1}\] where the \(\hat{f}_{i}(x)\) are estimates obtained from the \(i^{th}\) reweighted dataset and \(w_{i}\) are the linear com bination coefficients. For instance, \(w_{i}=1/B\), an averaging weights for bagging (see Section 3.1) and for boosting (see Section 3.3). In this study, three ensemble learning algorithms i.e., bagging, random forest and boosting are considered. 
For models performance comparison purpose, two non-ensemble learning technique i.e., ordinary linear regression and decision tree are also applied to Ethiopian vehicle insurance data set to predict claims size. ### Bagging Bagging (Breiman, 1996), which stands for **b**ootstrap **agg**reg**ating**, is an ensemble method for improving unstable estimation or classification schemes. As the name implies, the two key ingredients of Bagging are bootstrap and aggregation. Bagging adopts the bootstrap distribution for generating different learners. In other words, it applies bootstrap sampling (Efron and Tibshirani, 1993) to obtain the data subsets for training the learners. In detail, given an original data set, we generate a data set containing \(n\) number training observations by sampling with replacement. Some original observations appear more than once, while some original observations are not present in the sample. By applying the process \(B\) times, \(B\) samples of \(n\) training observations are obtained. Then, from each sample a learner can be trained by applying the learning algorithm. Bagging also adopts the most popular strategies for aggregating the outputs of the learners, that is, voting for classification and averaging for regression. To predict a test instance, taking regression for example, Bagging feeds the instance to its learners and collects all of their outputs, and then takes the average of the outputs as the prediction, where ties are broken arbitrarily. In particular, the bagging method for regression is applied as follows. A learner \(\hat{f_{i}}(x)\) is fitted on each of the \(B\) bootstrapped sample, where \(\hat{f_{i}}(x)\) denotes the predicted response values from \(i=1,2,\ldots,B\) learners. Then the \(B\) learners constructed are combined using the aggregation, so that the average prediction, \(f_{av}(x)\) is estimated as the average of predicted outputs from \(\hat{f_{i}}(x)\) as: \[\hat{f_{av}}(x)=\frac{1}{B}\sum_{i=1}^{B}\hat{f_{i}}(x) \tag{3.2}\] in order to obtain a single low-variance learner. ### Random Forest Random Forest (RF) is a representative of the state-of the-art ensemble methods algorithm developed by Breiman and Cutler (2008), and a very powerful technique which is used frequently in the data science field across industries (Dangeti, 2017). It is an extension of Bagging, where the major difference with Bagging is the incorporation of randomized predictor selection. During the construction of a component decision tree, at each split, RF first randomly selects a subset of predictors, and then carries out the conventional split selection procedure within the selected predictor subset. RF is usually applied to reduce the variance of individual trees by growing numerous trees. Each subsequent split for all trees grown is not done on the entire data set, but only on the portion of the prior split that it falls under. For each tree grown, about one third of training samples are not selected in bootstrap and it is called out of bootstrap ("out of bag" or "OOB") samples, as in case of Bagging in Section 3.1. Using OOB samples as input to the corresponding tree, predictions are made as if they were novel test samples. A particular observation can fall in the terminal nodes of many trees in the forest, each of which, potentially, can give a different prediction. Again, the OOB sample data used to fit a particular tree is used to make each tree's prediction. 
Through book-keeping principle, an average for continuous response is computed for all OOB samples from all trees for the prediction of RF model. For discrete outcomes, the prediction is the majority votes from all trees that have been grown without the respective observation or the average of the predicted probabilities (Jones and Linder, 2015). ### Gradient Boosting Gradient Boosting algorithms have been proposed in the machine learning literature by Schapire (1990) and ( Freund (1995); Freund and Schapire (1996)). The term **boosting** refers to a family of algorithms that are able to convert weak learners to strong learners for improving the predictive performance of a regression or classification procedure. Intuitively, a weak learner is just slightly better than random guess, while a strong learner is very close to perfect performance. The boosting method attempts to boost the accuracy of any given learning algorithm by fitting a series of models, each having a low error rate, and then combining them into an ensemble that may achieve better performance (Schapire, 1999). This strategy can be understood in terms of other well-known statistical approaches, such as additive models and a maximum likelihood (Friedman et al., 2000). Like bagging, boosting is a general approach that can be applied to many ensemble statistical learner for regression or classification. Unlike bagging which is a parallel ensemble method, boosting methods are sequential ensemble algorithms where the weights \(w_{i}\) in (3.1) are depending on the previous fitted functions \(\hat{f_{1}},\ldots,\hat{f_{i-1}}\). Boosting does not involve bootstrap sampling; instead each tree is fitted on a modified version of the original data set. There are several versions of the boosting algorithms for classification problems (Drucker (1997); Friedman et al. (2000); Schapire and Freund (2013)), but the most widely used is the one by Freund and Schapire (1996), which is known as AdaBoost, and it has been empirically demonstrated to be very accurate in terms of classification. There are also several studies (Friedman (2002); Friedman (2001); Drucker (1997)) conducted related to boosting for regression problems. In this paper, we rely on a recently proposed gradient boosting algorithm by Chen and Guestrin (2016), which uses regression trees as the basis functions, and it optimizes a regularized learning objective function \[\begin{split}& L(\phi)=\sum_{i}l(\hat{y}_{i},y_{i})+\sum_{b} \Omega(f_{b})\\ &\text{where}\ \ \Omega=\gamma T+\frac{1}{2}\lambda||w||^{2}.\end{split} \tag{3.3}\] Here, \(l\) is a differentiable convex loss function that measures the difference between the prediction \(\hat{y}_{i}\) and the target \(y_{i}\). The second term \(\Omega\) penalizes the complexity of the model (i.e., the regression tree functions). The additional regularization term helps to smooth the final learned weights to avoid over-fitting. Boosting regression tree involves generating a sequence of trees, each grown on the residuals of the previous tree. Therefore, boosting regression tree model inherits almost all of the advantages of tree-based models, while overcoming their primary disadvantage, that is, inaccuracy Friedman and Meulman (2003). ## 4 Application of Machine Learning Methods to Insurance Data ### Variable Importance Our goal is not only to find the most accurate model of the response, but also to identify which of the predictor variables are most important to make the predictions. For this reason, we perform variable importance. 
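Before turning to the importance measures, the snippet below is a minimal sketch of how the tree-based learners of Section 3, together with the two non-ensemble baselines, can be instantiated in scikit-learn on a prepared design matrix `X` of encoded predictors from Table 2.1 and a response `y` (claims size, e.g. on the log scale). The library, object names and hyperparameter values are illustrative assumptions rather than the exact configuration used in this study; bagging is obtained as a random forest that is allowed to consider all predictors at every split.

```python
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def build_models(n_predictors, n_trees=500, seed=1):
    """Candidate models: two baselines and the three ensembles of Section 3."""
    return {
        "ols": LinearRegression(),
        "tree": DecisionTreeRegressor(random_state=seed),
        # Bagging = random forest with all predictors available at every split.
        "bagging": RandomForestRegressor(n_estimators=n_trees,
                                         max_features=n_predictors,
                                         oob_score=True, random_state=seed),
        # Random forest proper: a random subset of predictors at each split.
        "random_forest": RandomForestRegressor(n_estimators=n_trees,
                                               max_features="sqrt",
                                               oob_score=True, random_state=seed),
        # Gradient boosting: trees fitted sequentially on residuals.
        "gradient_boosting": GradientBoostingRegressor(n_estimators=n_trees,
                                                       learning_rate=0.05,
                                                       random_state=seed),
    }

# Usage sketch, assuming X and y have been prepared:
# models = build_models(n_predictors=X.shape[1])
# fitted = {name: model.fit(X, y) for name, model in models.items()}
```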
The ensemble algorithms estimate the importance of a predictor variable, say \(x_{j}\), by looking at how much the prediction error increases over the baseline error when the OOB sample values of \(x_{j}\) are permuted while all other predictors are left unchanged. The most commonly used variable importance measure is the permutation importance, introduced by Breiman (2001), which defines the variable importance of predictor \(x_{j}\) as the difference in prediction error after and before permuting \(x_{j}\), averaged over all trees. More precisely, the variable importance of predictor \(x_{j}\) is defined as \[\text{VI}(x_{j})=\frac{1}{B}\sum_{i=1}^{B}\text{VI}\left(x_{j}\right)_{i} \tag{4.1}\] where \(\text{VI}\left(x_{j}\right)_{i}=\left(\text{RMSE}_{i}^{x_{j}}-\text{RMSE}_{i}^{0}\right)\), \(\text{RMSE}_{i}^{x_{j}}\) is the RMSE of the \(i^{th}\) model in the ensemble fitted to the dataset with a random permutation applied to the covariate \(x_{j}\), and \(\text{RMSE}_{i}^{0}\) is the RMSE of this model fitted to the original dataset. Note that \(\text{VI}\left(x_{j}\right)_{i}=0\) if variable \(x_{j}\) is not in the \(i^{th}\) model. The raw variable importance score for each variable is then computed as the mean importance over all trees. In fact, \(\text{VI}(x_{j})\) can be computed with respect to any other performance measure, such as the coefficient of determination \(R^{2}\). Let \(x_{1},\ldots,x_{p}\) be the features of interest and let \(\text{RMSE}_{0}\) be the baseline performance metric for the trained model. The permutation-based variable importance scores can then be computed as shown in Algorithm 1. Figure 4 displays the variable importance of the predictors. Accordingly, in all three models, premium is the most crucial predictor, followed by the insured value. The next most influential predictors (of roughly equal importance in bagging and random forest) are usage and sex, followed by production year, in line with the earlier exploratory boxplot analysis. Figure 4 is obtained by repeating the permutation of each variable 20 times and averaging the results. This provides more stable VI scores and also the opportunity to measure their variability, as seen in Figure 5, since the permutation approach introduces randomness into the procedure. Figure 4: A graphical representation of average variable importance across all the trees from bagging, boosting and regression RF. The larger the number, the bigger the effect. ### Partial Dependence Plots Although determining predictor importance is a crucial task in any supervised learning problem, ranking variables is only part of the story; once the important predictors are identified, it is often necessary to assess the relationship between the predictors and the response variable. This task is often accomplished by constructing partial dependence plots (PDP) (Friedman, 2001), which help to visualize the relationship between a predictor and the response variable while accounting for the average effect of the other predictors in the model. Let \(\hat{y}\) be the prediction function of an arbitrary model trained on a dataset \(D=\left\{\left(x_{i,j},y_{i}\right)\right\}\) for \(i=1,\ldots,n\) and \(j=1,\ldots,p\). The model generates predictions of the form: \[\hat{y}_{i}=f\left(x_{i,1},x_{i,2},\ldots,x_{i,p}\right)\,, \tag{4.2}\] for some function \(f(\ldots)\). Let \(x_{k}\) be a single predictor of interest with unique values \(\left(x_{1,k},x_{2,k},\ldots,x_{n,k}\right)\).
Then the partial dependence plots are obtained by computing the following average and plotting it over a useful range of \(x\) values: \[\bar{f}_{k}(x)=\frac{1}{n}\sum_{i=1}^{n}\hat{f}\left(x_{i,1},\ldots,x_{i,k-1},x,x_{i,k+1},\ldots,x_{i,p}\right) \tag{4.3}\] The function \(\bar{f}_{k}(x)\) indicates how the value of the variable \(x_{k}\) influences the model predictions \(\left\{\hat{y}_{i}\right\}\) after the influence of all other variables has been averaged out. The partial dependence of the response on \(x_{k}\) can be constructed as in Algorithm 2: Figure 5: Boxplots of variable importance from bagging (left panel) and regression RF (right panel) from 20 repeated permutations. ``` 1:For \(j\in\{1,2,\ldots,n\}\): 1. Replace the original training values of \(x_{k}\) with the constant \(x_{k,j}\). 2. Compute the vector of predicted values \(\left\{\hat{y}_{j}\right\}\) from the modified version of the training data. 3. Compute the average prediction according to (4.3) to obtain \(\bar{f}_{k}(x_{k,j})\). 2:Plot the pairs \(\left\{x_{k,j},\bar{f}_{k}\left(x_{k,j}\right)\right\}\) for \(j=1,2,\ldots,n\) ``` **Algorithm 2** Partial dependence construction of the response on a single predictor \(x_{k}\). Since Algorithm 2 can be quite computationally intensive, as it involves \(n\) passes over the training records, a reduced number of points is used by equally spacing the values in the range of the variable of interest. Figure 6 shows a separate partial dependence function for each group of the usage predictor. Because one-way partial dependence plots display one predictor at a time, they are valid only if the predictor of interest does not interact strongly with other predictors. However, interactions are common in practice; in these cases, we can use higher-order (such as two- and three-way) partial dependence plots to check for interactions among predictors. For example, Figure 6 shows an interaction between the premium and usage predictors. The two-way plot shows that vehicles used for general cartage with a high premium (more than 2 in the ensemble methods) have a much higher expected claims size compared to vehicles with other usages. This interaction would not have been apparent in the one-way plot. Figure 6: Two-way partial dependence plots; the marginal effect of premium on the claims size for different groups of the usage predictor after integrating out the other variables. ### Methods Comparison In this application, the predictor vector is represented by a collection of quantitative and qualitative attributes of the vehicle and the response is the actual claims size. Given a collection of \(N\) observations \(\left\{(x_{i},y_{i});i=1,\ldots,N\right\}\) of known \((x,y)\) values, the goal is to use these data to obtain an estimate of the function that maps the predictor vector \(x\) to the values of the response variable \(y\). This function can then be used to make predictions on observations where only the \(x\) values are observed. Formally, we wish to learn a prediction function \(x\mapsto\hat{f}(x)\) that minimizes the expectation of some loss function \(\mathcal{L}(\hat{f}(x),y)\) over the joint distribution of all \((x,y)\)-values, that is \[\hat{f}(x)=\arg\min_{f(x)}E\left[\mathcal{L}(f(x),Y)\,|\,X=x\right]
\tag{4.4}\] In finite samples, we evaluate the performance of \(\hat{f}\) with the Mean Square Error (MSE), that is \[\text{MSE}=\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}\left(\hat{f}(x_{i})-y_{i}\right)^{2} \tag{4.5}\] where \(\hat{f}\) is the fitted regression function evaluated on the test data set \(\left\{x_{i}\right\}_{i=1}^{n^{\prime}}\) and \(y_{i}\) is the observed response. For statistical modeling purposes, we first partitioned the data into train (70%) and test (30%) data sets. The train set was used for exploratory data analysis, model training and selection, and the test set to assess the predictive accuracy of the selected method. The training data covers the years 2011 to 2015 and 2017 to 2018, while the test data covers the remaining observation period. Figure 7: Observed against predicted claims size on log scales. Regarding the performance of the methods, it can be clearly seen that the OLS method predicts all claims to be less than USD \(10^{4}\), even though some observed claims are larger than USD \(10^{5}\). The ensemble methods, in contrast, are able to predict claim sizes beyond \(10^{4}\). Both OLS and the ensemble methods underestimate large claims, but the underestimation is more severe for OLS. ## 5 Conclusion Ensemble methods are well-established algorithms for obtaining highly accurate classifiers by combining less accurate ones. This paper has provided a brief overview of methods for constructing ensembles and reviewed the three most popular ones, namely bagging, random forest and gradient boosting. The paper has also provided some results from the application of ensembles to a real vehicle insurance dataset in order to address some problems of insurance companies. In the application section, the predictors are ranked according to their importance in predicting claims size, and the relationships between claims size and some predictors are assessed. The performances of non-ensemble (OLS) and ensemble learning algorithms (bagging, random forests and gradient boosting) are evaluated in terms of RMSE. Accordingly, the ensemble learning techniques outperformed OLS. Thus, this study suggests that ensemble learning techniques can outperform non-ensemble techniques. Moreover, the three ensemble algorithms performed similarly.
2308.05522
Models Matter: The Impact of Single-Step Retrosynthesis on Synthesis Planning
Retrosynthesis consists of breaking down a chemical compound recursively step-by-step into molecular precursors until a set of commercially available molecules is found with the goal to provide a synthesis route. Its two primary research directions, single-step retrosynthesis prediction, which models the chemical reaction logic, and multi-step synthesis planning, which tries to find the correct sequence of reactions, are inherently intertwined. Still, this connection is not reflected in contemporary research. In this work, we combine these two major research directions by applying multiple single-step retrosynthesis models within multi-step synthesis planning and analyzing their impact using public and proprietary reaction data. We find a disconnection between high single-step performance and potential route-finding success, suggesting that single-step models must be evaluated within synthesis planning in the future. Furthermore, we show that the commonly used single-step retrosynthesis benchmark dataset USPTO-50k is insufficient as this evaluation task does not represent model performance and scalability on larger and more diverse datasets. For multi-step synthesis planning, we show that the choice of the single-step model can improve the overall success rate of synthesis planning by up to +28% compared to the commonly used baseline model. Finally, we show that each single-step model finds unique synthesis routes, and differs in aspects such as route-finding success, the number of found synthesis routes, and chemical validity, making the combination of single-step retrosynthesis prediction and multi-step synthesis planning a crucial aspect when developing future methods.
Paula Torren-Peraire, Alan Kai Hassen, Samuel Genheden, Jonas Verhoeven, Djork-Arne Clevert, Mike Preuss, Igor Tetko
2023-08-10T12:04:47Z
http://arxiv.org/abs/2308.05522v1
# Models Matter: The Impact of Single-Step Retrosynthesis on Synthesis Planning ###### Abstract Retrosynthesis consists of breaking down a chemical compound recursively step-by-step into molecular precursors until a set of commercially available molecules is found with the goal to provide a synthesis route. Its two primary research directions, single-step retrosynthesis prediction, which models the chemical reaction logic, and multi-step synthesis planning, which tries to find the correct sequence of reactions, are inherently intertwined. Still, this connection is not reflected in contemporary research. In this work, we combine these two major research directions by applying multiple single-step retrosynthesis models within multi-step synthesis planning and analyzing their impact using public and proprietary reaction data. We find a disconnection between high single-step performance and potential route-finding success, suggesting that single-step models must be evaluated within synthesis planning in the future. Furthermore, we show that the commonly used single-step retrosynthesis benchmark dataset USPTO-50k is insufficient as this evaluation task does not represent model performance and scalability on larger and more diverse datasets. For multi-step synthesis planning, we show that the choice of the single-step model can improve the overall success rate of synthesis planning by up to +28% compared to the commonly used baseline model. Finally, we show that each single-step model finds unique synthesis routes, and differs in aspects such as route-finding success, the number of found synthesis routes, and chemical validity, making the combination of single-step retrosynthesis prediction and multi-step synthesis planning a crucial aspect when developing future methods.
There are different approaches to extracting templates, though in all cases these processes aim to represent the atom and bond structures required to perform a reaction [8], where a single template will represent multiple reactions. Template-based methods consider single-step prediction as a classification problem where the task is to predict the appropriate template for the target molecule/product. Examples of template-based methods include NeuralSym [9], the first approach in the field which demonstrated the usefulness of using deep neural networks for retrosynthesis prediction, MHNreact [10], which uses an information retrieval approach to associate products and templates and LocalRetro [11], which uses a graph representation to predict relevant local atom and bond templates for the product. On the other hand, template-free approaches commonly treat retrosynthesis prediction as a sequence-to-sequence prediction problem [8], employing methods seen in natural language processing such as language translation tasks. Instead of extracting and predicting the corresponding templates, the approach aims to learn the underlying reactions to directly predict reactants. Product and reactants are typically introduced as Simplified Molecular-Input Line-Entry System (SMILES), a common text-based representation of chemical entities. Examples of template-free methods include Chemformer [12], a large pretrained transformer model fine-tuned on the retrosynthesis task, and Augmented Transformer [13], a transformer architecture which employs multiple types of augmentation. Other variations of these approaches exist, such as semi-template-based where the molecule is first broken down into subparts then completed to produce chemically viable reactants [14, 15, 16]. Multi-step synthesis planning focuses on researching novel synthesis route search algorithms using a single-step model to identify retrosynthetic disconnections. The pioneering approach in the field uses Monte Carlo Tree Search (MCTS) to plan the traversal of the search tree at run time guided by a neural network [2]. Alternative route planning algorithms use an oracle function or heuristics to guide the tree search instead of relying on compute-expensive run time planning. Prominent examples of this are Depth-First Proof-Number (DFPN) [17], which combines classical DFPN with a neural heuristic, Retro*, which combines A* pathfinding with a neural heuristic [18], or RetroGraph, which applies a holistic graph-based approach [19]. Other approaches incorporate reaction feasibility into the tree search [20] or use synthesizability heuristics in combination with a forward synthesis model [21, 22]. Finally, self-play approaches, motivated by their success in Go [23], learn to guide the tree search by leveraging information gathered from prior runs of synthesis planning [24, 25, 26].
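To make the template-based formulation concrete, the following minimal sketch applies a single hand-written retrosynthetic template to a product molecule with RDKit. The amide-disconnection SMARTS and the example product are purely illustrative; they are not taken from any of the models or datasets discussed in this work.

```python
# Toy illustration of template-based single-step retrosynthesis with RDKit.
# The retro-template (product >> reactants) is a hand-written amide disconnection
# and is not a template extracted from any dataset used in this work.
from rdkit import Chem
from rdkit.Chem import AllChem

retro = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[NH1:3][c:4]>>[C:1](=[O:2])[OH1].[NH2:3][c:4]"
)

product = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")  # paracetamol as the example target

for reactant_set in retro.RunReactants((product,)):
    for mol in reactant_set:
        Chem.SanitizeMol(mol)
    # expected output: CC(=O)O + Nc1ccc(O)cc1 (acetic acid + 4-aminophenol)
    print(" + ".join(Chem.MolToSmiles(m) for m in reactant_set))
```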
Single-step retrosynthesis prediction and multi-step synthesis planning are inherently intertwined where the single-step method defines the maximum searchable reaction network, and the search algorithm tries to efficiently traverse this network by repeatedly applying the chemical information that is stored in the single-step model. However, this connection is not reflected in contemporary research. Currently, single-step methods are benchmarked by predicting a single retrosynthetic step from a product to reactants. The common benchmark data for these methods, USPTO-50k [27, 28], consists of around 50k reactions and only has a limited diversity of 10 reaction classes. These methods are typically only tested on reactant prediction and not within multi-step search algorithms, therefore their usability for synthesis planning is not assessed. Similarly, multi-step search algorithms benchmark the route-finding capabilities of their method using a single single-step model, often based on the template-based NeuralSym model [2, 17, 18, 19, 26], and evaluate the success rate of finding potential synthesis routes for molecules of interest. However, the approach of using only one single-step model does not consider the impact of alternative single-step models, a vital aspect of the search, as the route planning algorithm uses the reaction information stored in the single-step model to find synthesis routes and create alternate reaction pathways within the reaction network. The current question remains whether state-of-the-art single-step retrosynthesis methods are transferable to the multi-step synthesis planning domain, and their impact on multi-step synthesis planning [29, 30]. In this work, we address the transfer between single-step and multi-step methods by incorporating different state-of-the-art single-step models within a common multi-step search algorithm to analyze the use of these models for multi-step synthesis planning. We explore the effect on performance, analyzing the relationship between contemporary single-step and multi-step performance metrics using both public and proprietary datasets of varying size and diversity. Moreover, we also focus on vital aspects such as model suitability and chemical validity of the predicted routes. ## 2 Methods In this work, we develop an evaluation framework to benchmark different single-step models in multi-step synthesis planning (Figure 1). ### Evaluation Scheme **Single-step Retrosynthesis.** Single-step retrosynthesis methods are evaluated using top-n accuracy [7] (Table 1). The task for single-step retrosynthesis is the correct prediction of (gold-standard) reactants from the product of a known reaction. Here, we measure the percentage of target molecules for which the correct reactants are recovered within top-n predictions. Considering that the single-step model defines a possible maximum reaction network for a molecule of interest, published reactions are used to assess the accuracy of the single-step model since they are assumed to be chemically valid. Consequently, the assumption is that if the single-step model can recover a greater number of published reactants, then the predictions produced by the model are chemically viable reactions. 
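As an illustration of how this metric can be computed, the sketch below checks whether the canonicalized gold-standard reactants appear among the top-n predictions; the example reaction and the `predictions` dictionary are hypothetical placeholders, not the output of any of the benchmarked models.

```python
# Sketch of the top-n accuracy computation; the example reaction and the ranked
# `predictions` dictionary are hypothetical placeholders.
from rdkit import Chem

def canonical(smiles: str) -> str:
    """Canonicalize a (possibly multi-component) SMILES and sort its fragments."""
    parts = []
    for part in smiles.split("."):
        mol = Chem.MolFromSmiles(part)
        parts.append(Chem.MolToSmiles(mol) if mol is not None else part)
    return ".".join(sorted(parts))

def top_n_accuracy(test_reactions, predictions, n):
    """test_reactions: list of (product, gold_reactants) SMILES pairs;
    predictions: dict mapping a product to its ranked predicted reactant SMILES."""
    hits = 0
    for product, gold in test_reactions:
        ranked = predictions.get(product, [])[:n]
        if any(canonical(pred) == canonical(gold) for pred in ranked):
            hits += 1
    return hits / len(test_reactions)

test_reactions = [("CC(=O)Nc1ccc(O)cc1", "CC(=O)O.Nc1ccc(O)cc1")]
predictions = {"CC(=O)Nc1ccc(O)cc1": ["CC(=O)Cl.Nc1ccc(O)cc1", "CC(=O)O.Nc1ccc(O)cc1"]}
print(top_n_accuracy(test_reactions, predictions, n=1))  # 0.0
print(top_n_accuracy(test_reactions, predictions, n=2))  # 1.0
```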
**Multi-step Synthesis Planning.** On the other hand, for multi-step synthesis planning, the task is the search for likely synthesis routes for a molecule of interest, i.e., a reaction pathway from the \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline **Task** & **Metric** & **Description** \\ \hline Single-Step & Top-N Accuracy & Percentage of compounds for which the ground-truth reactants are predicted within the top-N \\ \hline Multi-Step & Success Rate & Percentage of compounds where at least one solved synthesis route is produced \\ \cline{2-3} Synthesis Planning & Number of Solved Routes & Average number of unique solved synthesis routes produced per molecule \\ \cline{2-3} & Search Times & Average search time per molecule \\ \cline{2-3} & Single-Step Model Calls & Average number of single-step model calls per molecule \\ \hline Route Accuracy & Percentage of compounds where the gold-standard route is predicted within the top-N synthesis routes \\ \hline Building Block Accuracy & Percentage of compounds where the gold-standard building blocks are predicted within the top-N synthesis routes \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation metrics for single-step retrosynthesis and multi-step synthesis planning. Solved synthesis route implies that the produced route leads to building blocks. Figure 1: Evaluation Framework for single-step models (AiZynthFinder (AZF), LocalRetro, Chemformer, and MHNreact), trained on different public (USPTO-50k, USPTO-PaRoutes-1M) and proprietary (AZ-1M, AZ-18M) datasets in synthesis planning on Caspyrus10k and PaRoutes. target molecule to a set of available building blocks [7]. For this, we consider multiple aspects for both the search and the predicted routes. Within success rate, we measure the percentage of molecules for which the route planning algorithm can successfully return at least one solved synthesis route leading from a molecule to building blocks. This condition is required for synthesis routes since a chemist can only consider routes as a suggestion for experimental evaluation if a complete synthesis route is found. Moreover, we analyze the number of solved routes since not only is it interesting to identify if there is a possible synthesis route for a molecule, but also how many alternatives are produced, given that different synthesis routes have different route properties. Nevertheless, algorithmic success does not measure if a found synthesis route is chemically valid, but only if a route into building blocks is found. Route accuracy is used to measure the chemical validity of synthesis routes as predicted routes can be compared to published, experimentally tested gold-standard routes [31]. Naturally, a route planning algorithm should be able to recover the gold-standard routes within the set of predicted, solved synthesis routes. This task is inherently more complex than producing solved routes (success rate) since it requires a sequence of multiple reactions and their intermediates to be correctly predicted and in the correct order. Additionally, we calculate whether there is an exact match between the predicted building blocks and the gold-standard building blocks. Building block accuracy differs from route accuracy since the route reactions and intermediates are not considered. In all cases it must be noted that a gold standard route is only one possible way of synthesizing a target molecule. Lastly, we consider search times and single-step model calls. 
Ideally, synthesis planning algorithms should produce routes in a timely manner to reduce allocated computational resources. However, different single-step models can have different inference speeds, and the time required for a search can massively diverge [30]. Consequently, the average search time for a molecule with a fixed number of single-step model calls, is measured. Additionally, we report the number of single-step model calls since, in some cases, the method may not reach the maximum iteration limit in the maximum search time. Noteworthy, the maximum search time can be exceeded if the last search iteration is started before the time limit is reached. ### Datasets. **Single-step Retrosynthesis.** Within single-step retrosynthesis datasets, each reaction is unique. They are all curated to comprise a single product leading to one or more reactants. One product can have more than one recorded reaction, and a reaction type can occur multiple times. Here we use four different single-step retrosynthesis datasets, USPTO-50k [28], USPTO-PaRoutes-1M [31], AZ-1M and AZ-18M [32] (Table 2). USPTO-50k is the default benchmark dataset for single-step retrosynthesis prediction. It features 50,016 reactions comprising ten reaction classes extracted from the original USPTO dataset [27], which originates from the United States Patent and Trademark Office. USPTO-PaRoutes-1M is a processed version of the original USPTO grant and application data. This single-step dataset is specifically developed to train single-step retrosynthesis models to benchmark multi-step algorithms [31]. The dataset contains single-step reactions and excludes gold-standard synthesis routes and their corresponding reactions for multi-step benchmarking. Here, we use the PaRoutes 2.0 dataset, which contains 1,198,554 single-step reactions [32]. Additionally, we use two datasets based on the proprietary AstraZeneca dataset [32, 34]. The first, AZ-18M, is the complete cleaned dataset from AstraZeneca, which includes Reaxys [35], Pistachio (a superset of USPTO-PaRoutes-1M) [36], and AstraZeneca Electronic Laboratory Notebooks (ELN) data. This dataset contains 18,697,432 single-step reactions [32]. Moreover, to obtain a dataset representative of AZ-18M with a comparable size to USPTO-PaRoutes-1M, we randomly subsample 1M reactions from AZ-18M to produce AZ-1M. To evaluate single-step models, we split all reaction datasets into random 80% training, 10% validation, and 10% test hold-out splits. In the case of USPTO-PaRoutes-1M, to replicate the original data split size [31], the hold-out split ratio is 90% training, 5% validation, and 5% test. We defer from using the original hold-out splits since they are based on template stratification. For AZ-18M, we subsample 100k molecules from the complete test set of 1.8 million reactions to avoid excessive evaluation computation. **Multi-step Synthesis Planning.** Multi-step evaluation datasets are collections of compounds that are used to test the route-finding capabilities of multi-step synthesis planning algorithms. 
To evaluate the synthesis planning capabilities of different single-step models, we create a new dataset \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline **Task** & **Dataset** & **Description** \\ \hline Single-Step Retrosynthesis & USPTO-50k [28] & Default single-step retrosynthesis benchmark dataset \\ \cline{2-3} Training & USPTO-PaRoutes-1M [31] & Largest publicly available single-step retrosynthesis dataset \\ \cline{2-3} & AZ-1M [32] & 1M reaction subsample of internal AstraZeneca reactions \\ \cline{2-3} & AZ-18M [32] & Dataset based on internal AstraZeneca reactions \\ \hline Multi-Step Synthesis Planning & Caspyrus 10k & 10,000 clustered bio-active molecules from Papyrus [33] \\ \cline{2-3} Evaluation & PaRoutes [31] & Collection of 10,000 gold-standard synthesis routes extracted from patents \\ \hline \hline \end{tabular} \end{table} Table 2: Datasets for training single-step retrosynthesis models and evaluating multi-step synthesis planning called Caspyrus10k that consists of a clustered set of 10,000 molecules from a selection of known bioactive and synthesizable compounds, to ensure a reasonable representation of the synthesizable chemical space. In detail, we select the high-quality Papyrus [33] dataset of 1,238,835 molecules, where each molecule has an exact bioactivity value measure and is associated with a single protein, strongly suggesting that each of those molecules is synthesizable as its activity has been tested in an experimental setting. We filter those molecules with the Guacamol cleaning strategy [37] to ensure drug-like molecules, removing molecules which do not fit the criteria in the process. As we are interested in these molecules for synthesis planning, we remove the building blocks present in Zinc [38], Enamine [39], MolPort [40], and eMolecules [41]. Finally, we cluster the resulting set of molecules using Butina Clustering [42] using Morgan Fingerprints with a radius of 2, a fingerprint size of 1024, and a Butina cut-off threshold of 0.6. From the resulting cluster centroids, we remove 19 centroids in clinical phases 1-3 since they are intellectual property. Finally, we take the largest 10,000 cluster centroids, representing roughly 284,000 molecules. Additionally, we evaluate the synthesis planning capabilities of all single-step models on PaRoutes [31], a collection of 10,000 gold-standard retrosynthesis routes. This task differs from the general synthesis planning task with Caspyrus 10k in that the goal is to recover specific real-world synthesis routes conducted as part of a patent application process and therefore test the chemical validity of the predicted synthesis routes. The gold-standard routes are obtained from USPTO patent data, where we use the n-1 set, which contains a single retrosynthesis route for each patent. As stated in the PaRoutes dataset, we use a specialized set of building blocks containing the leaf nodes of all 10,000 routes. Given the specifics of the PaRoutes dataset, the search algorithm has a maximum route length of 10 as this is the longest extracted route length from patents. ### Selected Approaches. **Single-step Retrosynthesis.** We select three state-of-the-art single-step methods to evaluate within multi-step synthesis planning (Table 3). 
The selection is based on their top-n accuracy on the commonly used benchmarking dataset, USPTO-50k, ensuring to select models which employ the main research directions within the field, i.e., graph-based neural networks, sequence-to-sequence, and information retrieval. Where possible, we maintain the original implementation of the methods and only report deviations from this. LocalRetro [11] is a template-based method that uses local atom and bond templates. It applies a graph neural network to create embeddings for both atoms and bonds of a product, which are used in a classification task to predict appropriate templates and reaction centers jointly. Contrary to the original implementation of the method, for AZ-1M and AZ-18M we filter for a minimum template frequency of three to avoid an infeasible number of local atom and bond templates. Chemformer [12] is a template-free method based on a Transformer architecture that uses BART [43] pre-training on molecular SMILES and is then fine-tuned on the retrosynthesis task. It uses product SMILES as input to predict reactant SMILES using beam-search. We set the beam size to 50. MHNreact [10], a template-based information retrieval approach, trains separate product and template encoders and uses modern Hopfield Networks [44] to relate products and template embeddings to find the most applicable reaction template. The original implementation uses all template embeddings simultaneously. However, due to large RAM requirements (>300GB) of this approach for USPTO-PaRoutes-1M, AZ-1M and AZ-18M, the templates are used in batches to train the model. Moreover, we apply a cut-off of a minimum of three template occurrences for AZ-1M and do not show results for AZ-18M as due to increased reaction diversity leading to a much larger number of templates requiring an unfeasible amount of memory. Additionally, we include a simple template-based model as a baseline referred to as AZF, adapted from NeuralSym [9], which is the default model in the most used public route planning software implementation AiZynthFinder [38]. Noteworthy, this model architecture is also commonly used to benchmark novel multi-step search algorithms. Templates are extracted using the standard implementation of RDChiral [45] with a radius of two. Only templates with at least three occurrences are kept for USPTO-50k, USPTO-PaRoutes-1M, and AZ-1M, for AZ-18M templates with at least ten occurrences were kept, following [32]. **Multi-step Synthesis Planning.** For multi-step synthesis planning, we select Retro*[18] as the search algorithm used in all experiments. Retro* is a best-first tree search algorithm leveraging A*-like pathfinding guided by a neural network, where each algorithm iteration applies a single model \begin{table} \begin{tabular}{l l l} \hline \hline **Task** & **Approach** & **Description** \\ \hline Single-Step & LocalRetro [11] & Graph Neural Network predicting the application of local bond and atom templates \\ \cline{2-3} Retrosynthesis & Chemformer [12] & Template-free sequence-to-sequence Transformer \\ \cline{2-3} & MHNreact [10] & Template-based information retrieval method relating products and template embeddings \\ \cline{2-3} & AZF (Baseline) [9, 38] & Default template-based method \\ \hline Multi-Step & Retro*[18] & Best-first tree search algorithm leveraging A*-like pathfinding guided by the single-step model \\ \hline \hline \end{tabular} \end{table} Table 3: Selected single-step retrosynthesis models and multi-step synthesis planning algorithm call. 
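As a rough sketch of how such a search consumes single-step predictions, the toy loop below expands the most promising set of unsolved molecules until only purchasable building blocks remain. It is a deliberate simplification, not the actual Retro* implementation used here, and `single_step_model` is a placeholder callable returning (reactants, probability) pairs.

```python
# Toy best-first route search; `single_step_model` is a placeholder and this is
# not the Retro* implementation used in this work.
import heapq
import itertools
import math
from typing import Callable, List, Optional, Tuple

def toy_route_search(target: str,
                     single_step_model: Callable[[str], List[Tuple[Tuple[str, ...], float]]],
                     building_blocks: set,
                     max_iterations: int = 200) -> Optional[frozenset]:
    """Best-first expansion over sets of unsolved molecules. Returns the leaf set
    of a solved route, or None if no route is found within the iteration budget."""
    tie = itertools.count()                       # tie-breaker for the priority queue
    start = frozenset({target})
    frontier = [(0.0, next(tie), start)]
    seen = {start}
    for _ in range(max_iterations):
        if not frontier:
            return None
        cost, _, state = heapq.heappop(frontier)
        open_mols = sorted(m for m in state if m not in building_blocks)
        if not open_mols:                         # every leaf is purchasable: route solved
            return state
        mol = open_mols[0]                        # one single-step model call per iteration
        for reactants, prob in single_step_model(mol)[:50]:   # keep the top-50 predictions
            child = frozenset((state - {mol}) | set(reactants))
            if prob > 0 and child not in seen:
                seen.add(child)
                # lower cost = more probable partial route (sum of -log probabilities)
                heapq.heappush(frontier, (cost - math.log(prob), next(tie), child))
    return None
```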
We select Retro* as the multi-step algorithm since prior work shows minimal differences across multi-step algorithms [46], though this is only shown for the common NeuralSym model architecture. Moreover, Retro* performs better than MCTS with state-of-the-art single-step retrosynthesis models, which require longer inference times [30]. This performance difference is likely because Retro* does not require online planning for search tree traversal, limiting the number of single-step model calls required. Noteworthy, we defer from using a self-play dependent route planning algorithm, even though they have the highest reported benchmark performance [26] since self-play algorithms are not training data and single-step model agnostic, i.e., changes in stock or single-step model change the learned self-play tree traversal policy. This aspect is especially problematic for this work as every single-step model and data combination would require self-play training such that it would become unclear whether the single-step model or the self-play aspect is important for route planning. Furthermore, we use Retro* with no cost function, such that the reactant probability of the single-step model is the guiding probability in the tree search. The search goal of Retro* is to find synthesis routes that end in building block molecules, however, that information is not used to shape the reward, as in MCTS [2, 38], where the percentage of building block leaves is used to guide the tree search. Instead, the sole guidance of the tree search comes from the single-step model to prioritize reactions to explore. We defer from using the oracle function because it has shown little impact [46] and is trained on USPTO data, which could cause information leakage. For all searches, we use a maximum search time of 8 hours (28800 seconds) and 200 algorithm iterations. Furthermore, the top 50 reactions from the single-step model are added to the search tree at every iteration, deferring from using a cumulative probability cut-off. Moreover, unless otherwise stated, we use a maximum synthesis route length of 7 and the Zinc [38] building block set consisting of 17,422,831 molecules. ### Implementation. All single-step retrosynthesis models are incorporated into the AiZynthFinder [38] synthesis planning framework using a newly developed common single-step model interface, ModelZoo. We extend AiZynthFinder such that any single-step model can be tested and used interchangeably within all implemented multi-step search algorithms. Where possible, the original single-step model code is used. All code will be made available on GitHub upon publication. ### Computational requirements. All single-step models for this work are trained on GPUs (Tesla V100). However, route planning is conducted on CPUs, given that insufficient GPUs are available for embarrassingly parallel evaluation of 10,000 molecules for each single-step model. In total, more than 1.5 million CPU hours were used to create the reported results. ## 3 Results ### Single-step retrosynthesis prediction **USPTO-50k.** As in the respective single-step retrosynthesis publications, the results on the USPTO-50k dataset, commonly used to benchmark and develop new single-step models [8], are reproducible. The best-performing methods are the state-of-the-art template-based methods (LocalRetro, MHNreact), which approach over 93% accuracy by top-50 (Figure 2). Among those methods, LocalRetro is the best performing, closely followed by MHNreact. 
Chemformer, a template-free method, has Figure 2: Single-step Retrosynthesis Prediction Performance in terms of top-n accuracy for AZF, LocalRetro, Chemformer, and MHNreact on different datasets (USPTO-50k, USPTO-PaRoutes-1M, AZ-1M, AZ-18M) (see Supplementary Table S1). the highest top-1 accuracy but stagnates as its performance does not increase with rising top-n. AZF is the worst-performing model until the top-10, where it outperforms Chemformer. However, AZF and Chemformer only reach a maximum of 77% by top-50, an almost 19% performance drop-off compared to LocalRetro and MHNreact. **USPTO-PaRoutes-1M.** All models perform practically identically on the USPTO-PaRoutes-1M single-step dataset, with a maximum difference of \(\pm\)4.6% accuracy across all top-n (Figure 2), despite each approach employing different model architectures. At top-1, most models perform similarly, with LocalRetro outperforming the other models by 1%. Within the top-3 accuracy, all state-of-art models (LocalRetro, Chemformer, MHNreact) maintain similar performance, whereas AZF performs slightly worse. By top-50, some slight differences are present, where LocalRetro is the best performing model, followed by MHNreact and the slightly worse performing AZF and Chemformer. **AZ-1M.** In contrast to the comparably sized USPTO-PaRoutes-1M dataset, for AZ-1M the overall performance drops across all models (Figure 2). All three state-of-the-art models (LocalRetro, Chemformer, MHNreact) outperform AZF on all top-n accuracy levels. Both state-of-the-art template-based models perform similarly, where LocalRetro surpasses MHNreact as top-n increases. The template-free model, Chemformer, is the best-performing model throughout, though the difference is initially minimal, it becomes more pronounced across larger top-n. At top-50, Chemformer continues as the best-performing model, however it is closely followed by LocalRetro across all top-n. **AZ-18M.** On the AZ-18M dataset, with an 18x increase of data compared to AZ-1M, Chemformer clearly outperforms the other models (Figure 2). At top-1, Chemformer already reaches an accuracy of 45.0%, improving upon the other models by at least a +15.5% margin. At top-50, Chemformer reaches 83.1%, outperforming the next best model (LocalRetro) by +27.3%. Noteworthy, both template-based methods (LocalRetro, AZF) perform similarly until top-10. Importantly, it was not possible to obtain results for MHNreact on AZ-18M due to the memory requirements of the method. ### Multi-step synthesis planning #### 3.2.1 Caspyrus10k Multi-step metrics of single-step models in synthesis planning are evaluated on Caspyrus10k, specifically route-finding success rate, average number of solved routes per molecule, average number of single-step model calls per molecule, and the average search time per molecule (see Methods). This establishes an overview of the capabilities of different models, trained on different datasets, across a large synthesizable chemical space. **USPTO-50k.** For models trained on the USPTO-50k dataset, LocalRetro is the best-performing model with the highest success rate and average number of solved routes. Regarding success rate, a large disparity of \(\pm\)32.0% between the best-performing and worst-performing models is present. LocalRetro performs best, with a success rate of 74.1%, followed by Chemformer, MHNreact, and AZF, with each model decreasing in performance by around 10% from the previous one. 
The average number of solved routes per molecule also differs largely between the different single-step models, with the best-performing model producing almost 17x more solved routes than the worst-performing model. Again, LocalRetro performs best with 124 solved routes, followed by MHNreact, AZF, and Chemformer. In terms of single-step model calls, AZF, LocalRetro, and Chemformer approach the 200 model-call limit, yet there is a large disparity in search time. LocalRetro and AZF require only around 160 seconds per molecule, whereas Chemformer reaches an average search time of 5.3 hours (19,051 seconds). Lastly, despite reaching the search time limit, MHNreact has a considerably lower number of model calls. **USPTO-PaRoutes-1M.** Models trained on the USPTO-PaRoutes-1M dataset have considerable performance differences in synthesis planning, even though they perform similarly on the single-step test data (Figure 2). With the increased data volume, compared to USPTO-50k, all models solve a much larger portion of Caspyrus10k. The best-performing model in terms of success rate is Chemformer with 94.1%, followed by LocalRetro, AZF, and finally MHNreact. Overall, the average number of solved routes is high for state-of-the-art single-step models. Chemformer finds, on \begin{table} \begin{tabular}{c c c c c c} \hline \hline & & Overall & \multicolumn{3}{c}{Average per Molecule} \\ \cline{3-6} **Training Dataset** & **Model** & **Success Rate (\%)** & **Solved Routes** & **Search Time (s)** & **Model Calls** \\ \hline \multirow{4}{*}{USPTO-50k} & AZF & 41.1 & 36.1 & 159 & 199 \\ & LocalRetro & **74.1** & 124 & 161 & 200 \\ & Chemformer & 62.4 & 7.37 & 19051 & 177 \\ & MHNreact & 51.0 & 38.0 & 28958 & 99 \\ \hline \multirow{4}{*}{USPTO-PaRoutes-1M} & AZF & 66.3 & 83.5 & 163 & 200 \\ & LocalRetro & 86.0 & 324 & 1218 & 200 \\ & Chemformer & **94.1** & 463 & 28809 & 147 \\ & MHNreact & 64.6 & 215 & 28839 & 169 \\ \hline \multirow{4}{*}{AZ-1M} & AZF & 73.5 & 124 & 168 & 200 \\ & LocalRetro & 88.1 & 321 & 465 & 200 \\ & Chemformer & **94.5** & 358 & 29109 & 108 \\ & MHNreact & 56.0 & 77.0 & 29116 & 65 \\ \hline \multirow{4}{*}{AZ-18M} & AZF & 76.2 & 154 & 154 & 199 \\ & LocalRetro & 87.3 & 350 & 2736 & 200 \\ \cline{1-1} & Chemformer & **90.9** & 381 & 30212 & 75 \\ \hline \hline \end{tabular} \end{table} Table 4: Multi-step synthesis planning performance on Caspyrus10k for different single-step models when trained on a diverse set of datasets. Measured by the success rate, indicating the number of molecules where a full synthesis route is found, the average number of solved routes, indicating the ability to produce synthesis route candidates, search times in seconds, and the average number of single-step model calls (see Supplementary Figure S1 for distributions). average, 463 solved synthesis routes, followed by LocalRetro and MHNreact with 324 and 215, respectively. In comparison, the baseline AZF model finds only 83.5 solved routes per molecule. Concerning search time, Chemformer and MHNreact both exhaust the maximum search time, where neither reaches the maximum number of model calls. AZF is by far the fastest method, reaching 200 model calls in an average of 163 seconds. LocalRetro reaches the iteration limit within 1218 seconds on average, 7.5x slower than AZF but considerably faster than other state-of-the-art models. **AZ-1M.** For AZ-1M, no clear performance improvement pattern is present in comparison to USPTO-PaRoutes-1M. 
In terms of success rate, AZF has a +7% gain compared to USPTO-PaRoutes-1M, whereas Chemformer and LocalRetro maintain a very similar success rate. MHNreact, however, drops in route-finding success, reaching only 56.0%. The average number of solved routes slightly increases for AZF compared to USPTO-PaRoutes-1M, whereas the performance decreases by 105 routes for Chemformer and more than halves for MHNreact. LocalRetro performs comparably with a minimal decrease of 3 solved routes. Regarding search time, both Chemformer and MHNreact exhaust the maximum search times, again not reaching the maximum number of single-step model calls. In fact, both models have a particularly low number of model calls, on average carrying out 108 model calls for Chemformer and 65 model calls for MHNreact. Both LocalRetro and AZF reach the maximum iteration limit, but LocalRetro is 2.77x slower. **AZ-18M.** Finally, the success rate of models trained on the considerably larger AZ-18M dataset is comparable to the performance on AZ-1M with no changes beyond \(\pm\) 3.6%, even though the single-step performance can differ massively between both single-step datasets (Figure 2). Compared to AZ-1M, all models produce more solved routes. Chemformer solves the most routes per molecule, followed by LocalRetro and AZF. As for the search times, Chemformer once again reaches the time limit of 8 hours, whereas LocalRetro is considerably faster on average, beaten only by AZF. AZF and LocalRetro each reach the maximum iteration limit, whereas Chemformer only has 75 single-step model calls on average. Even though Chemformer success rate decreases, it can still produce the highest number of solved routes and the best success rate on AZ-18M. #### 3.2.2 PaRoutes Instead of evaluating the general route-finding abilities of single-step retrosynthesis models, PaRoutes focuses on the ability to recover gold-standard routes given a set of molecules and their predefined target building blocks. In terms of multi-step metrics, using the same evaluation as for Caspyrus10k, all models achieve an extremely high success rate of at least 91% (Table 5). In particular, AZF, LocalRetro, and Chemformer find solutions for practically all PaRoutes compounds. The three template-based methods (AZF, MHNreact, LocalRetro) produce a similar number of solved routes per molecule ranging between 159 and 173, whereas Chemformer surpasses these with an average of 524 solved routes per molecule (Table 5, Supplementary Figure S5). As already seen with Caspyrus10k, Chemformer and MHNreact reach the maximum search time of 8 hours without maxing out the single-step model calls. LocalRetro and AZF perform considerably faster, with AZF taking just 153 seconds on average to reach the maximum of 200 iterations. The route accuracy of the single-step model in synthesis planning measures how often the gold-standard synthesis route is recovered for a target molecule, where the selected n-1 set [31] features only one retrosynthetic route per target-molecule. 
AZF has by far the best route accuracy overall, \begin{table} \begin{tabular}{c c c c c c} \hline \hline & & Overall & \multicolumn{3}{c}{Average per Molecule} \\ \cline{3-5} **Training Dataset** & **Model** & **Success Rate (\%)** & **Solved Routes** & **Search Time (s)** & **Model Calls** \\ \hline \multirow{3}{*}{USPTO-PaRoutes-1M} & AZF & 97.1 & 159 & 153 & 200 \\ & LocalRetro & 98.9 & 161 & 1067 & 200 \\ & Chemformer & **99.7** & 524 & 28538 & 157 \\ & MHNreact & 91.1 & 173 & 28802 & 156 \\ \hline \hline \end{tabular} \end{table} Table 5: Multi-step Synthesis Planning performance on PaRoutes for different single-step models when trained on USPTO-PaRoutes-1M. Measured by the success rate, indicating the number of molecules where a full synthesis route is found, the average number of solved routes, indicating the ability to produce synthesis route candidates, search times in seconds, and the average number of single-step model calls (see Supplementary Figure S5 for distributions). Figure 3: Multi-step synthesis planning accuracy on PaRoutes gold-standard synthesis routes with different single-step models trained on USPTO-PaRoutes-1M. Route accuracy measures the ability to recover the correct synthesis route within top-n, whereas building block accuracy measures the ability to recover the correct building blocks while not considering reactions and intermediates (see Supplementary Table S7). recovering 61.8% of gold-standard routes within its top-50 predicted synthesis routes (23.7% at top-1) (Figure 3). Noteworthy, the performance plateaus after top-10 (at 60.7%) and with little improvement at higher top-n. Both state-of-the-art template-based methods perform similarly across all top-n, but underperform compared to AZF by around -20% (MHNreact: 39.7%, LocalRetro: 36.1%).The template-free Chemformer model is worst-performing across all top-n, reaching only 11.9% by top-50. Noteworthy, the performance for all state-of-the-art models improves until the top-1000 (Supplementary Figure S5), but never reaches the performance of AZF. Considering the building block accuracy, which measures if the correct building blocks of the reference route are predicted while not considering the route reactions or intermediate molecules, considerable improvements for all models are present compared to the route accuracy. Within the top-50 synthesis route predictions, AZF correctly predicts the building blocks for 76.4% of the gold-standard synthesis routes, a +14.6% increase over its route accuracy. This improvement pattern is also present for the state-of-the-art models within the top-50 predicted synthesis routes, where all three state-of-the-art methods see a considerable improvement with at least a +17% improvement between route and building block accuracy. ## 4 Discussion Thus far, the task of retrosynthesis prediction is treated as two separate machine learning research fields. In this work, single-step retrosynthesis and multi-step synthesis planning are joined to analyze the impact of the single-step model on multi-step synthesis planning (Figure 1). In particular, the focus is on vital aspects of synthesis planning, the single-step model, the multi-step search algorithm, and their domain-specific applicability. 
### Impact on single-step retrosynthesis prediction

Considering the single-step retrosynthesis accuracies (Figure 2, Supplementary Table S1), it can be stated that the default single-step retrosynthesis benchmark dataset, USPTO-50k, is problematic as there is no performance transfer of models between different datasets. A model performing well on the smaller 50k reaction dataset does not necessarily perform well on larger, more diverse datasets, as the ranking of the best-performing single-step model changes for every dataset. Generally, model performance increases, or stays comparable, with more data available. For instance, for USPTO-PaRoutes-1M, a superset of USPTO-50k with a larger number of reaction classes, the performance increases (AZF and Chemformer) or stays comparable (LocalRetro, MHNreact). This pattern is also present when comparing AZ-1M to its superset AZ-18M, where more data improves the performance slightly (LocalRetro) or substantially (Chemformer, AZF). For AZ-18M, the model with the highest jump in performance is the template-free Chemformer, reaching a top-50 accuracy of 83.1% and substantially outperforming all template-based methods by +27.3%. Here it seems that the template-based nature of the other two models (AZF, LocalRetro) limits their ability to perform on the largest, most diverse dataset. This indicates that template-based methods may have reached a performance plateau due to not being able to extrapolate beyond known templates, a limitation that is not present for the template-free Chemformer. Interestingly, for USPTO-50k, the template-free method is outperformed by all template-based methods at top-10 accuracy. Looking at the performance of AZF on AZ-18M, it is generally worse than shown in [32]. The previous work uses a template-based stratified split for the hold-out split, leading to an even distribution of templates across the different splits and ensuring that every template is present in every split, which can benefit a template-based approach. However, in this work, we create the hold-out split by a strict random split at the reaction level, given the nature of the different single-step methods used.

With increased data diversity, single-step performance diminishes for all models when comparing the equally sized USPTO-PaRoutes-1M and AZ-1M (Figure 2). Data diversity is measured by the number of unique reaction templates extracted from the training splits of both datasets (USPTO-PaRoutes-1M: 314,959, AZ-1M: 439,618), representing different reaction ideas present in the respective datasets. This pattern is especially problematic, as USPTO-50k only includes ten reaction classes (USPTO-50k: 10,196 unique reaction templates). Furthermore, a novel benchmark is required for the single-step retrosynthesis research field, as methods developed for 50,000 data points are not easily transferable to real-world-sized datasets with millions of data points. Naturally, new methods should be developed using larger datasets that better encompass the size and diversity shown in real-world data, since development for USPTO-50k limits their transferability (Figure 2). In terms of dataset size, all models require at least minor refactoring to run on larger datasets or do not scale beyond 1 million data points (MHNreact). Similarly, some models developed for USPTO-50k do not conceptually consider the increase in reaction diversity in larger (real-world) datasets.
For example, template-based models produce more templates with higher data diversity, requiring more template prediction classes in their classification tasks. Inherently, the number of classes a method can represent limits the number of different templates a method can predict. The solution to the diversity problem for those template-based methods is to remove templates occurring below a threshold and subsequently remove potential valid reaction predictions (see Methods). The natural exception are template-free methods as they are not constrained to reaction templates and show better scalability to more diverse data (Figure 2). Noteworthy, USPTO-PaRoutes-1M [32], with its higher number of reactions and reaction diversity, is also not a perfect single-step model benchmark dataset since all single-step models perform comparably on it. Compared to the alternative public dataset USPTO-Full [47], the performance of all single-step models is much higher on USPTO-PaRoutes-1M, where LocalRetro has a more than +25% top-50 accuracy improvement [8]. The difference in single-step performance between USPTO-PaRoutes-1M and USPTO-Full and the equal performance on USPTO-PaRoutes-1M might be explainable by the underlying data sources and their respective preprocessing. USPTO-PaRoutes-1M is a superset of USPTO-Full, where the first contains USPTO grants and applications (3,748,191 total reactions) and the latter only USPTO grants (1,808,938 total reactions) [34]. In terms of preprocessing, USPTO-Full is noisier compared to USPTO-PaRoutes-1M as the latter applies extensive data cleaning and recreates and standardizes the atom-mapping between reactions with RXNMapper [48]. Naturally, given that all tested single-step models perform comparably on the most cleaned, standardized, publicly available dataset, the question remains whether a saturation point in single-step performance is reached on public data. Directly inferring multi-step synthesis planning results from single-step retrosynthesis results is not possible since single-step model performance metrics do not directly transfer to multi-step route planning success. In fact, it is necessary to evaluate the performance of respective single-step models in a multi-step framework to evaluate their synthesis planning performance. In this study, single-step models performing equally well on the USPTO-PaRoutes-1M single-step task are performing vastly differently in multi-step synthesis planning. For example, Chemformer, compared to MHNreact, has considerable differences in multi-step performance with a nearly \(\pm\)30% higher success rate and finding double the average number of solved routes per molecule (Table 4). Moreover, LocalRetro has a roughly +20% higher success rate than AZF and finds 3.9x the number of solved routes. Looking at the disparities between USPTO-50k and other datasets, LocalRetro has the highest route-finding success of single-step models trained on the USPTO-50k dataset but is not the best-performing model when trained on larger datasets. Additionally, low single-step model performance on AZ-1M still leads to high multi-step performance. Here, the high diversity of reactions in AZ-1M, compared to the equally sized USPTO-PaRoutes-1M, might be the factor for the low single-step model performance. It seems that with fewer correctly predicted reactions, it is still possible to reach high multi-step performance. This aligns with prior works showing that most molecules can be addressed with relatively few reaction templates [34]. 
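As a minimal illustration of the occurrence-based template filtering described above, the sketch below assumes that one extracted template string (e.g. a SMARTS pattern) is available per training reaction; the function name and the default threshold of three (the LocalRetro setting used for AZ-1M, discussed in the next section) are illustrative, and the actual extraction and counting details are model-specific.

```python
from collections import Counter
from typing import Iterable, List, Set, Tuple


def filter_rare_templates(training_templates: Iterable[str],
                          min_occurrence: int = 3) -> Tuple[Set[str], List[str]]:
    """Drop reaction templates that occur fewer than `min_occurrence` times.

    `training_templates` holds one extracted template per training reaction.
    Filtering shrinks the number of template classes a template-based model
    must predict, at the cost of discarding rare but potentially valid
    disconnections.
    """
    templates = list(training_templates)
    counts = Counter(templates)
    kept = {t for t, c in counts.items() if c >= min_occurrence}
    remaining_reactions = [t for t in templates if t in kept]
    return kept, remaining_reactions
```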
### Impact on multi-step synthesis planning

An important finding for multi-step synthesis planning is that the performance of route planning can be improved by merely switching out the single-step model, introducing novel reaction pathways to traverse the underlying reaction network (Table 4). In particular, huge success-rate disparities are present within datasets, where the performance difference in finding a synthesis route between the best and worst models can be as high as \(\pm\)38.5% (USPTO-50k: \(\pm\)33.0%, USPTO-PaRoutes-1M: \(\pm\)29.5%, AZ-1M: \(\pm\)38.5%, AZ-18M: \(\pm\)14.7%). This performance disparity pattern between the best- and worst-performing models trained on the same dataset is also present for the average number of solved routes per molecule, where the difference in solved routes ranges in the hundreds (USPTO-50k: \(\pm\)117, USPTO-PaRoutes-1M: \(\pm\)380, AZ-1M: \(\pm\)281, AZ-18M: \(\pm\)227). The availability of more reaction data can improve the success rate of route planning up to a certain level, where the largest jump is present between USPTO-50k and USPTO-PaRoutes-1M. Noteworthy, public data is on par with private data in terms of multi-step success rate for Chemformer and LocalRetro, which have comparable performance when trained on USPTO-PaRoutes-1M or AZ-1M. However, for AZF, public datasets perform much worse as more reaction templates are extractable from private data [32]. For MHNreact, private data even decreases the performance as the added complexity highly increases inference times, and only 65 single-step model calls are conducted in a generous 8-hour search window.

The availability of more diverse reaction data can increase the average number of solved synthesis routes produced. Generally, we see that as the reaction diversity of the single-step data increases, so does the number of solved synthesis routes, though eventually this performance stagnates or even worsens due to model architecture limitations. All models have either longer run times, if they reach the iteration limit, or a reduced number of single-step model calls, if they reach the time limit, reducing their potential to explore additional synthesis pathways. In the case of LocalRetro, where the minimum reaction template occurrence is increased from one to three between USPTO-PaRoutes-1M and AZ-1M due to an infeasible number of reaction template classes in the more diverse dataset, the search times massively decrease while the success rate even improves, likely due to the decreased number of reaction templates. Finally, template-based models produce their highest numbers of solved synthesis routes using the 18 million reaction dataset, AZ-18M. Chemformer, however, solves fewer routes compared to USPTO-PaRoutes-1M as the number of single-step model calls is halved for the largest dataset, suggesting that inference becomes slower with more diverse data.

Even though single-step retrosynthesis models improve the performance of route planning, they are generally not tailored to multi-step search algorithms. Single-step models have slow inference times that can preclude high multi-step success rates, as few single-step model calls are possible within a set time limit, and can also impede ad-hoc synthesis route generation. Attached to the inference problems of single-step models are the algorithmic properties of most multi-step algorithms.
Though multi-step algorithms require single-step retrosynthesis models, they are generally developed to address a single molecule as a sequential next-disconnection prediction problem, with few exceptions [19]. Single-step models, however, are not optimized for this as they predict reactants for multiple different products simultaneously, typically in a joined GPU batch. Consequently, the combination of single-step and multi-step methods, though both thought for the task of retrosynthesis prediction, are currently not developed to be complementary to each other. Many of these models could massively improve their performance, particularly for the number of single-step calls and search time, by adapting the multi-step algorithm to suit the single-step model and vice-versa. Moreover, novel search algorithms, such as implementing asynchronous route planning, could have a substantial impact in this area. ### Impact on domain-specific applications Retrosynthesis prediction can be viewed as a domain-specific problem where the true objective of synthesis planning is to produce routes that can be used and tested experimentally. Given that there are multiple ways of synthesizing a molecule, the solution selected will often depend on the reaction preferences of the chemist and the desired route properties. As such, apart from the success rate and the number of solved routes, the route properties and their chemical validity are vital for the usefulness of the produced routes. Generally, different models produce different route characteristics on Caspyrus10k (Figure 4), where the template-free method has noticeably different maximum route length, number of building blocks and number of reactants compared to the template-based methods. AZF and LocalRetro generally have very similar distributions across all characteristics, particularly in maximum route length where MHNreact has markedly shorter routes. Since MHNreact carries out a low number of single-step model calls within the maximum search time, it is likely that it is only able to address and solve short routes. Yet, Chemformer generally has a higher proportion of routes with a maximum depth of one, essentially directly predicting building blocks. Additionally, Chemformer predicts a higher number of building blocks per route compared to all template-based methods, yet this effect is reduced with increased training data (Supplementary Figure S2). Within the template-based methods we observe that the majority of reactions are bimolecular, producing two reactants, this is Figure 4: Caspyrus10k route statistics of top-5 found synthesis routes by different single-step retrosynthesis models trained on USPTO-PaRoutes-1M. Shown are the maximum depth, referring to the longest linear path within the route, the number of building blocks within the route, and the number of reactants per route reaction. particularly true for MHNreact. Chemformer on the other hand predicts reactions which at times lead to four or more reactants. Apart from looking at general route statistics of Caspyrus10k route planning results, we cluster the resulting synthesis routes to understand the relationship between different solved routes produced by distinct models within a reaction dataset. In detail, the approximated pairwise edit distance between solved synthesis routes of the top-5 predictions for each molecule is used to cluster with the route-distance package [49, 50]. 
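The route-distance package exposes its own interface for computing these approximated tree-edit distances; the sketch below does not reproduce that interface and only illustrates the downstream clustering step on an already-computed symmetric distance matrix, using standard SciPy hierarchical clustering with an arbitrary placeholder distance threshold.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform


def cluster_routes(distance_matrix: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Assign cluster labels to routes from a symmetric pairwise distance matrix.

    `distance_matrix[i, j]` is assumed to hold the approximated edit distance
    between solved routes i and j for a single target molecule; the clustering
    itself is generic average-linkage agglomeration.
    """
    condensed = squareform(distance_matrix, checks=False)
    return fcluster(linkage(condensed, method="average"),
                    t=threshold, criterion="distance")


# Toy example with three routes: the first two are similar, the third distant
distances = np.array([[0.0, 0.1, 0.9],
                      [0.1, 0.0, 0.8],
                      [0.9, 0.8, 0.0]])
print(cluster_routes(distances))  # e.g. [1 1 2]
```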
Here, different single-step models produce unique route clusters when looking at the same training data, where routes produced by each model are generally unique to that model (Figure 5). Noteworthy, routes produced by methods that rely on reaction templates (AZF, MHNreact, LocalRetro) tend to cluster together more frequently. Furthermore, models trained on AZ-18M tend to converge more regarding shared routes between models than models trained on Figure 5: Distribution and overlap of route clusters per single-step model and dataset when clustering with route-distance package [49, 50]. Clusters were calculated on a per molecule basis, N clusters shows the number of clusters which contained the stated combination of models. USPTO-PaRoutes-1M. Nevertheless, the bulk of routes remains in unique clusters. Noteworthy, we check that the clustering patterns are also present when removing MHNreact (Figure S3) to ensure that the missing MHNreact results for AZ-18M are not the sole reason for the difference between AZ-18M and the other datasets. The availability of solved synthesis routes does not imply that those routes are also chemically valid. Validity can be assessed by comparing the produced routes of a single-step model to gold-standard routes as found in USPTO patents [31] to indicate how valid the produced routes are. Generally, different single-step models are distinctive in their ability to reproduce gold-standard chemistry routes, i.e., route accuracy (Figure 3). Surprisingly, there is no relationship between the multi-step success rate and the route accuracy of a single-step model. All models achieve at least 91% success rates on PaRoutes target molecules (Table 5) but differ considerably between route accuracies. AZF is the best-performing model regarding route accuracy, recovering 23.7% of routes as the top-1 predicted synthesis route and 61.8% within the top-50 predicted routes. In comparison, state-of-the-art models produce lower route accuracy, even if they produce high success rates. Within those state-of-the-art models, template-based models (LocalRetro and MHNreact) have a considerably higher route accuracy than the template-free approach Chemformer, yet still have a considerable gap in performance compared to the route accuracy of AZF. Instead of predicting the correct gold-standard synthesis route, an easier task is to predict the right building blocks of the gold-standard route. This means that though the gold-standard route may not be entirely correctly predicted the building blocks are correctly predicted in the synthesis route, i.e., the order of the reactions may be incorrect or intermediate molecules are missing. For the easier task of predicting the correct building blocks, all models improve their performance compared to their respective route accuracy. However, the improvement between route accuracy and building block accuracy is much greater, compared to AZF, for state-of-the-art models that operate on local reaction templates (LocalRetro) or no templates at all (Chemformer), potentially meaning that they are more likely to skip vital aspects of the gold-standard synthesis routes in their route predictions rather than producing a distinct retrosynthesis route than the gold-standard route. Overall, the template-based AZF method performs best regarding building block accuracy. The performance difference on PaRoutes across different methods might be explainable by the allowed degree of chemical freedom of their respective model architectures. 
Template-based methods are more constrained by the reaction templates they apply, which are extracted from training reactions. With this constraint they are made to follow reaction pathways which are more chemically sound since their templates by definition, must be based on previous reactions. In comparison, the template-free Chemformer performs worst across both route and building block accuracy, potentially explained by the non-existent template guidance of the method allowing it to predict non-chemically sound reactions. Interestingly, this is in line with the divergence of Chemformer from general route statistics on Caspyrus10k, as the model predicts a much higher number of building blocks, multi-molecular reactions and routes that only consist of a singular reaction (Figure 4). Generally, state-of-the-art approaches can provide a much larger set of route alternatives (Supplementary Figure S5). This is also reflected in the PaRoutes route and building block accuracy, where AZF plateaus by top-10 accuracy, whereas state-of-the-art methods continue to increase their accuracy into very high top-n (Supplementary Figure S4). Given that state-of-the-art models produce more route alternatives, a future research direction, might be the best ranking of synthesis routes, as it can be assumed that desired routes are present within a large set of found synthesis routes. An underlying assumption for single-step and multi-step synthesis planning is that the single-step model prior indicates the predicted chemical viability of a reaction for a molecule. We assess this assumed relationship by extracting the predicted reaction probabilities and their respective rank for reactions from the top-10 solved routes of the PaRoutes benchmark dataset. We then select a random subset of 100,000 reactions for each model. Surprisingly, the single-step model prior distributions (Figure 6) show that for all models that there is no clear connection between the single-step reaction priors and the ability to find a solved synthesis routes as reactions of solved synthesis routes contain both low probability reactions and low prediction rank. Furthermore, models with a smoother progression between probabilities of higher and lower ranked disconnections (AZF, LocalRetro, MHNreact) tend to perform better at recovering gold-standard routes (Figure 3). In contrast, a more skewed, overconfident, distribution towards top-1 predictions tends to perform worse (Chemformer). Though the routes found within the top-10 predicted routes use reactions with very low reaction probabilities, gold-standard routes are generally only found within the top-5 predicted reactions (Supplementary Figure S4). This suggests that routes with reactions ranked outside the top-5 predicted reactions, though leading to building blocks, produce non-viable route reactions (Figure 6). The presence of these low-probability reactions can be explained by the search algorithm ranking possible synthesis routes by their ability to reach building blocks and their overall route length. In the tree search itself, the search algorithm prioritizes short and solved routes, which might also include reactions with low probabilities as the overall search goal is to find a synthesis route ending in purchasable building blocks. The effect of low-probability reactions is enforced by adding 50 reactions to the search tree at every time step, even if those disconnections have low probabilities. 
Noteworthy, it is likely that the tree search algorithm explores those low-probability reactions when the high-probability disconnections are already explored. However, given the overall distribution of reaction priors (Figure 6), this approach might not be desired for future synthesis planning search algorithms. Furthermore, in future work, it could be interesting to analyze how the synthesis planning results differ when applying only the top-5 predicted reactions, consequently limiting the breadth of the search tree. Given that gold-standard routes are only found within the top-5 predicted routes (Supplementary Figure S6), it opens the question if the resulting synthesis routes are closer to human-desired routes. Figure 6: Single-step model prior and rank distributions of reactions from the predicted and solved PaRoutes synthesis routes. A random sample of 100,000 reactions is extracted from the top-10 predicted routes (see Figure 3) for each single-step retrosynthesis model trained on USPTO-PaRoutes-1M. When discussing gold-standard synthesis routes, it is important to point out that a gold-standard route is only one way of synthesizing a desired molecule and other valid synthesis routes might also be possible. However, a good synthesis planning application should be able to prioritize real-world routes from a set of all potential routes, even if the favored chemical reactions change over time. Not finding the real-world routes entirely, yet identifying the correct building blocks, indicates that the produced synthesis routes are invalid or potentially missing vital parts of the synthesis route to be directly useful in an experimental setting. Naturally, there is a clear connection between the ability to recover gold standard routes and the ability to predict solved routes at all. High success rates produce route candidates that might be potential real-world synthesis routes but need to consider chemical validity. Because of this lack of validity, candidates are currently treated as initial retrosynthetic ideas. For a real improvement in the field of retrosynthesis, one of the essential questions, beyond improving the generation of possible solved route candidates, is how to evaluate and improve the chemical validity of generated synthesis routes. For this, it is vital to introduce reagents, conditions and yields into synthesis planning in the future and address the chemical feasibility of the generated routes. Though there is currently a lack of in-silico synthesis feasibility evaluation, as methods like round-trip accuracy [21] only measure if the product is recoverable from the reactants and do not consider full chemical validity, given that retrosynthesis methods do not produce the relevant reagents and conditions required. Newer works have attempted to address this problem by predicting all required components [22]. Chemical validity, however, could potentially be addressed with new advancements in the field, such as molecular dynamics or quantum chemistry prediction. Finally, when selecting the single-step retrosynthesis model for route planning, there are trade-offs between different desired search properties, as no approach outperforms all others if one uses a large enough dataset like USPTO-PaRoutes-1M. Clearly, there is a single-step performance advantage of template-free single-step models on large, heterogenous reaction data. 
However, this advantage comes at the cost of inference speed at multi-step synthesis planning, where template-based models are generally preferred as they can perform over 200-fold faster than template-free. If the overall goal of synthesis planning is a high success rate with a high average number of produced solved routes while accommodating long search times and a high divergence from reference routes, then the template-free approach, Chemformer, may be relevant. With a slightly lower success rate and average number of solved routes but much shorter runtimes and medium divergence from reference routes the successful state-of-the art template-based model, LocalRetro, is of interest. For very short run times and low divergence from reference routes yet lower success rate and an average number of solved routes, the default single-step retrosynthesis model, AZF, will be of use. Future developed models can aim to address a combination of these goals. One of the underlying problems in the field is that benchmarking different single-step retrosynthesis models within synthesis planning is time- and resource-intensive. However, to facilitate such benchmarking in the future, we analyze the variance of different subsample sizes of the Caspyrus10k multi-step synthesis dataset such that an approximation of the results can be carried out in lieu of running the full datasets for faster benchmarking/prototyping (see Supplementary Tables S2-S5). In detail, we repeatedly randomly subsample a subset of molecules (100, 500, 1000, 5000 molecules) and measure the mean and standard deviation across 1000 subsamples (sampling without replacement). Given that the standard deviation is reasonably small for a sample size of 1000 molecules (see Supplementary Table S4), we provide a selected set of 1000 molecules if a full evaluation is not feasible (see Supplementary Table S6). Noteworthy, this work only explores three state-of-the-art and a common baseline single-step retrosynthesis models, and even though representative of the common research directions, gives us only a snapshot of possible single-step and multi-step retrosynthesis combinations. ## 5 Conclusion In this work, we create the first in-depth study combining state-of-the-art single-step retrosynthesis with multi-step synthesis planning, analyzing the gains and pitfalls of combining the two research fields. We find that there is generally no direct relationship between high single-step performance and successfully finding synthesis routes, both for publicly available and proprietary datasets, emphasizing the need to develop and evaluate single-step retrosynthesis models in a multi-step synthesis planning framework. Moreover, we show that the default single-step retrosynthesis benchmark dataset, USPTO-50k, is insufficient as methods developed for this small, homogenous dataset are not transferable to real-world, larger, and more diverse datasets. This is true for both single-step performance, where performance rankings between models are not transferable, and scalability, where model implementations are not transferable. For multi-step synthesis planning, we show that the single-step model is an essential but thus far ignored aspect of the search algorithm. By merely changing the single-step retrosynthesis model it is possible to improve route-finding success by up to +28%, reaching success rates above 90% compared to the commonly used baseline model, when trained on the same reaction datasets. 
Furthermore, we show that every single-step model produces unique synthesis routes when used in multi-step synthesis planning, and each single-step model also differs in important aspects such as route-finding success, the average number of found synthesis routes, search times, and chemical validity. To summarize, we show that the combination of single-step retrosynthesis prediction and multi-step synthesis planning is a crucial aspect when developing future methods. ## Acknowledgements This study was partially funded by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Innovative Training Network European Industrial Doctorate grant agreement No. 956832 "Advanced machine learning for Innovative Drug Discovery". Parts of this work were performed using the ALICE compute resources provided by Leiden University.
2301.09613
Anisotropic magnetism and electronic structure of trigonal EuAl$_2$Ge$_2$ single crystals
The magnetic and electronic properties of the layered Zintl-phase compound EuAl$_2$Ge$_2$ crystallizing in the trigonal CaAl$_2$Si$_2$-type structure are reported. Our neutron-diffraction measurements show that EuAl$_2$Ge$_2$ undergoes A-type antiferromagnetic (AFM) ordering below $T_{\rm N} = 27.5(5)$~K, with the Eu moments (Eu$^{2+}$, $S = 7/2$) aligned ferromagnetically in the $ab$ plane. The $H = 0$ magnetic structure consists of trigonal AFM domains associated with $ab$-plane magnetic anisotropy and a field-induced reorientation of the Eu spins in the domains is evident at $T = 2$~K below the critical field $H_{c1} = 2.5(1)$ kOe. Electrical resistivity and ARPES measurements show that EuAl$_2$Ge$_2$ is metallic both above and below $T_{\rm N}$. In the AFM phase, we directly observe folded bands in ARPES due to the doubling of the magnetic unit cell along the $c$ axis with an enhancement of quasiparticle weight due to the complex change in the coupling between the magnetic moments and itinerant electrons on cooling below $T_{\rm N}$. The observed electronic structure is well reproduced by first-principle calculations, which also predict the presence of nontrivial electronic states near the Fermi level in the AFM phase with $Z_2$ topological numbers 1;(000).
Santanu Pakhira, Asish K. Kundu, Farhan Islam, M. A. Tanatar, Tufan Roy, Thomas Heitmann, T. Yilmaz, E. Vescovo, Masahito Tsujikawa, Masafumi Shirai, R. Prozorov, David Vaknin, D. C. Johnston
2023-01-23T18:20:37Z
http://arxiv.org/abs/2301.09613v2
Anisotropic magnetism and electronic structure of trigonal EuAl\({}_{2}\)Ge\({}_{2}\) single crystals ###### Abstract Understanding the interplay between magnetic and electronic degrees of freedom is of profound recent interest in different Eu-based magnetic topological materials. In this work, we studied the magnetic and electronic properties of the layered Zintl-phase compound EuAl\({}_{2}\)Ge\({}_{2}\) crystallizing in the trigonal CaAl\({}_{2}\)Si\({}_{2}\)-type structure. We report zero-field neutron diffraction, temperature \(T\)- and magnetic-field \(H\)-dependent magnetic susceptibility \(\chi(T,H)\), isothermal magnetization \(M(T,H)\), heat capacity \(C_{\rm p}(T,H)\), and electrical resistivity \(\rho(T,H)\) measurements, together with \(T\)-dependent angle-resolved photoemission spectroscopy (ARPES) measurements complemented with first-principle calculations. EuAl\({}_{2}\)Ge\({}_{2}\) undergoes second-order A-type antiferromagnetic (AFM) ordering below \(T_{\rm N}=27.5(5)\) K, with the Eu moments (Eu\({}^{2+}\), \(S=7/2\)) aligned ferromagnetically in the \(ab\) plane while these layers are stacked antiferromagnetically along the \(c\) axis. The critical fields at which all moments become parallel to the field are 37.5(5) and 52.5(5) kOe for \(H\parallel ab\) and \(H\parallel c\), respectively. The \(H=0\) magnetic structure consists of trigonal AFM domains associated with \(ab\)-plane magnetic anisotropy and a field-induced reorientation of the Eu spins in the domains is also evident at \(T=2\) K below the critical field \(H_{c1}=2.5(1)\) kOe. The \(\rho(T)\) measurements reveal metallic behavior transforming into a slight resistivity increase on cooling towards \(T_{\rm N}\). A pronounced loss of spin-disorder scattering is observed below \(T_{\rm N}\). The ARPES results show that EuAl\({}_{2}\)Ge\({}_{2}\) is metallic both above and below \(T_{\rm N}\), and the Fermi surface is anisotropic with two hole pockets at the zone center and one small electron pocket at each M point. In the AFM phase, we directly observe folded bands in ARPES due to the doubling of the magnetic unit cell along the \(c\) axis with an enhancement of quasiparticle weight due to the complex change in the coupling between the magnetic moments and itinerant electrons on cooling below \(T_{\rm N}\). The observed electronic structure is well reproduced by first-principle calculations, which also predict the presence of nontrivial electronic states near the Fermi level in the AFM phase with \(Z_{2}\) topological numbers 1;(000). ## I Introduction It is rewarding to study different classes of novel quantum materials having a complex interplay of lattice, spin, and electronic degrees of freedom. These materials can exhibit a plethora of interesting physical properties including superconductivity, heavy fermion behavior, quantum phase transitions, complex magnetic order, magnetic frustration, valence fluctuations, and nontrivial topological phases. One such family of materials is comprised of Zintl-phase compounds that have gained significant recent interest owing to the complex interplay of magnetic and electronic degrees of freedom. These materials exhibit topological states, proximity between metal-semimetal-semiconductor-insulator phases, anomalous and topological Hall effects, low-field-induced spin reorientations within antiferromagnetic (AFM) domains, along with large thermoelectricity as recently reported in various compounds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. 
Many \(AM_{2}X_{2}\)-type Zintl-phase compounds have been investigated, where \(A\) is an alkaline or lanthanide element, \(M\) is a metallic \(sp\) element, and \(X\) is an \(sp\)-element anion where the \(A\) atom has either a planar triangular or square-lattice structure. These materials have recently been reported to exhibit electronic states having nontrivial band topology. These states include a topological insulating state, a Dirac/Weyl-type semimetallic state, or an axion-insulating state, and are attractive candidates for dissipationless electron transport [9; 11; 12; 13; 14; 15; 16; 17]. It has been experimentally found that when the \(A\) site of these compounds is fully or partially occupied by a rare-earth element, the materials show enhanced carrier mobility and carrier concentration compared to those with \(A\) as an alkaline-earth metal [18; 19]; the origin of this behavior is currently unknown. For example, such magnetic Eu-based compounds are of significant interest due to their complex interplay of magnetism and band topology, as reported for EuIn\({}_{2}\)As\({}_{2}\) EuCd\({}_{2}\)As\({}_{2}\), EuMg\({}_{2}\)Bi\({}_{2}\), and EuSn\({}_{2}\)As\({}_{2}\)[11; 12; 13; 14; 20; 21; 22; 23; 24]. The magnetic properites associated with different anisotropy energies could thus also play an important role in tuning the electronic states in these materials associated with magnetic ordering. Although the Eu\({}^{2+}\) ion with spin \(S=7/2\) and orbital angular momentum \(L=0\) exhibits negligible single-ion anisotropy, the magnetic properties in most of these materials are anisotropic [20; 21; 23; 25; 26; 27; 28]. Here the anisotropy arises from magnetic-dipole and/or anisotropic RKKY interactions. To further investigate the properties of this class of materials, here we report the growth of EuAl\({}_{2}\)Ge\({}_{2}\) single crystals with the trigonal CaAl\({}_{2}\)Si\({}_{2}\) crystal structure [29] and studies of their magnetic, electronic-transport, and electronic-structure properties. These include zero-field neutron-diffraction measurements of the ordered magnetic structure, temperature \(T\)- and magnetic-field \(H\)-dependent magnetization \(M(H,T)\), heat capacity \(C_{\rm p}(T)\), and electrical-resistivity \(\rho(H,T)\) measurements, along with \(T\)-dependent angle-resolved photoemission spectroscopy (ARPES) studies of the electronic structure. The experimental electronic structure is mapped by calculating the band structure of EuAl\({}_{2}\)Ge\({}_{2}\) using density-functional theory (DFT). We find that EuAl\({}_{2}\)Ge\({}_{2}\) is metallic as revealed by the \(\rho(T)\) and ARPES measurements complemented with theoretical band-structure calculations. The neutron-diffraction experiments demonstrate that EuAl\({}_{2}\)Ge\({}_{2}\) exhibits A-type AFM order below its Neel temperature \(T_{\rm N}=27.5(5)\) K. In this magnetic structure the Eu\({}^{2+}\) moments \(\mu=7\ \mu_{\rm B}\) with spectroscopic-splitting factor \(g=2\) and spin \(S=7/2\) are aligned ferromagnetically in each \(ab\)-plane layer, where the moments in adjacent layers along the \(c\) axis are aligned antiferromagnetically. The \(C_{\rm p}\) data for \(H=0\) exhibit a second-order \(\lambda\)-type peak at \(T_{\rm N}\). The ARPES results further reveal magnetism-induced band folding and enhancement of the quasiparticle intensity in the magnetic ground state. 
Splitting of the energy bands is observed even above \(T_{\rm N}\), possibly related to time-reversal-symmetry breaking associated with short-range ferromagnetic (FM) correlations above \(T_{\rm N}\). Over the broad field range \(0\leq H\leq 55\) kOe, the \(M(H)\) data at \(T=2\) K appear to be linear for both \(H\parallel ab\) and \(H\parallel c\) with respective critical fields \(H_{ab}^{c}=37.5\) and \(H_{c}^{c}=52.5\) kOe, at which all moments become parallel to the respective field. However, detailed \(M(H_{ab},T=2\) K) measurements at low fields \(H_{ab}\leq H_{\rm c1}=2.5\) kOe exhibit anomalous positive curvature, whereas for \(H_{ab}>H_{\rm c1}\) the data are again linear up to \(H_{ab}^{c}\). This behavior is quantitatively described by a model where the A-type AFM structure occurs in three trigonal domains in which the Eu moments in each domain are aligned at \(120^{\circ}\) to each other in \(H=0\). With increasing \(H_{ab}\) the moments in each domain reorient to become perpendicular to \({\bf H}_{ab}\) until \(H_{\rm c1}\) is reached, above with all moments progressively cant towards \({\bf H}_{ab}\) until \(H_{ab}^{c}\) is attained. Experimental and theoretical details are given in Sec. II. The results and discussion of the various measurements and analyses are presented in Sec. III, and concluding remarks are provided in Sec. IV. ## II Experimental and theoretical details Single crystals of EuAl\({}_{2}\)Ge\({}_{2}\) were grown using the flux method with starting composition Eu:Al:Ge = 1:20:2. The Eu (Ames Lab), Al (Alfa Aesar, 99.9995%), and Ge (Alfa Aesar, 99.9999%) were loaded into a 2 mL alumina crucible and sealed in a silica tube under 1/4 atm high-purity argon. The assembly was heated to 1175 \({}^{\circ}\)C inside a box furnace at a rate of 100 \({}^{\circ}\)C/h. After holding the temperature for 6 h, the furnace was cooled to 700 \({}^{\circ}\)C at a rate of 10 \({}^{\circ}\)C/h. The assembly was then centrifuged to separate the crystals from the molten flux. Hexagonal plate-like crystals with typical dimensions \(3\times 3\times 2\) mm\({}^{3}\) were obtained from the growth. The homogeneity and chemical composition of the crystals were confirmed using a JEOL scanning-electron microscope (SEM) equipped with an energy-dispersive x-ray spectroscopy (EDS) analyzer. The magnetic measurements were carried out using a Magnetic-Properties-Measurement System (MPMS) from Quantum Design, Inc., in the \(T\) range 1.8-300 K and with \(H\) up to 5.5 T (1 T \(\equiv 10^{4}\) Oe). A Physical Properties Measurement System (PPMS, Quantum Design, Inc.) was used to measure \(C_{\rm p}(T)\) and \(\rho(T)\) in the \(T\) range 1.8-300 K and \(H\) up to 9 T. Four-probe \(\rho(T)\) measurements were performed. The measurements were performed on as-grown single crystals. Due to the sensitivity of EuAl\({}_{2}\)Ge\({}_{2}\) to the ambient environment leading to a rapid sample decomposition, the crystals were not shaped into resistivity bars with precision geometric-factor control by polishing and cutting. However, the crystals had natural shapes suitable for in-plane resistivity measurements, having a length at least 3 times larger than the width and thickness. Resistivity measurements were performed along arbitrary directions in the \(ab\) plane. In all resistivity measurements the magnetic field was oriented transverse to the current direction. 
Contacts to the fresh surfaces of the crystals were made by attaching 50 \(\mu\)m-diameter silver wires with In solder and mechanically reinforcing the contact with DuPont 4929N silver paint [30]. The contact resistance was typically in the \(\Omega\) range. After application of the contacts was complete, the samples were covered with Apiezon N-grease to provide temporary protection from degradation. For measurements in magnetic fields oriented along the \(c\) axis and \(ab\) plane, the samples were attached with Apiezon N-grease to the sides of a plastic cube. This provides alignment with about \(\pm 5^{\circ}\) accuracy [31]. Single-crystal neutron-diffraction experiments were performed in \(H=0\) using the TRIAX triple-axis spectrometer at the University of Missouri Research Reactor (MURR). An incident neutron beam of energy 30.5 meV was directed at the sample using a pyrolytic graphite (PG) monochromator. A PG analyzer was used to reduce the background. Neutron wavelength harmonics were removed from the beam using PG filters placed before the monochromator and between the sample and analyzer. Beam divergence was limited using collimators of \(60^{\prime}-60^{\prime}-40^{\prime}-40^{\prime}\) placed, respectively, before the monochromator, between the monochromator and sample, between the sample and analyzer, and between the analyzer and detector. A \(\approx 20\) mg EuAl\({}_{2}\)Ge\({}_{2}\) crystal was mounted on the cold tip of an Advanced Research Systems closed-cycle refrigerator with a base temperature of 4 K. The crystal was aligned in the (\(HHL\)) scattering planes. The lattice parameters at base temperature are \(a=4.19(1)\) and \(c=7.27(1)\) Å. ARPES experiments were performed at the Electron Spectro Microscopy (ESM) 21-ID-1 beamline of the National Synchrotron Light Source II, USA. The beamline is equipped with a Scienta D30 electron analyzer, with base pressure better than \(\sim 1\times 10^{-11}\) mbar. Prior to the ARPES experiments, samples were cleaved inside an ultra-high vacuum (UHV) chamber at \(\sim 9\) K. All the measurements were performed using horizontally polarized light. The uncertainty in the temperature values for the ARPES measurements is \(\pm 2\) K. The Vienna _ab initio_ simulation package was used for the first-principles calculations [32; 33]. For the exchange and correlation energy/potential, we used the PBE functional [34]. The projector augmented-wave [35] method was used to represent the core electrons. The cut-off energy for the plane waves was set to 550 eV. A \(k\)-mesh of 14\(\times\)14\(\times\)7 (AFM phase) and 14\(\times\)14\(\times\)12 (PM phase) was used for the Brillouin-zone integration. Spin-orbit coupling (SOC) was considered in all calculations. The GGA + U method [36] was used to treat the correlation effects of the 4\(f\) states in Eu. Furthermore, WANNIER90 was used for the construction of the first-principles tight-binding Hamiltonian and constant-energy surfaces [37]. The \(s\) and \(p\) orbitals of Ge and Al and the \(s\), \(p\), \(d\), and \(f\) orbitals of Eu were used to construct maximally-localized Wannier functions. In the case of the PM phase, we treated the \(f\) electrons of Eu as core states. The WANNIERTOOLS package was used for the calculation of the \(Z_{2}\) topological number [38]. For the visualization of the Fermi surfaces, we used FermiSurfer [39].

## III Results and Discussion

### Zero-field neutron diffraction

Figure 1(a) shows zero-field neutron-diffraction scans along the (\(00L\)) direction in reciprocal-lattice units (r.l.u.)
at 6 K and 30 K, where reflections at half-integer \(L\) values are apparent at \(T=6\) K. For more clarity, Fig. 1(b) shows the difference between these two scans, where within experimental uncertainty, there is no evidence for other reflections associated with a modulated structure along the \(c\) axis. We also note that the intensities of the new peaks become weaker at larger \(L\) values, roughly following the falloff expected from the magnetic form factor of Eu\({}^{2+}\). Similar differences [i.e., \(I(6\) K)\(-I(30\) K)] for scans along (\(\frac{1}{2}\frac{1}{2}L\)) and (\(11L\)), shown in Figs. 1(c,d), respectively, do not reveal any magnetic peaks. Qualitatively, these newly-emerging Bragg reflections indicate a doubling of the unit cell along the \(c\) axis. These qualitative observations unequivocally establish that these reflections are associated with A-type AFM ordering with propagation vector \(\vec{\tau}=\left(0,0,\frac{1}{2}\right)\), consisting of layers of moments aligned ferromagnetically in the \(ab\) plane, with moments in adjacent planes along the \(c\) axis aligned antiferromagnetically. The proposed A-type AFM structure is shown in Fig. 1(f), where adjacent nearest-neighbor FM layers along the \(c\) axis are rotated by 180\({}^{\circ}\) with respect to each other. The direction of the FM moment within an Eu layer cannot be determined from neutron diffraction alone. Using published values, we obtain good agreement with lattice parameters; however, the peak intensities differ significantly from the calculated values due to strong absorption effects by Eu, which are not accounted for in our calculations. Nevertheless, we are able to confirm the A-type magnetic structure and obtain an estimate for the Eu ordered magnetic moment \(\mu=g\langle S\rangle\)\(\mu_{\rm B}=(6.5\pm 1)\)\(\mu_{\rm B}\) at \(T=6\) K by calculating the magnetic and chemical structure factors, where \(S\) is the spin magnetic quantum number, \(g\) is the spectroscopic-splitting factor, and \(\mu_{\rm B}\) is the Bohr magneton. We note that the large uncertainty in the evaluation of the ordered magnetic moment is mainly due to strong-absorption effects which were not accounted for. Within the error, the fitted value of \(\mu\) agrees with the expected value \(\mu=7\,\mu_{\rm B}\)/Eu using \(g=2\) and \(S=7/2\). Figure 1(e) shows the integrated intensity of the (0 0 \(\frac{1}{2}\)) magnetic peak as a function of temperature where we use a weighted power-law function by a Gaussian distribution of \(T_{\rm N}\) \[I_{(0\,0\,0.5)}(T)=C|1-T/T_{\rm N}|^{2\beta}\propto\mu^{2}, \tag{1}\] yielding \(T_{\rm N}=(27.3\pm 0.8)\) K and \(\beta=0.21\pm 0.01.\) The \(T_{\rm N}\) is in good agreement with the value \(T_{\rm N}=(27.5\pm 0.5)\) K obtained from the \(\chi(T)\) and \(C_{\rm p}(T)\) measurements below. ### Magnetic Susceptibility The inverse magnetic susceptibility \(\chi^{-1}(T)\) data measured under an applied field \(H=1\) kOe for both \(H\parallel ab\) and \(H\parallel c\) are shown in Figs. 2(a) and 2(b), respectively. The data for \(T\geq 50\) K for both field directions were fitted by the modified Curie-Weiss law \[\chi_{\alpha}(T)=\chi_{0}+\frac{C_{\alpha}}{T-\theta_{\rm pa}}\quad(\alpha\ = ab,\ c), \tag{2}\] where \(\chi_{0}\) is the temperature-independent contribution, \(C_{\alpha}\) is the Curie constant, and \(\theta_{\rm p}\) is the paramagnetic Weiss temperature. 
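A fit of this form is simple to reproduce; the sketch below (with illustrative variable names and placeholder data loosely based on the fitted values reported below) fits Eq. (2) to \(\chi(T)\) data for \(T\geq 50\) K with SciPy and converts the Curie constant to an effective moment via \(\mu_{\rm eff}\approx\sqrt{8C}\,\mu_{\rm B}\), valid for \(\chi\) in cm\(^3\)/mol and \(T\) in K.

```python
import numpy as np
from scipy.optimize import curve_fit


def modified_curie_weiss(T, chi0, C, theta_p):
    """Modified Curie-Weiss law of Eq. (2): chi(T) = chi0 + C / (T - theta_p)."""
    return chi0 + C / (T - theta_p)


def fit_susceptibility(T, chi, T_min=50.0):
    """Fit chi(T) for T >= T_min; returns chi0, C, theta_p, and mu_eff (mu_B/Eu)."""
    mask = T >= T_min
    (chi0, C, theta_p), _ = curve_fit(modified_curie_weiss, T[mask], chi[mask],
                                      p0=(0.0, 7.9, 25.0))
    mu_eff = np.sqrt(8.0 * C)  # effective moment in Bohr magnetons per Eu
    return chi0, C, theta_p, mu_eff


# Placeholder demonstration with synthetic data resembling the H || ab fit
T = np.linspace(50.0, 300.0, 100)
chi = modified_curie_weiss(T, -2.6e-4, 7.86, 24.3) + np.random.normal(0.0, 1e-3, T.size)
print(fit_susceptibility(T, chi))
```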
The Curie constant \(C_{\alpha}\) is given by \[C_{\alpha}=\frac{N_{\rm A}g_{\alpha}^{2}S(S+1)\mu_{\rm B}^{2}}{3k_{\rm B}}=\frac{N_{\rm A}\mu_{\rm eff,\alpha}^{2}}{3k_{\rm B}}, \tag{3a}\] where \(N_{\rm A}\) is Avogadro's number and the effective magnetic moment is given by \[\mu_{\rm eff,\alpha}=g_{\alpha}\sqrt{S(S+1)}\,\mu_{\rm B}. \tag{3b}\] The fits of the \(\chi_{\alpha}^{-1}(T)\) data by Eq. (2) are depicted in Figs. 2(a) and 2(b) for \(H\parallel ab\) and \(H\parallel c\), respectively, and the fitted parameters are listed in Table 1. The effective moments are close to the value 7.94 \(\mu_{\rm B}\)/Eu expected for Eu\({}^{2+}\) spins with \(S=7/2\) and \(g=2\). The positive values of the Weiss temperatures \(\theta_{\rm p\alpha}\) are consistent with the A-type AFM order revealed by the above zero-field neutron-diffraction measurements, where the in-plane FM interactions between the Eu spins are dominant over the interplane AFM interactions. The \(T\) dependences of the magnetic susceptibilities \(\chi\) of EuAl\({}_{2}\)Ge\({}_{2}\) measured in \(H=0.1\) kOe for the in-plane (\(H\parallel ab\)) and out-of-plane (\(H\parallel c\)) field directions are shown in Fig. 3(a). A sharp AFM transition is observed at \(T_{\rm N}=27.5(5)\) K, which is the same as reported earlier for polycrystalline EuAl\({}_{2}\)Ge\({}_{2}\) [40]. The anisotropy between \(\chi_{ab}\) and \(\chi_{c}\) above \(T_{\rm N}\) likely arises from a combination of magnetic-dipole and magnetocrystalline interactions. The \(\chi_{J}(T)\) data above \(T_{\rm N}\) for Heisenberg interactions in the absence of anisotropy are obtained as the average \[\chi_{J}(T\geq T_{\rm N})=\frac{1}{3}[2\chi_{ab}(T)+\chi_{c}(T)], \tag{4}\] which is plotted in Fig. 3(b). Then the data at \(T\leq T_{\rm N}\) are shifted vertically until they match the \(\chi_{J}(T\geq T_{\rm N})\) data at \(T_{\rm N}\) as shown. The \(\chi_{J,ab}\) data strongly decrease on cooling from \(T_{\rm N}\) to \(T\sim 5\) K, whereas the out-of-plane susceptibility \(\chi_{J,c}\) is less dependent on the temperature, signifying that the \(ab\) plane is the easy plane. This observation is in good agreement with the neutron-diffraction results revealing the A-type nature of the magnetic ground state with
Figure 1: (a) Zero-field neutron-diffraction pattern along (\(00L\)) of single-crystal EuAl\({}_{2}\)Ge\({}_{2}\) at 6 and 30 K, as indicated. The aluminum Bragg reflections marked on the figure originate from the sample holder. The magnetic Bragg reflections are obtained by subtracting the diffraction pattern at 30 K from the one at 6 K for (b) (\(00L\)), (c) \((\frac{1}{2}\frac{1}{2}L)\), and (d) (\(11L\)) scans. The difference patterns in (b) show clear magnetic peaks at half-integer \(L\) up to \(L=3.5\). No such peak is observed in (c,d) along the \((\frac{1}{2}\frac{1}{2}L)\) and (\(11L\)) directions. These observations are consistent with an A-type AFM state, _i.e._, the \(H=0\) ground state is such that the intraplane moments are ferromagnetically aligned in the \(ab\) plane while the moments in adjacent Eu planes along the \(c\) axis are aligned antiferromagnetically. Note that structure-factor calculations for this model indicate (\(11L\)) peaks at half-integer values of \(L\); we argue that their absence in (d) is due to the form factor of Eu\({}^{2+}\) at these relatively large momentum transfers.
(e) Integrated intensity as a function of temperature \(T\) of the (\(0\;0\;\frac{1}{2}\)) magnetic Bragg reflection fitted with a power-law to yield \(T_{\rm N}=(27.3\pm 0.8)\) K and \(\beta=0.21\pm 0.01\). (f) Chemical and A-type AFM ground-state structure of EuAl\({}_{2}\)Ge\({}_{2}\). Neutron-diffraction data are insufficient to determine the in-plane moment directions. Therefore, we arbitrarily show the in-plane moments pointing along the next-nearest-neighbor direction. \begin{table} \begin{tabular}{c c c c c} \hline \hline Field & \(\chi_{0}\) & \(C_{\alpha}\) & \(\mu_{\rm eff\alpha}\) & \(\theta_{\rm p\alpha}\) \\ direction & \(\left(10^{-4}\;\frac{\rm cm^{3}}{\rm mol}\right)\) & \(\left(\frac{\rm cm^{3}K}{\rm mol}\right)\) & \((\mu_{\rm B}/\rm Eu)\) & (K) \\ \hline H \(\parallel ab\) & \(-2.6(3)\) & 7.86(1) & 7.93(1) & 24.26(7) \\ H \(\parallel c\) & \(-1.9(3)\) & 7.99(1) & 7.99(1) & 21.86(7) \\ \hline \hline \end{tabular} \end{table} Table 1: The obtained Parameters from the fits of the data in Figs. 2(a) and 2(b) by Eq. (2). Listed parameters are the \(T\)–independent contribution to the magnetic susceptibility \(\chi_{0}\), Curie constant per mol \(C_{\alpha}\) in \(\alpha=ab,c\) directions, effective moment per Eu \(\mu_{\rm eff}(\mu_{\rm B}/\rm Eu)\approx\sqrt{8C}\) and Weiss temperature \(\theta_{\rm p\alpha}\) obtained from the \(\chi^{-1}(T)\) versus \(T\) data for \(H=1\) kOe. the moments aligned in the \(ab\) plane. However, below \(\sim 5\) K, both \(\chi_{J,c}\) and \(\chi_{J,ab}\) increase sharply, indicating the occurrence of an additional magnetic transition of unknown nature at \(T\sim 5\) K. Our neutron-diffraction measurements could not examine the additional transition as their low-\(T\) limit was 6 K. Here we utilize the molecular field theory (MFT) [41; 42] for \(c\)-axis helical antiferromagnets with the moments aligned in the \(ab\) plane with \(c\)-axis propagation vector \(k\) and interlayer spacing \(d\) for which \(kd\) is the turn angle between moments in adjacent layers. The in-plane magnetic susceptibility \(\chi_{Jab}(T)\) associated with Heisenberg spins and spin interactions \(J\) for \(T\leq T_{\rm N}\) and no anisotropy can be written as \[\frac{\chi_{J,ab}(T\leq T_{\rm N})}{\chi_{J}(T_{\rm N})}=\frac{(1+\tau^{*}+2f+ 4B^{*})(1-f)/2}{(\tau^{*}+B^{*})(1+B^{*})-(f+B^{*})^{2}},\] (5a) where \[f=\theta_{\rm p\,ave}/T_{\rm N}, \tag{5b}\] \[B^{*}=2(1-f)\cos(kd)\left[1+\cos(kd)\right]-f,\] (5c) \[t=\frac{T}{T_{\rm N}},\quad\tau^{*}(t)=\frac{(S+1)t}{3B^{\prime}_{S}(y_{0})}, \quad y_{0}=\frac{3\bar{\mu}_{0}}{(S+1)t},\] (5d) the ordered moment versus \[T\] in \[H=0\] is denoted by \[\mu_{\rm sat}=gS\mu_{\rm B}=7\,\mu_{\rm B}\] here is determined by numerically solving the self-consistency equation \[\bar{\mu}_{0}=B_{S}(y_{0}),\] (5e) \[B^{\prime}_{S}(y_{0})=\left[dB_{S}(y)/dy\right]|_{y=y_{0}}\], and the Brillouin function \[B_{S}(y)\] is \[B_{S}(y)=\frac{1}{2S}\left\{(2S+1){\rm coth}\left[(2S+1)\frac{y}{2}\right]-{ \rm coth}\left(\frac{y}{2}\right)\right\}. \tag{5f}\] Figure 3: (a) Temperature dependence of the magnetic susceptibilities measured for \(H=0.1\) kOe with \(H\parallel ab\) (black squares) and \(H\parallel c\) (red circles). The upturns in the \(\chi_{ab}(T)\) and \(\chi_{c}(T)\) data below \(\sim 5\) K may be associated with an additional magnetic ordering of unknown type. (b) Spherically-averaged Heisenberg magnetic susceptibility \(\chi_{J}(T)\) in the PM state with \(T\geq T_{\rm N}\) obtained using Eq. (4) (filled blue triangles). 
The blue curve connects the data points. The \(\chi_{ab}(T)\) and \(\chi_{c}(T)\) data in (a) for \(T\leq T_{\rm N}\) are respectively shifted vertically to match the values at \(T_{\rm N}\) to the value \(\chi_{J}(T=T_{\rm N})=0.96\) cm\({}^{3}\)/mol. The \(\chi_{J,ab}(T\leq T_{\rm N})\) for A-type AFM order predicted by Eqs. (5) for \(kd=\pi\) rad and \(f=\theta_{\rm p\,ave}/T_{\rm N}=0.853\) is shown as the green curve. For A-type ordering with the moments aligned in the \(ab\) plane, one theoretically expects \(\chi_{J,ab}(0\ {\rm K})/\chi_{J}(T_{\rm N})=1/2\), close to the observed value. Figure 2: Inverse magnetic susceptibility as a function of temperature \(\chi^{-1}(T)\) measured for \(H=1\) kOe, when (a) \(H\parallel ab\) and (b) \(H\parallel c\). Using the value of \(f\) calculated from the values of \(\theta_{\rm p,ave}\) and \(T_{\rm N}\) from Table 1, the calculated \(\chi_{J,ab}(T)\) for \(T\leq T_{\rm N}\) is shown by the green curve in Fig. 3(b). As seen in the figure, the calculated curve deviates somewhat from the experimental \(\chi_{J,ab}(T)\) data, likely due to the additional higher-\(T\) magnetic precursor contributions of the anticipated low-\(T\) order below 5 K. According to the MFT [41; 42], at \(T=0\) we have \[\frac{\chi_{J,ab}(T=0)}{\chi_{J,ab}(T_{\rm N})}=\frac{1}{2[1+2\,\cos(kd)+2\, \cos^{2}(kd)]}. \tag{6}\] Thus, for an A-type AFM, where the turn angle between adjacent \(ab\)-plane FM layers is \(kd\to 180^{\circ}\), one expects \(\chi_{J,ab}(T=0)/\chi_{J,ab}(T_{\rm N})\to 1/2\), close to the value in Fig. 3(b). The \(\chi(T)\) measured at several applied magnetic fields \(H\) are shown in Figs. 4(a) and 4(b) for \(H\parallel ab\) and \(H\parallel c\), respectively. Interestingly, although the out-of-plane magnetic susceptibility \(\chi_{c}\) remain almost independent of \(H\) for \(H\leq 10\) kOe, the in-plane susceptibility \(\chi_{ab}\) changes significantly with \(H\) for \(T<T_{\rm N}\) and \(H\) up to 5 kOe. Similar behavior was also observed for the trigonal A-type AFM compounds EuMg\({}_{2}\)Bi\({}_{2}\), EuMg\({}_{2}\)Sb\({}_{2}\), EuSn\({}_{2}\)As\({}_{2}\), and tetragonal EuGa\({}_{4}\) with the moments aligned in the \(ab\) plane [23; 10; 20; 43]. We have argued that the A-type ground state spin structure of these materials consist of three-fold (for trigonal) or four-fold (for tetragonal) AFM domains associated with in-plane magnetic anisotropy. As Eu\({}^{2+}\) moments with \(L=0\) provide negligible single-ion anisotropy, magnetic dipole interaction and other magnetocrystalline anisotropy energy may play a critical role for the formation of AFM domains in these materials. The \(H\)-dependent change in the \(\chi_{ab}(T)\) behavior is due to the reorientation of the spins with in-plane field \(H_{\rm ab}\) up to a critical field \(H_{c1}\), where all the spins in different domains become perpendicular to the in-plane applied field direction. The spins tend to align along the field direction for \(H>H_{c1}\), as expected for a collinear antiferromagnet. ### Isothermal magnetization versus applied magnetic field measurements #### iii.3.1 Overview The evolution of the ground-state spin configuration in EuAl\({}_{2}\)Ge\({}_{2}\) is further probed by isothermal magnetization versus applied magnetic field \(M(H)\) measurements. The \(M(H)\) behavior in the hysteresis mode for \(-5.5\;\rm T\leq H\leq 5.5\;\rm T\) measured at \(T=2\) K is shown in Fig. 5(a). 
No magnetic hysteresis is observed for fields applied either in the \(ab\) plane or along the \(c\) axis. Figures 5(b) and 5(c) show the \(M(H)\) behavior measured at different temperatures for \(H\parallel ab\) (\(M_{ab}\)) and \(H\parallel c\) (\(M_{c}\)), respectively, for our full field range 0-55 kOe. Here both \(M_{ab}\) and \(M_{c}\) appear to increase linearly with \(H\) and saturate above the respective critical fields \(H_{ab}^{c}=37.5(5)\) kOe and \(H_{c}^{c}=52.5(5)\) kOe with a saturation moment \(\mu_{\rm sat}=7.0(5)\,\mu_{\rm B}/{\rm Eu}\) at \(T=2\,\rm K\). The measured \(\mu_{\rm sat}\) value agrees with \(\mu_{\rm sat}=gS\mu_{\rm B}=7\,\mu_{\rm B}/{\rm Eu}\) expected for Eu\({}^{2+}\) ions with spectroscopic-splitting factor \(g=2\) and spin \(S=7/2\). The significant difference between the critical-field values for the two field directions indicates the presence of a considerable magnetic anisotropy in the system with \(ab\)-plane ordering preferred over \(c\)-axis ordering in the A-type AFM structure, as also observed in the magnetic susceptibility behavior in Fig. 4. Figures 5(b) and 5(c) show that the \(H^{c}\) values decrease with increasing temperature for \(T<T_{\rm N}\), as expected. The \(M(H)\) data measured at \(T=50\) K, greater than \(T_{\rm N}=27.5\) K, are also nonlinear for both field directions, suggesting the presence of short-range dynamic magnetic correlations in EuAl\({}_{2}\)Ge\({}_{2}\) above \(T_{\rm N}\). #### iii.3.2 Low-field \(M_{ab}(H)\) data The \(M_{ab}(H)\) data at \(T=2\) K \(\ll T_{\rm N}=27.5\) K in Fig. 5(a), measured over our maximum field range below \(T_{\rm N}\), appear to increase linearly up to \(H_{ab}^{c}=37.5(5)\) kOe, above which they saturate. However, a careful study at low fields revealed that \(M_{ab}(H)\) at \(T=2\) K exhibits positive curvature below \(H\lesssim 2.5\) kOe as shown in Fig. 5(d). The positive curvature is more clearly reflected in the \(dM_{ab}/dH\) versus \(H\) data at \(T=2\) K plotted in Fig. 5(e), which exhibit a broad peak at \(H_{\rm c1}=2.5(1)\) kOe. On the other hand, no nonlinearity is observed in the \(M_{ab}(H)\) data at \(T>T_{\rm N}\) or in the \(M_{c}(H)\) data at any temperature. A similar behavior of \(M_{ab}(H)\) was observed by us at \(T\approx 2\) K, far below the respective \(T_{\rm N}\), for other Eu-based trigonal compounds EuMg\({}_{2}\)Bi\({}_{2}\) and EuMg\({}_{2}\)Sb\({}_{2}\) containing triangular Eu layers, as well as for the tetragonal compound EuGa\({}_{4}\) containing square-lattice Eu layers [23; 10; 28; 43; 21], where each compound exhibits A-type AFM order with the moments aligned in the \(ab\) plane as in EuAl\({}_{2}\)Ge\({}_{2}\). Figure 4: Magnetic susceptibility \(\chi_{\alpha}(T)\), \(\alpha=ab,c\), at different applied magnetic fields for (a) \(H\parallel ab\) and (b) \(H\parallel c\). Although \(\chi_{c}(T)\) is only weakly dependent on \(H\) for \(T\leq T_{\rm N}\), \(\chi_{ab}(T)\) is strongly \(H\)-dependent for fields up to \(H=10\) kOe. #### iii.1.3 Theoretical modeling of the low-field \(M_{ab}(H)\) data #### iii.1.4 Overview In order to model the nonlinear low-field \(M_{ab}(H)\) data at \(T\ll T_{\rm N}\) for EuMg\({}_{2}\)Bi\({}_{2}\), EuMg\({}_{2}\)Sb\({}_{2}\), and EuGa\({}_{4}\), we previously proposed that the A-type AFM ground state of each contains threefold or fourfold A-type AFM domains of moments for the trigonal and tetragonal spin systems, respectively. 
In the trigonal case, the three domains are associated with a weak \(ab\)-plane magnetic anisotropy energy \[E_{\rm anis}=K_{3}\sin(3\phi) \tag{7}\] with minima in the \(ab\)-plane azimuthal angle \(\phi\) at \(\pi/2\), \(-5\pi/6\), and \(-\pi/6\) rad, where \(K_{3}\) is the anisotropy constant. Thus in \(H=0\), the collinear moments in adjacent layers of EuAl\({}_{2}\)Ge\({}_{2}\) form three domains, with the moments oriented along these three minima as shown in Fig. 6(a). Upon application of an \(ab\)-plane magnetic field \({\bf H}_{x}\), the antiparallel spins in domains B and C initially rotate in a direction to become perpendicular to \({\bf H}\) at \(H_{\rm c1}\), as shown by the arrows in Fig. 6(a) associated with an angular deviation \(\Delta\phi\) for a particular value of the field \(H_{x}\). This happens because, for a collinear antiferromagnet at \(T=0\) K, the magnetic susceptibility parallel to the moments is zero, whereas if the moments are perpendicular to the field the magnetic susceptibility \(\chi_{\perp}=\chi(T_{\rm N})\) is nonzero according to molecular-field theory (MFT) [42]; hence the lowest energy occurs if the moments are perpendicular to \({\bf H}_{x}\), as discussed further below. With a sufficiently large \(H_{x}\equiv H_{\rm c1}\), all moments are oriented perpendicular to \(\mathbf{H}_{x}\) apart from a small canting \(\lesssim 1^{\circ}\) towards \(\mathbf{H}_{x}\) that is responsible for the measured magnetization at this field. As discussed quantitatively below, the positive curvature in \(M_{ab}(H)\) for \(H_{x}<H_{\mathrm{c1}}\) as seen in Fig. 5(f) arises from this magnetic-field-induced reorientation of the moments in domains B and C. At fields larger than \(H_{\mathrm{c1}}\), according to MFT [42], \(M_{ab}(H)\) increases linearly up to the critical field \(H_{ab}^{\mathrm{c}}\) at which all moments are aligned parallel to \(\mathbf{H}_{x}\) and hence the magnetization saturates to the value \(7\mu_{\mathrm{B}}/\mathrm{Eu}\), in agreement with the experimental data in Fig. 5(a). Figure 5: (a) Magnetic field dependence of isothermal magnetization \(M(H)\) in the hysteresis mode for \(-5.5~{}{\rm T}\leq H\leq 5.5~{}{\rm T}\) measured at \(T=2\) K for both \(H\parallel ab\) and \(H\parallel c\). \(M(H)\) behavior measured at different temperatures for (b) \(H\parallel ab\) and (c) \(H\parallel c\). (d) Low-field \(M(H)\) data showing nonlinearity in the \(M_{ab}(H)\) behavior for \(T<T_{\rm N}\), whereas \(M_{c}(H)\) is linear down to the lowest measured temperature \(2\) K. This nonlinearity is clearly reflected in the \(dM/dH\) data shown in (e). (f) The experimental magnetization \(M_{ab}(H)\) at \(T=2\) K along with the theoretical prediction for \(T=0\) K with \(H_{\rm c1}\approx 2.5\) kOe. The dashed line is a guide to the eye showing the extrapolated high-field linear behavior. The \(M_{ab}(H)\) data exhibit positive curvature for \(H<H_{\rm c1}\) as predicted by our theory, but the origin of the quantitative difference between experiment and theory is not clear at present. #### b.3.2 Calculations Here we summarize the development of the model in Ref. [10] for EuMg\({}_{2}\)Bi\({}_{2}\) and EuMg\({}_{2}\)Sb\({}_{2}\) as applied to EuAl\({}_{2}\)Ge\({}_{2}\). In the small fields \(0\leq H_{x}\leq H_{\mathrm{c1}}\), the angles of the moments in domains A, B, and C in Fig. 
6(a) with respect to the positive \(x\) axis are respectively given by \[\phi_{\mathrm{A}} = \frac{\pi}{2},\] \[\phi_{\mathrm{B}} = -\frac{5\pi}{6}+\Delta\phi\quad(0\leq\Delta\phi\leq\pi/3), \tag{8}\] \[\phi_{\mathrm{C}} = -\frac{\pi}{6}-\Delta\phi\quad(0\leq\Delta\phi\leq\pi/3).\] The anisotropy energy averaged over the moments in the three domains in the field range \(0\leq H_{x}\leq H_{\mathrm{c1}}\) using Eqs. (7) and (8) is \[E_{\mathrm{anis\,ave}}=-\frac{K_{3}}{3}[1+2\cos(3\Delta\phi)]. \tag{9}\] The magnetic energy of a domain in the regime \(0\leq H_{x}\leq H_{\mathrm{c1}}\) is given by \[E_{\mathrm{mag}}=-\chi_{\perp}H_{x}^{2}\sin^{2}(\phi),\] (10a) where \(\chi_{\perp}\) is the \(ab\)-plane magnetic susceptibility at \(T=0\) K when all moments are perpendicular to \(\mathbf{H}_{x}\), _i.e._, when \(\phi=\pi/2\). Summing over the angles of the moments in the three domains in Eq. (8) and dividing by 3 gives the average magnetic energy as \[E_{\mathrm{mag\,\,ave}}=-\frac{\chi_{\perp}H_{x}^{2}}{3}\left[1+2\sin^{2}\left(\frac{\pi}{6}+\Delta\phi\right)\right].\] (10b) The total average energy \(E_{\mathrm{ave}}=E_{\mathrm{anis\,ave}}+E_{\mathrm{mag\,ave}}\) is given by the sum of Eqs. (9) and (10b). Then normalizing \(E_{\mathrm{ave}}\) by \(K_{3}\) gives \[\frac{E_{\mathrm{ave}}}{K_{3}} = -\frac{1}{3}\bigg{\{}[1+2\cos(3\Delta\phi)]+\frac{\chi_{\perp}H_{x}^{2}}{K_{3}}\left[1+2\sin^{2}\left(\frac{\pi}{6}+\Delta\phi\right)\right]\bigg{\}}. \tag{11}\] Minimizing \(E_{\mathrm{ave}}/K_{3}\) with respect to \(\Delta\phi\) for a given value of the quantity \(\chi_{\perp}H_{x}^{2}/K_{3}\) yields the relationship between \(\Delta\phi\) and \(H_{x}\) given by \[3\csc\left(\frac{\pi+6\Delta\phi}{3}\right)\sin(3\Delta\phi)=\frac{\chi_{\perp} H_{x}^{2}}{K_{3}},\] (12a) which yields \[\frac{\chi_{\perp}H_{x}^{2}}{K_{3}}(\Delta\phi=0) = 0, \tag{12b}\] \[\frac{\chi_{\perp}H_{\mathrm{c1}}^{2}}{K_{3}}(\Delta\phi=\pi/3) = 9/2. \tag{12c}\] Figure 6: (a) Reorientation of the Eu magnetic moments in the three trigonal \(ab\)-plane antiferromagnetic domains in a small \(ab\)-plane magnetic field \(H_{x}<H_{\mathrm{c1}}\). Here, the two oppositely-directed arrows in each domain represent the moment orientations in adjacent layers of the A-type AFM structure in small fields. The arrows indicate the direction and increment \(\Delta\phi\) of rotation of the moments in domains B and C towards the vertical orientation, perpendicular to the applied field \(\mathbf{H}_{x}\). The moments in each domain remain antiparallel to each other for \(H_{x}<H_{\mathrm{c1}}\) apart from a small canting (\(\lesssim 1^{\circ}\)) towards the magnetic field direction that gives rise to the measured magnetization in this field range. (b) Orientation of the moments at the critical field \(H_{x}=H_{\mathrm{c1}}\) where all moments are perpendicular to \(\mathbf{H}_{x}\) except for the small canting towards \(\mathbf{H}_{x}\). At higher fields, all moments cant toward \(\mathbf{H}_{x}\) for \(H_{\mathrm{c1}}<H_{x}<H_{ab}^{\mathrm{c}}\) until at the critical field \(H_{ab}^{\mathrm{c}}\) all moments are aligned ferromagnetically in the direction of \(\mathbf{H}_{x}\). Equation (12c) allows the anisotropy constant \(K_{3}\) in EuAl\({}_{2}\)Ge\({}_{2}\) to be calculated from the known values of the molar \(\chi_{\perp}=\chi_{J}(T_{\rm N})=0.96\) cm\({}^{3}\)/mol from Fig. 3(b) and \(H_{\rm c1}=2.5\) kOe according to \[K_{3}=\frac{\chi_{\perp}H_{\rm c1}^{2}}{(9/2)N_{\rm A}}=1.4\times 10^{-3}\ {\rm meV/Eu}, \tag{13}\] where \(N_{\rm A}\) is Avogadro's number. 
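A minimal numerical sketch of Eqs. (12) and (13) is given below (illustrative Python, not part of the original analysis): it inverts Eq. (12a) to obtain \(\Delta\phi(H_{x})\) for a few fields below \(H_{\rm c1}\) and evaluates \(K_{3}\) from the quoted values \(\chi_{\perp}=0.96\) cm\({}^{3}\)/mol and \(H_{\rm c1}=2.5\) kOe; the cgs-to-meV conversion factor is an assumption stated in the comments.

```python
import numpy as np
from scipy.optimize import brentq

chi_perp = 0.96           # cm^3/mol, molar chi_J(T_N) from Fig. 3(b)
H_c1 = 2.5e3              # Oe (= 2.5 kOe)
N_A = 6.02214076e23       # Avogadro's number
ERG_TO_MEV = 6.241509e14  # assumed conversion: 1 erg = 6.2415e14 meV

# Eq. (13): K3 = chi_perp*H_c1^2 / [(9/2)*N_A]; chi*H^2 is an energy per mole in cgs units
K3_erg = chi_perp * H_c1**2 / (4.5 * N_A)          # erg per Eu
print(f"K3 = {K3_erg * ERG_TO_MEV:.1e} meV/Eu")    # ~1.4e-3 meV/Eu, as in Eq. (13)

def lhs_12a(dphi):
    """Left-hand side of Eq. (12a); increases from 0 to 9/2 on [0, pi/3]."""
    return 3.0 * np.sin(3.0 * dphi) / np.sin((np.pi + 6.0 * dphi) / 3.0)

def delta_phi(Hx):
    """Domain rotation angle dphi(Hx) from Eq. (12a), for 0 <= Hx < H_c1 (Oe)."""
    rhs = 4.5 * (Hx / H_c1)**2      # = chi_perp*Hx^2/K3, by Eqs. (12c) and (13)
    if rhs == 0.0:
        return 0.0
    return brentq(lambda d: lhs_12a(d) - rhs, 1e-12, np.pi / 3 - 1e-9)

for frac in (0.25, 0.5, 0.75, 0.99):
    print(f"Hx = {frac:4.2f} H_c1 -> dphi = {np.degrees(delta_phi(frac * H_c1)):5.1f} deg")
# dphi grows from 0 toward 60 deg (pi/3) as Hx -> H_c1, consistent with Eqs. (12b) and (12c).
```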
For comparison, \(K_{3}=6.5\times 10^{-5}\) meV/Eu in trigonal EuMg\({}_{2}\)Bi\({}_{2}\)[10], \(K_{3}=1.8\times 10^{-5}\) meV/Eu in trigonal EuMg\({}_{2}\)Sb\({}_{2}\)[10], and \(K_{4}=1.4\times 10^{-3}\) meV/Eu in tetragonal EuGa\({}_{4}\)[43]. For \(0\leq H_{x}\leq H_{\rm c1}\), the magnetization \(M_{x}\) of the collinear moments in a domain at \(T=0\) versus \(H_{x}\) arises only from the component of \({\bf H}_{x}\) perpendicular to the moments in the ferromagnetically-aligned layers of the A-type AFM structure, because the field component parallel to the moments gives no contribution at \(T=0\) K. The normalized magnetization averaged over the three domains using Eqs. (8) is \[\frac{M_{x\,{\rm ave}}(\Delta\phi)}{M_{x}(H_{\rm c1})}=\frac{1}{3}\left[1+2\sin^{2}\left(\frac{\pi}{6}+\Delta\phi\right)\right]. \tag{14}\] Solving for \(\Delta\phi(H_{x})\) using Eq. (12a) and the known values of \(K_{3}\) and \(M_{x}(H_{\rm c1})\), a plot of \(M_{x\,{\rm ave}}\) versus \(H_{x}\) over the range \(0\leq H_{x}\leq H_{\rm c1}\) is shown in Fig. 5(f). At higher fields \(H_{\rm c1}\leq H\leq H_{ab}^{\rm c}\), one has \(M(H_{x})=\chi_{\perp}H_{x}\), above which the magnetization saturates. ### Heat capacity The temperature dependence of the zero-field heat capacity \(C_{\rm p}(T)\) of EuAl\({}_{2}\)Ge\({}_{2}\) is shown in Fig. 7(a). A clear \(\lambda\)-type peak is observed in the \(C_{\rm p}(T)\) data at \(T_{\rm N}=27.5\) K, indicating the second-order nature of the AFM transition. The peak position shifts to lower temperature with increasing applied field, as shown in the inset of Fig. 7(a). The \(C_{\rm p}(T)\) tends to saturate at a value of \(\approx 124\) J/mol K at \(T=300\) K, close to the classical Dulong-Petit high-\(T\) limit \(3nR=124.71\) J/mol K, where \(n=5\) is the number of atoms per formula unit and \(R\) is the molar gas constant. The molar \(C_{\rm p}(T)\) data were fitted by an electronic contribution \(\gamma T\) plus the Debye lattice heat-capacity model according to \[C_{\rm p}(T) = \gamma T+nC_{\rm V\,Debye}(T), \tag{15}\] \[C_{\rm V\,Debye}(T) = 9R\left(\frac{T}{\Theta_{\rm D}}\right)^{3}\int_{0}^{\Theta_{\rm D}/T}\frac{x^{4}e^{x}}{(e^{x}-1)^{2}}dx,\] where \(\gamma\) is the Sommerfeld electronic specific-heat coefficient and \(\Theta_{\rm D}\) is the Debye temperature. An accurate Padé approximant expression for \(C_{\rm V\,Debye}(T)\)[44] was used for the fit. The fit is shown by the black curve in Fig. 7(a), where \(\gamma=21(2)\) mJ/mol K\({}^{2}\) and \(\Theta_{\rm D}=332(2)\) K. The \(\gamma\) value is significantly larger than the value of \(6(1)\) mJ/mol K\({}^{2}\) estimated from the theoretical density of states at the Fermi energy \(D(E_{\rm F})\) given below. The enhancement may be due to electron-electron and/or electron-phonon interactions. Although the AFM ordering temperature of EuAl\({}_{2}\)Ge\({}_{2}\) is \(T_{\rm N}=27.5\) K, the \(C_{\rm p}(T)\) data exhibit a positive deviation from the fit in Fig. 7(a) for the electronic and lattice contributions up to \(\sim 80\) K, indicating the presence of short-range magnetic correlations up to \(\sim 80\) K. The magnetic contribution \(C_{\rm mag}(T)\) to the heat capacity is obtained by subtracting the electronic and lattice contributions from the measured \(C_{\rm p}(T)\) data using the above fit and is shown as the red symbols in Fig. 7(b). The \(C_{\rm mag}(T)\) remains finite for \(T_{\rm N}<T\lesssim 80\) K due to the presence of short-range dynamic magnetic correlations, in accordance with the observed nonlinear \(M(H)\) behavior in Figs. 
5(b) and 5(c) for \(T>T_{\rm N}\) discussed earlier. Figure 7: (a) Temperature dependence of the zero-field \(C_{\rm p}(T)\) for EuAl\({}_{2}\)Ge\({}_{2}\) (filled red circles) along with a fit by Eq. (15) (solid black curve). Inset: \(C_{\rm p}\) vs \(T\) in magnetic fields \(H\) from 0 to 7 T. (b) Plot of \(C_{\rm mag}/T\) vs \(T\) in \(H=0\) below 100 K (filled red circles, left ordinate) and the corresponding magnetic entropy \(S_{\rm mag}\) vs \(T\) (right ordinate) calculated from the \(C_{\rm mag}(T)/T\) data using Eq. (17). Also shown as a blue curve is \(C_{\rm mag}(T)/T\) calculated for \(S=7/2\) and \(T_{\rm N}=27.4\) K using the molecular-field theory prediction in Eq. (16). The magnetic entropy \(S_{\rm mag}(T)\) calculated using Eq. (17) is plotted as the green triangles with the scale on the right ordinate. In Fig. 7(b), we have also shown the theoretical \(C_{\rm mag}(T)/T\) behavior based on the MFT [42] for this system with \(S=7/2\) as the blue line. According to MFT, the molar \(C_{\rm mag}(t)\) is given by \[C_{\rm mag}(t)=R\frac{3S\bar{\mu}_{0}^{2}(t)}{(S+1)t\left[\frac{(S+1)t}{3B^{\prime}_{S}(y_{0})}-1\right]}, \tag{16}\] where the symbols are defined in Eqs. (5). The MFT prediction below \(T_{\rm N}\) in Fig. 7(b) does not agree well with the data, although the overall shapes below \(T_{\rm N}\) are similar. In this regard we must keep in mind the presence of the additional transition below \(\sim 5\) K noted above and also the presence of substantial short-range magnetic correlations above \(T_{\rm N}\). The temperature dependence of the magnetic entropy \(S_{\rm mag}(T)\) is calculated using the experimental data (red symbols) in Fig. 7(b) and the relation \[S_{\rm mag}(T)=\int_{0}^{T}\frac{C_{\rm mag}(T^{\prime})}{T^{\prime}}dT^{\prime}, \tag{17}\] as shown by the green symbols with the scale on the right ordinate of Fig. 7(b). The \(S_{\rm mag}(T)\) saturates at \(T>80\) K to a value of \(\approx 18\) J/mol K, which is comparable with the theoretical saturation entropy \(S_{\rm mag}=R\ln(2S+1)=17.29\) J/mol K for Eu\({}^{2+}\) ions with \(S=7/2\). The release of the entropy at temperatures higher than \(T_{\rm N}\) is due to short-range magnetic correlations above \(T_{\rm N}\), as indicated by the \(C_{\rm mag}(T)/T\) vs \(T\) data in Fig. 7(b), as also previously found in other Eu- and Gd-based \(S=7/2\) compounds [20; 23; 28; 45; 46]. ### Electrical resistivity While the general trend of the electrical resistivity \(\rho\) in the paramagnetic state of EuAl\({}_{2}\)Ge\({}_{2}\) is a metallic decrease on cooling below room temperature as illustrated in the inset of Fig. 8, anomalous behavior is observed on approaching \(T_{\rm N}\) from above as shown in the main panel. In particular, the resistivity develops significant positive curvature from \(\sim 80\) K down to \(T_{\rm N}=27\) K, corresponding to the development of dynamic short-range magnetic correlations observed in the heat capacity data in Fig. 7. Loss of spin-disorder scattering due to long-range AFM ordering leads to the rapid decrease in the resistivity on cooling below \(T_{\rm N}\). #### iii.5.1 Electrical resistivity in magnetic fields \(H\parallel c\) axis In Fig. 9 we show the field-dependent resistivity, measured in magnetic fields parallel to the crystal \(c\) axis. Measurements were taken at characteristic temperatures of 60 K (in the paramagnetic state with weak magnetic correlations, cyan line), at 33 K in the correlated paramagnet state (purple line), and at 20 K (green line) and 5 K (black line) in the A-type AFM state. 
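Before turning to the resistivity data, the MFT heat-capacity and entropy analysis above can be reproduced with the short sketch below (an illustrative reimplementation for \(S=7/2\) under the stated MFT assumptions, not the code behind Fig. 7): it evaluates \(C_{\rm mag}(t)\) of Eq. (16) and integrates \(C_{\rm mag}/T\) as in Eq. (17), approximately recovering the saturation entropy \(R\ln(2S+1)\approx 17.3\) J/mol K at \(T_{\rm N}\).

```python
import numpy as np
from scipy.optimize import brentq

S, R = 7/2, 8.314462618     # spin and molar gas constant (J/mol K)
a = 2*S + 1

def B_S(y):
    """Brillouin function, Eq. (5f)."""
    return (a / np.tanh(a*y/2) - 1.0 / np.tanh(y/2)) / (2*S)

def dB_S(y):
    """Analytic derivative B'_S(y) appearing in Eq. (5d)."""
    return (1.0/np.sinh(y/2)**2 - a**2/np.sinh(a*y/2)**2) / (4*S)

def C_mag(t):
    """MFT magnetic heat capacity, Eq. (16), at reduced temperature t = T/T_N < 1."""
    mu = brentq(lambda m: m - B_S(3*m/((S + 1)*t)), 1e-4, 1.0)   # self-consistency, Eq. (5e)
    y0 = 3*mu/((S + 1)*t)
    tau = (S + 1)*t/(3*dB_S(y0))                                  # tau*(t), Eq. (5d)
    return R * 3*S*mu**2 / ((S + 1)*t*(tau - 1.0))

t = np.linspace(0.05, 0.999, 400)
C = np.array([C_mag(ti) for ti in t])
S_mag_TN = np.trapz(C/t, t)                                       # Eq. (17) in reduced units
print(f"MFT jump at T_N: {C[-1]:.1f} J/mol K;  S_mag(T_N) = {S_mag_TN:.1f} J/mol K "
      f"vs R*ln(2S+1) = {R*np.log(2*S + 1):.2f}")
```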
Magnetization versus field measurements at 5 K and 20 K in this configuration, Fig. 5(c) above, show a linear increase at low fields and saturation at fields at about 5 T and 3 T, respectively, in very good agreement with the features seen in the \(\rho_{a}(H_{c})\) curves. At 20 K the resistivity decreases above 3 T, evidencing the suppression of spin-disorder scattering. At \(T=5\) K, the \(\rho(H)\) curve shows a slope change at \(\sim 5\) T. For comparison we show resistivity data measured at 5 K in the \(H\parallel ab\) configuration, revealing a much clearer feature at the saturation field of \(\approx 3.5\) T (red curve in Fig. 9). Note that in the paramagnetic state at 60 K, the re Figure 8: Temperature \(T\)-dependent in-plane electrical resistivity \(\rho\) of four EuAl\({}_{2}\)Ge\({}_{2}\) crystals in \(H=0\) T below 100 K. The inset shows the full temperature dependence of crystal #A up to room temperature. Figure 9: In-plane resistivity \(\rho\) of EuAl\({}_{2}\)Ge\({}_{2}\) crystal #A in magnetic fields in the \(H\parallel c\) configuration. Measurements were taken at temperatures of 60 K in the paramagnetic state during the initial development of magnetic correlations (cyan line), at 33 K in the more-correlated paramagnetic state (purple line), and at 20 K (green line) and 5 K (black line) in the A-type AFM state. For reference we show data taken in the \(H\parallel ab\) configuration at 5 K (red line) for which the critical field is about 3.5 T from Fig. 5(c). sistivity in Fig. 9 increases monotonically with magnetic field, close to the \(\rho\sim H^{2}\) dependence expected for weak-field orbital magnetoresistance [47]. The symmetry of the curve with respect to the sign of the magnetic field suggests minimal contribution of a spurious Hall effect signal in the resistivity measurements. In the correlated paramagnet state at 33 K the resistivity decreases with field up to a field of \(\sim 6\) T, due to field-induced suppression of spin-disorder scattering. Positive magnetoresistance is restored in the spin-polarized state above 6 T. #### iii.6.2 Electrical resistivity in magnetic fields \(H\parallel ab\) plane In Fig. 10 we show the evolution of the temperature-dependent resistivity of EuAl\({}_{2}\)Ge\({}_{2}\) with magnetic field applied parallel to the conducting \(ab\) plane. This field effectively alters the interplane alignment of the ferromagnetic planes in the type A antiferromagnet with respect to the field, as discussed in Sec. III.3. A strong enough magnetic field of 1 T (red curve) suppresses the pre-transition resistivity increase and brings the sharp feature observed in zero field at \(T_{\rm N}=27\) K to somewhat lower temperatures. With a further field increase to 2 T (green curve), the sharp feature at \(T_{\rm N}\) is smeared and transforms into a broad crossover. It shifts to higher temperatures at 3 T (blue curve) and becomes hard to distinguish at higher fields, clearly showing the importance of the spin-polarized state for its observation. Figure 11 shows the field-dependent resistivity measured in magnetic fields parallel to the sample \(ab\) plane. Measurements were taken at characteristic temperatures of 60 K in the paramagnetic state above magnetic correlations development (blue line), at 33 K in the correlated paramagnetic state (green line), and at 20 K (red line) and 5 K (black line) in the A-type AFM state. Magnetization measurements at 5 K and 20 K in this configuration, Fig. 5(b), show positive curvature at the lowest fields, zoomed in Fig. 
5(d), followed by a linear increase and saturation at fields at about 3.5 T and 2.5 T, respectively. This is in very good agreement with the features seen in \(\rho(H)\) curves. At 20 K the resistivity decreases above 2 T, reaches a minimum at 3 T and increases on further field increase. Note a tiny resistivity increase for the 5 K and 20 K curves, presumably related to magnetic-moment rotations as discussed above in Sec. III.3. ### Electronic structure from ARPES measurements and DFT calculations In order to understand the interplay of magnetism and electronic structure in EuAl\({}_{2}\)Ge\({}_{2}\), ARPES measurements have been performed at different temperatures, with a particular emphasis on the temperature range bridging \(T_{\rm N}\). The experimentally-observed electronic structure was also compared with the theoretical electronic structure by density-functional-theory (DFT)-based calculations. Figure 12(a) shows the ARPES spectrum of EuAl\({}_{2}\)Ge\({}_{2}\) along the \(\Gamma-{\rm K-M-\Gamma}\) path, measured in the AFM phase at \(T=9\) K. The spectrum shows two hole-like and one electron-like bands crossing the Fermi level at the \(\Gamma\) and M points of the Brillouin zone (BZ), respectively. These hole-like bands appear to cross at \(-0.5\) eV along \(\Gamma-{\rm M}\) [indicated by green arrow in (a)], but they are well separated along \(\Gamma-{\rm K}\). For better visualization Figure 11: In-plane resistivity \(\rho\) of EuAl\({}_{2}\)Ge\({}_{2}\) in magnetic fields with the \(H\parallel ab\) configuration. Measurements were taken at characteristic temperatures of 60 K (in the paramagnetic state with weak magnetic correlations, blue line), at 33 K in the correlated paramagnet state (green line), and at 20 K (red line) and 5 K (black line) in the A-type AFM state. Figure 10: Temperature-dependent resistivity of EuAl\({}_{2}\)Ge\({}_{2}\) in magnetic fields \(H\parallel ab\). The sharp feature accompanying long-range AFM ordering at \(T_{\rm N}=27\) K in zero field moves to somewhat lower temperature in a field of 1 T (red) and smears and moves to higher temperatures in fields of 2 T (green) and 3 T (blue). Measurements in positive and negative fields of 9 T reveal some contamination of the resistivity signal with the Hall voltage, suggesting a sign change of the Hall effect at around 30 K in the 9 T field. of the electron pocket, a closer view is shown in the inset of Fig. 12(a). Extremely less-dispersive bands with high intensity are observed around \(-1.5\) eV due to the localized Eu-\(4f\) levels. Most of the experimental features are reasonably well-reproduced by DFT calculations, which considers the effect of spin-orbit-coupling (SOC) and a Hubbard \(U=5\) eV to account for the effect of strong localization of the half-filled Eu-\(4f\) orbitals of EuAl\({}_{2}\)Ge\({}_{2}\) in its A-type AFM spin configuration as obtained from our neutron diffraction measurements [Fig. 12(b)]. In order to identify potential changes in the electronic structure associated with the magnetic transition, the AFM and paramagnetic (PM) band structures are plotted together in Fig. 12(c). In the AFM phase, several new bands appear compared to the PM phase, due to the folding of electronic states originating from the doubling of the magnetic unit cell. For example, an electron-like band is observed in the AFM phase, just above \(E_{\rm F}\) at the \(\Gamma\) point, whereas it is absent in the PM phase. 
This electron-like band crosses two hole-like bands and causes various band anticrossings, as indicated by the arrows in the inset of Fig. 12(c). Unfortunately, these states are inaccessible by photoemission spectroscopy as they appear above \(E_{\rm F}\). However, potential changes in the electronic states between the PM and AFM phases are also expected below \(E_{\rm F}\) as indicated by asterisk and triangle symbols with the dashed box, and arrow and that should be directly accessible by ARPES. Indeed we resolve those folded shallow bands in the AFM phase of EuAl\({}_{2}\)Ge\({}_{2}\) as indicated in Fig. 12(d), whereas no such states are observed in the PM phase. Generally, folded electronic states appear weaker in photoemission, regard Figure 12: Electronic structure of EuAl\({}_{2}\)Ge\({}_{2}\). (a) ARPES spectrum of EuAl\({}_{2}\)Ge\({}_{2}\) along the \(\Gamma-{\rm K}-{\rm M}-\Gamma\) path measured in the AFM phase (9 K) using \(h\nu=91\) eV (\(k_{z}\sim 0\)). The inset shows the zoomed-in spectra of the electron pocket at the M point. The arrow indicates the crossing point of two bands. (b) Theoretical band dispersions including spin-orbit coupling (SOC), Hubbard \(U=5\) eV, and A-type AFM spin-configuration using DFT. The arrow indicates the crossing of bands. (c) Theoretical band dispersions in the AFM and PM phases are plotted together. The inset shows zoomed-in spectra around \(\Gamma\). Band inversion/avoided-crossing features are indicated by the two blue arrows in the inset. Compared to the PM phase, a few extra bands appear in the AFM phase and some of them are indicated by an arrow, star, and triangle symbols. (d) Two-dimensional second-derivative of the ARPES spectra along \(\Gamma-{\rm M}\) for AFM and PM phases. Bands within the dashed box are captured by theoretical calculations in (c). Fermi surface and constant-energy contours for the AFM phase in the experiment [(e)-(f)] and theory [(g)-(h)] and similarly, for the PM phase (40 K) in the experiment [(i)-(j)] and theory [(k)-(l)]. Different energy values are used between the experiment and theory as the position of the Fermi level is slightly different between them. The ARPES spectra in Fig. 13 were taken along the cut shown by the white dashed line in (e). less of whether they are due to magnetism or charge density waves [48; 49; 50; 51]. Further, to map the dispersion of the electronic states in the \(k_{x}\)-\(k_{y}\) plane, Fermi surface (FS) mapping was performed. Figures 12(e) and 12(i) show the FS of EuAl\({}_{2}\)Ge\({}_{2}\) for the AFM and PM phases, respectively. In both cases, three Fermi pockets are observed, two at the center of the BZ, and one at the M point. The circular and hexagonal Fermi pockets at the center of the BZ are formed by the inner and outer hole-like bands, respectively [Fig. 12(a)], and the elongated oval-shaped Fermi pocket at the M point is the electron pocket. This electron pocket is formed by the bottom of the conduction band that enters inside the Fermi level. The inner Fermi pocket is isotropic whereas the other two are very anisotropic that could produce the anisotropic magnetic properties as observed in our experiments. All these FSs are well reproduced by theoretically-computed contours at \(E_{\rm F}+40\) meV [Figs. 12(g) and 12(k)]. This energy shift was used to better match the shape and sizes of the experimental FS features, suggesting that the sample is slightly electron-doped. 
The FS features and dispersion of electronic states suggest that EuAl\({}_{2}\)Ge\({}_{2}\) is metallic, both in the AFM and PM phases. Further, according to the band structure, folded bands between two consecutive BZs should connect the M point at a deeper energy that cuts the folded bands at the M point. Indeed, we observe this signature both in our ARPES and theoretical simulated constant-energy contours, as shown in Figs. 12(f) and 12(h), respectively. In the PM case, no such intensity is observed at the M point due to the absence of band folding [Figs. 12(j) and 12(i)]. Recently, magnetism-induced band folding and nontrivial band topology were reported in the Eu-based AFM system EuCd\({}_{2}\)As\({}_{2}\)[48; 49]. As discussed above, our DFT calculations also predicted inverted band features in the AFM state of EuAl\({}_{2}\)Ge\({}_{2}\) near \(E_{\rm F}\), which is typically observed in materials hosting nontrivial band topology. To correctly verify its nontrivial topological origin, we have calculated the \(Z_{2}\) topological numbers using the Wilson loop (Wannier charge center) method [37] for the six time-reversal-invariant momentum planes. The obtained \(Z_{2}\) topological numbers \(v_{0}\);(\(v_{1}v_{2}v_{3}\)) = 1;(000) indicate the presence of nontrivial electronic states in this system. Further theoretical studies are needed to determine the exact nature of the Figure 13: Electronic structure of EuAl\({}_{2}\)Ge\({}_{2}\) across the magnetic transition. (a)–(c) ARPES spectrum around \(\Gamma\) close to \(E_{\rm F}\) along the cut shown by the dashed line in Fig. 12(e) for various temperatures 9 K, 27 K, and 32 K, respectively. (d) Temperature dependence of the energy distribution curves (EDCs) at the momentum indicated by a vertical line in (a). (e)–(g) Zoomed view of the ARPES spectra within the region as indicated by a dashed rectangle in (c) for 9 K, 32 K, and 40 K, respectively. Arrows indicate the splitting of bands. (h) Momentum distribution curves along the dashed line in (e). 'topology' of the system. To obtain more insight into the electronic structure change across the magnetic transition, we have performed high-resolution ARPES measurements close to \(E_{\rm F}\) at various temperatures [Figs. 13(a)-13(c)]. While they exhibit very similar spectral features across the transition, the quasiparticle weight decreases significantly. This can be better visualized in their energy-distribution curves (EDCs) in Fig. 13(d). The temperatures at which the quasiparticle weight drops correlate well with magnetic transition temperatures. This drop in quasiparticle weight in the PM phase is most possibly related to the complex interplay between the orbital and spin degrees of freedom, caused by the change of coupling between magnetic moments and itinerant electrons across magnetic transitions. Quasiparticle enhancement in magnetically-ordered states has been reported in other magnetic materials due to the decrease of spin fluctuations and changes in the scattering mechanism [52; 53]. Further zooming the ARPES spectra in momentum reveals that the individual hole-like bands actually split in two. The splitting is better resolved for the outer bands as indicated by vertical lines in Figs. 13(e) and 13(f). The momentum distribution curves (MDCs) also show clear two-peak structures of the outer band. It is interesting to note that the band splitting survives above \(T_{\rm N}\). 
However, according to the theoretical calculations, all the bands in the AFM and PM phases are twofold degenerate, so no such band splitting is expected. Thus only two hole-like bands are expected to cross \(E_{\rm F}\) [Fig. 12(c)]. Generally, band splitting occurs when either time-reversal symmetry \(T\) or parity \(P\) symmetry is broken. Even though \(T\) is broken in the AFM phase, the double degeneracy of the bands is protected by the combination of \(P\), \(T\), and translation (\(L\)) symmetries by one unit along the \(c\) axis [49]. The observation of band splitting in the PM phase is quite surprising as both the \(T\) and \(P\) symmetries should be preserved. On the other hand, based on our magnetic measurements, the persistence of short-range FM correlations above \(T_{\rm N}\) may cause the \(T\) symmetry to break in the PM phase, leading to band splitting. In EuCd\({}_{2}\)As\({}_{2}\), an analogous band splitting was reported [48]. The band splitting was explained as resulting from quasi-static and quasi-long-range FM fluctuations experienced by the itinerant electrons. In the AFM phase of EuAl\({}_{2}\)Ge\({}_{2}\), the magnetic moments align ferromagnetically within a basal plane, which results from dominant in-plane FM exchange interactions. Since ARPES is a very surface-sensitive technique, these FM interactions may result in the band splitting in the magnetically-ordered state, as observed in Fig. 13(e). ## IV Concluding remarks We find that EuAl\({}_{2}\)Ge\({}_{2}\) is a metallic antiferromagnet with nontrivial electronic states in the AFM phase near \(E_{\rm F}\). The compound exhibits A-type AFM order below \(T_{\rm N}=27.5(5)\) K with the Eu moments aligned in the \(ab\) plane. The anisotropic magnetic properties exhibited by the system, associated with the Eu\({}^{2+}\) spins, indicate the presence of substantial magnetic dipole and magnetocrystalline anisotropy. The presence of in-plane magnetic anisotropy results in trigonal threefold AFM domain formation in \(H=0\). The moments in the domains exhibit a field-induced reorientation at \(H_{\rm c1}\sim 2.5(1)\) kOe to become perpendicular to the field direction for \(T<T_{\rm N}\). The \(ab\)-plane and \(c\)-axis critical fields at \(T=2\) K are \(H_{ab}^{c}=37.5(5)\) kOe and \(H_{c}^{c}=52.5(5)\) kOe, at which all moments are polarized along the respective applied-field directions. The presence of dynamic short-range magnetic correlations within the \(ab\) planes is evident above \(T_{\rm N}\) from the zero-field heat capacity and resistivity studies. A slight resistivity increase on cooling before the loss of spin-disorder scattering below \(T_{\rm N}\) suggests magnetic correlations which are different from long-range AFM ordering. Similarly, ARPES studies reveal band splitting even above \(T_{\rm N}\), suggesting a possible breaking of the \(T\) symmetry associated with the magnetic correlations above \(T_{\rm N}\), which are therefore identified as ferromagnetic in nature. The ARPES results further reveal that EuAl\({}_{2}\)Ge\({}_{2}\) is metallic with a well-defined Fermi surface. The Fermi surface is formed by two hole pockets at the zone center (\(\Gamma\)) and an electron pocket at each M point. The outer hole pocket and the electron pockets at M are very anisotropic. In addition to the various dispersive bands, nearly dispersionless bands are observed at an energy of about 1.5 eV below the Fermi energy due to the localized Eu-\(4f\) levels. 
Various folded bands are also observed in the AFM phase due to the doubling of the unit cell. All these electronic states are modeled well by considering spin-orbit-coupling (SOC), \(U\)\(=5\) eV and the A-type \(ab\)-plane AFM configuration of the Eu magnetic moments. ###### Acknowledgements. The research at Ames National Laboratory was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames National Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358. The research at Brookhaven National Laboratory was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Contract No. DE-SC0012704. This work was also supported in part by the Center for Spintronics Research Network, Tohoku University.
2301.06520
UL-DL duality for cell-free massive MIMO with per-AP power and information constraints
We derive a novel uplink-downlink duality principle for optimal joint precoding design under per-transmitter power and information constraints in fading channels. The information constraints model limited sharing of channel state information and data bearing signals across the transmitters. The main application is to cell-free networks, where each access point (AP) must typically satisfy an individual power constraint and form its transmit signal using limited cooperation capabilities. Our duality principle applies to ergodic achievable rates given by the popular hardening bound, and it can be interpreted as a nontrivial generalization of a previous result by Yu and Lan for deterministic channels. This generalization allows us to study involved information constraints going beyond the simple case of cluster-wise centralized precoding covered by previous techniques. Specifically, we show that the optimal joint precoders are, in general, given by an extension of the recently developed team minimum mean-square error method. As a particular yet practical example, we then solve the problem of optimal local precoding design in user-centric cell-free massive MIMO networks subject to per-AP power constraints.
Lorenzo Miretti, Renato L. G. Cavalcante, Emil Björnson, Sławomir Stańczak
2023-01-16T17:02:59Z
http://arxiv.org/abs/2301.06520v5
# UL-DL Duality for Cell-free Massive MIMO with Per-AP Power and Information Constraints ###### Abstract We derive a novel uplink-downlink duality principle for optimal joint precoding design under per-transmitter power and information constraints in fading channels. The main application is to cell-free networks, where each access point (AP) must typically satisfy an individual power constraint and form its transmit signal on the basis of possibly partial sharing of data bearing signals and channel state information. Our duality principle applies to ergodic achievable rates given by the popular hardening bound, and it can be interpreted as a nontrivial generalization of a previous result by Yu and Lan for deterministic channels. This generalization allows us to cover more involved information constraints, and to show that optimal joint precoders can be obtained using a variation of the recently developed team minimum mean-square error method. As particular examples, we solve the problems of optimal centralized and local precoding design in user-centric cell-free massive MIMO networks subject to per-AP power constraints. Duality, cell-free, massive MIMO, distributed precoding, team decision theory, MMSE. ## I Introduction Cell-free massive MIMO networks have attracted significant interest for their potential in enhancing the performance of future generation mobile access networks. The main focus is the evolution of known coordinated multi-point (CoMP) concepts towards practically attractive access solutions that combine the benefits of access point (AP) cooperation and ultra-dense deployments. To this end, considerable research effort has been devoted to the development of scalable and possibly user-centric system architectures and algorithms covering, for instance, power control, pilot-based channel estimation, joint processing such as precoding and combining, fronthaul overhead, network topology, and initial access [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Against this background, in this study we address the open problem of optimal joint downlink precoding design in cell-free massive MIMO networks, by considering minimum quality-of-service requirements and by assuming that each AP is subject to an individual power constraint and an individual information constraint. The information constraints model limited AP cooperation capabilities, which are motivated by the need for realizing scalable cell-free architectures with reduced fronthaul and joint processing load. More specifically, the information constraints model: * limited sharing of data bearing signals, for instance, only within user-centric cooperation clusters as in [7, 8]; and * limited sharing of channel state information (CSI), covering, for instance, the local CSI model as in the original version of the cell-free massive MIMO paradigm [1]. Each AP must form its transmit signal as a function of the data bearing signals and CSI specified by the constraints, and no additional information exchange between the APs is allowed. This is in contrast to related works such as [11], which covers iterative information exchange during precoding computation. In the cell-free massive MIMO literature, the best performing joint precoders are typically designed from joint uplink combiners, motivated by a known uplink-downlink duality principle for fading channels [19, Ch. 4][20, Ch. 6]. However, optimal joint precoders are generally unknown owing to the following two reasons. 
First, the known uplink-downlink duality principle for fading channels holds for a looser and somewhat less practical sum power constraint. Second, until very recently, optimal joint combiners were not known except for the case of full CSI sharing within each cooperation cluster, an information constraint leading to so-called _centralized_ combining. A partial solution to the first issue is given by the alternative uplink-downlink duality principle under per-antenna power constraints developed in [21], applied for instance in [22] in the context of CoMP. However, the method in [21] applies to deterministic channels, i.e., to fixed channel fading realizations, which imposes many limitations. A first limitation is that optimal schemes typically involve solving relatively complex optimization problems for each channel realization, which may be impractical in large systems. Perhaps the most important limitation is that addressing the second issue, i.e., designing optimal _distributed_ combiners under limited CSI sharing, becomes generally ill-posed for deterministic channels. This is because standard optimization problems taking as input a fixed channel realization produce solutions that may depend on fixed yet unknown channel variables, and hence violate the information constraints. Addressing the second issue is known to be quite challenging. In essence, depending on the information constraints, each AP may need to take combining decisions that are robust not only to channel estimation errors, but also to the possibly unknown combining decisions taken at the other APs. A novel method for optimally addressing this issue, called the _team_ minimum mean-square error (MMSE) method, has been recently proposed in [23]. Together with the known uplink-downlink duality principle for fading channels, this method is used in [23, 24] to derive novel joint precoders under various relevant information constraints, including the aforementioned local CSI model. However, the obtained solutions are provenly optimal under a sum power constraint only. To address the above limitations, in this study1 we derive a novel uplink-downlink duality principle for fading channels under per-AP power constraints, which, as discussed in details throughout this study, significantly differs from [21]. Furthermore, building on this result, we show that optimal joint precoders are given by solutions to properly parametrized MMSE problems under per-AP information constraints, i.e., as properly parametrized variations of team MMSE precoders. In summary, this work can be interpreted as a nontrivial generalization of the method in [21] to fading channels, and of the method in [23] to per-AP power constraints. Moreover, we illustrate the application of our findings by deriving novel solutions to the problems of optimal centralized and local precoding design in user-centric cell-free massive MIMO networks under per-AP power constraints. Footnote 1: A part of the results in this study is presented in [25] without proof. This study extends [25] by providing complete derivations, the expressions for optimal centralized and local precoding, and additional details on the numerical implementation of the proposed algorithms. The paper is organized as follows. Section II and Section III provide the main definitions, mathematical tools, and modeling assumptions. Section IV presents and studies the main optimization problem using Lagrangian duality arguments. 
Building on the obtained insights, Section V derives the proposed uplink-downlink duality principle, which is then exploited in Section VI to characterize the optimal solution structure. Simple applications of the main results are illustrated in Section VII by means of numerical simulations. Finally, Section VIII summarizes the main results, and outlines some limitations and possible future directions. ## II Mathematical preliminaries ### _Notation and definitions_ We denote by \(\mathbb{R}_{+}\) and \(\mathbb{R}_{++}\) the sets of, respectively, nonnegative and positive reals. The Euclidean norm in \(\mathbb{C}^{K}\) is denoted by \(\|\cdot\|\). Let \((\Omega,\Sigma,\mathbb{P})\) be a probability space. We denote by \(\mathcal{H}^{K}\) the set of complex-valued random vectors, i.e., \(K\)-tuples of \(\Sigma\)-measurable functions \(\Omega\to\mathbb{C}\) satisfying \((\forall\mathrm{h}\in\mathcal{H}^{K})\) \(\mathsf{E}[\|\mathrm{h}\|^{2}]<\infty\). Together with the standard operations of addition and real scalar multiplication, we recall that \(\mathcal{H}^{K}\) is a real vector space. Given a random variable \(X\in\mathcal{H}\), we denote by \(\mathsf{E}[X]\) and \(\mathsf{V}(X)\) its expected value and variance, respectively. Inequalities involving vectors in \(\mathbb{R}^{K}\) should be understood coordinate-wise. The \(k\)th column of the \(K\)-dimensional identity matrix \(\mathbf{I}_{K}\) is denoted by \(\mathbf{e}_{k}\). ### _Lagrangian duality in general vector spaces_ The following key result is frequently invoked throughout our study, and can be found in [26]. **Proposition 1**.: _Consider the functions \(f:\mathcal{X}\to\mathbb{R}\) and \(\mathbf{g}:\mathcal{X}\to\mathbb{R}^{N}\), where \(\mathcal{X}\) is a real vector space, and the optimization problem_ \[\underset{X\in\mathcal{X}}{\text{minimize}} f(X)\] \[\text{subject to} \mathbf{g}(X)\leq\mathbf{0}.\] _Define the primal optimum \(p^{\star}:=\inf\{f(X)\mid\mathbf{g}(X)\leq\mathbf{0},X\in\mathcal{X}\}\), and the dual optimum \(d^{\star}:=\sup\{d(\mathbf{\lambda})\mid\mathbf{\lambda}\in\mathbb{R}_{+}^{N}\}\), where \(d(\mathbf{\lambda}):=\inf\{f(X)+\mathbf{\lambda}^{\star}\mathbf{g}(X)\mid X\in\mathcal{X}\}\) for \(\mathbf{\lambda}\in\mathbb{R}_{+}^{N}\). Each of the following holds:_ 1. \(d^{\star}\leq p^{\star}\) _(weak duality);_ 2. _If_ \(f\) _and_ \(\mathbf{g}\) _are proper convex functions_ _[_26_, pp. 39]__, and_ \(\{X\in\mathcal{X}\mid\mathbf{g}(X)<\mathbf{0}\}\neq\emptyset\) _(Slater's condition), then_ \(d^{\star}=p^{\star}\) _holds (strong duality). Furthermore, there exist Lagrangian multipliers_ \(\mathbf{\lambda}^{\star}\in\mathbb{R}_{+}^{N}\) _such that_ \(d(\mathbf{\lambda}^{\star})=d^{\star}\)_._ Proof.: For statement (i), see [26, Theorem 2.6.1(iii)]. For statement (ii), see [26, Theorem 2.9.3(ii)]. For \(\mathcal{X}=\mathbb{R}^{K}\), the above proposition corresponds to the familiar Lagrangian duality principle for optimization problems in Euclidean spaces [27]. In this study, we exploit the more general result in Proposition 1 to address optimization problems in the real vector space of complex-valued random vectors \(\mathcal{X}=\mathcal{H}^{K}\). ## III System model ### _Downlink achievable rates_ Consider the downlink of a cell-free wireless network composed of \(L\) APs indexed by \(\mathcal{L}:=\{1,\ldots,L\}\), each of them equipped with \(N\) antennas, and \(K\) single-antenna UEs indexed by \(\mathcal{K}:=\{1,\ldots,K\}\). 
By assuming a standard synchronous and frequency-flat channel model governed by an ergodic and stationary fading process, and simple transmission techniques based on linear precoding and on treating interference as noise, we focus on simultaneously achievable ergodic rates in the classical Shannon sense, approximated by the popular _hardening_ inner bound [28]. In more detail, we define the downlink rates achieved by each UE \(k\in\mathcal{K}\) for a given precoding design as \[R_{k}^{\mathrm{DL}}(\mathbb{T}):=\log\left(1+\mathrm{SINR}_{k}^{\mathrm{DL}}(\mathbb{T})\right), \tag{1}\] \[\mathrm{SINR}_{k}^{\mathrm{DL}}(\mathbb{T}):=\frac{|\mathsf{E}[\mathrm{h}_{k}^{\mathsf{H}}\mathbf{t}_{k}]|^{2}}{\mathsf{V}(\mathrm{h}_{k}^{\mathsf{H}}\mathbf{t}_{k})+\sum_{j\neq k}\mathsf{E}[|\mathrm{h}_{k}^{\mathsf{H}}\mathbf{t}_{j}|^{2}]+1}, \tag{2}\] where \(\mathrm{h}_{k}\in\mathcal{H}^{NL}\) is a random channel vector modeling the fading state between UE \(k\) and all APs, \(\mathrm{t}_{k}\in\mathcal{H}^{NL}\) is a joint precoding vector applied by all APs to the coded and modulated data bearing signal for UE \(k\), and \(\mathbb{T}:=[\mathrm{t}_{1},\ldots,\mathrm{t}_{K}]\in\mathcal{H}^{NL\times K}\) is the aggregate joint precoding matrix. We stress that precoders are defined and denoted as random quantities, since they may adapt to random fading realizations on the basis of the available instantaneous CSI. This aspect is treated in detail in the next sections. ### _Per-AP power and information constraints_ In practical cell-free wireless networks, each AP must typically satisfy an individual power constraint. In addition, motivated by the need for realizing scalable cell-free architectures, each AP is also typically subject to an individual information constraint induced by limited data and instantaneous CSI sharing, which impair its cooperation capabilities. In this work, the above per-AP constraints are modelled as follows. Let \([\mathfrak{t}_{1,k}^{\mathsf{T}},\ldots,\mathfrak{t}_{L,k}^{\mathsf{T}}]^{\mathsf{T}}:=\mathfrak{t}_{k}\), where \(\mathfrak{t}_{l,k}\in\mathcal{H}^{N}\) denotes the portion of the precoder applied by AP \(l\in\mathcal{L}\) to serve UE \(k\in\mathcal{K}\). By assuming unitary power data bearing signals, we consider the average power constraints \[(\forall l\in\mathcal{L})\ \sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{l,k}\|^{2}]\leq P_{l}<\infty. \tag{3}\] For modeling impairments related to limited CSI sharing, we follow the recently proposed approach in [23] and let \[(\forall k\in\mathcal{K})\ \mathfrak{t}_{k}\in\mathcal{T}_{k}:=\mathcal{H}_{1}^{N}\times\ldots\times\mathcal{H}_{L}^{N}, \tag{4}\] where \(\mathcal{H}_{l}^{N}\subseteq\mathcal{H}^{N}\) denotes the set of \(N\)-tuples of \(\Sigma_{l}\)-measurable functions \(\Omega\to\mathbb{C}\) satisfying \((\forall\mathds{h}\in\mathcal{H}_{l}^{N})\)\(\mathsf{E}[\|\mathds{h}\|^{2}]<\infty\), and where \(\Sigma_{l}\subseteq\Sigma\) is the sub-\(\sigma\)-algebra induced by the available CSI at AP \(l\in\mathcal{L}\), also called the _information subfield_ of AP \(l\)[29]. Informally, this constraint enforces the precoders of each AP to be functions of the available CSI only. From a mathematical point of view, \(\mathcal{H}_{l}^{N}\) is a subspace of the real vector space \(\mathcal{H}^{N}\), which in turn implies that the constraint in (4) models limited CSI sharing by constraining \(\mathfrak{t}_{k}\) within a subspace \(\mathcal{T}_{k}\) of the real vector space \(\mathcal{H}^{LN}\)[29]. 
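To make the quantities in Eqs. (1)-(3) concrete, the minimal sketch below (illustrative only; the i.i.d. Rayleigh fading statistics and the maximum-ratio precoders are placeholder assumptions, not part of the system model) estimates the hardening-bound SINR of Eq. (2) by Monte Carlo averaging and verifies the per-AP average power constraints of Eq. (3).

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, K, n_mc = 4, 2, 3, 10_000           # APs, antennas per AP, UEs, Monte Carlo samples
P = np.full(L, 1.0)                        # per-AP power budgets P_l, Eq. (3)

# Channel samples h_k (length N*L) and example precoders t_k: here maximum-ratio (conjugate)
H = (rng.standard_normal((n_mc, K, N*L)) + 1j*rng.standard_normal((n_mc, K, N*L))) / np.sqrt(2)
T = H.conj() / np.sqrt(N*L)                # placeholder precoding choice, scaled for Eq. (3)

def dl_sinr(H, T, k):
    """Hardening-bound SINR of UE k, Eq. (2), via sample averages."""
    inner = np.einsum('mn,mjn->mj', H[:, k, :].conj(), T)   # h_k^H t_j for all samples and j
    b = inner[:, k].mean()                                   # E[h_k^H t_k]
    var = inner[:, k].var()                                  # V(h_k^H t_k)
    interf = sum(np.mean(np.abs(inner[:, j])**2) for j in range(K) if j != k)
    return np.abs(b)**2 / (var + interf + 1.0)

for l in range(L):                         # per-AP average power check, Eq. (3)
    pow_l = sum(np.mean(np.linalg.norm(T[:, k, l*N:(l+1)*N], axis=1)**2) for k in range(K))
    assert pow_l <= P[l] + 1e-9
print([f"UE {k}: R_DL = {np.log(1 + dl_sinr(H, T, k)):.2f} nats" for k in range(K)])  # Eq. (1)
```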
**Remark 1**.: _The constraint in (4) is fairly general. In particular, it covers the case of local CSI [1] (i.e., where each AP has information on only the channel between the UEs and itself), but also more advanced cooperation structures involving quantized or delayed CSI sharing, or exploiting the peculiarities of efficient fronthauls such as in the so-called radio stripes concept where the APs are daisy-chained [23]. To keep our results general, in most of this study we do not specify the CSI structure. However, examples are provided in Section VI._ Furthermore, following the well-known user-centric network clustering approach [7, 8], we assume that each UE \(k\in\mathcal{K}\) is only served by a subset \(\mathcal{L}_{k}\subseteq\mathcal{L}\) of APs. As shown in [24], this additional practical impairment can be straightforwardly included in (4) by replacing \(\mathcal{H}_{l}^{N}\) with the set \(\{\mathbf{0}_{N}\ (\mathrm{a.s.})\}\) for each AP \(l\notin\mathcal{L}_{k}\), i.e., not serving UE \(k\). Since \(\{\mathbf{0}_{N}\ (\mathrm{a.s.})\}\) is a (trivial) subspace of \(\mathcal{H}^{N}\), this replacement does not alter the property of \(\mathcal{T}_{k}\) being a real vector space (in fact, a subspace of \(\mathcal{H}^{NL}\)). This key property will allow us to address the problem of optimal joint precoding design under per-AP information constraints using Lagrangian duality for real vector spaces (Proposition 1). ### _Dual uplink rates under arbitrary noise powers_ The main optimization approach developed in this work has a natural interpretation in terms of a virtual dual uplink channel with arbitrary noise powers. More specifically, similarly to the chosen downlink model, we consider virtual uplink ergodic rates given by the _use-and-then-forget_ inner bound [28] \[R_{k}^{\mathrm{UL}}(\mathsf{v}_{k},\mathbf{p},\mathbf{\sigma}):=\log\left(1+\mathrm{SINR}_{k}^{\mathrm{UL}}(\mathsf{v}_{k},\mathbf{p},\mathbf{\sigma})\right), \tag{5}\] \[\mathrm{SINR}_{k}^{\mathrm{UL}}(\mathsf{v}_{k},\mathbf{p},\mathbf{\sigma}):=\] (6) \[\frac{p_{k}|\mathsf{E}[\mathds{h}_{k}^{\mathsf{H}}\mathsf{v}_{k}]|^{2}}{p_{k}\mathsf{V}(\mathds{h}_{k}^{\mathsf{H}}\mathsf{v}_{k})+\sum_{j\neq k}p_{j}\mathsf{E}[|\mathds{h}_{j}^{\mathsf{H}}\mathsf{v}_{k}|^{2}]+\mathsf{E}[\|\mathsf{v}_{k}\|_{\mathbf{\sigma}}^{2}]},\] where \(\mathsf{v}_{k}=[\mathsf{v}_{1,k}^{\mathsf{T}},\ldots,\mathsf{v}_{L,k}^{\mathsf{T}}]^{\mathsf{T}}\in\mathcal{H}^{NL}\) is a joint combiner, \(\mathbf{p}:=[p_{1},\ldots,p_{K}]^{\mathsf{T}}\in\mathbb{R}_{+}^{K}\) is a vector of transmit powers, and where we define \((\forall\mathsf{v}_{k}\in\mathcal{H}^{NL})(\forall\mathbf{\sigma}\in\mathbb{R}_{++}^{L})\) \[\mathsf{E}[\|\mathsf{v}_{k}\|_{\mathbf{\sigma}}^{2}]:=\sum_{l=1}^{L}\sigma_{l}\mathsf{E}[\|\mathsf{v}_{l,k}\|^{2}] \tag{7}\] for given \(\mathbf{\sigma}:=[\sigma_{1},\ldots,\sigma_{L}]^{\mathsf{T}}\in\mathbb{R}_{++}^{L}\). In the above expressions, \(\mathbf{\sigma}\) can be interpreted as a vector collecting uplink noise powers for each AP. We remark that the term _virtual_ here refers to the fact that the above rates may not be achievable in the true uplink channel, since \((\mathbf{p},\mathbf{\sigma})\) may differ from the true uplink transmit and noise powers. The major difference between \(R_{k}^{\mathrm{UL}}\) and \(R_{k}^{\mathrm{DL}}\) is that the former depends only on the joint combiner \(\mathsf{v}_{k}\) for the signal of UE \(k\in\mathcal{K}\), while the latter depends on the entire precoding matrix \(\mathbb{T}\). 
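Analogously, the virtual uplink quantities in Eqs. (5)-(7) can be estimated by Monte Carlo averaging, as in the sketch below (again illustrative, with placeholder i.i.d. Rayleigh fading, maximum-ratio combining, and arbitrary \((\mathbf{p},\mathbf{\sigma})\)); note how the \(\mathbf{\sigma}\)-weighted norm of Eq. (7) enters the denominator of Eq. (6).

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, K, n_mc = 4, 2, 3, 10_000
p = np.full(K, 0.5)                        # arbitrary virtual UL transmit powers p_k
sigma = np.linspace(0.5, 2.0, L)           # arbitrary per-AP virtual noise powers sigma_l

H = (rng.standard_normal((n_mc, K, N*L)) + 1j*rng.standard_normal((n_mc, K, N*L))) / np.sqrt(2)
V = H.conj()                               # maximum-ratio combiners; the SINR (6) is scale-invariant

def weighted_norm_sq(v_samples):
    """E[||v_k||_sigma^2] of Eq. (7): sigma-weighted average power of each AP block."""
    blocks = v_samples.reshape(n_mc, L, N)                   # stacking [v_{1,k}; ...; v_{L,k}]
    return float(np.sum(sigma * np.mean(np.sum(np.abs(blocks)**2, axis=2), axis=0)))

def ul_sinr(k):
    """Virtual uplink SINR of Eq. (6) for UE k via sample averages."""
    inner = np.einsum('mjn,mn->mj', H.conj(), V[:, k, :])    # h_j^H v_k for all samples and j
    b = inner[:, k].mean()
    den = p[k]*inner[:, k].var() \
        + sum(p[j]*np.mean(np.abs(inner[:, j])**2) for j in range(K) if j != k) \
        + weighted_norm_sq(V[:, k, :])
    return p[k]*np.abs(b)**2 / den

print([f"UE {k}: R_UL = {np.log(1 + ul_sinr(k)):.2f} nats" for k in range(K)])   # Eq. (5)
```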
In fact, the uplink achievable rates are coupled only via the transmit and noise powers \(\mathbf{p},\mathbf{\sigma}\). This known aspect makes optimization on the uplink channel generally easier than on the downlink channel. ## IV Problem statement and Lagrangian duality To address the problem of optimal joint precoding design in cell-free networks, in this section we study a certain SINR feasibility problem under per-AP power and information constraints. In particular, given a tuple of power constraints \((P_{1},\ldots,P_{L})\in\mathbb{R}_{++}^{L}\) and of SINR requirements \((\gamma_{1},\ldots,\gamma_{K})\in\mathbb{R}_{++}^{K}\), we consider the following infinite dimensional optimization problem: \[\underset{\mathbb{T}\in\mathcal{T}}{\text{minimize}} \sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}\|^{2}]\] (8) subject to \[(\forall k\in\mathcal{K})\ \mathrm{SINR}_{k}^{\mathrm{DL}}(\mathbb{T})\geq\gamma_{k},\] \[(\forall l\in\mathcal{L})\ \sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{l,k}\|^{2}]\leq P_{l},\] where \(\mathcal{T}\subset\mathcal{H}^{NL\times K}\) is a real vector space obtained by collecting all per-AP information constraints defined in Section III-B. We recall that these constraints accommodate both limited instantaneous CSI sharing and user-centric network clustering. In the following, to avoid technical digressions, we assume that strictly feasible joint precoders exist, i.e., \[\left\{\mathbb{T}\in\mathcal{T}\Big{|}\begin{array}{l}(\forall k\in\mathcal{K})\ \mathrm{SINR}_{k}^{\mathrm{DL}}(\mathbb{T})>\gamma_{k}\\ (\forall l\in\mathcal{L})\ \sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{l,k}\|^{2}]<P_{l}\end{array}\right\}\neq\emptyset.\] Furthermore, for both mathematical convenience and practical reasons, in Problem (8) we focus on the subset of feasible joint precoders minimizing the total power consumption. ### _Lagrangian dual problems_ Inspired by the related results in [21] based on Lagrangian duality for finite dimensional optimization problems, in this section we apply Lagrangian duality for infinite dimensional optimization problems to the study of Problem (8). For convenience, we adopt the compact notation \[\mathrm{SINR}^{\mathrm{DL}}_{k}(\mathbb{T})=\frac{|b_{k}(\mathbb{T})|^{2}}{|c_{k}(\mathbb{T})|^{2}-|b_{k}(\mathbb{T})|^{2}}, \tag{9}\] where \(b_{k}(\mathbb{T}):=\mathsf{E}[\mathsf{h}^{\mathsf{H}}_{k}\mathbf{\mathrm{t}}_{k}]\) is the useful signal term, and \(|c_{k}(\mathbb{T})|^{2}:=\sum_{j=1}^{K}\mathsf{E}[|\mathsf{h}^{\mathsf{H}}_{k}\mathbf{\mathrm{t}}_{j}|^{2}]+1\) is the interference plus noise power term. Furthermore, we rearrange the SINR constraints using simple algebraic manipulations as \[(\forall k\in\mathcal{K})\ |c_{k}(\mathbb{T})|-\nu_{k}|b_{k}(\mathbb{T})|\leq 0, \tag{10}\] where \(\nu_{k}:=\sqrt{1+1/\gamma_{k}}\). 
More specifically, we have the following simple property, provided without proof:

**Lemma 1**.: _For all \(\mathbb{T}\in\mathcal{T}\) and \(k\in\mathcal{K}\),_ \[\mathrm{SINR}^{\mathrm{DL}}_{k}(\mathbb{T})\geq\gamma_{k}\iff|c_{k}(\mathbb{T})|-\nu_{k}|b_{k}(\mathbb{T})|\leq 0.\]

A Lagrangian dual problem to (8) is then given by \[\underset{(\mathbf{\lambda},\mathbf{\mu})\in\mathbb{R}^{L}_{+}\times\mathbb{R}^{K}_{+}}{\text{maximize}}\ d(\mathbf{\lambda},\mathbf{\mu}), \tag{11}\] where (recalling (7)) we define the dual function \(d(\mathbf{\lambda},\mathbf{\mu}):=\inf_{\mathbb{T}\in\mathcal{T}}\big\{\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{\mathrm{t}}_{k}\|^{2}_{\mathbf{1}+\mathbf{\lambda}}]-\sum_{l=1}^{L}\lambda_{l}P_{l}+\sum_{k=1}^{K}\mu_{k}(|c_{k}(\mathbb{T})|-\nu_{k}|b_{k}(\mathbb{T})|)\big\}.\) Since the primal problem in (8) is nonconvex, by Proposition 1 we can only guarantee (a-priori) weak duality. Furthermore, guaranteeing existence of a solution is not immediate. However, the following important result holds.

**Proposition 2**.: _Problem (8) admits a solution \(\mathbb{T}^{\star}\in\mathcal{T}\). Furthermore, denote by \(p^{\star}\) and \(d^{\star}\) the optimum of the primal problem (8) and of the dual problem (11), respectively. Strong duality holds, i.e., \(d^{\star}=p^{\star}\), and there exist Lagrangian multipliers \((\mathbf{\lambda}^{\star},\mathbf{\mu}^{\star})\in\mathbb{R}^{L}_{+}\times\mathbb{R}^{K}_{+}\) solving Problem (11)._

Proof.: The proof follows a similar idea as in [21, 30], and it is based on establishing connections between Problem (8) and a convex reformulation obtained by replacing \(|b_{k}(\mathbb{T})|\) in (10) with \(\Re(b_{k}(\mathbb{T}))\). For additional details, see Appendix A.

We conclude this section by stating a useful consequence of Proposition 2 that will be instrumental for proving our main results on uplink-downlink duality. In particular, we provide an alternative version of Proposition 2 based on a _partial_ dual problem, obtained by keeping the SINR constraints implicit.

**Lemma 2**.: _Given the subset \(\mathcal{T}_{\gamma}:=\{\mathbb{T}\in\mathcal{T}\mid(\forall k\in\mathcal{K})\ \mathrm{SINR}^{\mathrm{DL}}_{k}(\mathbb{T})\geq\gamma_{k}\}\) of precoders satisfying the SINR constraints in Problem (8), define the partial dual problem_ \[\underset{\mathbf{\lambda}\in\mathbb{R}^{L}_{+}}{\text{maximize}}\ \tilde{d}(\mathbf{\lambda}):=\inf_{\mathbb{T}\in\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{\mathrm{t}}_{k}\|^{2}_{\mathbf{1}+\mathbf{\lambda}}]-\sum_{l=1}^{L}\lambda_{l}P_{l}. \tag{12}\] _Strong duality holds, i.e., Problem (8) and Problem (12) have the same optimum \(p^{\star}\). Furthermore, there exist Lagrangian multipliers \(\mathbf{\lambda}^{\star}\) solving Problem (12)._

Proof.: Consider the alternative optimization problem \[\underset{\mathbb{T}\in\mathcal{T}}{\text{minimize}}\ \sum_{k=1}^{K}\mathsf{E}[\|\mathbf{\mathrm{t}}_{k}\|^{2}]+\gamma(\mathbb{T})\] (13) subject to \[(\forall l\in\mathcal{L})\ \sum_{k=1}^{K}\mathsf{E}[\|\mathbf{\mathrm{t}}_{l,k}\|^{2}]\leq P_{l},\] where \(\gamma(\mathbb{T})=0\) if \(\mathbb{T}\) belongs to the set \(\mathcal{T}_{\gamma}\), and \(\gamma(\mathbb{T})=+\infty\) otherwise. Problem (13) is equivalent to Problem (8), in the sense that it has the same optimum \(p^{\star}\) and set of optimal solutions. Its Lagrangian dual problem can be written as (12).
By applying weak duality (see Proposition 1) to the term \(\inf_{\mathbb{T}\in\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{\mathrm{t}}_{k}\|^{2}_{\mathbf{1}+\mathbf{\lambda}}]\) for any fixed \(\mathbf{\lambda}\), and by rewriting \(\mathcal{T}_{\gamma}\) according to Lemma 1, we obtain \(\sup_{\mathbf{\mu}\in\mathbb{R}^{K}_{+}}d(\mathbf{\lambda},\mathbf{\mu})\leq\tilde{d}(\mathbf{\lambda})\). Taking the supremum over \(\mathbf{\lambda}\) on both sides gives \[p^{\star}=\sup_{\mathbf{\lambda}\in\mathbb{R}^{L}_{+}}\sup_{\mathbf{\mu}\in\mathbb{R}^{K}_{+}}d(\mathbf{\lambda},\mathbf{\mu})\leq\sup_{\mathbf{\lambda}\in\mathbb{R}^{L}_{+}}\tilde{d}(\mathbf{\lambda})\leq p^{\star},\] where the first equality follows from Proposition 2 and the last inequality follows from weak duality applied to Problem (13). The second part of the statement follows from the existence of a solution \((\mathbf{\lambda}^{\star},\mathbf{\mu}^{\star})\) to Problem (11), and \(p^{\star}=\sup_{\mathbf{\mu}\in\mathbb{R}^{K}_{+}}d(\mathbf{\lambda}^{\star},\mathbf{\mu})\leq\tilde{d}(\mathbf{\lambda}^{\star})\leq p^{\star}\).

### _Primal-dual solution methods_

A key aspect of Lagrangian optimization is the possibility of recovering a primal solution from a dual solution. However, we emphasize that this is not always possible even if strong duality holds. Nevertheless, the following proposition ensures that a primal solution \(\mathbb{T}^{\star}\) to Problem (8) can be indeed recovered from a solution to the partial dual problem (12).

**Proposition 3**.: _Let \(\mathbf{\lambda}^{\star}\) be a solution to Problem (12). Then, a solution to Problem (8) is given by any solution to_ \[\underset{\mathbb{T}\in\mathcal{T}_{\gamma}}{\text{minimize}}\ \sum_{k=1}^{K}\mathsf{E}[\|\mathbf{\mathrm{t}}_{k}\|^{2}_{\mathbf{1}+\mathbf{\lambda}^{\star}}]. \tag{14}\]

Proof.: The proof is given in Appendix B, and it is based on the same convex reformulation of the SINR constraints adopted in the proof of Proposition 2.

Starting from Proposition 3, and in particular by studying and solving Problem (14), in the next section we derive structural properties for optimal joint precoding. However, before moving to the next section, we first complete the discussion on recovering a primal solution from a dual solution by illustrating a simple algorithm for solving Problem (12), which is a concave maximization problem. In particular, we consider a standard primal-dual iterative algorithm based on the projected subgradient method [31, 32].
**Lemma 3**.: _Choose \(\mathbf{\lambda}^{(1)}\in\mathbb{R}^{L}_{+}\) and a sequence \(\{\alpha^{(i)}\}_{i\in\mathbb{N}}\) such that_ \[(\forall i\in\mathbb{N})\ \alpha^{(i)}\in\mathbb{R}_{++},\quad\lim_{i\to\infty}\alpha^{(i)}=0,\quad\sum_{i\in\mathbb{N}}\alpha^{(i)}=\infty.\] _Define the sequence \((\mathbf{\lambda}^{(i)})_{i\in\mathbb{N}}\) generated via_ \[(\forall i\in\mathbb{N})\ \mathbf{\lambda}^{(i+1)}=\max\left\{\mathbf{\lambda}^{(i)}+\frac{\alpha^{(i)}}{\|\mathbf{g}(\mathbf{\lambda}^{(i)})\|}\mathbf{g}(\mathbf{\lambda}^{(i)}),\mathbf{0}\right\},\] _where the \(l\)th entry of \(\mathbf{g}(\mathbf{\lambda})\) is given by \((\forall\mathbf{\lambda}\in\mathbb{R}_{+}^{L})(\forall l\in\mathcal{L})\),_ \[g_{l}(\mathbf{\lambda}):=\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{l,k}\|^{2}]-P_{l},\quad\mathbb{T}\in\arg\min_{\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\mathbf{\lambda}}^{2}].\] _Then, the subsequence of \((\mathbf{\lambda}^{(i)})_{i\in\mathbb{N}}\) corresponding to the best objective after \(n\) iterations \(\max_{i=1,\ldots,n}\tilde{d}(\mathbf{\lambda}^{(i)})\) converges to a solution \(\mathbf{\lambda}^{\star}\) to Problem (12)._

Proof.: The proof is given in Appendix C.

Note that the above algorithm requires a method for solving Problem (14) for arbitrary Lagrangian multipliers. This is provided in the following section.

## V Uplink-downlink duality

Building on the above analysis based on Lagrangian duality, in this section we present our main result, which states that the problem of optimal joint precoding design under per-AP power and information constraints can be reformulated as a joint combining design and long-term power control problem in a dual uplink channel with properly designed noise vector \(\mathbf{\sigma}\) (see Section III-C). More precisely, we show later in Proposition 4 that an optimal solution to Problem (8) can be recovered from a solution to \[\begin{split}\underset{\mathbb{V}\in\mathcal{T},\mathbf{p}\in\mathbb{R}_{+}^{K}}{\text{minimize}}&\quad\sum_{k=1}^{K}p_{k}\\ \text{subject to}&\quad(\forall k\in\mathcal{K})\ \mathrm{SINR}_{k}^{\mathrm{UL}}(\mathbf{v}_{k},\mathbf{p},\mathbf{\sigma})\geq\gamma_{k},\\ &\quad(\forall k\in\mathcal{K})\ \mathsf{E}[\|\mathbf{v}_{k}\|_{\mathbf{\sigma}}^{2}]=1,\end{split} \tag{15}\] for some \(\mathbf{\sigma}\in\mathbb{R}_{++}^{L}\), and where \(\mathbb{V}:=[\mathbf{v}_{1},\ldots,\mathbf{v}_{K}]\). In addition, we present an efficient numerical method that solves the above problem.

**Remark 2**.: _Our derivation differs significantly from the derivation of the related result in [21]. Specifically, [21] exploits the peculiar structure of optimal centralized precoding in deterministic channels, its relation to a certain Rayleigh quotient, and a series of properties from the theory of semidefinite programming. Unfortunately, these arguments do not seem applicable to our setup, which covers distributed precoding and random channels. To address this limitation, we follow a different path and replace the above arguments by a variation of well-known uplink-downlink duality results under a sum power constraint, reviewed, e.g., in [19, 33]._

### _Joint precoding optimization over a dual uplink channel_

The desired connection between the downlink channel and its dual uplink channel is established by studying Problem (14), the solutions of which are optimal joint precoders solving Problem (8).
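For concreteness, the projected subgradient update of Lemma 3 can be sketched numerically as follows. The helper `weighted_power_min` is a hypothetical inner solver returning the per-AP powers \(\sum_{k}\mathsf{E}[\|\mathbf{t}_{l,k}\|^{2}]\) of a minimizer of (14) for the weights \(\mathbf{1}+\mathbf{\lambda}\) (e.g., obtained via the dual uplink approach developed in the following sections); the sketch only illustrates the subgradient mechanics.

```python
import numpy as np

def dual_subgradient(weighted_power_min, P, alpha0=10.0, n_iter=200):
    """Projected subgradient ascent on the partial dual problem (12) (Lemma 3).

    weighted_power_min(lam): assumed inner solver returning the (L,) per-AP
        powers sum_k E[||t_{l,k}||^2] of a precoder attaining (14) for the
        weights 1 + lam.
    P: (L,) per-AP power budgets P_l.
    """
    lam = np.zeros(len(P))
    best_val, best_lam = -np.inf, lam.copy()
    for i in range(1, n_iter + 1):
        ap_power = weighted_power_min(lam)                  # sum_k E[||t_{l,k}||^2]
        g = ap_power - P                                    # subgradient g(lam)
        val = np.dot(1.0 + lam, ap_power) - np.dot(lam, P)  # dual objective d~(lam)
        if val > best_val:
            best_val, best_lam = val, lam.copy()
        if np.linalg.norm(g) < 1e-12:                       # all power constraints tight
            break
        step = alpha0 / np.sqrt(i)                          # diminishing step size
        lam = np.maximum(lam + step * g / np.linalg.norm(g), 0.0)
    return best_lam, best_val
```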
The key idea lies in interpreting \(\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\mathbf{\lambda}^{\star}}^{2}]\) as an unconventional weighted definition of the average sum transmit power. To keep the discussion general and, for instance, applicable to the algorithm given by Lemma 3, we consider arbitrary Lagrangian multipliers, i.e., we consider the following problem: \[(\forall\mathbf{\sigma}\in\mathbb{R}_{++}^{L})\quad\begin{split}\underset{\mathbb{T}\in\mathcal{T}}{\text{minimize}}&\quad\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{\sigma}}^{2}]\\ \text{subject to}&\quad(\forall k\in\mathcal{K})\ \mathrm{SINR}_{k}^{\mathrm{DL}}(\mathbb{T})\geq\gamma_{k}.\end{split} \tag{16}\] Since the SINR constraints are feasible by assumption, following the same arguments as in the proof of Lemma 3, we observe that the above problem always admits a solution.

**Proposition 4**.: _For all \(\mathbf{\sigma}\in\mathbb{R}_{++}^{L}\), Problem (16) and Problem (15) have the same optimum. Furthermore, a solution to Problem (16) is given by_ \[(\forall k\in\mathcal{K})\ \mathbf{t}_{k}^{\star}=\sqrt{q_{k}^{\star}}\mathbf{v}_{k}^{\star},\] _where \((\mathbb{V}^{\star},\mathbf{p}^{\star})\in\mathcal{T}\times\mathbb{R}_{++}^{K}\) is a solution to Problem (15), and \(\mathbf{q}^{\star}:=(q_{1}^{\star},\ldots,q_{K}^{\star})\in\mathbb{R}_{++}^{K}\) is given by_ \[\mathbf{q}^{\star}=(\mathbf{D}-\mathbf{B})^{-1}(\mathbf{D}-\mathbf{B}^{\mathsf{T}})\mathbf{p}^{\star},\] _where_ \[\mathbf{B}:=\begin{bmatrix}\mathsf{E}[|\mathbf{h}_{1}^{\mathsf{H}}\mathbf{v}_{1}^{\star}|^{2}]&\ldots&\mathsf{E}[|\mathbf{h}_{1}^{\mathsf{H}}\mathbf{v}_{K}^{\star}|^{2}]\\ \vdots&\ddots&\vdots\\ \mathsf{E}[|\mathbf{h}_{K}^{\mathsf{H}}\mathbf{v}_{1}^{\star}|^{2}]&\ldots&\mathsf{E}[|\mathbf{h}_{K}^{\mathsf{H}}\mathbf{v}_{K}^{\star}|^{2}]\end{bmatrix},\quad\mathbf{D}:=\mathrm{diag}(\mathbf{d}),\] \[\mathbf{d}:=\begin{pmatrix}(1+\gamma_{1}^{-1})|\mathsf{E}[\mathbf{h}_{1}^{\mathsf{H}}\mathbf{v}_{1}^{\star}]|^{2},\ldots,(1+\gamma_{K}^{-1})|\mathsf{E}[\mathbf{h}_{K}^{\mathsf{H}}\mathbf{v}_{K}^{\star}]|^{2}\end{pmatrix}.\]

Proof.: The function \(\mathsf{E}[\|\cdot\|_{\mathbf{\sigma}}^{2}]\) is a valid squared norm on \(\mathcal{T}_{k}\). Hence, we can rewrite \(\inf_{\mathbb{T}\in\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{\sigma}}^{2}]\) in a normalized form as the following optimization problem: \[\begin{split}\underset{\mathbb{V}\in\mathcal{T},\mathbf{q}\in\mathbb{R}_{+}^{K}}{\text{minimize}}&\quad\sum_{k=1}^{K}q_{k}\\ \text{subject to}&\quad(\forall k\in\mathcal{K})\ \mathrm{SINR}_{k}^{\mathrm{DL}}(\mathbb{V}\mathrm{diag}(\mathbf{q})^{\frac{1}{2}})\geq\gamma_{k},\\ &\quad(\forall k\in\mathcal{K})\ \mathsf{E}[\|\mathbf{v}_{k}\|_{\mathbf{\sigma}}^{2}]=1,\end{split} \tag{17}\] where we used the change of variables \((\forall k\in\mathcal{K})\ \mathbf{t}_{k}=:\sqrt{q_{k}}\mathbf{v}_{k}\). The vector \(\mathbf{q}\) can be interpreted as a downlink power control vector, by (unconventionally) measuring the power of each \(\mathbf{t}_{k}\) in terms of its norm \(\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{\sigma}}^{2}]=q_{k}\).
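The mapping from the dual uplink powers \(\mathbf{p}^{\star}\) to the downlink powers \(\mathbf{q}^{\star}\) in Proposition 4 is a plain linear solve once \(\mathbf{B}\) and \(\mathbf{d}\) have been estimated (e.g., by Monte Carlo averaging as sketched earlier). A minimal illustration, with our own function name and array conventions:

```python
import numpy as np

def downlink_powers_from_uplink(B, d, p_star):
    """Map optimal uplink powers p* to downlink powers q* (Proposition 4).

    B: (K, K) matrix with entries B[k, j] = E[|h_k^H v_j*|^2],
    d: (K,) vector with entries (1 + 1/gamma_k) * |E[h_k^H v_k*]|^2,
    p_star: (K,) optimal uplink powers.
    """
    D = np.diag(d)
    # q* = (D - B)^{-1} (D - B^T) p*
    return np.linalg.solve(D - B, (D - B.T) @ p_star)

# The optimal joint precoders are then t_k* = sqrt(q_star[k]) * v_k*.
```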
For any choice of \(\mathbb{V}\in\mathcal{T}\) with normalized columns, i.e., such that \((\forall k\in\mathcal{K})\ \mathsf{E}[\|\mathbf{v}_{k}\|_{\mathbf{\sigma}}^{2}]=1\), consider now the following downlink power control problem: \[\begin{split}\underset{\mathbf{q}\in\mathbb{R}_{+}^{K}}{\text{minimize}}&\quad\sum_{k=1}^{K}q_{k}\\ \text{subject to}&\quad(\forall k\in\mathcal{K})\ \mathrm{SINR}_{k}^{\mathrm{DL}}(\mathbb{V}\mathrm{diag}(\mathbf{q})^{\frac{1}{2}})\geq\gamma_{k}.\end{split} \tag{18}\] From known sum power duality arguments in the power control literature (reviewed, e.g., in [19, 33]), it follows that Problem (18) is feasible if and only if the following uplink power control problem is feasible, for the same choice of \(\mathbb{V}\): \[\begin{split}\underset{\mathbf{p}\in\mathbb{R}_{+}^{K}}{\text{minimize}}&\quad\sum_{k=1}^{K}p_{k}\\ \text{subject to}&\quad(\forall k\in\mathcal{K})\ \mathrm{SINR}_{k}^{\mathrm{UL}}(\mathbf{v}_{k},\mathbf{p},\mathbf{\sigma})\geq\gamma_{k}.\end{split} \tag{19}\] When feasible, Problem (18) and Problem (19) are known to have unique and positive solutions meeting the SINR constraints with equality, and to attain the same optimum. The solutions are related by rearranging the constraints as full rank linear systems \((\mathbf{D}-\mathbf{B})\mathbf{q}^{*}=\mathbf{1}\) and \((\mathbf{D}-\mathbf{B}^{\mathsf{T}})\mathbf{p}^{*}=\mathbf{1}\), respectively. When not feasible, we say that the two optima equal \(\infty\). By taking the infimum of both optima over the set of \(\mathbb{V}\in\mathcal{T}\) such that \((\forall k\in\mathcal{K})\ \mathsf{E}[\|\mathbf{v}_{k}\|_{\mathbf{\sigma}}^{2}]=1\), we obtain that Problem (15) and Problem (16) have the same optimum. Finally, the proof is completed by recalling that Problem (16) always has a solution.

The key message of the uplink-downlink duality principle in Proposition 4 is that optimal joint precoders solving Problem (8) are given by a scaled version of optimal joint combiners solving the dual uplink problem (15) with noise vector \(\mathbf{\sigma}=\mathbf{1}+\mathbf{\lambda}^{*}\), where \(\mathbf{\lambda}^{*}\) are Lagrangian multipliers solving Problem (12).

### _Dual uplink power control with implicit optimal combining_

We now focus on the solution to the dual uplink problem (15). By exploiting the property that the uplink SINR constraints are only coupled via the power vector \(\mathbf{p}\), we observe that the optimum to Problem (15) is equivalently given by the optimum to the following power control problem: \[\underset{\mathbf{p}\in\mathbb{R}_{+}^{K}}{\text{minimize}}\ \sum_{k=1}^{K}p_{k}\ \text{ subject to}\ (\forall k\in\mathcal{K})\ u_{k}(\mathbf{p},\mathbf{\sigma})\geq\gamma_{k}, \tag{20}\] where the optimization of the combiners is left implicit in the definition of \((\forall k\in\mathcal{K})(\forall\mathbf{p}\in\mathbb{R}_{+}^{K})(\forall\mathbf{\sigma}\in\mathbb{R}_{++}^{L})\) \[u_{k}(\mathbf{p},\mathbf{\sigma}):=\sup_{\begin{subarray}{c}\mathbf{v}_{k}\in\mathcal{T}_{k}\\ \mathsf{E}[\|\mathbf{v}_{k}\|_{\mathbf{\sigma}}^{2}]\neq 0\end{subarray}}\mathrm{SINR}_{k}^{\mathrm{UL}}(\mathbf{v}_{k},\mathbf{p},\mathbf{\sigma}). \tag{21}\] A solution \((\mathbb{V}^{*},\mathbf{p}^{*})\) to Problem (15) is related to a solution to Problem (20) in the sense that \(\mathbf{p}^{*}\) is also a solution to Problem (20), and that the columns of \(\mathbb{V}^{*}\) attain the suprema in (21) for \(\mathbf{p}=\mathbf{p}^{*}\).
Note that, without loss of generality and for mathematical convenience in later steps, we removed in (21) the unit-norm constraint, since the value of \(\mathsf{E}[\|\mathbf{v}_{k}\|_{\mathbf{\sigma}}^{2}]\) does not change the uplink SINR, as long as it is non-zero. The main implication of the above discussion is that optimal joint combiners (and hence, by Proposition 4, optimal joint precoders) solving Problem (15) can be obtained by solving a set of disjoint uplink SINR maximization problems under per-AP information constraints, i.e., by evaluating all \(u_{k}(\mathbf{p},\mathbf{\sigma})\) in (21) for some coefficients \((\mathbf{p},\mathbf{\sigma})\in\mathbb{R}_{++}^{K}\times\mathbb{R}_{++}^{L}\). The next sections discuss challenges and solution approaches related to this crucial step, and reveal a useful solution structure. However, before moving to the next section, we first present a numerical method for computing \(\mathbf{p}^{*}\), i.e., for solving Problem (20), assuming that \(u_{k}(\mathbf{p},\mathbf{\sigma})\) in (21) can be indeed evaluated. Specifically, we apply the celebrated framework of interference calculus for power control [34, 35], and obtain:

**Lemma 4**.: _Fix \(\mathbf{\sigma}\in\mathbb{R}_{++}^{L}\). For every \(\mathbf{p}^{(1)}\in\mathbb{R}_{++}^{K}\), the sequence \((\mathbf{p}^{(i)})_{i\in\mathbb{N}}\) generated via the fixed-point iterations \((\forall i\in\mathbb{N})\ \mathbf{p}^{(i+1)}=T_{\mathbf{\sigma}}(\mathbf{p}^{(i)})\), where_ \[(\forall\mathbf{p}\in\mathbb{R}_{++}^{K})\ T_{\mathbf{\sigma}}(\mathbf{p}):=\left[\tfrac{\gamma_{1}p_{1}}{u_{1}(\mathbf{p},\mathbf{\sigma})}\quad\ldots\quad\tfrac{\gamma_{K}p_{K}}{u_{K}(\mathbf{p},\mathbf{\sigma})}\right]^{\mathsf{T}}, \tag{22}\] _converges in norm to the unique solution \(\mathbf{p}^{*}\) to Problem (20)._

Proof.: (Sketch) A simple contradiction argument proves that a solution \(\mathbf{p}^{*}\) must satisfy \((\forall k\in\mathcal{K})\ u_{k}(\mathbf{p}^{*},\mathbf{\sigma})=\gamma_{k}\). By trivially extending the arguments in [36, Proposition 3] for \(\mathbf{\sigma}=\mathbf{1}\) to an arbitrary \(\mathbf{\sigma}\in\mathbb{R}_{++}^{L}\), it follows that this condition can be equivalently expressed as the fixed point equation \(\mathbf{p}^{*}=[\gamma_{1}f_{1,\mathbf{\sigma}}(\mathbf{p}^{*}),\ldots,\gamma_{K}f_{K,\mathbf{\sigma}}(\mathbf{p}^{*})]^{\mathsf{T}}\), where \((\forall k\in\mathcal{K})\ f_{k,\mathbf{\sigma}}:\mathbb{R}_{++}^{K}\to\mathbb{R}_{++}\) is a given _standard interference function_ (see, e.g., [36, Definition 2]) satisfying \((\forall k\in\mathcal{K})(\forall\mathbf{p}\in\mathbb{R}_{+}^{K})\ u_{k}(\mathbf{p},\mathbf{\sigma})=p_{k}/f_{k,\mathbf{\sigma}}(\mathbf{p})\). For the above arguments to hold, an important requirement is the property \((\forall k\in\mathcal{K})(\forall\mathbf{p}\in\mathbb{R}_{++}^{K})\ u_{k}(\mathbf{p},\mathbf{\sigma})>0\). However, this property is guaranteed by recalling that Problem (15) always has a solution, which implies the existence of \(\mathbb{V}\in\mathcal{T}\) such that all uplink SINRs are strictly positive for any \(\mathbf{p}\in\mathbb{R}_{++}^{K}\). The proof is concluded by invoking known properties of fixed points of standard interference mappings [34]. In particular, existence and uniqueness of a fixed point \(\mathbf{p}^{*}\) is guaranteed, and fixed-point iterations converge in norm to the unique solution.
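Assuming an oracle for \(u_{k}(\mathbf{p},\mathbf{\sigma})\) in (21) (concrete expressions are derived in Section VI), the fixed-point iterations of Lemma 4 reduce to a few lines; the following sketch uses our own function names and a simple stopping rule.

```python
import numpy as np

def fixed_point_power_control(u, gamma, sigma, p0, n_iter=500, tol=1e-9):
    """Fixed-point iterations p <- T_sigma(p) of Lemma 4.

    u(p, sigma) -> (K,) array of optimal per-UE uplink SINRs u_k(p, sigma)
        (assumed oracle implementing (21)).
    gamma: (K,) SINR targets, sigma: (L,) virtual noise powers,
    p0: (K,) strictly positive initialization.
    """
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        p_next = gamma * p / u(p, sigma)        # T_sigma(p), cf. (22)
        if np.max(np.abs(p_next - p)) < tol:
            return p_next
        p = p_next
    return p
```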
The above lemma shows that optimal joint combiners solving Problem (15) can be obtained via the iterative evaluation of \(T_{\mathbf{\sigma}}(\mathbf{p})\) in (22), which in turn involves solving the aforementioned uplink SINR maximization problems in (21).

## VI Optimal joint precoding structure

By exploiting the obtained uplink-downlink duality principle, we now derive the structure of an optimal solution to Problem (8). Specifically, we show that it suffices to consider properly scaled and regularized variations of so-called _team_ MMSE precoders [23], parametrized by a set of coefficients \((\mathbf{p},\mathbf{\sigma})\in\mathbb{R}_{++}^{K}\times\mathbb{R}_{++}^{L}\).

### _MMSE precoding under information constraints_

As discussed in the previous sections, an optimal solution to Problem (8) can be obtained by solving a set of uplink SINR maximization problems of the type in (21). Solving these problems appears quite challenging for the following reasons: (i) the non-convex fractional utility involving expectations in both the numerator and denominator; (ii) the information constraints. However, the next proposition shows that (i) is not the main challenge, because we can consider an alternative and simpler convex utility, in the same spirit as the known relation between SINR and MMSE in deterministic channels [37].

**Proposition 5**.: _For given \(k\in\mathcal{K}\), \(\mathbf{p}\in\mathbb{R}_{++}^{K}\), and \(\mathbf{\sigma}\in\mathbb{R}_{++}^{L}\), consider the optimization problem_ \[\underset{\mathsf{v}_{k}\in\mathcal{T}_{k}}{\text{minimize}}\ \mathsf{E}\left[\|\mathbf{P}^{\frac{1}{2}}\mathbb{H}^{\mathsf{H}}\mathsf{v}_{k}-\mathbf{e}_{k}\|^{2}\right]+\mathsf{E}\left[\|\mathsf{v}_{k}\|_{\mathbf{\sigma}}^{2}\right], \tag{23}\] _where \(\mathbf{P}:=\mathrm{diag}(\mathbf{p})\), and \(\mathbb{H}:=[\mathbb{h}_{1},\ldots,\mathbb{h}_{K}]\). Problem (23) has a unique solution \(\mathsf{v}_{k}^{*}\in\mathcal{T}_{k}\), and this solution satisfies_ \[u_{k}(\mathbf{p},\mathbf{\sigma})=\mathrm{SINR}_{k}^{\mathrm{UL}}\left(\mathsf{v}_{k}^{*},\mathbf{p},\mathbf{\sigma}\right).\]

Proof.: The proof follows readily by extending the arguments in [36, Proposition 4] for \(\mathbf{\sigma}=\mathbf{1}\) to an arbitrary \(\mathbf{\sigma}\in\mathbb{R}_{++}^{L}\). Note that the assumption \(u_{k}(\mathbf{p},\mathbf{\sigma})>0\), required for the proof, is always satisfied in this work, as already discussed in the proof of Lemma 4.

**Remark 3**.: _Together with Proposition 4 and the discussion in Section V-B, Proposition 5 shows that optimal joint precoders solving Problem (8) can be obtained as solutions to MMSE problems under information constraints, i.e., they are given by (properly scaled) solutions to Problem (23), for some parameters \((\mathbf{p},\mathbf{\sigma})\in\mathbb{R}_{++}^{K}\times\mathbb{R}_{++}^{L}\)._

In the particular case where all APs have complete knowledge of the channel \(\mathbb{H}\), and fully share UEs' data, a solution to Problem (23) is simply given by a variation of the well-known regularized zero-forcing solution \[\mathbb{V}=\left(\mathbb{H}\mathbf{P}\mathbb{H}^{\mathsf{H}}+\mathbf{\Sigma}\right)^{-1}\mathbb{H}\mathbf{P}^{\frac{1}{2}}, \tag{24}\] where, similar to [21], the typical regularization of the matrix inversion stage via a scaled identity matrix is replaced by a more general diagonal regularization matrix \(\mathbf{\Sigma}:=\mathrm{diag}(\sigma_{1}\mathbf{I}_{N},\ldots,\sigma_{L}\mathbf{I}_{N})\) parametrized by \(\mathbf{\sigma}\) which takes into account the per-AP power constraints.
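For a single channel realization, (24) is a direct linear solve; the following sketch is a literal transcription of the formula, with the per-AP regularization built from \(\mathbf{\sigma}\) (array conventions are our own).

```python
import numpy as np

def regularized_zf(H, p, sigma, N):
    """Centralized solution (24) under full CSI and full data sharing.

    H: (N*L, K) channel matrix (columns h_k), p: (K,) uplink powers,
    sigma: (L,) per-AP regularization weights, N: antennas per AP.
    Returns V of shape (N*L, K) whose columns are the joint combiners.
    """
    P = np.diag(p)
    Sigma = np.diag(np.repeat(sigma, N))          # diag(sigma_1 I_N, ..., sigma_L I_N)
    A = H @ P @ H.conj().T + Sigma
    return np.linalg.solve(A, H @ np.sqrt(P))     # (H P H^H + Sigma)^{-1} H P^{1/2}
```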
**Remark 4**.: _A major difference between the solution in [21] and (24) is that the former considers short-term optimization of its parameters, i.e., for every channel realization. In contrast, our work considers long-term optimization of the parameters \((\mathbf{p},\mathbf{\sigma})\in\mathbb{R}_{++}^{K}\times\mathbb{R}_{++}^{L}\) based on channel statistics._

For more general and nontrivial information constraints, the solution to Problem (23) can be interpreted as the best distributed approximation of regularized channel inversion, and it can be obtained via a minor variation of the recently developed _team_ MMSE precoding method given by [23]. In the next sections we discuss this aspect in detail.

### _Uplink channel estimation and CSI sharing_

To provide concrete examples of solutions to Problem (23), we first slightly restrict the model in Section III, while still covering most scenarios studied in the (cell-free) massive MIMO literature. For all AP \(l\in\mathcal{L}\) and UE \(k\in\mathcal{K}\), we let each sub-vector \(\mathbb{h}_{l,k}\) of \(\mathbb{h}_{k}^{\mathsf{H}}=:[\mathbb{h}_{1,k}^{\mathsf{H}},\ldots,\mathbb{h}_{L,k}^{\mathsf{H}}]\) be independently distributed as \(\mathbb{h}_{l,k}\sim\mathcal{CN}\left(\mathbf{\mu}_{l,k},\mathbf{K}_{l,k}\right)\) for some channel mean \(\mathbf{\mu}_{l,k}\in\mathbb{C}^{N}\) and covariance matrix \(\mathbf{K}_{l,k}\in\mathbb{C}^{N\times N}\). Independence can be easily motivated by assuming that UEs and APs are not colocated. As customary in the literature, we further assume that the APs (or, more generally, the processing units controlling the APs) perform pilot-based over-the-uplink MMSE channel estimation, based on the channel reciprocity property of time division duplex systems. Specifically, we assume each AP \(l\in\mathcal{L}\) to acquire local estimates \(\hat{\mathbb{H}}_{l}:=[\hat{\mathbb{h}}_{l,1},\ldots,\hat{\mathbb{h}}_{l,K}]\) of the local channel \(\mathbb{H}_{l}:=[\mathbb{h}_{l,1},\ldots,\mathbb{h}_{l,K}]\) with error \(\mathbb{Z}_{l}:=[\mathbb{z}_{l,1},\ldots,\mathbb{z}_{l,K}]:=\mathbb{H}_{l}-\hat{\mathbb{H}}_{l}\) independent from \(\mathbb{H}_{l}\), and satisfying \((\forall l\in\mathcal{L})(\forall k\in\mathcal{K})\ \mathbb{z}_{l,k}\sim\mathcal{CN}(\mathbf{0},\mathbf{\Psi}_{l,k})\) for some error covariance \(\mathbf{\Psi}_{l,k}\in\mathbb{C}^{N\times N}\). Moreover, again motivated by the geographical separation of the APs, we assume that \((\hat{\mathbb{H}}_{l},\mathbb{Z}_{l})\) is independent from \((\hat{\mathbb{H}}_{j},\mathbb{Z}_{j})\) for all \((l,j)\in\mathcal{L}^{2}\) such that \(l\neq j\). After this local channel estimation step, we assume that the APs may acquire additional information of the global channel matrix \(\mathbb{H}\) via some CSI sharing step. More specifically, we assume that each AP \(l\in\mathcal{L}\) must form its precoders based on some side information \(S_{l}:=(\hat{\mathbb{H}}_{l},\bar{S}_{l})\), where \(\bar{S}_{l}\) denotes additional channel information defined by the chosen CSI sharing pattern. Overall, we model this two-step channel acquisition scheme (local channel estimation followed by CSI sharing) by assuming the following Markov chain \(\mathbb{H}_{l}\rightarrow\hat{\mathbb{H}}_{l}\to S_{l}\to S_{j}\rightarrow\hat{\mathbb{H}}_{j}\) for all \((l,j)\in\mathcal{L}^{2}\).
Essentially, this Markov chain ensures that each AP \(l\in\mathcal{L}\) can only acquire a perfect or degraded version of the local estimate \(\hat{\mathbb{H}}_{j}\) of \(\mathbb{H}_{j}\) available at another AP \(j\neq l\). We map the above assumptions to information constraints \(\mathcal{T}_{1},\ldots,\mathcal{T}_{K}\) in Problem (8) by letting the information subfield \(\Sigma_{l}\) of each AP \(l\in\mathcal{L}\), defining the subspace \(\mathcal{H}_{l}^{N}\) in (4), be the sub-\(\sigma\)-algebra generated by its available CSI \(S_{l}\) on \(\Omega\).

### _Centralized precoding with per-AP power constraints_

A similar expression to (24) covering user-centric network clustering and channel estimation errors can be easily obtained, provided that imperfect channel estimates are perfectly shared within each cluster of APs. This corresponds to a variation of the known centralized MMSE precoding solution in [8, 20].

**Proposition 6**.: _For given \(k\in\mathcal{K}\), \(\mathbf{p}\in\mathbb{R}_{++}^{K}\), and \(\mathbf{\sigma}\in\mathbb{R}_{++}^{L}\), and under the model in Section VI-B with \((\forall l\in\mathcal{L})\)\(S_{l}=(\hat{\mathbb{H}}_{1},\ldots,\hat{\mathbb{H}}_{L})\) (centralized CSI), the unique solution to Problem (23) is given by_ \[\mathsf{v}_{k}=\left(\mathbf{C}_{k}\hat{\mathbb{H}}\mathbf{P}\hat{\mathbb{H}}^{\mathsf{H}}\mathbf{C}_{k}+\mathbf{C}_{k}\mathbf{\Psi}\mathbf{C}_{k}+\mathbf{\Sigma}\right)^{-1}\mathbf{C}_{k}\hat{\mathbb{H}}\mathbf{P}^{\frac{1}{2}}\mathbf{e}_{k}, \tag{25}\] _where \(\hat{\mathbb{H}}^{\mathsf{H}}:=[\hat{\mathbb{H}}_{1}^{\mathsf{H}},\ldots,\hat{\mathbb{H}}_{L}^{\mathsf{H}}]\), \(\mathbf{\Sigma}:=\mathrm{diag}(\sigma_{1}\mathbf{I}_{N},\ldots,\sigma_{L}\mathbf{I}_{N})\), \(\mathbf{\Psi}:=\sum_{k=1}^{K}p_{k}\mathrm{diag}(\mathbf{\Psi}_{1,k},\ldots,\mathbf{\Psi}_{L,k})\), and \(\mathbf{C}_{k}:=\mathrm{diag}(\mathbf{C}_{1,k},\ldots,\mathbf{C}_{L,k})\) is a block-diagonal matrix satisfying_ \[(\forall l\in\mathcal{L})\ \mathbf{C}_{l,k}=\begin{cases}\mathbf{I}_{N}&\text{if }l\in\mathcal{L}_{k},\\ \mathbf{0}_{N\times N}&\text{otherwise}.\end{cases}\]

Proof.: (Sketch) We equivalently model user-centric network clustering by replacing \(\mathsf{v}_{k}\) with \(\mathbf{C}_{k}\mathsf{v}_{k}\) as in [8], instead of modifying the sets \(\mathcal{T}_{k}\) in (4) as described in Section III-B. Then, since all APs share the same CSI \(\hat{\mathbb{H}}\), optimal precoders can be obtained as solutions to disjoint, unconstrained, and finite dimensional conditional MMSE problems, one for each CSI realization: \[\mathsf{v}_{k}=\arg\min_{\mathbf{v}_{k}\in\mathbb{C}^{NL}}\mathsf{E}\left[\|\mathbf{P}^{\frac{1}{2}}\mathbb{H}^{\mathsf{H}}\mathbf{C}_{k}\mathbf{v}_{k}-\mathbf{e}_{k}\|^{2}+\|\mathbf{C}_{k}\mathbf{v}_{k}\|_{\mathbf{\sigma}}^{2}\Big{|}\hat{\mathbb{H}}\right]. \tag{26}\] The rest of the proof follows by evaluating the conditional expectations using the CSI error model, and by standard results on unconstrained minimization of quadratic forms.

Although derived by assuming full CSI sharing, we observe that the computation of the optimal precoder (25) for UE \(k\in\mathcal{K}\) only requires knowledge of \((\hat{\mathbb{H}}_{l})_{l\in\mathcal{L}_{k}}\), i.e., only the channel estimates of the APs serving UE \(k\). Furthermore, similarly to the discussion in [20], the computation of the inverse in (25) only involves the inversion of a submatrix of size \(N|\mathcal{L}_{k}|\).
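The following sketch implements the conditional MMSE solution of Problem (26) for one CSI realization, in the form given by (25); the cluster selection matrix \(\mathbf{C}_{k}\) is represented by a 0/1 diagonal mask, and the function name and array layouts are our own illustrative choices.

```python
import numpy as np

def centralized_mmse_combiner(H_hat, Psi_blocks, p, sigma, cluster_k, k, N):
    """Combiner for UE k from the conditional MMSE problem (26).

    H_hat: (N*L, K) stacked channel estimates, Psi_blocks[l][j]: (N, N) error
    covariance of AP l / UE j, p: (K,) powers, sigma: (L,) regularization,
    cluster_k: list of APs serving UE k (the set L_k).
    """
    NL, K = H_hat.shape
    L = NL // N
    mask = np.zeros(NL)
    for l in cluster_k:
        mask[l * N:(l + 1) * N] = 1.0
    C = np.diag(mask)                                     # block-diagonal C_k
    P = np.diag(p)
    Sigma = np.diag(np.repeat(sigma, N))
    # Psi = sum_j p_j diag(Psi_{1,j}, ..., Psi_{L,j})
    Psi = np.zeros((NL, NL), dtype=complex)
    for l in range(L):
        for j in range(K):
            Psi[l*N:(l+1)*N, l*N:(l+1)*N] += p[j] * Psi_blocks[l][j]
    A = C @ (H_hat @ P @ H_hat.conj().T + Psi) @ C + Sigma
    rhs = C @ H_hat @ np.sqrt(P)                          # C_k H_hat P^{1/2}
    return np.linalg.solve(A, rhs[:, k])                  # ... times e_k
```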
**Remark 5**.: _The special case of (25) with \(\mathbf{\sigma}=\mathbf{1}\) was already proposed in [8, 20] as a good heuristic under a sum power constraint. In particular, it was motivated by first observing that (25) maximizes a coherent uplink rate bound, different than (5), and then by invoking uplink-downlink duality between (5) and (1) under a sum power constraint. Although [8, 20] observed that (25) also solves Problem (26), the connection with the maximization of (5) given by Proposition 5, and hence the formal optimality of (25) in terms of downlink rates in (1), was not reported._ ### _Distributed precoding with per-AP power constraints_ When the APs have different CSI, it is not possible to decompose the problem into disjoint conditional MMSE problems as in the proof of (25), and more advanced methods must be used [23]. In this section we illustrate the extension of some key results in [23] to the case of per-AP power constraints. We start with the following general result: **Proposition 7**.: _For given \(k\in\mathcal{K}\), \(\mathbf{p}\in\mathbb{R}^{K}_{++}\), and \(\mathbf{\sigma}\in\mathbb{R}^{L}_{++}\), and under the model in Section VI-B, the unique solution to Problem (23) is also the unique \(\upnu_{k}\in\mathcal{T}_{k}\) satisfying \((\forall l\in\mathcal{L}_{k})\)_ \[\upnu_{l,k}=\mathbb{V}_{l}\left(\mathbf{e}_{k}-\sum_{j\in\mathcal{L}_{k}\backslash l }\mathbf{P}^{\frac{1}{2}}\mathbb{E}\left[\hat{\mathbb{H}}_{j}^{\mathsf{H}}\upnu_{ j,k}\Big{|}S_{l}\right]\right)\quad\mathrm{a.s.}, \tag{27}\] _where \(\mathbb{V}_{l}:=\left(\hat{\mathbb{H}}_{l}\mathbf{P}\hat{\mathbb{H}}_{l}^{\mathsf{ H}}+\sum_{k\in\mathcal{K}}p_{k}\mathbf{\Psi}_{l,k}+\sigma_{l}\mathbf{I}_{N}\right)^{-1} \hat{\mathbb{H}}_{l}\mathbf{P}^{\frac{1}{2}}\)._ Proof.: The proof follows readily by replacing \(\mathbf{\sigma}=\mathbf{1}\) with an arbitrary \(\mathbf{\sigma}\in\mathbb{R}^{L}_{++}\) in the proofs of [23, Lemma 2] and [24, Lemma 1]. Informally, (27) is obtained by minimizing the objective in (23) with respect to a subvector \(\upnu_{l,k}\), by conditioning on \(S_{l}\), and by fixing the other subvectors \(\upnu_{j,k}\) for \(j\neq l\). This readily gives a set of necessary optimality conditions. The key step of the proof shows that these conditions are also sufficient. Proposition 7 shows that optimal distributed precoding is composed by a local MMSE precoding stage, followed by a corrective stage taking into account the possibly unknown effect of the other APs based on the available CSI. From an optimization point of view, Proposition 7 states that the solution to (23) is the unique solution to an infinite dimensional linear feasibility problem, that can be solved in closed form for many nontrivial yet relevant setups, or via efficient approximation techniques when the expectations in (27) cannot be easily evaluated. In particular, we remark that Proposition 7 applies to fairly general CSI sharing patterns (see Remark 1 and the discussions in [23, 24]). Due to space limitations, in the remainder of this work we focus on the relatively simple case of local precoding with user-centric network clustering, and leave the study of more complex setups for future work. 
**Proposition 8**.: _For given \(k\in\mathcal{K}\), \(\mathbf{p}\in\mathbb{R}^{K}_{++}\), and \(\mathbf{\sigma}\in\mathbb{R}^{L}_{++}\), and under the model in Section VI-B with \((\forall l\in\mathcal{L})\)\(S_{l}=\hat{\mathbb{H}}_{l}\) (local CSI), the unique solution to Problem (23) is given by_ \[(\forall l\in\mathcal{L})\ \upnu_{l,k}=\mathbb{V}_{l}\mathbf{c}_{l,k}, \tag{28}\] _where \(\mathbb{V}_{l}\) is a local MMSE stage as in Proposition 7, and \(\mathbf{c}_{l,k}\in\mathbb{C}^{K}\) is a statistical precoding stage given by the unique solution to the linear system of equations_ \[\begin{cases}\mathbf{c}_{l,k}+\sum_{j\in\mathcal{L}_{k}\backslash l}\mathbf{\Pi}_{j}\mathbf{c}_{j,k}=\mathbf{e}_{k}&\forall l\in\mathcal{L}_{k},\\ \mathbf{c}_{l,k}=\mathbf{0}_{K\times 1}&\text{otherwise,}\end{cases}\] _where \((\forall l\in\mathcal{L})\)\(\mathbf{\Pi}_{l}:=\mathbb{E}\left[\mathbf{P}^{\frac{1}{2}}\hat{\mathbb{H}}_{l}^{\mathsf{H}}\mathbb{V}_{l}\right]\)._

Proof.: The proof follows by verifying that (28) satisfies the optimality conditions (27). The details are similar to the full data sharing and sum power constraint case in [23, Theorem 4], hence they are omitted.

The role of the statistical precoding stage is to optimally enhance the local MMSE precoding decisions by exploiting known statistical features of the channel and CSI of the other APs. Interestingly, we observe that the matrix \(\mathbf{\Pi}_{l}\) for \(l\in\mathcal{L}\) takes the form of a \(K\times K\) covariance matrix that can be locally estimated at the \(l\)th AP using local CSI only, and then shared on a long-term basis for the computation of the statistical precoding stages. As observed in [23] for the simpler case of a sum power constraint, (28) may provide significant performance gain over the standard local MMSE solution (where, for each \(l\in\mathcal{L}\) and \(k\in\mathcal{K}\), \(\mathbf{c}_{l,k}\) is replaced by \(c_{l,k}\mathbf{e}_{k}\) for a single power scaling coefficient \(c_{l,k}\in\mathbb{R}_{+}\) [20]) for non-zero mean channel models and/or in the presence of pilot contamination.

## VII Numerical examples

In this section we illustrate some applications of our results by simulating the performance of optimal joint precoding in a simple example of a user-centric cell-free massive MIMO network under either centralized or local CSI. Note that due to the theoretical nature and space limitations of this paper, the numerical results that follow are for illustrative purposes only and should by no means be understood as an exhaustive evaluation of the performance of cell-free networks.

### _Simulation setup_

We consider the downlink of the network depicted in Figure 1, where \(K=16\) UEs are independently and uniformly distributed within a squared service area of size \(1\times 1\) km\({}^{2}\), and served by \(L=16\) regularly spaced APs with \(N=4\) antennas each. For all \((l,k)\in\mathcal{L}\times\mathcal{K}\), we assume for simplicity zero mean uncorrelated channels, i.e., \(\mathbf{\mu}_{l,k}=\mathbf{0}\) and \(\mathbf{K}_{l,k}=\kappa_{l,k}\mathbf{I}_{N}\), where \(\kappa_{l,k}\) denotes the channel gain between AP \(l\) and UE \(k\).
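The statistical precoding stage of Proposition 8 only requires solving a \(K|\mathcal{L}_{k}|\times K|\mathcal{L}_{k}|\) linear system built from the long-term matrices \(\mathbf{\Pi}_{l}\). A minimal sketch, assuming the \(\mathbf{\Pi}_{l}\) have already been estimated (the function name and data layout are our own):

```python
import numpy as np

def team_mmse_statistical_stage(Pi, cluster_k, k, K):
    """Solve the coupling system of Proposition 8 for the vectors c_{l,k}.

    Pi: mapping from AP index l to the K x K matrix Pi_l = E[P^{1/2} H_hat_l^H V_l]
        (assumed estimated offline and shared on a long-term basis),
    cluster_k: ordered list of APs serving UE k.
    Returns {l: c_{l,k}} for l in cluster_k (APs outside the cluster use 0).
    """
    Q = len(cluster_k)
    A = np.zeros((Q * K, Q * K), dtype=complex)
    b = np.zeros(Q * K, dtype=complex)
    e_k = np.eye(K)[:, k]
    for a, l in enumerate(cluster_k):
        A[a*K:(a+1)*K, a*K:(a+1)*K] = np.eye(K)           # c_{l,k} term
        for bidx, j in enumerate(cluster_k):
            if j != l:
                A[a*K:(a+1)*K, bidx*K:(bidx+1)*K] = Pi[j] # + sum_{j != l} Pi_j c_{j,k}
        b[a*K:(a+1)*K] = e_k
    c = np.linalg.solve(A, b)
    return {l: c[a*K:(a+1)*K] for a, l in enumerate(cluster_k)}

# The local precoders then follow as v_{l,k} = V_l @ c[l] for l in cluster_k.
```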
We adopt the following 3GPP-like channel gain model, suitable for an urban microcell scenario at \(3.7\) GHz carrier frequency [38, Table 7.4.1-1] \[\kappa_{l,k}=-35.3\log_{10}\left(\Delta_{l,k}/1\ \mathrm{m}\right)-34.5+Z_{l,k}-P_{\mathrm{noise}}\quad\text{[dB]},\] where \(\Delta_{l,k}\) is the distance between AP \(l\) and UE \(k\) including a difference in height of \(10\) m, and \(Z_{l,k}\sim\mathcal{N}(0,\rho^{2})\) [dB] are shadow fading terms with deviation \(\rho=7.82\). The shadow fading is correlated as \(\mathbb{E}[Z_{l,k}Z_{j,i}]=\rho^{2}2^{-\frac{\delta_{k,i}}{13\ \text{m}}}\) for all \(l=j\) and zero otherwise, where \(\delta_{k,i}\) is the distance between UE \(k\) and UE \(i\). The noise power is \(P_{\mathrm{noise}}=-174+10\log_{10}(B)+F\) [dBm], where \(B=100\) MHz is the bandwidth, and \(F=7\) dB is the noise figure. The per-AP power constraints are set to \((\forall l\in\mathcal{L})\)\(P_{l}=30\) dBm. We assume that each UE \(k\in\mathcal{K}\) is served by its \(Q=4\) strongest APs only, i.e., by the subset of APs indexed by \(\mathcal{L}_{k}\subseteq\mathcal{L}\), where each set \(\mathcal{L}_{k}\) is formed by ordering \(\mathcal{L}\) w.r.t. decreasing \(\kappa_{l,k}\) and by keeping only the first \(Q\) elements. Finally, we assume that the APs estimate the small-scale fading channel coefficients of the served UEs only. In particular, we consider the following simple model where the channel coefficients are either perfectly known or completely unknown: \[(\forall k\in\mathcal{K})\;\hat{\mathrm{h}}_{l,k}:=\begin{cases}\mathds{h}_{l,k}&\text{if }l\in\mathcal{L}_{k},\\ \mathds{E}[\mathds{h}_{l,k}]&\text{otherwise}.\end{cases}\]

### _Probability of feasibility_

Figure 2 plots the probability that the feasible set of Problem (8) is nonempty, by letting \((\forall k\in\mathcal{K})\;\gamma_{k}=\gamma\) for different choices of \(\gamma\), corresponding to minimum rate requirements within 1-4 b/s/Hz for all UEs. We consider both centralized precoding as in Proposition 6 and local precoding as in Proposition 8. As a baseline, we also consider the corresponding optimal precoders subject to a sum power constraint \(P_{\text{sum}}=\sum_{l=1}^{L}P_{l}\). The probability of feasibility is approximated by solving \(100\) instances of Problem (8) for \(100\) independent UE drops. Each instance is solved using the numerical methods given by Lemma 3 and Lemma 4, combined as nested loops. In particular, building on Proposition 4, the subgradients \(\mathbf{g}(\mathbf{\lambda})\) in Lemma 3 are computed by solving the downlink problem \(\inf_{\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathbb{E}[\|\mathds{t}_{k}\|_{\mathbf{1}+\mathbf{\lambda}}^{2}]\) through its dual uplink problem (15) with \(\mathbf{\sigma}=\mathbf{1}+\mathbf{\lambda}\), using the fixed-point algorithm in Lemma 4. For each UE drop, we draw a sample set of \(100\) independent CSI realizations for approximating the expectations via empirical averages. Although developed for producing a solution to Problem (8) under the assumption of strict feasibility, we use the same algorithm for testing feasibility. Note that, for the considered setup, the event of having a feasible set with empty interior has zero probability mass, hence we test feasibility by detecting the events of strict feasibility and infeasibility only. Additional details on how the algorithm performs this test are given in Section VII-C.
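The simulation geometry and normalized channel gains can be reproduced along the following lines; for brevity the sketch draws the shadow fading terms independently (the correlated model is a straightforward extension), and all names are our own.

```python
import numpy as np

def channel_gains_and_clusters(ap_pos, ue_pos, Q=4, rho=7.82, B=100e6, F=7.0,
                               rng=None):
    """Channel gains kappa_{l,k} (linear scale, normalized by the noise power)
    and user-centric clusters for the setup of Section VII-A.

    ap_pos: (L, 2) and ue_pos: (K, 2) positions in meters.
    """
    rng = rng or np.random.default_rng()
    L, K = len(ap_pos), len(ue_pos)
    height = 10.0                                          # AP-UE height difference [m]
    p_noise = -174.0 + 10.0 * np.log10(B) + F              # noise power [dBm]
    d2 = ((ap_pos[:, None, :] - ue_pos[None, :, :]) ** 2).sum(-1)
    dist = np.sqrt(d2 + height ** 2)                       # 3D distance [m]
    shadow = rho * rng.standard_normal((L, K))             # i.i.d. Z_{l,k} [dB]
    kappa_db = -35.3 * np.log10(dist) - 34.5 + shadow - p_noise
    kappa = 10.0 ** (kappa_db / 10.0)
    clusters = [np.argsort(-kappa[:, k])[:Q].tolist() for k in range(K)]
    return kappa, clusters
```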
Fig. 1: Pictorial representation of the simulated setup: \(K=16\) UEs uniformly distributed within a squared service area of size \(1\times 1\) km\({}^{2}\), and \(L=16\) regularly spaced APs with \(N=4\) antennas each. Each UE is jointly served by a cluster of \(Q=4\) APs offering the strongest channel gains.

Fig. 2: Probability of feasibility of different minimum rate requirements, under different information constraints and a per-AP power constraint \(P=30\) dBm. Remarkably, our feasibility analysis covers optimal joint precoding under user-centric network clustering and either local CSI (local precoding) or centralized CSI (centralized precoding). The performance of the corresponding solutions under a sum power constraint \(\sum_{l=1}^{L}P_{l}\) is also evaluated. As expected, due to the more restrictive information constraint, local precoding offers worse performance than centralized precoding. Similarly, a per-AP power constraint offers worse performance than a sum power constraint. Nevertheless, we observe that local precoding with a per-AP power constraint still robustly supports fairly high rates around \(2.5\) b/s/Hz for all UEs.

### _Comments on numerical implementation_

In this section we give some additional details on the numerical implementation of the algorithm producing Figure 2. We first test feasibility under a sum power constraint, i.e., we test if \(\inf_{\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathbb{E}[\|\mathds{t}_{k}\|^{2}]\leq P_{\text{sum}}\) holds. To this end, we initialize the outer loop with \(\mathbf{\lambda}^{(1)}=\mathbf{0}\), and the inner loop with \((\forall k\in\mathcal{K})\;p_{k}^{(1)}=\gamma_{k}/\sum_{l\in\mathcal{L}}\kappa_{l,k}\). This choice ensures that \(\mathbf{p}^{(1)}\leq T_{\mathbf{1}}(\mathbf{p}^{(1)})\) holds, and hence, by known properties of the considered fixed-point algorithm [39, Fact 4], \((\sum_{k\in\mathcal{K}}p_{k}^{(i)})_{i\in\mathsf{N}}\) is a monotonically increasing sequence converging to \(\inf_{\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathbb{E}[\|\mathds{t}_{k}\|^{2}]\) if \(\mathcal{T}_{\gamma}\neq\emptyset\), or diverging to \(+\infty\) if \(\mathcal{T}_{\gamma}=\emptyset\) (infeasible SINR requirements). The inner loop is terminated at some step \(i\in\mathsf{N}\) if no significant progress is observed, in which case we declare the problem feasible under a sum power constraint, or if the early stop condition \(\sum_{k\in\mathcal{K}}p_{k}^{(i)}>P_{\text{sum}}\) is met, in which case we declare infeasibility under a sum power constraint. If this initial feasibility test is passed, we continue with the outer loop, using the step size rule \((\forall i\in\mathsf{N})\;\alpha^{(i)}=\alpha/\sqrt{i}\), \(\alpha=10\). Since the condition \(\mathcal{T}_{\gamma}\neq\emptyset\) was already detected, the inner loops are guaranteed to compute the partial dual function \(\tilde{d}(\mathbf{\lambda})=\inf_{\mathcal{T}_{\gamma}}\sum_{k=1}^{K}\mathbb{E}[\|\mathds{t}_{k}\|_{\mathbf{1}+\mathbf{\lambda}}^{2}]-\sum_{l=1}^{L}\lambda_{l}P_{l}\) and its subgradient \(\mathbf{g}(\mathbf{\lambda})\) for all \(\mathbf{\lambda}\in\mathbb{R}_{+}^{L}\), up to some numerical tolerance. The outer loop is terminated at some step \(i\in\mathsf{N}\) if no significant progress is observed, or if the early stop condition \(\tilde{d}(\mathbf{\lambda}^{(i)})>P_{\text{sum}}\) is met. In the former case, and if the obtained solution is feasible up to some numerical tolerance, we declare feasibility. In all other cases, we declare infeasibility.
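The overall decision logic described in this section (and continued below) can be summarized by the following skeleton, where `per_ap_powers` is a hypothetical oracle wrapping the inner fixed-point loop of Lemma 4 together with the uplink-to-downlink mapping of Proposition 4, and returning `None` whenever the early stop threshold \(P_{\text{sum}}\) is exceeded:

```python
import numpy as np

def feasibility_test(per_ap_powers, P, alpha=10.0, max_outer=100, tol=1e-3):
    """Skeleton of the nested-loop feasibility test of Section VII-C.

    per_ap_powers(lam) -> (L,) per-AP powers sum_k E[||t_{l,k}||^2] of a
    minimizer of (14) for weights 1 + lam, or None if the inner loop exceeds
    P_sum (early stop). This oracle is assumed to be provided externally.
    """
    P_sum = np.sum(P)
    lam = np.zeros(len(P))
    if per_ap_powers(lam) is None:                 # sum power feasibility pre-test
        return False
    for i in range(1, max_outer + 1):
        ap = per_ap_powers(lam)
        if ap is None:                             # d~(lam) > P_sum: declare infeasible
            return False
        g = ap - P                                 # subgradient at lam
        if np.all(g <= tol):                       # per-AP constraints met: feasible
            return True
        lam = np.maximum(lam + (alpha / np.sqrt(i)) * g / np.linalg.norm(g), 0.0)
    return False                                   # no conclusion within the budget
```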
In fact, if feasible, the obtained precoders are also a solution to Problem (8), i.e., a minimum sum power solution. In practice, when the interest is to test only feasibility, we terminate the algorithm as soon as a feasible solution is detected, i.e., if the additional early stop condition \(\mathbf{g}(\mathbf{\lambda}^{(i)})\leq\mathbf{0}\) is met at some step \(i\in\mathsf{N}\). Mostly because of the heuristic step size constant \(\alpha\), we remark that the algorithm might have slow convergence for some UE drops (see Figure 3 for an illustration of the convergence behavior for a particular UE drop). Hence, we also introduce a maximum number of iterations and a maximum number of algorithm restarts with different constants \(\alpha\). If a conclusion is not reached within these thresholds, we remove the corresponding UE drop.

## VIII Conclusion and future directions

This study marks a step forward in the process of extending known analytic tools for deterministic channels and instantaneous rate expressions to fading channels and ergodic rate expressions. Such extensions are often advocated in the cellular and cell-free massive MIMO literature because they allow for a more refined analysis of modern networks covering practical aspects such as imperfect CSI and system optimization based on long-term channel statistics. Just as coding over fading realizations is an efficient way to manage fading dips, system optimization based on the long-term perspective typically leads, in our opinion, to more efficient solutions. As a first main result, this study advances the current understanding of distributed cell-free networks by showing that the recently introduced team MMSE precoding method provides joint precoders that are optimal not only under a sum power constraint, as stated in [23], but also under a per-AP power constraint. For illustration purposes, this study derives the structure of optimal local precoding under a per-AP power constraint. Although not explicitly covered in this study, an optimal structure can also be derived for other examples of distributed precoding such as the ones based on sequential information sharing over radio stripes [23, 24]. An interesting future direction is thus to revisit the available studies on performance evaluation of distributed cell-free downlink implementations, in light of the above results. As a second main result, this study provides an alternative tool to [21] for designing centralized cell-free networks subject to per-AP power constraints, by demonstrating optimality of centralized MMSE precoding with parameters tuned only once for many channel realizations, instead of for each channel realization as in the studies based on [21]. This is a consequence of considering the hardening bound as the figure of merit. Although the hardening bound generally gives more pessimistic capacity estimates than alternative ergodic rate bounds based on coherent or semi-coherent decoding [40], in our opinion this drawback is counterbalanced by a significantly enhanced tractability for precoding optimization. A key aspect of our results is the compact parametrization of optimal joint precoding in terms of a set of coefficients \((\mathbf{p},\mathbf{\sigma})\), interpreted as virtual uplink transmit and noise powers. One limitation of this study is that the presented algorithm for tuning these parameters can be quite slow, and it is designed to solve SINR feasibility problems only (more specifically, Problem (8)).
Thus, another interesting future direction is the development of efficient algorithms, perhaps based on heuristics such as neural networks or other statistical learning tools, for tuning \((\mathbf{p},\mathbf{\sigma})\) in network utility maximization problems such as sum rate or minimum rate maximization. Another subtle limitation of this work lies in the assumption of strict feasibility for the constraints of Problem (8), which is used for ensuring Slater's condition to hold in Proposition 1. Using an information-theoretical perspective, this means that our results do not cover optimal joint precoding for rate tuples lying on the boundary of the achievable rate region, but only on its interior. However, from an engineering perspective, this limitation is practically irrelevant, since rate tuples arbitrarily close to the boundary are still covered. Nevertheless, resolving this limitation is an interesting yet involved future direction for theoretical analysis.

Fig. 3: Example of convergence behavior of the proposed algorithm for different step size constants \(\alpha\). We consider an arbitrary UE drop, local precoding, and a minimum rate requirement of \(3.5\) b/s/Hz. For each iteration \(i\in\mathsf{N}\) of the outer loop, we plot (a) the dual objective \(\tilde{d}(\boldsymbol{\lambda}^{(i)})\), and (b) the maximum transmit power over all APs \(\max_{l\in\mathcal{L}}\left(g_{l}(\boldsymbol{\lambda}^{(i)})+P_{l}\right)\). The non-monotonic convergence is a common feature of projected subgradient methods. We observe that, despite a seemingly slower convergence in the first iterations, the more aggressive step size choice \(\alpha=17\) produces a feasible solution satisfying the per-AP power constraint \((\forall l\in\mathcal{L})\) \(P_{l}=30\) dBm after \(20\) iterations only. This is enough to declare feasibility, since, for each outer iteration, the inner loops ensure that the SINR constraints are always satisfied.

### _Convex reformulation_

Consider the reformulation of Problem (8) obtained by replacing each (rearranged) SINR constraint in (10) with \[(\forall k\in\mathcal{K})\ |c_{k}(\mathbb{T})|-\nu_{k}\Re\left(b_{k}(\mathbb{T})\right)\leq 0. \tag{29}\] More precisely, consider \[\begin{array}{ll}\underset{\mathbb{T}\in\mathcal{T}}{\text{minimize}}&\sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}\|^{2}]\\ \text{subject to}&(\forall k\in\mathcal{K})\ |c_{k}(\mathbb{T})|-\nu_{k}\Re\left(b_{k}(\mathbb{T})\right)\leq 0\\ &(\forall l\in\mathcal{L})\ \sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{l,k}\|^{2}]\leq P_{l}\end{array} \tag{30}\] and its Lagrangian dual problem \[\underset{(\boldsymbol{\lambda},\boldsymbol{\mu})\in\mathbb{R}_{+}^{L}\times\mathbb{R}_{+}^{K}}{\text{maximize}}\ d^{\prime}(\boldsymbol{\lambda},\boldsymbol{\mu}), \tag{31}\] where we define the dual function \(d^{\prime}(\boldsymbol{\lambda},\boldsymbol{\mu}):=\inf_{\mathbb{T}\in\mathcal{T}}\varLambda^{\prime}(\mathbb{T},\boldsymbol{\lambda},\boldsymbol{\mu})\) and the Lagrangian \(\varLambda^{\prime}(\mathbb{T},\boldsymbol{\lambda},\boldsymbol{\mu}):=\sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}\|_{\boldsymbol{1}+\boldsymbol{\lambda}}^{2}]-\sum_{l=1}^{L}\lambda_{l}P_{l}+\sum_{k=1}^{K}\mu_{k}(|c_{k}(\mathbb{T})|-\nu_{k}\Re(b_{k}(\mathbb{T}))).\) The main advantage of the above reformulation is that it gives a convex optimization problem, as shown next.
**Lemma 5**.: _The objective and all constraints of Problem (30) are proper convex functions._

Proof.: Consider the norm \(\|\cdot\|_{*}:=\sqrt{\langle\cdot,\cdot\rangle}\) on \(\mathcal{H}^{K+1}\) induced by the inner product \((\forall\mathsf{x},\mathsf{y}\in\mathcal{H}^{K+1})\ \langle\mathsf{x},\mathsf{y}\rangle:=\Re\{\mathsf{E}[\mathsf{y}^{\mathsf{H}}\mathsf{x}]\}\). We note that \(|c_{k}(\mathbb{T})|=\sqrt{\sum_{j=1}^{K}\mathsf{E}[|\mathfrak{h}_{k}^{\mathsf{H}}\mathfrak{t}_{j}|^{2}]+1}\) is given by the composition of \(\|\cdot\|_{*}\) with an affine map \(\mathcal{T}\to\mathcal{H}^{K+1}:\mathbb{T}\mapsto[\mathfrak{h}_{k}^{\mathsf{H}}\mathfrak{t}_{1},\ldots,\mathfrak{h}_{k}^{\mathsf{H}}\mathfrak{t}_{K},1]^{\mathsf{T}}\), hence it is convex. Furthermore, since \(\Re\left(b_{k}(\mathbb{T})\right)=\Re\left(\mathsf{E}[\mathfrak{h}_{k}^{\mathsf{H}}\mathfrak{t}_{k}]\right)\) is linear, convexity of the reformulated SINR constraints readily follows. We omit the proof for the convexity of the objective and power constraints, since it is trivial. Finally, repeated applications of the Cauchy-Schwarz inequality prove that all the aforementioned functions are also proper functions.

The next simple lemma can be used to relate Problem (8) to Problem (30), following a similar idea in [21, 30].

**Lemma 6**.: _Consider an arbitrary \(\mathbb{T}\in\mathcal{T}\). Then, there exists \(\mathbb{T}^{\prime}\in\mathcal{T}\) such that \((\forall k\in\mathcal{K})(\forall l\in\mathcal{L})\)_ \[\begin{array}{l}|b_{k}(\mathbb{T})|=\Re\left(b_{k}(\mathbb{T}^{\prime})\right),\\ |c_{k}(\mathbb{T})|=|c_{k}(\mathbb{T}^{\prime})|,\\ \mathsf{E}[\|\mathfrak{t}_{l,k}\|^{2}]=\mathsf{E}[\|\mathfrak{t}_{l,k}^{\prime}\|^{2}].\end{array} \tag{32}\]

Proof.: Observe that, \((\forall k\in\mathcal{K})(\forall l\in\mathcal{L})\), the terms \(|b_{k}(\mathbb{T})|\), \(|c_{k}(\mathbb{T})|\), and \(\mathsf{E}[\|\mathfrak{t}_{l,k}\|^{2}]\) are invariant to columnwise phase rotations of the argument, i.e., they do not vary if we replace \(\mathbb{T}\) with \(\mathbb{T}^{\prime}:=[\mathfrak{t}_{1}e^{j\theta_{1}},\ldots,\mathfrak{t}_{K}e^{j\theta_{K}}]\) for any \((\theta_{1},\ldots,\theta_{K})\in[0,2\pi]^{K}\). In particular, we can always pick \((\theta_{1},\ldots,\theta_{K})\in[0,2\pi]^{K}\) such that \(|b_{k}(\mathbb{T})|=\Re(b_{k}(\mathbb{T}^{\prime}))\) holds.

**Lemma 7**.: _Let \(p^{\star}\) and \(r^{\star}\) be the optimum of Problem (8) and Problem (30), respectively. The two problems have the same optimum, i.e., \(p^{\star}=r^{\star}\)._

Proof.: The simple property \((\forall x\in\mathbb{C})\ \Re(x)\leq|x|\) shows that \((\forall k\in\mathcal{K})(\forall\mathbb{T}\in\mathcal{T})\) \[|c_{k}(\mathbb{T})|-\nu_{k}|b_{k}(\mathbb{T})|\leq|c_{k}(\mathbb{T})|-\nu_{k}\Re\left(b_{k}(\mathbb{T})\right), \tag{33}\] from which the inequality \(r^{\star}\geq p^{\star}\) readily follows (recall also Lemma 1). Then, consider a minimizing sequence for Problem (8), i.e., a (not necessarily convergent) sequence \((\mathbb{T}^{(n)})_{n\in\mathsf{N}}\) such that \((\forall n\in\mathsf{N})\ \mathbb{T}^{(n)}\) satisfies all constraints of Problem (8), and \(\lim_{n\to\infty}\sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}^{(n)}\|^{2}]=p^{\star}\) [41, Definition 1.8].
By Lemma 6, we can define another sequence \((\mathbb{T}^{(n^{\prime})})_{n\in\mathsf{N}}\) such that \((\forall n\in\mathsf{N})\ \mathbb{T}^{(n^{\prime})}\) satisfies all constraints of Problem (30), attains the same objective \(\sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}^{(n^{\prime})}\|^{2}]=\sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}^{(n)}\|^{2}]\geq r^{\star}\), and hence satisfies \(p^{\star}=\lim_{n\to\infty}\sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}^{(n)}\|^{2}]\geq r^{\star}\). Combining both inequalities \(p^{\star}\leq r^{\star}\) and \(p^{\star}\geq r^{\star}\) completes the proof.

**Lemma 8**.: _The dual functions \(d(\boldsymbol{\lambda},\boldsymbol{\mu})\) and \(d^{\prime}(\boldsymbol{\lambda},\boldsymbol{\mu})\) in Problem (11) and Problem (31), respectively, satisfy \((\forall(\boldsymbol{\lambda},\boldsymbol{\mu})\in\mathbb{R}_{+}^{L}\times\mathbb{R}_{+}^{K})\ d(\boldsymbol{\lambda},\boldsymbol{\mu})=d^{\prime}(\boldsymbol{\lambda},\boldsymbol{\mu})\)._

Proof.: Fix \((\boldsymbol{\lambda},\boldsymbol{\mu})\in\mathbb{R}_{+}^{L}\times\mathbb{R}_{+}^{K}\) and define the Lagrangian of Problem (8) \(\varLambda(\mathbb{T},\boldsymbol{\lambda},\boldsymbol{\mu}):=\) \[\sum_{k=1}^{K}\mathsf{E}[\|\mathfrak{t}_{k}\|_{\boldsymbol{1}+\boldsymbol{\lambda}}^{2}]-\sum_{l=1}^{L}\lambda_{l}P_{l}+\sum_{k=1}^{K}\mu_{k}(|c_{k}(\mathbb{T})|-\nu_{k}|b_{k}(\mathbb{T})|),\] which satisfies \(d(\boldsymbol{\lambda},\boldsymbol{\mu})=\inf_{\mathbb{T}\in\mathcal{T}}\varLambda(\mathbb{T},\boldsymbol{\lambda},\boldsymbol{\mu})\). The property \((\forall x\in\mathbb{C})\ \Re(x)\leq|x|\) readily gives \(\varLambda(\mathbb{T},\boldsymbol{\lambda},\boldsymbol{\mu})\leq\varLambda^{\prime}(\mathbb{T},\boldsymbol{\lambda},\boldsymbol{\mu})\), and hence \(d(\boldsymbol{\lambda},\boldsymbol{\mu})\leq d^{\prime}(\boldsymbol{\lambda},\boldsymbol{\mu})\). Then, consider a minimizing sequence \((\mathbb{T}^{(n)})_{n\in\mathsf{N}}\) such that \((\forall n\in\mathsf{N})\ \mathbb{T}^{(n)}\in\mathcal{T}\) and \(\lim_{n\to\infty}\varLambda(\mathbb{T}^{(n)},\boldsymbol{\lambda},\boldsymbol{\mu})=d(\boldsymbol{\lambda},\boldsymbol{\mu})\). By Lemma 6, we can define another sequence \((\mathbb{T}^{(n^{\prime})})_{n\in\mathsf{N}}\) such that \((\forall n\in\mathsf{N})\ \mathbb{T}^{(n^{\prime})}\in\mathcal{T}\) and \(\varLambda^{\prime}(\mathbb{T}^{(n^{\prime})},\boldsymbol{\lambda},\boldsymbol{\mu})=\varLambda(\mathbb{T}^{(n)},\boldsymbol{\lambda},\boldsymbol{\mu})\geq d^{\prime}(\boldsymbol{\lambda},\boldsymbol{\mu})\), hence satisfying \(d(\boldsymbol{\lambda},\boldsymbol{\mu})=\lim_{n\to\infty}\varLambda(\mathbb{T}^{(n)},\boldsymbol{\lambda},\boldsymbol{\mu})\geq d^{\prime}(\boldsymbol{\lambda},\boldsymbol{\mu})\). Combining both inequalities completes the proof.

The proof of Proposition 2 is readily given by combining Lemma 7, Lemma 8, Lemma 9, and by noticing that the unique solution \(\mathbb{T}^{\prime}\) to Problem (30) is also a solution to Problem (8) (note: the converse does not hold in general).
### _Recovering a primal solution from a dual solution_

Starting from the strong duality property in Lemma 2, we obtain that a primal-dual pair \((\mathbb{T}^{\star},\boldsymbol{\lambda}^{\star})\) jointly solving Problem (8) and Problem (12) must satisfy \[p^{\star}=\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\boldsymbol{\lambda}^{\star}}^{2}]-\sum_{l=1}^{L}\lambda_{l}^{\star}P_{l}\] \[\leq\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}^{\star}\|_{\mathbf{1}+\boldsymbol{\lambda}^{\star}}^{2}]-\sum_{l=1}^{L}\lambda_{l}^{\star}P_{l}\] \[=\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}^{\star}\|^{2}]+\sum_{l=1}^{L}\lambda_{l}^{\star}\left(\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{l,k}^{\star}\|^{2}]-P_{l}\right)\] \[\leq\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}^{\star}\|^{2}]=p^{\star},\] where the first inequality follows by the definition of infimum, and the last inequality follows since \((\mathbb{T}^{\star},\boldsymbol{\lambda}^{\star})\) satisfy the primal and dual constraints. The above chain of inequalities shows that \(\mathbb{T}^{\star}\) attains the infimum. If there were a unique \(\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}\) attaining the infimum, then it would also be the unique solution \(\mathbb{T}^{\star}\) to Problem (8). However, a similar property does not hold in general whenever the infimum is attained by multiple elements of \(\mathcal{T}_{\boldsymbol{\gamma}}\). In particular, the infimum may be attained not only by \(\mathbb{T}^{\star}\), but also by some other \(\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}\) violating the power constraints. Nevertheless, we now prove that this case can be excluded here, by using the next two lemmas. For convenience, we define the set \(\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}\subseteq\mathcal{T}\) of precoders satisfying the reformulated SINR constraints (29).

**Lemma 10**.: _For every \(\boldsymbol{\sigma}\in\mathbb{R}_{++}^{L}\), there exists a unique \(\mathbb{T}^{\prime}\in\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}\) attaining \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\boldsymbol{\sigma}}^{2}]\)._

Proof.: The proof follows by the Hilbert projection theorem (see, e.g., [42]), since the objective \(\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\boldsymbol{\sigma}}^{2}]\) is the squared norm induced by the inner product \((\forall\mathbb{T},\mathbb{V}\in\mathcal{T})\)\(\langle\mathbb{T},\mathbb{V}\rangle:=\sum_{k=1}^{K}\sum_{l=1}^{L}\sigma_{l}\Re\{\mathsf{E}[\mathsf{v}_{l,k}^{\mathsf{H}}\mathbf{t}_{l,k}]\}\) on \(\mathcal{T}\), and the constraints define a nonempty closed convex subset of \(\mathcal{T}\). Specifically, in this Hilbert space, the infimum is attained by the projection of the zero vector onto the closed convex set \(\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}\).
**Lemma 11**.: _For every \(\boldsymbol{\sigma}\in\mathbb{R}_{++}^{L}\), if \(\mathbb{T}\) attains \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\boldsymbol{\sigma}}^{2}]\), then \(\exists\mathbb{T}^{\prime}\) attaining \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\boldsymbol{\sigma}}^{2}]\) with the same power consumption._ Proof.: Following similar lines as for Lemma 7, we can show that \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\boldsymbol{\sigma}}^{2}]=\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\boldsymbol{\sigma}}^{2}]\) holds. The rest of the proof is a simple application of Lemma 6. Now, since the optimum \(\mathbb{T}^{\star}\) to the original Problem (8) attains \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\boldsymbol{\lambda}^{\star}}^{2}]\) and satisfies the power constraints, then, by Lemma 11, \(\exists\mathbb{T}^{\prime}\) attaining \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\boldsymbol{\lambda}^{\star}}^{2}]\) and satisfying the power constraints. Since, by Lemma 10, this \(\mathbb{T}^{\prime}\) must be unique, there cannot be some \(\mathbb{T}\neq\mathbb{T}^{\star}\) attaining \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\boldsymbol{\lambda}^{\star}}^{2}]\) and violating the power constraints. ### _Convergence of the projected subgradient method_ We apply [32, Algorithm 3.2.8] to the minimization of \(-\tilde{d}\) over \(\boldsymbol{\lambda}\in\mathbb{R}_{+}^{L}\). From known subgradient calculus rules, since \(-\tilde{d}\) is the supremum of a family of affine functions indexed by \(\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}\), a subgradient at \(\boldsymbol{\lambda}\in\mathbb{R}^{L}\) is given by the gradient of any of these functions attaining the supremum. This leads to the proposed algorithm. For all \(\boldsymbol{\lambda}\in\mathbb{R}_{+}^{L}\), nonemptiness of \(\arg\min_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\boldsymbol{\lambda}}^{2}]\) follows by combining: (i) \(\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\boldsymbol{\lambda}}^{2}]=\inf_{\mathbb{T}\in\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}}\sum_{k=1}^{K}\mathsf{E}[\|\mathbf{t}_{k}\|_{\mathbf{1}+\boldsymbol{\lambda}}^{2}]\), similarly to Lemma 7; (ii) Lemma 10; (iii) \(\mathcal{T}_{\boldsymbol{\gamma}}^{\prime}\subseteq\mathcal{T}_{\boldsymbol{\gamma}}\). Convergence of the best objective to the optimum \(\tilde{d}(\boldsymbol{\lambda}^{\star})\) for \(n\to\infty\) follows from [32, Lemma 3.2.1] and the proof of [32, Theorem 3.2.2], without using the Lipschitz continuity assumption. Furthermore, convergence of the corresponding argument follows since \(\tilde{d}:\mathbb{R}^{L}\to\mathbb{R}\) is concave and hence continuous.
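The overall dual scheme can be mimicked on a toy instance. The sketch below is not the paper's algorithm: the ergodic precoder is replaced by a single complex vector, the reformulated SINR constraints by one half-space constraint \(\Re(h^{H}t)\geq c\), and the per-transmitter power budgets by per-entry caps \(P_{l}\); the names `h`, `c`, and `P` are illustrative assumptions. The structure, however, follows the text: the power constraints are dualized with multipliers \(\boldsymbol{\lambda}\geq 0\), the inner problem is solved for each \(\boldsymbol{\lambda}\), the constraint violations give a subgradient, the multipliers are updated by a projected step, and a primal candidate is recovered from the last dual point.

```python
# Toy projected-subgradient ascent on a dual with per-entry power caps.
# Illustrative sketch only; not the algorithm or notation of the paper.
import numpy as np

h = np.array([1.0 + 1.0j, 0.8 - 0.5j, -0.6 + 0.9j, 0.3 + 0.2j])  # assumed vector
c, L = 1.0, h.size

def inner_solution(lam):
    """argmin of sum_l (1+lam_l)|t_l|^2 subject to Re(h^H t) >= c (closed form)."""
    sigma = 1.0 + lam
    return c * (h / sigma) / np.sum(np.abs(h) ** 2 / sigma)

# per-entry power caps chosen so that the first one binds at lambda = 0
P = np.abs(inner_solution(np.zeros(L))) ** 2 * np.array([0.5, 4.0, 4.0, 4.0])

lam = np.zeros(L)
for n in range(5000):
    t = inner_solution(lam)
    g = np.abs(t) ** 2 - P                              # subgradient of the concave dual
    lam = np.maximum(0.0, lam + g / np.sqrt(n + 1.0))   # projected ascent step

t = inner_solution(lam)                                 # primal candidate from the dual point
print("multipliers lambda   :", np.round(lam, 3))
print("per-entry power / cap:", np.round(np.abs(t) ** 2 / P, 3))
print("Re(h^H t) =", round(float(np.real(np.conj(h) @ t)), 4), " (target c =", c, ")")
```

On this instance only the binding cap ends up with a positive multiplier, and the primal point recovered from the final multipliers meets that cap approximately with equality, which is the complementary-slackness behavior that the primal-recovery argument above exploits.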
2304.03286
Semantic Information in a model of Resource Gathering Agents
We explore the application of a new theory of Semantic Information to the well-motivated problem of a resource foraging agent. Semantic information is defined as the subset of correlations, measured via the transfer entropy, between agent $A$ and environment $E$ that is necessary for the agent to maintain its viability $V$. Viability, in turn, is endogenously defined as opposed to the use of exogenous quantities like utility functions. In our model, the forager's movements are determined by its ability to measure, via a sensor, the presence of an individual unit of resource, while the viability function is its expected lifetime. Through counterfactual interventions -- scrambling the correlations between agent and environment via noising the sensor -- we demonstrate the presence of a critical value of the noise parameter, $\eta_c$, above which the forager's expected lifetime is dramatically reduced. On the other hand, for $\eta < \eta_c$ there is little-to-no effect on its ability to survive. We refer to this boundary as the semantic threshold, quantifying the subset of agent-environment correlations that the agent actually needs to maintain its desired state of staying alive. Each bit of information affects the agent's ability to persist both above and below the semantic threshold. Modeling the viability curve and its semantic threshold via forager/environment parameters, we show how the correlations are instantiated. Our work provides a useful model for studies of established agents in terms of semantic information. It also shows that such semantic thresholds may prove useful for understanding the role information plays in allowing systems to become autonomous agents.
Damian R Sowinski, Jonathan Carroll-Nellenback, Robert N Markwick, Jordi Piñero, Marcelo Gleiser, Artemy Kolchinsky, Gourab Ghoshal, Adam Frank
2023-04-06T17:59:59Z
http://arxiv.org/abs/2304.03286v2
# Semantic information in a model of resource gathering agents ###### Abstract We explore the application of a new theory of _Semantic Information_ to the well-motivated problem of a resource foraging agent. Semantic information is defined as the subset of correlations, measured via the transfer entropy, between agent \(A\) and environment \(E\) that is necessary for the agent to maintain its viability \(V\). Viability, in turn, is endogenously defined as opposed to the use of exogenous quantities like utility functions. In our model, the forager's movements are determined by its ability to measure, via a sensor, the presence of an individual unit of resource, while the viability function is its expected lifetime. Through counterfactual interventions--scrambling the correlations between agent and environment via noising the sensor--we demonstrate the presence of a critical value of the noise parameter, \(\eta_{c}\), above which the forager's expected lifetime is dramatically reduced. On the other hand, for \(\eta<\eta_{c}\) there is little-to-no effect on its ability to survive. We refer to this boundary as the _semantic threshold_, quantifying the subset of agent-environment correlations that the agent actually needs to maintain its desired state of staying alive. Each bit of information affects the agent's ability to persist both above and below the semantic threshold. Modeling the viability curve and its semantic threshold via forager/environment parameters, we show how the correlations are instantiated. Our work provides a useful model for studies of established agents in terms of semantic information. It also shows that such semantic thresholds may prove useful for understanding the role information plays in allowing systems to become autonomous agents. ## I Introduction Questions about the role of information in the physics of life extend as far back as Schrodinger's seminal 1944 work "What is Life" [1]. Four years later, Shannon published his seminal work on information theory [2], shortly followed by the discovery that DNA serves as a code for living organisms [3]. These developments have led to a deep interest in the relationship between information, physics, and biology [4]. Since then, the applications of information theory to biology have grown exponentially [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15], allowing researchers to unpack the ways organisms store and process data about the environment and their own internal states [16; 17; 18]. One difficulty with applications of Shannon's information theory to biological systems is its "syntactic" nature. That is, the kinds of extant measures employed in information theory capture statistical correlations between systems without any consideration of the relevance or meaning of those correlations. Living systems, however, act as agents for whom information is intrinsically meaningful in the most basic sense; that is, whether it can be useful for its _self-production_ and _self-maintenance_[19; 20]. Life, as a driven, non-linear, and far-from-equilibrium system, is always in a precarious position and must gather information about the state of the environment and its internal state to endure [21]. Some of this information will be useful for this purpose and some will be irrelevant. In this setting, relevance and meaning can be considered synonymous. 
Unlike the well-developed field of syntactic information theory, there exists no widely accepted or applied formal theory of semantic information (some attempts at developing such a theory include Refs. [22; 23; 24; 25; 26; 27; 28]). A goal of a mathematical theory would be to provide an operational definition useful for characterizing nonlinear far-from-equilibrium systems which can be identified as agents (e.g., organisms or robots). Recently, Kolchinsky and Wolpert [29] (henceforth KW18) developed an explicit formalism for semantic information based on the use of counterfactuals and a notion of _viability_. Their formulation uses the state spaces and probability distributions for an agent \(A\) and its environment \(E\) to characterize the mutual information between the two, while the persistence of \(A\) (its ability to maintain a desired state) is measured through a viability function \(V\). The concept of meaning here is thus taken in the most basic sense of being related to an agent's continued existence. By running _intervened_ versions of the system dynamics in which some fraction of the mutual information between agent and environment is scrambled, a formal working definition of the semantic information was characterized in terms of the response of the viability function to such interventions. Importantly, the viability is determined by the inherent coupled dynamics of the system and the environment [29; 30], rather than through exogenous utility, cost, error, or loss functions (as is sometimes done when studying the value of information in statistics or in engineering applications [31; 32; 33; 15; 34]). There are several classes of models that describe specific attributes of living systems such as synchronization [35; 36], pattern formation [37; 38], competition between species for resources [39], stability of ecosystems [40], simple models of metabolism [41] as well as synthetic cells [42]. Each of these models are settings where one can test the theoretical framework of semantic information. Indeed, there has been a recent study on semantic information in the context of synthetic cells [43]. Our approach here, is three-fold: First, we apply the semantic information framework to the well-motivated problem of resource foraging agents. Second, we develop a detailed mathematical and numerical implementation of the original KW18 formalism to explore what simplifying approximations lead to a clear connection between semantic information and the viability of the foraging agents. Finally, we demonstrate the efficacy of our approach in uncovering new insights on the general features of agent/environment dynamics. Section II focuses on a class of models that jointly address the interrelation between exploration and resource-consumption. In such _forager models_[44; 45; 46; 47], an agent navigates its environment (using various exploration strategies) in search of resources (food) that it then consumes to maintain its desired internal state (staying alive). The resources are either at fixed locations along the spatial extent of the environment or are replenished at a constant rate in random locations. These dynamics are described in detail in section II.1. In such a setting, a natural choice for a viability function is the agent's lifetime, while the environment is simply the field of resources. The correlations between the agent and its environment, which are contingent on the agent's sensorial capability, carry meaningful information about the location of the resources. 
These correlations are then scrambled by tuning the fidelity of the sensor and then measuring its effect on the viability function, the quantitative details of which make up section II.2. We demonstrate in section III the existence of a plateau in the viability, capturing the subset of correlations that has no effect on the agent's ability to stay alive. Below this threshold the lifetime of the agent monotonically decays with increased scrambling of the mutual information. We introduce the concept of _viability-per-bit_ that captures the degree to which each bit of the agent's information on its environment is relevant. The quantity peaks at the boundary separating the plateau from the decaying region indicating the existence of a _semantic threshold_. Information above the threshold has little-to-no effect on the agent's viability, while below it, each bit becomes crucial for the agent to stay alive. We show that our results are agnostic to the exploration strategy (that is a random walker or ballistic foraging) or the choice of the particular viability function, as long as it reflects the general ability of the agent to remain alive. We end in section IV with a discussion of the implications of our findings and possible future directions. Three appendices clarify a few technical aspects of our model and its approximations. ## II Foraging in a replenishing environment ### Definition of the model Consider a system \(A\times E\), defined to be a foraging agent, \(A\), exploring an environment, \(E\). The agent has a fixed metabolic rate \(\mu\) powered by an event-dwindling fuel reserve, and a sensor allowing it to detect resources within some finite circular range \(R\). Moving at constant speed \(v\), the agent changes its direction \(\hat{\mathbf{n}}\) only in response to detected resources. Any resource that falls within the collection radius \(r\leq R\) of the agent can be harvested, its energy content added to the fuel supply of the agent. We note that this ballistic variant of the forager model is different from the ones considered for instance in [46; 47] where the forager conducts a random walk. In such a formulation, only when the forager detects a resource does it move toward it ballistically, after which it resumes its random walk exploration strategy. As we will show later, the qualitative nature of the results are agnostic to whether the forager is ballistic or diffusive. The state of the agent is described by the tuple \(a=(s,\tau,\hat{\mathbf{n}},\mathbf{x})\). \(s\in[0,S]\) is the agent's stored fuel supply; when \(s=0\) the agent is no longer functional, otherwise it is considered _alive_. The parameter \(\tau\in\{0,1\}\) is Boolean and describes if the agent has locked on to a target. The agent will not change its direction of motion--indicated by the unit vector \(\hat{\mathbf{n}}\)--if it has a target. Consuming a resource resets \(\tau\mapsto 0\). Finally, \(\mathbf{x}\in[0,L]^{2}\) is the position of the agent in the environment, defined as a square of side length \(L\gg R\). The state of the environment is specified by the set of locations of a fluctuating number of resources, \(e=\{\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{N}\}\) with \(\mathbf{y}_{n}\in[0,L]^{2},\ \forall n\). This model differs from extant models in that resources are renewable--there is a source of energy flux per unit area into the environment, \(\Gamma\), resulting in the growth of fixed energy \(\epsilon\) resources across it, which, in turn, decay at a rate \(\gamma\). 
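To make the dynamics concrete, the following is a minimal discrete-time sketch of the forager model in Python. It is not the authors' code: the time step, box size, periodic boundary handling, the rule that a lock is dropped when no resource remains in sensor range, and all numerical parameter values are illustrative simplifications rather than quantities taken from the paper.

```python
# Minimal sketch of the ballistic forager dynamics; all values illustrative.
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (symbols as in Table 1)
r, R, v = 0.5, 2.0, 0.6                 # collection radius, detection radius, speed
mu, S = 1.0, 20.0                       # metabolic rate, maximum stored energy
eps, gamma, Gamma = 4.0, 0.01, 0.004    # resource energy, decay rate, energy influx
L_box, dt = 30.0, 0.05                  # box side (periodic for simplicity), time step

def simulate_lifetime(eta=0.0, t_max=500.0):
    """Survival time of one agent; eta in [0, 1] is the sensor noise used later."""
    n_eq = Gamma / (eps * gamma)                          # equilibrium resource density
    res = rng.uniform(0, L_box, size=(rng.poisson(n_eq * L_box**2), 2))
    pos = np.array([L_box / 2.0, L_box / 2.0])
    heading = rng.uniform(0.0, 2.0 * np.pi)
    s, locked = S / 2.0, False                            # half tank, no target (tau = 0)
    t = 0.0
    while s > 0.0 and t < t_max:
        # environment: creation at rate Gamma*A/eps, decay at rate gamma (small-dt form)
        births = rng.poisson(Gamma / eps * L_box**2 * dt)
        res = np.vstack([res, rng.uniform(0, L_box, size=(births, 2))])
        res = res[rng.random(len(res)) > gamma * dt]
        # sensing: release a stale lock (sketch choice), then lock onto nearest resource
        d = np.linalg.norm(res - pos, axis=1) if len(res) else np.empty(0)
        if locked and (d.size == 0 or d.min() > R):
            locked = False
        if not locked and d.size and d.min() <= R:
            dx, dy = res[d.argmin()] - pos
            heading = np.arctan2(dy, dx) + rng.uniform(-np.pi * eta, np.pi * eta)
            locked = True
        # motion, metabolism, collection
        pos = (pos + v * dt * np.array([np.cos(heading), np.sin(heading)])) % L_box
        s -= mu * dt
        d = np.linalg.norm(res - pos, axis=1) if len(res) else np.empty(0)
        caught = d <= r
        if caught.any():
            s = min(S, s + eps * caught.sum())            # harvest; tau reset to 0
            res, locked = res[~caught], False
        t += dt
    return t                                              # may be truncated at t_max

lifetimes = [simulate_lifetime() for _ in range(20)]
print("mean lifetime with a perfect sensor:", round(float(np.mean(lifetimes)), 1))
```

Running an ensemble of such simulations, and later sweeping the sensor-noise argument `eta`, is the kind of experiment used below to estimate the viability.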
In the absence of an agent, the two processes lead to an equilibrium average resource density \(n_{\text{eq}}=\Gamma/\epsilon\gamma\) with \(\mathcal{O}(L^{-1}\sqrt{\Gamma/\epsilon\gamma})\) fluctuations, so that the average distance between resources is \(\ell_{Re}=\sqrt{e\gamma/\Gamma}\) (See Appendix A). Large inhomogeneities in the resource density, such as those which are created by a foraging agent, are repopulated at a rate \(\gamma\). We work in the small back-reaction regime where forager changes to the equilibrium spacing are on the order \(\delta\ell_{Re}/\ell_{Re}\sim\mathcal{O}(R^{2}/L^{2})\ll 1\), so that the agent is effectively uncorrelated with the environmental degrees of freedom on distances larger than some \(\mathcal{O}(1)\) multiple of \(R\). Table 1 summarizes the parameters that define the model. As the agent moves through the environment its collection diameter \(2r\) acts as a cross-section, sweeping out an area for harvest proportional to its speed. Interesting regimes of the model happen when the speed falls within the following bounds \[v_{\star}<v<v^{\star}\ \ \text{where}\ \ v_{\star}=\frac{\mu\ell_{Re}^{2}}{2rS}= \frac{\epsilon}{S}v^{\star}. \tag{1}\] For \(v<v_{\star}\), resources are spread so thin that the foraging area on a full tank has an expected harvest less than 1, leading to agents with rather short lives. For \(v>v^{\star}\), the harvest area fueled by a single resource has an expected harvest greater than 1, the agent is coupled to an environment of abundance, and is effectively immortal. In between, the agent's probability of survival decays exponentially at long lifetimes, while still forming significant correlations with the environment. This is the regime interesting for exploration. In a strong back-reaction setting these bounds are dynamic: agent harvesting increases the average spacing between resources thereby increasing both \(v_{\star}\) and \(v^{\star}\). Again, we will work in the small back-reaction limit, where the forager doesn't affect the environment. Fig. 1 provides a schematic of the agent-centric dynamics. ### The viability function and interventions While there are a number of ways of quantifying the viability of the agent, an obvious choice is the expected lifetime \(V=\mathds{E}[T]\); here \(\mathds{E}\) is an expectation value over an ensemble of agents, and \(T\) is the time at which the agent first reaches the dead state (\(s=0\)) (as we will later show, our results are robust to other variants of the function). This is easily measured by running an ensemble of agents as described in Fig. 1, and characterizing the distribution of lifetimes. The viability of the agent depends crucially on it correctly setting its trajectory \(\hat{\mathbf{n}}\) when a resource is targeted, which amounts to having a properly working sensor. A broken sensor results in an agent moving in directions that do not correlate with the location of a resource, increasing the chance of starvation and thus death. To escape the vagaries of chance, the agent needs a sensor that correlates target direction with resource location. Let \(\rho_{AE}(a,e)\) be the joint distribution over states of the agent and environment. We use a standard notation where the subscript identifies the state space that the arguments are drawn from. 
Translational symmetry allows us to eliminate \(\mathbf{x}\) from \(a\), and imposes a natural partial ordering, via a permutation of the resource indices \(\pi\), of the environment state \(e\mapsto e^{\prime}=(\mathbf{y}_{1}^{\prime},\mathbf{y}_{2}^{\prime},\ldots,\mathbf{y}_{N}^ {\prime})\), where the effective forager-resource displacement is \(\mathbf{y}_{\pi(n)}^{\prime}=\mathbf{y}_{n}-\mathbf{x}\) such that if \(\pi(n)<\pi(m)\) then \(|\mathbf{y}_{\pi(n)}^{\prime}|\leq|\mathbf{y}_{\pi(m)}^{\prime}|\). Henceforth we work in this agent-centric coordinate system and drop the primes. To hone in on the specific correlation between sensor and environment, we next marginalize over all but the closest resource, \(\mathbf{y}=\mathbf{y}_{1}\), and condition on the agent being alive, leading to the simpler joint distribution \(\rho_{AE}(\hat{\mathbf{n}},\tau,\mathbf{y}|s>0)\). Rather than computing the correlation between all the agent and environmental degrees of freedom, we now have the simpler task of finding the correlations between the subsets \(\{\hat{\mathbf{n}},\tau\}\) and \(\{\mathbf{y}\}\). Denote the living agent distribution \(\rho_{A}=\rho_{A}(\hat{\mathbf{n}},\tau)\) and the environment distribution \(\rho_{E}=\rho_{E}(\mathbf{y})\). Correlations between agent and environment prevent a factorization of the joint distribution so that \begin{table} \begin{tabular}{l c c} \hline _Parameter_ & _Symbol_ & _Units_ \\ \hline Collection Radius & \(r\) & L \\ Detection Radius & \(R\) & L \\ Speed & \(v\) & LT\({}^{-1}\) \\ Metabolic Rate & \(\mu\) & ML\({}^{2}\)T\({}^{-3}\) \\ Maximum Stored Energy & \(S\) & ML\({}^{2}\)T\({}^{-2}\) \\ \hline Resource Energy & \(\epsilon\) & ML\({}^{2}\)T\({}^{-2}\) \\ Resource Decay Rate & \(\gamma\) & T\({}^{-1}\) \\ Energy Influx & \(\Gamma\) & MT\({}^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: Model Parameters \(\rho_{AE}\neq\rho_{A}\rho_{E}\). (The distribution factorizes in the case of a non-functional agent, given that the lack of an interaction pathway rapidly suppresses correlations between the agent and environment.) While alive, interactions transfer information between the two, generating the correlations which are then responsible for prolonging the agent's viability. Consequently, we examine the flow of information between the environment and the agent during a detection event. Consider a resource detection and locking event during the time interval \(\Delta t\), when a resource is detected at a distance \(r<|\mathbf{y}|\leq R\). The agent's orientation evolves from \(\hat{\mathbf{n}}\mapsto\hat{\mathbf{n}}^{\prime}\) while its target degree of freedom flips from \(\tau=0\rightarrow\tau^{\prime}=1\). For an agent with a perfect sensor, the transition probability for this detection event, conditioned on the environment state, \(e\), is \[\rho_{A\to A^{\prime}|e}(\hat{\mathbf{n}}^{\prime},\tau^{\prime}|\hat{\mathbf{n}},0, \mathbf{y})=\delta_{\tau^{\prime}}^{1}\delta\left(\theta(\hat{\mathbf{n}}^{\prime},\bm {y})\right), \tag{2}\] where \(\theta(\mathbf{a},\mathbf{b})\) is the angle between vectors \(\mathbf{a}\) and \(\mathbf{b}\), \(\delta_{\tau^{\prime}}^{1}\) is the Kronecker delta, and \(\delta(\theta)\) is the Dirac delta function. By the rotational symmetry of the environment, we also have that \(\rho_{A\to A^{\prime}}(\hat{\mathbf{n}}^{\prime},\tau^{\prime}|\hat{\mathbf{n}},0)= \delta_{\tau^{\prime}}^{1}(2\pi)\)-1, which is not equal to Eq. 2 indicating that it is likely that \(\rho_{AE}\neq\rho_{A}\rho_{E}\). 
To investigate the extent these correlations are necessary to insure the agent's longevity, we can compromise the agent's ability to find resources by adding noise to its sensor. This will impact the agent's viability function \(V\). We thus add a small amount of noise \(\eta\) to the agent's ideal sensor, which affects the transition probability for detection defined in Eq. 2 as \(\rho_{A\to A^{\prime}|e}\equiv\rho_{A\to A^{\prime}|e}^{\eta=0} \mapsto\rho_{A\to A^{\prime}|e}^{\eta}\), with \(\rho_{A\to A^{\prime}|e}^{1}=\rho_{A\to A^{\prime}}\). We then measure its effect on the agent's viability as \(\eta\) is increased. Adding noise changes the Dirac delta of Eq.2 into a uniform distribution of width \(2\pi\eta\), \(\delta\mapsto\delta_{\eta}\), where \[\delta_{\eta}(\theta)=\begin{cases}\frac{1}{2\pi\eta}&\theta\in[\text{-}\pi \eta,\pi\eta]\\ 0&\text{otherwise}\end{cases}.\] As the noise parameter \(\eta\to 0\), the perfect sensor is recovered, while \(\eta\to 1\) gives the uniform distribution. The information flowing from the environment to the agent during the detection event is quantified Figure 1: An overview of the forager model. The agent moves continuously on a plane at constant speed in a direction \(\hat{\mathbf{n}}\), sensing at a distance \(R\) and collecting resources within a radius \(r\). Resources are continuously produced by the environment. Agent movement is powered by a metabolism that drains the agents energy reserves; collecting resources replenishes them. When a resource comes within sensor range, the agent targets it (red circle \(\tau\)) and orients towards it. The agent then moves until the resource falls within collection range and is consumed. using the transfer entropy [48], \[\mathcal{T}_{E\to A}^{\eta} =\mathds{E}_{\rho_{AE}}[\log_{2}\frac{\rho_{A\to A^{\prime}|e}}{ \rho_{A\to A^{\prime}}}]\] \[=\log_{2}\frac{1}{\eta}. \tag{3}\] The information gathered during detection approaches \(0\) as \(\eta\to 1\), preventing the formation of correlations between agent and environment. It diverges as \(\eta\to 0\), which reflects the infinite precision needed to specify a direction in space, and is discussed in [49]. ## III Results Returning to the notion of viability, we extend our definition to the noised sensor channel by defining \(V=V_{0}\mapsto V_{\eta}=\mathds{E}_{\eta}[T]\), where the expectation is now taken over an ensemble of agents with noisy sensors. We interpret the addition of noise as an intervention that scrambles the information transfer between an agent with a perfect sensor and its environment. We extract \(V_{\eta}\) by simulating agents with noised sensors within an equilibrated environment at half of full health and measuring how long they survive. We do this \(10^{4}\) times for each of \(200\) values of \(\eta\in[0,1]\), the first \(100\) equally spaced in \([0,0.2]\) and the second hundred in \((0.2,1]\). In the upper panel of Fig. 2 we plot the distribution of lifetimes for three different values of the noise parameter \(\eta\). The solid black line corresponds to the case of a perfect sensor (no scrambling, \(\eta=0\)) and sets the baseline distribution of lifetimes which is peaked near the expected lifetime (vertical dashed dark red-line) with a broadly decaying tail. The thinner lines are for larger values of \(\eta\) and one can see a much lower value for \(\mathds{E}_{\eta}[T]\) once a critical value of noise, \(\eta_{c}\), is surpassed. 
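The intervention and its information cost are straightforward to check numerically. The self-contained snippet below (illustrative, not the authors' code; the number of events and histogram bins are arbitrary choices) draws detection events through the widened kernel \(\delta_{\eta}\), estimates the per-event transfer entropy with a simple histogram plug-in estimator, recovers \(\log_{2}(1/\eta)\) up to binning resolution, and constructs the grid of 200 intervention strengths described above.

```python
# Plug-in check of Eq. (3): the per-detection transfer entropy is log2(1/eta).
import numpy as np

rng = np.random.default_rng(0)

def transfer_entropy_estimate(eta, n_events=200_000, n_bins=360):
    """Histogram estimate of the detection-event transfer entropy for eta > 0."""
    resource_angle = rng.uniform(-np.pi, np.pi, n_events)
    noise = rng.uniform(-np.pi * eta, np.pi * eta, n_events)
    heading = resource_angle + noise
    # angle of the new heading relative to the true resource direction, wrapped
    rel = (heading - resource_angle + np.pi) % (2 * np.pi) - np.pi
    p, _ = np.histogram(rel, bins=n_bins, range=(-np.pi, np.pi))
    p = p / p.sum()
    q = 1.0 / n_bins                       # isotropic prior over the new heading
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / q)))

for eta in (0.05, 0.2, 0.5, 1.0):
    print(f"eta = {eta:4.2f}:  estimate = {transfer_entropy_estimate(eta):5.3f}"
          f"   log2(1/eta) = {np.log2(1.0 / eta):5.3f}")

# the intervention grid described in the text: 100 values in [0, 0.2], 100 in (0.2, 1]
eta_grid = np.concatenate([np.linspace(0.0, 0.2, 100), np.linspace(0.2, 1.0, 101)[1:]])
print("grid size:", eta_grid.size, " first/last:", eta_grid[0], eta_grid[-1])
```

For \(\eta\to 0\) the plug-in estimate saturates at \(\log_{2}\) of the number of bins, which mirrors the divergence of Eq. (3) noted in the text.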
The actual viability \(V_{\mathrm{actual}}\) is defined as the expected lifetime of an agent with noiseless sensor. As predicted [29], the viability plateaus near the actual viability \(V_{\mathrm{actual}}\) even as noise degrades the information flow during detection. However, once the sensor efficiency has been sufficiently degraded, the viability begins to drop dramatically, as seen in the top panel of Figure 3. This rapid decline in viability is interpreted as a semantic threshold--it is the minimal information acquired in a sensing event responsible for maintaining agents close to their actual viability. To better characterize this critical information, we introduce the differential measure of viability per bit (VpB), as seen in the lower panel of Figure 3. The peak in VpB illustrates what an agent requires from its interactions with the environment. Very noisy sensors provide little VpB, even with increased accuracy. However, once a certain level of accuracy is achieved, the VpB grows dramatically and the sensor provides the agent with meaningful information about its environment, resulting in the agent's improved survivability. Too much accuracy, however, is wasted, and the VpB drops again once the sensor goes above and beyond the constraints dictated by the physical nature of the agent within the environment. It should be pointed out that the results in Figure 3 don't depend on using the expected lifetime as a viability function; any percentile of the lifetime distribution will do. The light grey line shows the median of the distribution, while the grey regions represent jumps by \(\pm 10\%\) from the median. The semantic content is robust to all of these, which can easily be understood by considering how scrambling the sensor relates to the foraging efficacy of the agent. Since the collection radius, \(r\), acts as an impact parameter, and targets are set when they enter within the Figure 2: (Top) The distribution of lifetimes extracted from the simulations. The dark curve is for a noiseless (ideal) agent sensor (\(\eta=0\)). The nearly indistinguishable distribution is from near critical scrambling (\(\eta\approx\eta_{c}\)), while the leftmost distribution is for above critical scrambling (\(\eta>\eta_{c}\)). (Bottom) The probability of being alive for an unscrambled agent (dark line) and two agents whose sensor correlations with the environment have been scrambled: one has been scrambled up to just across the semantic threshhold, while the other has been scrambled well beyond it. Note the exponential decay of the tails. sensing radius \(R\), there is a critical noise parameter above which the agent will sometimes miss their target, and thus affect its viability. This critical value, \(\eta_{c}=\sin^{-1}(r/R)/\pi\), points to a semantic threshold of \[\mathcal{T}^{c}_{E\to A}=\log_{2}\pi-\log_{2}\sin^{\text{-1}}\frac{r}{R}. \tag{4}\] (See Appendix B for details.) There is low VpB when the information transferred during a detection event is well above \(\mathcal{T}^{c}_{E\to A}\), and high VpB as \(\mathcal{T}^{\eta}_{E\to A}\rightarrow\mathcal{T}^{c}_{E\to A}\). The minute change in the viability plateau also has a geometric origin. Due to the circular shape of both the resource collection and sensing zones, a sufficiently scrambled sensor will force agents to travel farther ever so slightly to collect a targeted resource. 
Assume an agent targets a resource a distance \(y=|\mathbf{y}|\leq R\) away, but the sensor is mistaken by an angle \(\theta\) as to the direction of the resource. Rather than having to travel \(y-r\) to collect its target, the agent now has to travel \(y\cos\theta-\sqrt{r^{2}-y^{2}\sin^{2}\theta}\). Given an information transfer of \(\mathcal{T}\) during detection, averaging over all angles we see that resources a distance \(y\) from the agent are effectively a distance \((1+\lambda)y\) with \[\lambda =\frac{r}{y}\left(\text{1}\!-\!\frac{2^{\mathcal{T}}}{\pi}E\left( \frac{\pi}{2^{\mathcal{T}}}\Big{|}\frac{y^{2}}{r^{2}}\right)\right)\!-\!\left( \text{1}\!-\!\frac{2^{\mathcal{T}}}{\pi}\sin\!\frac{\pi}{2^{\mathcal{T}}}\right)\] \[\underset{y\to 0}{\sim}\frac{\pi^{2}}{6}\frac{y-r}{r}4^{- \mathcal{T}}\] \[\underset{y\to r^{+}}{\sim}\frac{y-r}{r}\left(\frac{\tanh^{ \text{-1}}\tan\frac{\pi}{2}2^{\mathcal{T}}}{\frac{\pi}{2}2^{\mathcal{T}}}-1 \right). \tag{5}\] Here \(E(\varphi|k^{2})\) is the indefinite elliptic integral of the second kind with modulus \(k=y/r\). (See Appendix C for details of the derivation.) The expression for the dilation factor \(\lambda\), indicates that sub-critically scrambled agents can be thought of as unscrambled agents making their way through a distorted world. As the information acquisition from the environment is restored (\(\mathcal{T}\rightarrow\infty\)), the distortion decays exponentially as expected. Of course, within the extraction radius there is no dilation as the scrambled agent still _knows_ how far it can reach, \(y=r\), even if it cannot judge properly distances \(y>r\). Maximal dilation \(\lambda^{\text{max}}\), occurs at \(y=R\) in Eq. (5). This constitutes an upper bound enabling the determination of a lower bound on the viability of a scrambled agent, given the actual viability of an unscrambled agent. We scale down the length scale of the scrambled agent by \((1+\lambda^{\text{max}})\) so that it matches the length scale of the unscrambled agent. Unfortunately, this means the speeds of the two no longer match; to restore equality we must scale down the timescale by the same amount. Numerical simulations indicate that effects of this rescaling does not affect the environmental variables. Focusing therefore only on the agent variables, the rescaled viability is \[V_{\eta}=\mathds{E}_{\eta}[T] =\mathds{E}_{0}[(1+\lambda^{\text{max}})^{\text{-1}}T]\] \[=(1+\lambda^{\text{max}})^{\text{-1}}V_{\text{actual}}. \tag{6}\] Expanding this expression for \(\lambda\ll 1\), above the semantic threshold, the viability as a function of the transfer entropy is bounded from below by \[\frac{V(\mathcal{T})}{V_{\text{actual}}} \gtrsim\ 2-\frac{r}{R}+\frac{2^{\mathcal{T}-\mathcal{T}^{c}}}{\sin^{ \text{-1}}\frac{r}{R}}\] \[\times\!\!\left[\frac{r}{R}E\left(\frac{\sin^{\text{-1}}\frac{r}{ R}}{2^{\mathcal{T}-\mathcal{T}^{c}}}\right|\frac{R^{2}}{r^{2}}\right)\!-\!\sin\left( \frac{\sin^{\text{-1}}\frac{r}{R}}{2^{\mathcal{T}-\mathcal{T}^{c}}}\right) \right]. \tag{7}\] The expression is plotted for both the ballistic forager described here, as well as the more common diffusive Figure 3: (Top) The viability curve (thick black line) as a function of the transfer entropy, \(\mathcal{T}^{\eta}_{E\to A}\). The actual viability \(V_{\text{actual}}\) is the expected lifetime with no scrambling. The light grey line is the median, with the shaded regions representing intervals of 10% above and below the median. (Bottom) The viability per bit. 
The dots are actual data, and the line is a smoothed and interpolated curve added to better visualize the semantic region. forager, in the right panels of Figure 4. As the figure indicates, the agreement with numerical simulations is excellent until one reaches the semantic threshold. Thus the viability factorizes into a plateau that depends implicitly on both environment and agent, and a scaling term that depends only on properties of the agent. Furthermore, below the semantic threshold the viability curve of the ballistic forager asymptotes to the diffusive forager. This is fairly straightforward to understand. With an increasingly noisy sensor, the agent is as likely to move toward a resource as it is to move away, which corresponds to an ordinary random walk. One can estimate the minimal viability, \(V(0)=\mathds{E}_{1}[T]\), by noting that resources encountered by such an agent follow a Poisson distribution with rate \(2vr/\ell_{Re}^{2}\). The total fuel processed by the agent will therefore be \(s_{0}+\epsilon(2vr/\ell_{Re}^{2})\,\mathds{E}_{1}[T]\), where \(s_{0}\) is the initial stored fuel. But this total fuel divided by the metabolic rate is precisely the expected lifetime; a little algebra reveals \[V(0)=\frac{s_{0}/\mu}{1-v/v^{\star}},\] where \(v^{\star}\) is the upper velocity limit defined in Eq. (1). As mentioned earlier, the agent approaches immortality when its velocity approaches this limit, even if it has a completely random sensor. ## IV Discussion and Conclusion In this work, we explored the application of Semantic Information to the well-motivated problem of a resource foraging agent. Semantic information is defined as the subset of correlations--measured here via the transfer entropy \(\mathcal{T}_{E\to A}^{\eta}\)-- between agent \(A\) and environment \(E\) that is necessary for the agent to maintain its viability \(V\). Viability, in turn, is endogenously defined as opposed to the use of exogenous quantities like utility functions. The semantic information content in a particular agent-environment system is determined by rerunning the system evolution for intervened versions of the dynamics. Such interventions involve "scrambling" the information transfer between the agent and environment in a specified way. Tracking changes in agent viability for a distribution of intervened system trajectories allows the semantic information content to be determined. Applying this procedure to our forager model required finding appropriate approximations to the fully specified state space of the original KW18 formalism. For realistic systems with many degrees of freedom, the full specification of the joint probability distribution \(\rho_{AE}\) over the full state space, as required in the initial formalism, may prove computationally expensive or even intractable. For our model we adopted a phenomenological perspective which traced over those degrees of freedom not essential to specification of the viability. This left us with a reduced state space in which intervened trajectories could be simulated and the subsequent transfer entropy could be calculated. In our model, the forager's movements were determined by its ability to sense the presence of an individual unit of resource. Once detected (sensing was limited to within a radius R) the forager moved towards the resource. The transfer entropy was scrambled by adding noise to the forager's sensor via a parameter \(\eta\) where \(\eta=1\) implied a complete loss of the forager's ability to sense the direction of the resource. 
Our results, expressed in terms of a viability function defined as the expectation value of the forager's life-time, clearly showed the effect of adding noise to the sensor. For \(\eta>\eta_{c}\), where \(\eta_{c}\) is a critical value set by the sensing radius, the forager's expected lifetime was dramatically reduced (Fig. 2). We refer to this as the semantic threshold. This result by itself represents an important extension of previous work on forager dynamics [44; 45; 46; 47]. What is novel in our work was the way in which casting the problem in terms of semantic information reveals useful aspects of the model dynamics. The transfer entropy represents correlations established between the forager (an agent) and the environment via the agent's sensor. A blind forager \(\eta=1\) would be fully decoupled from the environment. By tracking how the forager's viability changes as these correlations are either increased or decreased, we gain some understanding about the role they play in the forager's ability to persist. In particular the upper panel of Fig. 3 which shows the viability curve \(V(\mathcal{T})\), reveals two essential ways to understand the role of such correlations for the forager (or for any agent). Above the semantic threshold we find a plateau of high viability. Moving from right to left in this region we are removing correlations between agent and environment, however this does not effect the agents ability to maintain its existence. Thus the correlations that are being removed are not essential to the couplings between agent and environment that maintain viability. Below the semantic threshold each bit of information, affects the agents ability to persist. Once this threshold is passed we see the viability monotonically decrease to zero, corresponding to a dead agent. Thus casting the forager/environment system into the semantic information formalism allows us to see exactly how much information matters. In addition our ability to model the shape of the \(V(\mathcal{T})\) curve and the location of the semantic threshold in terms of the forager/environment parameters (\(R,r,\mu,\eta\)) allows us to see how the correlations are instantiated. Thus our work may provide a useful model for others who want to cast studies of established agents in terms of semantic information in order to better understand the underlying nature of correlations and information dynamics. Coming from the left in Fig. 3 leads us to a different perspective which may prove useful in using semantic information to understand how agents arise in the first place. Beginning with the low viability region on the left we see that adding correlations, initially has little effect. The forager dies quickly and adding an additional bit of correlation with the environment does not change that outcome. As more bits of information are acquired, the agent's viability slowly rises. However, it is only near the semantic threshold that the slope of the curve accelerates and _viability per bit_ (VpB) peaks. Thus, it is possible that this threshold may prove useful for understanding the role information plays in allowing systems to become autonomous agents. Both hurricanes and cells are non-linear, driven, far-from-equilibrium systems, but only cells are considered agents. Future work could explore the relationship between the accumulation of semantic information and the emergence of agent-like behavior while also considering the thermodynamic cost of that accumulation. 
## Acknowledgements This project was partly made possible through the support of Grant 62417 from the John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the John Templeton Foundation. AK thanks Sosuke Ito for support and encouragement. JP is supported by "Maria de Maeztu" fellowship MDM-2014-0370-17-2. ## Appendix A Equilibrium resource density Consider a large patch of the environment, \(A\subset\mathds{R}^{2}\). The number of resources in the patch is \[N(t)=\int_{A}\mathrm{d}^{2}x\ n(\mathbf{x},t), \tag{10}\] where \(n\) is the resource density. Resource do not move, but they can be created or destroyed. Creation is due to an energy flux impinging on \(A\), denoted \(\Gamma(\mathbf{x},t)\); there exists some mechanism in the environment that converts this flux into localized resource deposits of energy \(\epsilon\). Destruction is due to another mechanism by which resources decay, their energy lost as heat, denoted \(\gamma(\mathbf{x},t)\). Note that the former rate is independent of the number of resources, the latter is not. The rate of change of resources in this large patch is \[\frac{dN}{dt}=\int_{A}\mathrm{d}^{2}x\ \left(\frac{\Gamma(\mathbf{x},t)}{ \epsilon}-\gamma(\mathbf{x},t)n(\mathbf{x},t)\right). \tag{11}\] Figure 4: A numerical analysis of the viability lower bound for both our model (the Ballistic Forager, BF), and another model popular in the literature (the Diffusive Forager, DF). The larger inset figure is the same as the top panel of Fig. 3, with the viability plateau highlighted for each type of forager, and expanded in the figures on the right. The semantic threshold is the dashed blue vertical line, while the pink line in both insets is the expression in Eq. (7). The expression is in good agreement with the simulations, until one reaches the semantic threshold. Taking \(\Gamma\) and \(\gamma\) both to be static, the resource density satisfies \[\frac{\partial n}{\partial t} = \frac{\Gamma}{\epsilon}-\gamma n\] \[\Downarrow\] \[n(t) = n_{0}e^{-\gamma t}+n_{eq}(1-e^{-\gamma t}). \tag{10}\] where the equilibrium density is \(n_{eq}=\Gamma/\epsilon\gamma\). The average area occupied by a single resource is the reciprocal of this, \(\epsilon\gamma/\Gamma\), so that the average spacing between resources at equilibrium is \[\ell_{R_{e}}=\sqrt{\frac{\epsilon\gamma}{\Gamma}}. \tag{11}\] If instead we coarse grain over the positions of resources and care only about how many their are, then the transition probabilities for a resource being generated or destroyed in a time \(\Delta t\to 0\) are \(p(N\to N+1)=\Gamma A\Delta t/\epsilon\) and \(p(N\to N-1)=\gamma N\Delta t\). The transitions between these coarse grained states define an \(M/M/\infty\) queue -- a well known stochastic process with a stationary distribution that is Poisson: \[p(N)=\frac{1}{N!}\left(\frac{\Gamma A}{\epsilon\gamma}\right)^{N}e^{-\frac{ \Gamma A}{\epsilon\gamma}}. \tag{12}\] The expected number of resources is \(\langle N\rangle=\frac{\Gamma A}{\epsilon\gamma}\), which under the assumption of a homogeneous distribution, gives an equilibrium number density that matches \(n_{eq}\) from the preceding paragraphs. The variance in the number of resources is \(\delta N^{2}=n_{eq}A\), giving a fluctuation in the number density of \(\delta n=\ell_{Re}^{1}A^{-1/2}\). Thus when considering large environments, \(A\gg\ell_{Re}^{2}\), the relative fluctuations around the equilibrium density are negligible, \(\delta n/n\to 0\). 
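The \(M/M/\infty\) claim is easy to verify by direct simulation. The snippet below (parameter values are illustrative, not taken from the paper) runs a Gillespie simulation of the resource birth-death process in a patch of area \(A\) and compares the time-sampled mean and variance of the count with the Poisson prediction \(\Gamma A/\epsilon\gamma\).

```python
# Gillespie check of the M/M/infinity stationary distribution from Appendix A.
import numpy as np

rng = np.random.default_rng(0)
Gamma, eps, gamma, A = 0.05, 4.0, 0.05, 100.0   # influx, resource energy, decay, patch area
birth = Gamma * A / eps                         # resource creation rate in the patch
mean_expected = birth / gamma                   # = Gamma * A / (eps * gamma)

N, t, t_next, dt_sample = 0, 0.0, 0.0, 1.0
samples = []
while t < 50_000.0:
    rate = birth + gamma * N
    t += rng.exponential(1.0 / rate)            # time of the next birth/death event
    while t_next < t and len(samples) < 50_000:
        samples.append(N)                       # state is constant between events
        t_next += dt_sample
    N += 1 if rng.random() < birth / rate else -1

samples = np.array(samples[1000:])              # drop the initial transient
print("simulated mean / variance:", round(float(samples.mean()), 2),
      round(float(samples.var()), 2))
print("Poisson prediction       :", round(mean_expected, 2), "(mean = variance)")
```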
## Appendix B The Semantic Threshold Consider an agent for which a resource has just entered sensor range, and a targeting event has occurred. We scramble the information relayed by the event so that the agent reorients itself with a target that is misaligned with the resource. The agent begins its journey towards the target, as shown in Fig 5. For an agent moving towards a target any resource that falls within the grey region will be collected. The maximal misalignment angle between target and resource is \(\pi\eta\). From the diagram once can infer a critical level of scrambling, \(\eta_{c}\), satisfying \[\sin\pi\eta_{c}=\frac{r}{R}, \tag{13}\] below which the agent will always collect the resource. Above it, the agent will sometimes miss the resource - an event that could potentially result in starvation. Combining Eq.13 with Eq.3 gives the semantic threshold, Eq. 4. ## Appendix C The Viability Plateau Let's examine a case below the critical scrambling strength, so that every targetted resource is still collected. Once again we consider a scrambled targeting event, but assume it occurs for a resource a distance \(y<R\) away from the agent. This type of event occurs regularly after the agent has collected a resource and there is at least one other resource in sensor range. The agent begins to move towards the target as depicted in Fig. 6. Had the sensor not been scrambled, the agent would traverse a distance \(y-r\) to collect the resource. Meanwhile, the scrambled agent needs to travel the distance from \(p_{2}\) to the resource for collection to occur; a little geometry and trigonometry on the diagram give us this distance as \(y\cos\theta-\sqrt{r^{2}-y^{2}\sin^{2}\theta}\). Since \(p_{2}\) to \(p_{3}\) is also a distance \(y-r\), denote the remaining distance \(\lambda(\theta)y\) where \(\lambda(\theta)\) is referred to as a _dilation_ factor. It will become clear why in what follows. A little algebra gets us the angular dependence of Figure 5: The geometry of an agent (mis)targeting a resource at the edge of sensor range. the dilation factor, \[\lambda(\theta)= \frac{r}{y}-1+\cos\theta-\sqrt{\frac{r^{2}}{y^{2}}-\sin^{2}\theta}. \tag{10}\] Of course this factor will be different each time a targeting event occurs, depending on distance to resource and the particular mismatch angle of the target. However, for a given \(\eta\) we know the distribution of mismatch angles - it is uniform over \([-\pi\eta,\pi\eta]\) - so, for fixed \(y\), the expected dilation is \[\mathds{E}_{\eta}[\lambda]=\frac{1}{2\pi\eta}\int_{\neg\pi\eta}^{\pi\eta}\!\! \!d\theta\ \lambda(\theta).\] Examining the functional form of \(\lambda(\theta)\) we see that the first two terms are trivial and the third term easily integrates to \(\sin\pi\eta/\pi\eta\), while the fourth term requires a little massaging. After factoring out a \(-r/y\pi\eta\) and using the evenness of the integrand, the remaining factor is massaged into the form \[\int_{0}^{\pi\eta}\!\!\!d\theta\ \sqrt{1-\frac{y^{2}}{r^{2}}\sin^{2}\theta}.\] The integral is not elementary, but turns out to be an incomplete elliptic integral of the second kind, \[E(\varphi|k^{2})=\int_{0}^{\varphi}\!\!\!d\theta\ \sqrt{1-k^{2}\sin^{2}\theta}, \tag{11}\] with modulus \(k\). There's a slight nuance since normally \(0<k^{2}<1\), while in our case \(y>r\) gives a modulus greater than unity. This bound ensures the integrand remains real for all values of \(\varphi\). 
Fortunately, since we're considering cases below critical scrambling, \(y\sin\theta<r\) is automatically satisfied and there's no need to worry about which branch we're on. With this minutiae out of the way, we write the expected dilation for fixed \(y\) \[\mathds{E}_{\eta}[\lambda]=\frac{r}{y}-1+\frac{\sin\pi\eta}{\pi\eta}-\frac{r} {y}\frac{E(\pi\eta\|\frac{y^{2}}{r^{2}})}{\pi\eta}, \tag{12}\] which when combined with Eq. 3 and 4, yields the first line of Eq. 5. To simplify the notation, we simply denote this as \(\lambda\) in the main text.
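The expected dilation and the threshold quantities can be cross-checked numerically. The snippet below (with illustrative values of \(r\), \(R\), \(y\), and \(\eta\), chosen sub-critically) compares the direct average of \(\lambda(\theta)\) over the mismatch window with the closed form of Eq. (12), evaluating the incomplete elliptic integral by direct quadrature to sidestep the modulus-greater-than-one convention, and also prints \(\eta_{c}\) and the semantic threshold of Eq. (4).

```python
# Numerical cross-check of the expected dilation (Eq. 12 / Eq. 5) and of Eq. (4).
import numpy as np
from scipy.integrate import quad

r, y, eta = 0.5, 1.5, 0.08            # collection radius, resource distance, noise (assumed)
assert np.sin(np.pi * eta) <= r / y   # below critical scrambling for this y

def lam(theta):
    return r / y - 1 + np.cos(theta) - np.sqrt(r**2 / y**2 - np.sin(theta)**2)

# direct average over the uniform mismatch window [-pi*eta, pi*eta]
avg_direct = quad(lam, -np.pi * eta, np.pi * eta)[0] / (2 * np.pi * eta)

def ellip_E(phi, k2):                 # incomplete elliptic integral of the second kind
    return quad(lambda t: np.sqrt(1 - k2 * np.sin(t)**2), 0.0, phi)[0]

avg_closed = (r / y - 1 + np.sin(np.pi * eta) / (np.pi * eta)
              - (r / y) * ellip_E(np.pi * eta, y**2 / r**2) / (np.pi * eta))

print("direct average :", round(avg_direct, 6))
print("closed form    :", round(avg_closed, 6))

R = 2.0                               # detection radius (assumed)
print("critical noise eta_c =", round(np.arcsin(r / R) / np.pi, 4),
      "  semantic threshold =", round(np.log2(np.pi / np.arcsin(r / R)), 3), "bits")
```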
2306.14183
Doubly commuting and dual doubly commuting semigroups of isometries
Structures of commuting semigroups of isometries under certain additional assumptions like double commutativity or dual double commutativity are found.
Tirthankar Bhattacharyya, Shubham Rastogi, Vijaya Kumar U
2023-06-25T09:35:37Z
http://arxiv.org/abs/2306.14183v1
# Doubly commuting and dual doubly commuting semigroups of isometries ###### Abstract. Structures of commuting semigroups of isometries under certain additional assumptions like double commutativity or dual double commutativity are found. MSC: Primary: 47D03, 47A65. Secondary: 47B91. Keywords: The shift semigroup, Isometric semigroups, Doubly commuting, Dual doubly commuting. ## 1. Introduction One of the building blocks of \(C_{0}\)-semigroups of isometries is the _right-shift-semigroup_ \(\mathcal{S}^{\mathcal{F}}=(\mathcal{S}^{\mathcal{F}}_{t})_{t\geq 0}\) on \(L^{2}(\mathbb{R}^{+},\mathcal{F})\) for any Hilbert space \(\mathcal{F}.\) It is defined as \[(\mathcal{S}^{\mathcal{F}}_{t}f)(x)=\begin{cases}f(x-t)&\text{if }x\geq t,\\ 0&\text{otherwise},\end{cases}\] for \(f\in L^{2}(\mathbb{R}_{+},\mathcal{F})\) and it has been shown by Cooper in [5] to be one of the direct summands in the structure theorem for any \(C_{0}\)-semigroup of isometries. We shall denote \(\mathcal{S}^{\mathbb{C}}_{t}\) by just \(\mathcal{S}_{t}.\) Sometimes we shall use the identification of \(L^{2}(\mathbb{R}_{+},\mathcal{F})\) with \(L^{2}(\mathbb{R}_{+})\otimes\mathcal{F}\) which transforms \(\mathcal{S}^{\mathcal{F}}_{t}\) into \(\mathcal{S}_{t}\otimes I_{\mathcal{F}}.\) All semigroups in this note are \(C_{0}\)-semigroups (i.e., strongly continuous one parameter semigroups). Two such semigroups \(V_{1}=(V_{1,t})_{t\geq 0}\) and \(V_{2}=(V_{2,s})_{s\geq 0}\) on a Hilbert space \(\mathcal{H}\) are said to be 1. _commuting_ if \[V_{1,t}V_{2,s}=V_{2,s}V_{1,t}\text{ for all }t,s\geq 0.\] 2. _doubly commuting_ if they are commuting and \[V_{1,t}V_{2,s}^{*}=V_{2,s}^{*}V_{1,t}\text{ for all }t,s\geq 0.\] We completely characterize pairs of doubly commuting semigroups of isometries in terms of concrete models in Theorem 3.5 and Corollary 3.7. See [2] and [3] for a preliminary result of Cooper type decomposition obtained by Binzar and Lazureanu. This can be done in two ways. To begin with, we use the identification of the right-shift-semigroup with a semigroup of certain multiplication operators on a vector valued Hardy space given in [1, Theorem 3.4]. This gives rise to certain partial isometries. We find commutants of these partial isometries. This gives a proof of Theorem 3.5. On the other hand, the result [1, Lemma 12.1] of Berger-Coburn-Lebow is also useful for proving Theorem 3.5. The minimal unitary extension of a pair of commuting semigroups of isometries is known due to [14, 11]. Therefore, the dual can be defined for a pair of commuting semigroups of isometries.
_and for \(0\leq s<1,\)\(E^{\mathcal{F}}_{0,s},E^{\mathcal{F}}_{1,s}\) are the partial isometries in \(\mathcal{B}(L^{2}([0,1],\mathcal{F}))\) given by_ \[(E^{\mathcal{F}}_{0,s}f)(x)=\begin{cases}0&\text{if }x<s,\\ f(x-s)&\text{if }s\leq x\leq 1,\end{cases} \tag{2.2}\] _and_ \[(E^{\mathcal{F}}_{1,s}f)(x)=\begin{cases}f(1-s+x)&\text{if }x\leq s,\\ 0&\text{if }s<x\leq 1.\end{cases} \tag{2.3}\] Proof.: Consider the unitary \(W:L^{2}(\mathbb{R}_{+},\mathcal{F})\to H^{2}(\mathbb{D},L^{2}([0,1],\mathcal{F}))\) defined by \[Wf=\sum_{n=0}^{\infty}f_{n}z^{n} \tag{2.4}\] where \(f_{n}(\alpha):=f(n+\alpha)\) for \(\alpha\in[0,1]\) and \(n\in\mathbb{Z}_{+}.\) Let \(0\leq t<1\) and \(g\in L^{2}([0,1],\mathcal{F}).\) \[W\mathcal{S}^{\mathcal{F}}_{t}W^{*}(gz^{n})=W\mathcal{S}^{\mathcal{F}}_{t}(h)=(E^{\mathcal{F}}_{0,t}g)z^{n}+(E^{\mathcal{F}}_{1,t}g)z^{n+1}=M_{\varphi^{\mathcal{F}}_{t}}(gz^{n}),\] where \(h\in L^{2}(\mathbb{R}_{+},\mathcal{F})\) is given by \[h(\alpha)=\begin{cases}g(\alpha-n)&\text{if }n\leq\alpha\leq n+1,\\ 0&\text{else}.\end{cases}\] It is evident that \(W\mathcal{S}^{\mathcal{F}}_{1}W^{*}=M_{\varphi^{\mathcal{F}}_{1}}=M_{z}^{L^{2}([0,1],\mathcal{F})},\) the multiplication operator by \(z\) on \(H^{2}(\mathbb{D},L^{2}([0,1],\mathcal{F})).\) Hence \(W\mathcal{S}^{\mathcal{F}}_{t}W^{*}=M_{\varphi^{\mathcal{F}}_{t}}\) for \(0\leq t\leq 1.\) For \(n\in\mathbb{N}\) and \(n\leq t<n+1,\) since \(\mathcal{S}^{\mathcal{F}}_{t}=(\mathcal{S}^{\mathcal{F}}_{1})^{n}\mathcal{S}^{\mathcal{F}}_{t-n},\) we get \[W\mathcal{S}^{\mathcal{F}}_{t}W^{*} =W(\mathcal{S}^{\mathcal{F}}_{1})^{n}W^{*}W\mathcal{S}^{\mathcal{F}}_{t-n}W^{*}\] \[=(W\mathcal{S}^{\mathcal{F}}_{1}W^{*})^{n}W\mathcal{S}^{\mathcal{F}}_{t-n}W^{*}\] \[=(M_{z}^{L^{2}([0,1],\mathcal{F})})^{n}M_{\varphi^{\mathcal{F}}_{t-n}}\] \[=M_{z^{n}\varphi^{\mathcal{F}}_{t-n}}\] \[=M_{\varphi^{\mathcal{F}}_{t}}.\] This completes the proof. **Lemma 2.2**.: _Let \(\mathcal{F}\) be a separable Hilbert space and \(\Lambda:L^{2}[0,1]\otimes\mathcal{F}\to L^{2}([0,1],\mathcal{F})\) be the natural unitary isomorphism. 
Suppose \(B\in\mathcal{B}(L^{2}([0,1],\mathcal{F}))\) commutes with \(E^{\mathcal{F}}_{0,s}\) and \(E^{\mathcal{F}}_{1,s}\) for all \(0\leq s<1.\) Then, \(B=\Lambda(I_{L^{2}[0,1]}\otimes C)\Lambda^{*}\) for some \(C\in\mathcal{B}(\mathcal{F}).\)_ Proof.: For any \(0\leq s<1,\) note that \[\operatorname{ran}E^{\mathcal{F}}_{0,s}=\{f:\operatorname{supp}(f)\subseteq[s,1]\}\ and\ \operatorname{ran}E^{\mathcal{F}}_{1,s}=\{f:\operatorname{supp}(f)\subseteq[0, s]\}.\] Hence \(L^{2}([0,1],\mathcal{F})=\operatorname{ran}E^{\mathcal{F}}_{0,s}\oplus \operatorname{ran}E^{\mathcal{F}}_{1,s}\) for any \(0\leq s<1.\) Since \(B\) commutes with \(E^{\mathcal{F}}_{0,s}\) and \(E^{\mathcal{F}}_{1,s}\) for all \(0\leq s<1,\) the ranges of \(E^{\mathcal{F}}_{0,s}\) and the ranges of \(E^{\mathcal{F}}_{1,s}\) are reduced by \(B.\) For any \(x\in\mathcal{F}\) and an interval \(I\subset[0,1],\) let \(\mathbf{1}_{I}\cdot x:[0,1]\to\mathcal{F}\) denotes the function \[(\mathbf{1}_{I}\cdot x)(\alpha)=\begin{cases}x&\text{if }\alpha\in I,\\ 0&\text{else}.\end{cases}\] Let \(f_{x}:=B(\mathbf{1}_{[0,1]}\cdot x).\) Note that \[B(\mathbf{1}_{[0,1]}\cdot x)=B(\mathbf{1}_{[0,c]}\cdot x)+B(\mathbf{1}_{(c,d)} \cdot x)+B(\mathbf{1}_{[d,1]}\cdot x)=f_{x}\] As \(B\) reduces the ranges of \(E^{\mathcal{F}}_{1,c}\) and \(E^{\mathcal{F}}_{0,d},\) we have \[B(\mathbf{1}_{(c,d)}\cdot x)(\alpha)=\begin{cases}f_{x}(\alpha)&\text{if } \alpha\in(c,d),\\ 0&\text{else}.\end{cases}\] Hence \(B(\mathbf{1}_{(c,d)}\cdot x)=\mathbf{1}_{(c,d)}f_{x}\) for all \(0\leq c<d\leq 1.\) With a little more work, \(f_{x}\) can be shown to be a bounded measurable function (and not just in \(L^{2}([0,1],\mathcal{F})\)), but we do not need this fact. Since \(B\) commutes with \(E^{\mathcal{F}}_{0,s}\) and \(E^{\mathcal{F}}_{1,s}\) for \(0\leq s<1,\) we have \[B(\mathbf{1}_{[s,1]}\cdot x)=E^{\mathcal{F}}_{0,s}(f_{x})\quad\text{and}\quad B (\mathbf{1}_{[0,s]}\cdot x)=E^{\mathcal{F}}_{1,s}(f_{x}).\] Therefore, \[B(\mathbf{1}_{[0,1]}\cdot x)=E^{\mathcal{F}}_{0,s}(f_{x})+E^{\mathcal{F}}_{1, s}(f_{x})=f_{x}\text{ for }0\leq s<1. \tag{2.5}\] Extend \(f_{x}\) on \(\mathbb{R}\) periodically with period \(1,\) say \(\tilde{f_{x}}.\) Note that the translation of \(\tilde{f_{x}}\) by any \(0\leq s<1\) equals \(\tilde{f_{x}}\) almost everywhere by Eq. (2.5). This implies that \(f_{x}=\mathbf{1}_{[0,1]}\cdot y\) for some \(y\in\mathcal{F}.\) See also [12, Problem 4, Chap. 7]. Let \(\Theta:\mathcal{F}\to L^{2}([0,1],\mathcal{F})\) be the isometric embedding \(\Theta x=\mathbf{1}_{[0,1]}\cdot x.\) Use \(\Theta\) to define \(C\) by \(C=\Theta^{*}B\Theta.\) Let \(\Lambda:L^{2}[0,1]\otimes\mathcal{F}\to L^{2}([0,1],\mathcal{F})\) be the natural unitary. Then \[B(\mathbf{1}_{(c,d)}\cdot x) =\mathbf{1}_{(c,d)}B(\Theta(x))\] \[=\mathbf{1}_{(c,d)}\cdot\Theta^{*}B\Theta(x)\] \[=\mathbf{1}_{(c,d)}\cdot Cx\] \[=\Lambda(I_{L^{2}[0,1]}\otimes C)\Lambda^{*}(\mathbf{1}_{(c,d)} \cdot x).\] Since the characteristic functions \(\mathbf{1}_{(c,d)}\) for \(0\leq c<d\leq 1\) are total in \(L^{2}[0,1],\) we get \(B=\Lambda(I_{L^{2}[0,1]}\otimes C)\Lambda^{*}.\) This completes the proof. ## 3. 
Doubly commuting semigroups of isometries **Example 3.1**.: _For \(j=1,2\) define \(\mathcal{S}_{j,t}:L^{2}(\mathbb{R}_{+}^{2})\to L^{2}(\mathbb{R}_{+}^{2})\) by_ \[(\mathcal{S}_{1,t}f)(x_{1},x_{2})=\begin{cases}f(x_{1}-t,x_{2})&\text{if }x_{1} \geq t\\ 0&\text{else}\end{cases},(\mathcal{S}_{2,t}f)(x_{1},x_{2})=\begin{cases}f(x_{ 1},x_{2}-t)&\text{if }x_{2}\geq t\\ 0&\text{else}.\end{cases}\] _Then \(\mathcal{S}_{1}:=(\mathcal{S}_{1,t})_{t\geq 0}\) and \(\mathcal{S}_{2}:=(\mathcal{S}_{2,t})_{t\geq 0}\) are doubly commuting semigroups of isometries on \(L^{2}(\mathbb{R}_{+}^{2}).\)_ A pair of semigroups \((V_{1},V_{2})\) on \(\mathcal{H}\) is said to be _jointly unitarily equivalent_ to \((W_{1},W_{2})\) on \(\mathcal{K}\) if there is a unitary \(U:\mathcal{H}\to\mathcal{K}\) such that \[W_{j,t}=UV_{j,t}U^{*}\text{ for all }t\geq 0,j=1,2.\] In the natural isomorphism of \(L^{2}(\mathbb{R}_{+}^{2})\) and \(L^{2}(\mathbb{R}_{+})\otimes L^{2}(\mathbb{R}_{+}),\) we can see that the pair of semigroups of isometries \((\mathcal{S}_{1},\mathcal{S}_{2})\) is jointly unitarily equivalent to the pair of semigroups of isometries \((\mathcal{S}\otimes I_{L^{2}(\mathbb{R}_{+})},I_{L^{2}(\mathbb{R}_{+})} \otimes\mathcal{S}),\) where \(\mathcal{S}\) is the right-shift-semigroup on \(L^{2}(\mathbb{R}_{+}).\) **Definition 3.2**.: _A pair \((V_{1},V_{2})\) of semigroups is called a bishift-semigroup if \((V_{1},V_{2})\) is jointly unitarily equivalent to \((\mathcal{S}_{1}\otimes I_{\mathcal{F}},\mathcal{S}_{2}\otimes I_{\mathcal{F}})\) on \(L^{2}(\mathbb{R}_{+}^{2})\otimes\mathcal{F}\) for some Hilbert space \(\mathcal{F}\)._ The following theorem is known due to Binzar and Lazureanu [2, 3]. **Theorem 3.3** ([3]).: _Let \(V_{1}\) and \(V_{2}\) be doubly commuting semigroups of isometries on \(\mathcal{H},\) then there exists a decomposition_ \[\mathcal{H}=\mathcal{H}_{p,p}\oplus\mathcal{H}_{p,u}\oplus\mathcal{H}_{u,p} \oplus\mathcal{H}_{u,u}\] _where \(\mathcal{H}_{i,j}\) reduces both \(V_{1}\) and \(V_{2}\) for all \(i,j\in\{p,u\}\) such that (1) both \(V_{1}\) and \(V_{2}\) are c.n.u semigroups on \(\mathcal{H}_{p,p},\) (2) \(V_{1}\) is a c.n.u semigroup, \(V_{2}\) is a unitary semigroup on \(\mathcal{H}_{p,u},\) (3) \(V_{1}\) is a unitary semigroup, \(V_{2}\) is a c.n.u semigroup on \(\mathcal{H}_{u,p}\) and (4) both \(V_{1}\) and \(V_{2}\) are unitary semigroups on \(\mathcal{H}_{u,u}.\)_ In the rest of this section we shall give models for \(\mathcal{H}_{p,p},\mathcal{H}_{p,u}\) and \(\mathcal{H}_{u,p}\) parts in the theorem above. The following lemma describes the operators doubly commuting with the shift. Let \(M_{z}\) denote the multiplication by \(z\) on the Hardy space of the unit disc \(H^{2}(\mathbb{D}).\) **Lemma 3.4**.: _If \(N\) is a bounded operator on \(H^{2}(\mathbb{D})\otimes\mathcal{F}\) doubly commuting with \(M_{z}\otimes I_{\mathcal{F}},\) then \(N=I_{H^{2}(\mathbb{D})}\otimes\omega\) for some bounded operator \(\omega\) on \(\mathcal{F}.\) In particular, any normal operator \(N\) commuting with \(M_{z}\otimes I_{\mathcal{F}},\) is of the form \(N=I_{H^{2}(\mathbb{D})}\otimes\omega\) for some normal operator \(\omega\) on \(\mathcal{F}.\)_ Proof.: A proof of the first part of the lemma can be found in [6, page 38]. The part involving normality is then straightforward from Fuglede's theorem ([7]). **Theorem 3.5**.: _Let \(V_{1}\) and \(V_{2}\) be two semigroups on \(\mathcal{H}\). 
Then \(V_{1}\) and \(V_{2}\) are doubly commuting c.n.u semigroups of isometries if and only if \((V_{1},V_{2})\) is a bishift-semigroup._ Proof.: Let \(\mathcal{A}=C^{*}\{V_{2,s}:s\geq 0\}\subseteq\mathcal{B}(\mathcal{H})\) be the \(C^{*}\)-algebra generated by \(\{V_{2,s}:s\geq 0\}.\) Suppose \(V_{1}\) and \(V_{2}\) are doubly commuting, every element of \(\mathcal{A}\) commutes with \(V_{1,t}\) for all \(t\geq 0.\) Therefore, by [1, Lemma 12.1] there is a Hilbert space \(\mathcal{G}\) and a unitary \(U:\mathcal{H}\to L^{2}(\mathbb{R}_{+})\otimes\mathcal{G}\) such that \[UV_{1,t}U^{*}=\mathcal{S}_{t}\otimes I_{\mathcal{G}}\text{ and }\quad UV_{2,t}U^{*}=I_{L^{2}(\mathbb{R}_{+})}\otimes B_{t}\] for all \(t\geq 0,\) where \(B_{t}\in\mathcal{B}(\mathcal{G}).\) As \((V_{2,t})_{t\geq 0}\) is a c.n.u semigroup of isometries, we must have \((B_{t})_{t\geq 0}\) to be a c.n.u semigroup of isometries on \(\mathcal{G}.\) Therefore, by Theorem 1.1, there is a Hilbert space \(\mathcal{F}\) and a unitary \(W:\mathcal{G}\to L^{2}(\mathbb{R}_{+})\otimes\mathcal{F}\) such that \[WB_{t}W^{*}=\mathcal{S}_{t}\otimes I_{\mathcal{F}}\text{ for }t\geq 0.\] Let \(Z:\mathcal{H}\to L^{2}(\mathbb{R}_{+})\otimes L^{2}(\mathbb{R}_{+})\otimes \mathcal{F}\) be the unitary \(Z=(I\otimes W)U.\) Then \[ZV_{1,t}Z^{*} =(I_{L^{2}(\mathbb{R}_{+})}\otimes W)(\mathcal{S}_{t}\otimes I_{ \mathcal{G}})(I_{L^{2}(\mathbb{R}_{+})}\otimes W^{*})=\mathcal{S}_{t}\otimes I _{L^{2}(\mathbb{R}_{+})}\otimes I_{\mathcal{F}}\text{ and }\] \[ZV_{2,t}Z^{*} =(I_{L^{2}(\mathbb{R}_{+})}\otimes W)(I_{L^{2}(\mathbb{R}_{+})} \otimes B_{t})(I_{L^{2}(\mathbb{R}_{+})}\otimes W^{*})=I_{L^{2}(\mathbb{R}_{+} )}\otimes\mathcal{S}_{t}\otimes I_{\mathcal{F}}\] for all \(t\geq 0.\) This shows that \((V_{1},V_{2})\) is a bishift-semigroup. We provide another proof of Theorem 3.5 using Lemma 2.2 and Lemma 3.4 when \(\mathcal{H}\) is separable. Another proof of Theorem 3.5.: Since \(V_{1}\) is a c.n.u semigroup, by Theorem 1.1 and Theorem 2.1, there exists a separable Hilbert space \(\mathcal{F}\) and a unitary \(U:\mathcal{H}\to H^{2}(\mathbb{D},L^{2}([0,1],\mathcal{F}))\) such that \(UV_{1,t}U^{*}=M_{\varphi_{t}^{\mathcal{F}}}\) where \(\varphi_{t}^{\mathcal{F}}\) is as given in Eq. (2.1). Then \(UV_{2,t}U^{*}\) doubly commutes with \(M_{\varphi_{1}^{\mathcal{F}}}=M_{z}^{L^{2}([0,1],\mathcal{F})}\) for all \(t\geq 0.\) Therefore, by Lemma 3.4, \(UV_{2,t}U^{*}=M_{\psi_{t}}\) where \(\psi_{t}\) is the constant function given by \(\psi_{t}(z)=B_{t}\) for all \(z\in\mathbb{D}.\) This shows that \((V_{1},V_{2})\) is jointly unitarily equivalent to \(((M_{\varphi_{t}^{\mathcal{F}}})_{t},(M_{\psi_{t}})_{t})\) on \(H^{2}(\mathbb{D},L^{2}([0,1],\mathcal{F})).\) Note that \(B=(B_{t})_{t\geq 0}\) is a c.n.u semigroup of isometries on \(L^{2}([0,1],\mathcal{F}).\) Since \(\psi_{t}\) commutes with \(\varphi_{s}^{\mathcal{F}}\), \(B_{t}\) commutes with \(E_{0,s}^{\mathcal{F}}\) and \(E_{1,s}^{\mathcal{F}}\) for all \(t\geq 0\) and \(0\leq s<1.\) Therefore by Lemma 2.2, \(\Lambda^{*}\psi_{t}(z)\Lambda=I_{L^{2}([0,1])}\otimes C_{t}\) for all \(z\in\mathbb{D},\) where \(C=(C_{t})\) is a c.n.u semigroup of isometries on \(\mathcal{F}\) and \(\Lambda:L^{2}[0,1]\otimes\mathcal{F}\to L^{2}([0,1],\mathcal{F})\) is the natural unitary. 
Note also that \(\Lambda^{*}\varphi_{t}^{\mathcal{F}}(z)\Lambda=\varphi_{t}^{\mathcal{C}}(z) \otimes I_{\mathcal{F}}\) for all \(z\in\mathbb{D}\) and \(t\geq 0,\) since \(\Lambda^{*}E_{j,s}^{\mathcal{F}}\Lambda=E_{j,s}^{\mathcal{C}}\otimes I_{ \mathcal{F}}\) for \(j=0,1\) and for all \(0\leq s<1.\) Therefore, \(((M_{\varphi_{t}^{\mathcal{F}}})_{t},(M_{\psi_{t}})_{t})\) is jointly unitarily equivalent to \((\mathcal{S}\otimes I_{\mathcal{F}},I_{L^{2}(\mathbb{R}_{+})}\otimes C_{t})\) by Theorem 2.1, and this is jointly unitarily equivalent to \((\mathcal{S}\otimes I_{L^{2}(\mathbb{R}_{+})}\otimes I_{\mathcal{K}},I_{L^{2} (\mathbb{R}_{+})}\otimes\mathcal{S}\otimes I_{\mathcal{K}})\) for some Hilbert space \(\mathcal{K},\) by Theorem 1.1, as \((C_{t})\) is a c.n.u semigroup of isometries. This completes the proof. We observe the following result which is analogous to Lemma 3.4. The proof follows from [1, Lemma 12.1] and Fuglede's theorem [7]. We remark that an elementary proof can be given using the notion of cogenerators and Lemma 3.4 instead of invoking [1, Lemma 12.1]. We leave that to the reader. **Theorem 3.6**.: _Let \(A=(A_{t})_{t\geq 0}\) be a semigroup of bounded operators on \(L^{2}(\mathbb{R}_{+})\otimes\mathcal{F}.\) Then, \(A\) doubly commutes with \(\mathcal{S}\otimes I_{\mathcal{F}}\) if and only if there exists a semigroup \((B_{t})_{t\geq 0}\) on \(\mathcal{F}\) such that \(A_{t}=I\otimes B_{t}\) for all \(t\geq 0.\) In particular, a normal semigroup \(A\) commutes with \(\mathcal{S}\otimes I_{\mathcal{F}}\) if and only if there exists a normal semigroup \((B_{t})_{t\geq 0}\) on \(\mathcal{F}\) such that \(A_{t}=I\otimes B_{t}\) for all \(t\geq 0.\)_ As a corollary to the above theorem we get the model for the \(\mathcal{H}_{p,u}\) (equivalently, for the \(\mathcal{H}_{u,p}\)) part. **Corollary 3.7**.: _Let \(V_{1}\) be a c.n.u semigroup of isometries commuting with a unitary semigroup \(V_{2}\) on \(\mathcal{H}.\) Then, there is a Hilbert space \(\mathcal{F}\) and a unitary isomorphism between \(\mathcal{H}\) and \(L^{2}(\mathbb{R}_{+})\otimes\mathcal{F}\) so that under the unitary isomorphism, \((V_{1},V_{2})\) is jointly equivalent to \((\mathcal{S}\otimes I_{\mathcal{F}},I_{L^{2}(\mathbb{R}_{+})}\otimes U)\) for some unitary semigroup \(U\) on \(\mathcal{F},\) where \(\mathcal{S}\) is the right-shift-semigroup on \(L^{2}(\mathbb{R}_{+}).\)_ ## 4. Dual doubly commuting semigroups of isometries Let \((V_{1},V_{2})\) be a pair of commuting semigroups of isometries on \(\mathcal{H},\) where \(V_{j}=(V_{j,t})_{t\geq 0}\) for \(j=1,2.\) Let \((\overline{V_{1}},\overline{V_{2}})\) on \(\overline{\mathcal{H}}\) be the minimal unitary extension of the pair \((V_{1},V_{2}),\) (this extension exists due to [14, 11]). 
So, \(\overline{V_{1}}=(\overline{V_{1,t}})_{t\in\mathbb{R}}\) and \(\overline{V_{2}}=(\overline{V_{2,s}})_{s\in\mathbb{R}}\) are commuting unitary groups on \(\overline{\mathcal{H}}\) such that \(V_{1,t}V_{2,s}=P_{\mathcal{H}}\overline{V_{1,t}\overline{V_{2,s}}}|_{\mathcal{H}}\) for \(t,s\geq 0\) and \[\overline{\mathcal{H}}=\overline{\operatorname{span}}\{\overline{V_{1,t}} \overline{V_{2,s}}(\mathcal{H}):s,t\in\mathbb{R}\}.\] It is clear that \(\overline{\mathcal{H}}\ominus\mathcal{H}\) is an invariant subspace for \((\overline{V_{i,t}})^{*}\) for all \(t\geq 0,i=1,2.\) Let \(\widetilde{V_{i,t}}:=(\overline{V_{i,t}})^{*}|_{\overline{\mathcal{H}}\ominus \mathcal{H}}\) for \(i=1,2,t\geq 0.\) Then \((\widetilde{V}_{1},\widetilde{V}_{2})\) is a pair of commuting semigroups of isometries on \(\widetilde{\mathcal{H}}:=\overline{\mathcal{H}}\ominus\mathcal{H},\) where \(\widetilde{V}_{1}=(\widetilde{V_{1,t}})_{t\geq 0}\) and \(\widetilde{V}_{2}=(\widetilde{V_{2,s}})_{s\geq 0}.\) The pair \((\widetilde{V}_{1},\widetilde{V}_{2})\) is called the _dual_ of \((V_{1},V_{2}).\) The pair \((V_{1},V_{2})\) is said to be _dual doubly commuting_ if the dual \((\widetilde{V}_{1},\widetilde{V}_{2})\) is doubly commuting. In the sequel, for a semigroup \(W=(W_{t})_{t\geq 0}\) the notation \(W^{*}\) denotes the adjoint semigroup \((W_{t}^{*})_{t\geq 0}.\) **Example 4.1**.: _Let \(\mathcal{H}=L^{2}(\mathbb{R}^{2}\setminus\mathbb{R}_{+}^{2})\) and for \(f\in\mathcal{H},\) define_ \[(\mathcal{M}_{1,t}f)(x_{1},x_{2}) =\begin{cases}0&\text{if $x_{2}\geq 0$ and $x_{1}\geq-t,$}\\ f(x_{1}+t,x_{2})&\text{else,}\end{cases}\] \[(\mathcal{M}_{2,t}f)(x_{1},x_{2}) =\begin{cases}0&\text{if $x_{1}\geq 0$ and $x_{2}\geq-t,$}\\ f(x_{1},x_{2}+t)&\text{else.}\end{cases}\] _Then \((\mathcal{M}_{1},\mathcal{M}_{2})\) is a pair of commuting semigroups of isometries on \(\mathcal{H},\) where \(\mathcal{M}_{1}=(\mathcal{M}_{1,t})_{t\geq 0}\) and \(\mathcal{M}_{2}=(\mathcal{M}_{2,s})_{s\geq 0}.\) Let \(\overline{\mathcal{S}_{j,t}}:L^{2}(\mathbb{R}^{2})\to L^{2}(\mathbb{R}^{2})\) for \(j=1,2,t\in\mathbb{R}\) be defined by_ \[(\overline{\mathcal{S}_{1,t}}f)(x_{1},x_{2})=f(x_{1}-t,x_{2}),\quad(\overline {\mathcal{S}_{2,t}}f)(x_{1},x_{2})=f(x_{1},x_{2}-t).\] _for \(f\in L^{2}(\mathbb{R}^{2})\) and \((x_{1},x_{2})\in\mathbb{R}^{2}.\) Note that \(((\overline{\mathcal{S}_{1}})^{*},(\overline{\mathcal{S}_{2}})^{*})\) on \(L^{2}(\mathbb{R}^{2})\) is the minimal unitary extension of \((\mathcal{M}_{1},\mathcal{M}_{2}).\) Therefore the dual of \((\mathcal{M}_{1},\mathcal{M}_{2})\) is \((\mathcal{S}_{1},\mathcal{S}_{2})\) on \(L^{2}(\mathbb{R}_{+}^{2}).\) Hence \((\mathcal{M}_{1},\mathcal{M}_{2})\) is dual doubly commuting._ A pair \((V_{1},V_{2})\) of commuting semigroups of isometries on \(\mathcal{H}\) is called _completely nonunitary (c.n.u)_ if there is no non-zero reducing subspace \(\mathcal{H}_{0}\) of \(\mathcal{H}\) for both \(V_{1}\) and \(V_{2}\) so that \(V_{i}|_{\mathcal{H}_{0}}\) is a unitary semigroup for \(i=1,2.\) **Remark 4.2**.: 1. _Theorem_ 1.1 _for a semigroup_ \(V=(V_{t})_{t\geq 0}\) _of isometries, in particular, gives us the Wold decomposition of_ \(V_{t}\) _for each_ \(t>0.\)__ 2. _Let_ \((v_{1},v_{2})\) _be a pair of commuting isometries. Let_ \(\mathcal{H}_{u}\) _be the unitary part in the Wold decomposition of_ \(v=v_{1}v_{2}.\) _Then_ \(\mathcal{H}_{u}\) _is reducing for both_ \(v_{1}\) _and_ \(v_{2}.\)__ 3. 
_Using the above, it is easy to notice that_ \((V_{1},V_{2})\) _is a c.n.u pair of semigroups of isometries if and only if the product semigroup_ \(V=V_{1}V_{2}=(V_{1,t}V_{2,t})_{t\geq 0}\) _is c.n.u._ This section uses the ideas from [9] extensively. **Lemma 4.3**.: _The dual pair \((\widetilde{V}_{1},\widetilde{V}_{2})\) is always c.n.u._ Proof.: We have \(\overline{V_{i,t}}=\begin{pmatrix}V_{i,t}&\star\\ 0&(\widetilde{V_{i,t}})^{*}\end{pmatrix}\) on \(\overline{\mathcal{H}}=\mathcal{H}\oplus\widetilde{\mathcal{H}}\) for \(t\geq 0,i=1,2.\) By minimality, \(\overline{\mathcal{H}}=\overline{\operatorname{span}\{V_{1,t}\overline{V_{2,s}}( \mathcal{H}):t,s\in\mathbb{R}\}}.\) Let \(\widetilde{V}=\widetilde{V}_{1}\widetilde{V}_{2},\) that is, \(\widetilde{V}_{t}=\widetilde{V_{1,t}\widetilde{V}_{2,t}}\) for all \(t\geq 0.\) Let \(\mathcal{H}_{0}\) be the unitary part in the Cooper's decomposition (Theorem 1.1) of \(\widetilde{V}.\) Suppose \(\widetilde{V}\) is not c.n.u. Then \(\mathcal{H}_{0}\) is a non-zero subspace of \(\widetilde{\mathcal{H}}\) and \(\mathcal{H}_{0}\) is reducing for \(\widetilde{V}_{i,t}\) for all \(t\geq 0,i=1,2\) (from Remark 4.2). Since \((\overline{V_{i,t}})^{*}|_{\mathcal{H}_{0}}=\widetilde{V_{i,t}}|_{\mathcal{H}_{0}},\) note that \(\mathcal{H}_{0}\) is a reducing subspace for \(\overline{V_{i,t}}\) for all \(t\geq 0\) and \(i=1,2.\) Therefore, \[\overline{\operatorname{span}}\{\overline{V_{1,t}}\overline{V_{2,s}}(\mathcal{ H}):t,s\in\mathbb{R}\}\perp\mathcal{H}_{0}.\] This is a contradiction to the minimality. **Theorem 4.4**.: _Let \((V_{1},V_{2})\) be a c.n.u pair of commuting semigroups of isometries on \(\mathcal{H}.\) If the pair \((\overline{V_{1}},\overline{V_{2}})\) on \(\overline{\mathcal{H}}\) is the minimal unitary extension of \((V_{1},V_{2}),\) then the pair \(((\overline{V_{1}})^{*},(\overline{V_{2}})^{*})\) is the minimal unitary extension of the dual \((\widetilde{V_{1}},\widetilde{V_{2}}).\) In particular, \((\overset{\approx}{\widetilde{V_{1}}},\overset{\approx}{\widetilde{V_{2}}})= (V_{1},V_{2}),\) where \((\overset{\approx}{\widetilde{V_{1}}},\overset{\approx}{\widetilde{V_{2}}})\) is the dual of \((\widetilde{V_{1}},\widetilde{V_{2}}).\)_ Proof.: Let \(\widetilde{\mathcal{H}}=\overline{\mathcal{H}}\ominus\mathcal{H}.\) Only minimality of the extension needs to be proved, i.e., \[\overline{\operatorname{span}}\{\overline{V_{1,t}}\overline{V_{2,s}}(\widetilde {\mathcal{H}}):t,s\in\mathbb{R}\}=\overline{\mathcal{H}}.\] To that end, let \(x\in\overline{\mathcal{H}}\) and \(x\perp\overline{\operatorname{span}}\{\overline{V_{1,t}}\overline{V_{2,s}}( \widetilde{\mathcal{H}}):t,s\in\mathbb{R}\}.\) Then \(\langle\overline{V_{1,t}}\overline{V_{2,s}}\tilde{h},x\rangle=0\) for all \(\tilde{h}\in\widetilde{\mathcal{H}}\) and \(t,s\in\mathbb{R}.\) This implies \(\langle\tilde{h},\overline{V_{1,t}}\overline{V_{2,s}}x\rangle=0\) for all \(\tilde{h}\in\widetilde{\mathcal{H}}\) and \(t,s\in\mathbb{R}.\) Hence \(\overline{V_{1,t}}\overline{V_{2,s}}x\in\mathcal{H}\) for all \(t,s\in\mathbb{R}.\) So, \(X=\overline{\operatorname{span}}\{\overline{V_{1,t}}\overline{V_{2,s}}x:t,s\in \mathbb{R}\}\subseteq\mathcal{H}.\) Clearly \(X\) reduces both \(\overline{V_{1}}\) and \(\overline{V_{2}}\) to unitary semigroups. Since \(X\subseteq\mathcal{H}\) we have \(\overline{V_{i,t}}|_{X}=V_{i,t}|_{X}\) for \(i=1,2\) and for all \(t\geq 0.\) This shows that \(X\) reduces both \(V_{1}\) and \(V_{2}\) to unitary semigroups. 
Now since \((V_{1},V_{2})\) is c.n.u, \(X\) has to be the zero space. In particular, \(x=0.\) This shows the minimality. **Definition 4.5**.: _A c.n.u pair \((V_{1},V_{2})\) of commuting semigroups of isometries is called a modified-bishift-semigroup if the dual \((\widetilde{V_{1}},\widetilde{V_{2}})\) of \((V_{1},V_{2})\) is a bishift-semigroup._ The following is the main result of this section. It gives a Cooper type decomposition for the pairs of dual doubly commuting semigroups of isometries. **Theorem 4.6**.: _Let \((V_{1},V_{2})\) be a pair of commuting semigroups of isometries on \(\mathcal{H}.\) Suppose \((V_{1},V_{2})\) is dual doubly commuting. Then the space \(\mathcal{H}\) decomposes as_ \[\mathcal{H}=\mathcal{H}_{m}\oplus\mathcal{H}_{p,u}\oplus\mathcal{H}_{u,p} \oplus\mathcal{H}_{u,u},\] _so that \(\mathcal{H}_{m},\mathcal{H}_{p,u},\mathcal{H}_{u,p}\) and \(\mathcal{H}_{u,u}\) reduce both \(V_{1}\) and \(V_{2},\) (1) \((V_{1},V_{2})\) is a modified-bishift-semigroup on \(\mathcal{H}_{m},\) (2) \(V_{1}\) is a c.n.u semigroup, \(V_{2}\) is a unitary semigroup on \(\mathcal{H}_{p,u},\) (3) \(V_{1}\) is a unitary semigroup, \(V_{2}\) is a c.n.u semigroup on \(\mathcal{H}_{u,p},\) and (4) both \(V_{1}\) and \(V_{2}\) are unitary semigroups on \(\mathcal{H}_{u,u}.\)_ Proof.: First assume that \((V_{1},V_{2})\) is a c.n.u pair of dual doubly commuting semigroups of isometries. Let \((\overline{V_{1}},\overline{V_{2}})\) on \(\widetilde{\mathcal{H}}\) be the minimal unitary extension of \((V_{1},V_{2})\) and let \((\widetilde{V_{1}},\widetilde{V_{2}})\) be the dual of \((V_{1},V_{2}).\) Since \((V_{1},V_{2})\) is dual doubly commuting, \((\widetilde{V_{1}},\widetilde{V_{2}})\) is doubly commuting. Hence by Theorem 3.3 and Theorem 3.5 we have \(\widetilde{\mathcal{H}}=\widetilde{\mathcal{H}}_{p,p}\oplus\widetilde{\mathcal{ H}}_{p,u}\oplus\widetilde{\mathcal{H}}_{u,p}\oplus\widetilde{\mathcal{H}}_{u,u},\) where \((\widetilde{V_{1}},\widetilde{V_{2}})\) is a bishift-semigroup on \(\widetilde{\mathcal{H}}_{p,p},\)\(\widetilde{V_{1}}\) is a c.n.u semigroup and \(\widetilde{V_{2}}\) is a unitary semigroup on \(\widetilde{\mathcal{H}}_{p,u},\) and \(\widetilde{V_{1}}\) is a unitary semigroup and \(\widetilde{V_{2}}\) is a c.n.u semigroup on \(\widetilde{\mathcal{H}}_{u,p}.\) By Lemma 4.3 we have \(\widetilde{\mathcal{H}}_{u,u}=\{0\}.\) For \(i,j\in\{p,u\}\) and \((i,j)\neq(u,u)\), let \[\hat{\mathcal{H}}_{i,j}=\overline{\operatorname{span}\{\overline{V_{1,t}}\overline {V_{2,s}}(\tilde{\mathcal{H}}_{i,j}):t,s\in\mathbb{R}\}}.\] Then, it is not difficult to show that \(\hat{\mathcal{H}}_{p,p},\hat{\mathcal{H}}_{p,u}\) and \(\hat{\mathcal{H}}_{u,p}\) are pairwise orthogonal and \(\overline{\mathcal{H}}=\hat{\mathcal{H}}_{p,p}\oplus\hat{\mathcal{H}}_{p,u} \oplus\hat{\mathcal{H}}_{u,p}.\) Also clearly \(\hat{\mathcal{H}}_{i,j}\) is a reducing subspace for \(\overline{V_{k,t}}\) for \(k=1,2,t\geq 0\) and \(i,j\in\{p,u\},(i,j)\neq(u,u).\) Let \(\mathcal{H}_{m}=\hat{\mathcal{H}}_{p,p}\ominus\widetilde{\mathcal{H}}_{p,p}\) and \(\mathcal{H}_{i,j}=\hat{\mathcal{H}}_{i,j}\ominus\widetilde{\mathcal{H}}_{i,j}\) for \(i,j\in\{p,u\}\) with \(i\neq j.\) Then \[\mathcal{H}=\mathcal{H}_{m}\oplus\mathcal{H}_{p,u}\oplus\mathcal{H}_{u,p}. \tag{4.1}\] Note that \(\mathcal{H}_{m},\mathcal{H}_{p,u}\) and \(\mathcal{H}_{u,p}\) are invariant subspaces for \(\overline{V_{k,t}}\) for all \(t\geq 0,k=1,2.\) Hence they are invariant subspaces for \(V_{k,t}.\) Thus by Eq. 
(4.1), they are reducing subspaces for \(V_{k,t}.\) Note that the pair \((\overline{V_{1}}|_{\hat{\mathcal{H}}_{p,p}},\overline{V_{2}}|_{\hat{\mathcal{ H}}_{p,p}})\) on \(\hat{\mathcal{H}}_{p,p}\) is the minimal unitary extension for \((V_{1}|_{\mathcal{H}_{m}},V_{2}|_{\mathcal{H}_{m}})\) on \(\mathcal{H}_{m}\) and the pair \((\overline{V_{1}}|_{\hat{\mathcal{H}}_{i,j}},\overline{V_{2}}|_{\hat{\mathcal{ H}}_{i,j}})\) on \(\hat{\mathcal{H}}_{i,j}\) is the minimal unitary extension for \((V_{1}|_{\mathcal{H}_{i,j}},V_{2}|_{\mathcal{H}_{i,j}})\) on \(\mathcal{H}_{i,j}\) for \(i,j\in\{p,u\}\) and \(i\neq j.\) Now we shall show that \((V_{1}|_{\mathcal{H}_{m}},V_{2}|_{\mathcal{H}_{m}})\) is a modified-bishift-semigroup on \(\mathcal{H}_{m}.\) To show that, we must show that \(((\overline{V_{1}})^{*}|_{\widetilde{\mathcal{H}}_{p,p}},(\overline{V_{2}})^{* }|_{\widetilde{\mathcal{H}}_{p,p}})\) is a bishift-semigroup on \(\widetilde{\mathcal{H}}_{p,p}.\) For this, note that \(((\overline{V_{1}})^{*}|_{\widetilde{\mathcal{H}}_{p,p}},(\overline{V_{2}})^{* }|_{\widetilde{\mathcal{H}}_{p,p}})=(\overline{V_{1}}|_{\widetilde{\mathcal{H} }_{p,p}},\widetilde{V_{2}}|_{\widetilde{\mathcal{H}}_{p,p}})\) and it is a bishift-semigroup. Next we shall show that \(V_{1}|_{\mathcal{H}_{p,u}}\) is a c.n.u semigroup and \(V_{2}|_{\mathcal{H}_{p,u}}\) is a unitary semigroup on \(\mathcal{H}_{p,u}.\) Note that \((\overline{V_{2}})^{*}|_{\hat{\mathcal{H}}_{p,u}}\) is a unitary semigroup on \(\hat{\mathcal{H}}_{p,u}\) and \((\overline{V_{2}})^{*}|_{\hat{\mathcal{H}}_{p,u}}=\widetilde{V_{2}}|_{\widetilde {\mathcal{H}}_{p,u}}\) is a unitary semigroup on \(\widetilde{\mathcal{H}}_{p,u}.\) Hence \(\overline{V_{2}}|_{\mathcal{H}_{p,u}}=V_{2}|_{\mathcal{H}_{p,u}}\) is a unitary semigroup on \(\hat{\mathcal{H}}_{p,u}.\) Note that \[\hat{\mathcal{H}}_{p,u}=\overline{\operatorname{span}\{\overline{V_{1,t}}( \widetilde{\mathcal{H}}_{p,u}):t\in\mathbb{R}\}}. \tag{4.2}\] Since \(\overline{V_{1}}^{*}\) is a unitary extension for the c.n.u semigroup \(\widetilde{V}_{1}|_{\widetilde{\mathcal{H}}_{p,u}}\) on \(\widetilde{\mathcal{H}}_{p,u},\) by Eq. (4.2) one can see that \(\hat{\mathcal{H}}_{p,u}\) is the minimal space for this extension. Therefore, since \(\mathcal{H}_{p,u}=\hat{\mathcal{H}}_{p,u}\ominus\widetilde{\mathcal{H}}_{p,u},\) we see (using the same techniques as in Lemma 4.3) that \(\overline{V_{1}}|_{\mathcal{H}_{p,u}}=V_{1}|_{\mathcal{H}_{p,u}}\) is also a c.n.u semigroup on \(\mathcal{H}_{p,u}.\) Similarly we can show that \(V_{1}|_{\mathcal{H}_{u,p}}\) is a unitary semigroup, and \(V_{2}|_{\mathcal{H}_{u,p}}\) is a c.n.u semigroup on \(\mathcal{H}_{u,p}.\) This proves the theorem for a c.n.u pair \((V_{1},V_{2}).\) Now let \((V_{1},V_{2})\) be any pair (not necessarily c.n.u) of dual doubly commuting semigroups of isometries. Let \(\mathcal{H}_{u,u}\) be the unitary part in Theorem 1.1 for \(V_{1}V_{2}.\) Then \(\mathcal{H}_{u,u}\) reduces both \(V_{1}\) and \(V_{2},\) hence \(V_{1}|_{\mathcal{H}_{u,u}}\) and \(V_{2}|_{\mathcal{H}_{u,u}}\) are unitary semigroups. Let \(\mathcal{H}_{s}=\mathcal{H}\ominus\mathcal{H}_{u,u}.\) Since the dual of \((V_{1},V_{2})\) is same as the dual of \((V_{1}|_{\mathcal{H}_{s}},V_{2}|_{\mathcal{H}_{s}}),\)\((V_{1}|_{\mathcal{H}_{s}},V_{2}|_{\mathcal{H}_{s}})\) is a c.n.u pair of dual doubly commuting semigroups of isometries. Hence by applying the above proof to the c.n.u pair \((V_{1}|_{\mathcal{H}_{s}},V_{2}|_{\mathcal{H}_{s}}),\) we get the proof of the theorem. 
The converse of this theorem also holds and it is easy to see. Note that the dual of a commuting pair of a c.n.u semigroup and a unitary semigroup is again a commuting pair of a c.n.u semigroup and a unitary semigroup from Corollary 3.7. The next result shows that \((\mathcal{M}_{1},\mathcal{M}_{2})\) of Example 4.1 is a model for the pairs of dual doubly commuting semigroups of isometries. **Theorem 4.7**.: _Let \((V_{1},V_{2})\) be a modified-bishift-semigroup on \(\mathcal{H}.\) Then \((V_{1},V_{2})\) is jointly unitarily equivalent to \((\mathcal{M}_{1}\otimes I_{\mathcal{F}},\mathcal{M}_{2}\otimes I_{\mathcal{F}})\) for some Hilbert space \(\mathcal{F}.\)_ Proof.: Since the dual \((\widetilde{V_{1}},\widetilde{V_{2}})\) of \((V_{1},V_{2})\) is a bishift-semigroup, \((\widetilde{V_{1}},\widetilde{V_{2}})\) is jointly unitarily equivalent to \((\mathcal{S}_{1}\otimes I_{\mathcal{F}},\mathcal{S}_{2}\otimes I_{\mathcal{F}})\) for some Hilbert space \(\mathcal{F}.\) Note that the dual of \((\mathcal{S}_{1}\otimes I_{\mathcal{F}},\mathcal{S}_{2}\otimes I_{\mathcal{F}})\) is \((\mathcal{M}_{1}\otimes I_{\mathcal{F}},\mathcal{M}_{2}\otimes I_{\mathcal{F}}),\) hence by Theorem 4.4, \((V_{1},V_{2})\) is jointly unitarily equivalent to \((\mathcal{M}_{1}\otimes I_{\mathcal{F}},\mathcal{M}_{2}\otimes I_{\mathcal{F}}).\) The forward part of the following corollary follows from Theorem 4.6, Theorem 4.7 and the fact that \((\mathcal{M}_{1},\mathcal{M}_{2})\) is not doubly commuting. The converse part follows from [7] and Theorem 4.6. **Corollary 4.8**.: _Let \((V_{1},V_{2})\) be a pair of commuting semigroups of isometries on \(\mathcal{H}.\) Then \((V_{1},V_{2})\) is simultaneously doubly commuting and dual doubly commuting if and only if \(\mathcal{H}\) decomposes as the direct sum_ \[\mathcal{H}=\mathcal{H}_{p,u}\oplus\mathcal{H}_{u,p}\oplus\mathcal{H}_{u,u}\] _of reducing subspaces \(\mathcal{H}_{p,u},\mathcal{H}_{u,p}\) and \(\mathcal{H}_{u,u}\) for both \(V_{1}\) and \(V_{2},\) such that (1) \(V_{1}\) is a c.n.u semigroup and \(V_{2}\) is a unitary semigroup on \(\mathcal{H}_{p,u},\) (2) \(V_{1}\) is a unitary semigroup and \(V_{2}\) is a c.n.u semigroup on \(\mathcal{H}_{u,p},\) and (3) both \(V_{1}\) and \(V_{2}\) are unitary semigroups on \(\mathcal{H}_{u,u}.\)_ **Acknowledgements** The authors thank the referee for a careful reading and useful suggestions. Research is funded by the J C Bose fellowship JCB/2021/000041, the D S Kothari postdoctoral fellowship MA/20-21/0047 and the DST FIST program - 2021 [TPN - 700661].
2310.09152
Can we explain cosmic birefringence without a new light field beyond Standard Model?
The recent analysis of the Planck 2018 polarization data shows a nonzero isotropic cosmic birefringence (ICB) that is not explained within the $\Lambda$CDM paradigm. We then explore the question of whether the nonzero ICB is interpreted by the framework of the Standard Model Effective Field Theory (SMEFT), or at the energy scales of the cosmic microwave background, the low-energy EFT (LEFT) whose dynamical degrees of freedom are five SM quarks and all neutral and charged leptons. Our systematic study reveals that any operator in the EFT on a cosmological background would not give the reported ICB angle, which is observationally consistent with frequency independence. In particular, we estimate the size of the ICB angle generated by the effect that the cosmic microwave background photons travel through the medium of the cosmic neutrino background with parity-violating neutrino-photon interactions and find that it would be too small to explain the data. If the reported ICB angle should be confirmed, then our result would indicate the existence of a new particle that is lighter than the electroweak scale and feebly interacting with the SM particles.
Yuichiro Nakai, Ryo Namba, Ippei Obata, Yu-Cheng Qiu, Ryo Saito
2023-10-13T14:48:08Z
http://arxiv.org/abs/2310.09152v2
# Can we explain cosmic birefringence without a new light field ###### Abstract The recent analysis of the Planck 2018 polarization data shows a nonzero isotropic cosmic birefringence (ICB) that is not explained within the \(\Lambda\)CDM paradigm. We then explore the question of whether the nonzero ICB is interpreted by the framework of the Standard Model Effective Field Theory (SMEFT), or at the energy scales of the cosmic microwave background, the low-energy EFT (LEFT) whose dynamical degrees of freedom are five SM quarks and all neutral and charged leptons. Our systematic study reveals that any operator in the EFT on a cosmological background would not give the reported ICB angle, which is observationally consistent with frequency independence. In particular, we estimate the size of the ICB angle generated by the effect that the cosmic microwave background photons travel through the medium of the cosmic neutrino background with parity-violating neutrino-photon interactions and find that it would be too small to explain the data. If the reported ICB angle should be confirmed, then our result would indicate the existence of a new particle that is lighter than the electroweak scale and feebly interacting with the SM particles. ## I Introduction Precision measurements of the cosmic microwave background (CMB) radiation play a central role in modern cosmology and enable us to deepen our understanding of the fundamental laws of nature. The Universe looked through the eyes of the WMAP and _Planck_ satellites is well-fitted by the \(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) model [1; 2; 3; 4] which has long become a cornerstone of the standard cosmology. However, the recent analysis of CMB polarization data has measured a parity-violating signal [5; 6; 7; 8; 9], called _cosmic birefringence_[10; 11; 12], which may show us a hint of new physics beyond the \(\Lambda\)CDM paradigm. Cosmic birefringence is a phenomenon that rotates the plane of linear polarization of the CMB photons. Its overall rotation angle from the last scattering surface to the present, called isotropic cosmic birefringence (ICB) angle and hereafter denoted by \(\beta\), has been probed in the past CMB measurements [13; 14; 15; 16]. However, the uncertainty of systematic error from the instrumental miscalibration of the polarization angle has strongly limited the determination of \(\beta\). To overcome this issue, the method relying on the polarized Galactic foreground emission to extract the intrinsic effect on \(\beta\) has been developed [17; 18; 19], and ref. [5] has recently reported a nonzero ICB angle of \(\beta=0.35^{\circ}\pm 0.14^{\circ}\) (\(2.4\sigma\)) at the 68% confidence level for nearly full-sky _Planck_ polarization data. The precision of \(\beta\) has been improved and the latest joint analysis of _Planck_/WMAP data has reported \(\beta=0.34^{\circ}\pm 0.09^{\circ}\) (\(3.6\sigma\)) [8]. Moreover, the measured \(\beta\) is consistent with frequency independence and does not favor a possibility of Faraday rotation effect caused by the local magnetic field [7]. Although the contribution to a systematic error in the ICB angle from Galactic foregrounds is not yet well understood [20; 21], we could avoid this problem by developing the method that does not rely on the foreground contribution but reduces the impact of the miscalibration angle in the upcoming CMB observations [22; 23] (see ref. [24] for a detailed review). 
Therefore, we can expect a solid confirmation of the nonzero ICB angle in the near future, and it is timely to explore its origin. The measured ICB can be caused when the CMB photons pass through a cosmological background of a pseudoscalar field \(\phi\) that is weakly coupled to the photon through a Chern-Simons (CS) term \(\phi F_{\mu\nu}\bar{F}^{\mu\nu}\), where \(F\) and \(\bar{F}\) respectively denote the electromagnetic tensor and its dual [25; 26]. A prevailing candidate of the pseudoscalar field \(\phi\) has been provided by an axion-like particle (ALP), and a possibility that photon's birefringence is caused by a cosmic background of the ALP constitut ing dark energy or dark matter has been developed [27; 28; 29; 30; 31; 32; 33; 34]. Then, after the measurement of the nonzero ICB has been reported, most of the previous studies have focused on the implications for the ALP [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. However, to explain the reported ICB, the ALP mass should be extremely light [35]. The existence of such an ultra-light ALP has significant implications for physics beyond the Standard Model (SM), _e.g.,_ ruling out simple Grand Unified models [52]. Then, one would wonder whether there are other possibilities to generate the ICB or not. In the present paper, as a step to identify new physics behind the reported nonzero ICB, we explicitly show that it requires at least a new light particle other than the SM particles under the standard cosmological evolution. When a new light particle is absent, an operator that induces the ICB should be written only in terms of the SM fields. Then, the SM effective field theory (SMEFT) and low-energy effective field theory (LEFT) provide a powerful tool to systematically list up all such operators in the SM and its extensions. The SMEFT includes all operators of the SM fields that respect the gauge symmetry \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\). This framework includes any possible explanations of the ICB with the SM fields known in the literature including scatterings between photons and fermions [53]. As we will see, it is also convenient to introduce the so-called LEFT, the EFT below the electroweak breaking scale. In the LEFT, we assume that there are no new particles (other than possible light sterile neutrinos) around or below the electroweak scale and the interactions respect the gauge symmetry \(SU(3)_{C}\times U(1)_{\rm EM}\). We report the following two results: (i) only a CS-type effective operator, \(\tilde{\cal O}F_{\mu\nu}\tilde{F}^{\mu\nu}\) with a Lorentz-scalar operator \(\tilde{\cal O}\), is able to induce the frequency-independent ICB in our Universe, and (ii) none of the CS-type effective operators in the SMEFT/LEFT leads to the desired ICB angle. We note that the operator \(\tilde{\cal O}F_{\mu\nu}\tilde{F}^{\mu\nu}\) is distinguished from \(J_{\mu}A_{\nu}\tilde{F}^{\mu\nu}\) with a vector current \(J_{\mu}\). It has been reported that the effective operator \(J_{\mu}A_{\nu}\tilde{F}^{\mu\nu}\) for the neutrino current appears via the loop interactions between photon and neutrino, and leading to the photon's birefringence [54; 55; 56; 57; 58; 59]. 
However, the operator \(J_{\mu}A_{\nu}\tilde{F}^{\mu\nu}\) for the neutrino current is not solely gauge invariant under \(U(1)_{\rm EM}\) and hence does not appear within SMEFT/LEFT.1 In the study of cosmic birefringence by this operator, a couple of scenarios beyond SMEFT/LEFT have been developed to get a sizable amount of ICB angle [60; 61]. Therefore, if the data should be confirmed, our results would then indicate the breakdown of the SMEFT/LEFT and thus the existence of a new particle lighter than the electroweak scale. Footnote 1: This claim would not be true when a photon has an effective mass in a plasma or two lepton loop diagrams are considered. However, this effect usually gives rise to a very tiny birefringence angle [58] or frequency-dependent birefringence angle [57]. The rest of the paper is organized as follows. Section II shows that only CS-type operators are relevant to the generation of the frequency-independent ICB. In section III, we list up all the possible CS-type operators in the SMEFT/LEFT. Then, section IV discusses whether the listed CS-type operators can induce the reported nonzero ICB with the corresponding cosmological backgrounds. In section V, we extend our arguments to narrow down possible new particles that can explain the reported nonzero ICB. Section VI is devoted to conclusions. Some calculational details are summarized in appendices. ## II Operators relevant to ICB Let us first show that interactions relevant to the frequency-independent ICB are only given by CS-type operators. The ICB requires an effective parity-violating operator quadratic in the photon field \(A_{\mu}\) because it is caused by a difference between the phase velocities of the left- and right-handed photons. In the vacuum, due to the Lorentz and \(U(1)_{\rm EM}\) symmetries, the effective quadratic action of the photon field \(A_{\mu}\) is only described by the operator, \[F_{\mu\nu}F^{\mu\nu}\;, \tag{1}\] with the field-strength tensor \(F_{\mu\nu}\equiv\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}\). The parity-violating CS operator, \[F_{\mu\nu}\tilde{F}^{\mu\nu}\,;\quad\tilde{F}^{\mu\nu}\equiv\epsilon^{\mu\nu \alpha\beta}F_{\alpha\beta}/2\;, \tag{2}\] is a total-derivative term, when entering in the action with a constant coefficient, and thus does not contribute to the local action. Note that \(\epsilon^{\mu\nu\rho\sigma}=\eta^{\mu\nu\rho\sigma}/\sqrt{-g}\), where we choose the convention that the flat-space Levi-Civita symbol takes \(\eta^{0123}=1\). Since there is no parity-violating term in the action, the ICB is not generated in the vacuum: it requires a medium. The isotropy of the measured rotation angle indicates that the medium is homogeneous over the Universe2 and thus made up of stable matter with a cosmological background. Moreover, the charged components can be excluded from our considerations. The stable charged SM particles are only electrons and protons. Their contributions are negligible compared to the neutral ones because the cosmological backgrounds of electrons and protons are suppressed by the small baryon-to-photon ratio. Therefore, we can assume that the medium is homogeneous and neutral. Under the standard cosmological evolution, the candidate matter that constitutes the medium is limited to the following: the Higgs vacuum expectation value (VEV), the quark pair/gluon condensates, the cosmic neutrino background (C\(\nu\)B), and the cosmological magnetic field.3 Footnote 3: We shall omit gravitational effects on ICB in the following considerations. 
Beyond minimal couplings between gauge fields and gravity, it has been shown that there exists a unique non-minimal coupling without pathology of the form \(\bar{R}_{\mu\nu\rho\sigma}F^{\mu\nu}\bar{F}^{\rho\sigma}\), where \(\bar{R}_{\mu\nu\rho\sigma}\) is the dual of the Riemann tensor [62]. Not only is the curvature tensor of the order of \(H^{2}\) (\(H\): Hubble parameter) on the cosmological background, but this term in fact does not break parity, thus no contribution to ICB. The presence of a cosmological background can break some symmetries in the effective quadratic action of the photon field and allow operators other than \(F_{\mu\nu}F^{\mu\nu}\) when the fields are expanded around the background. From the neutrality of the background, the effective action should respect the \(U(1)_{\rm EM}\) symmetry. Let us also assume that the background respects the cosmological principle: the Universe looks homogeneous and isotropic in the CMB rest frame. This is valid except for the cosmological magnetic field among the candidates. Then, in the CMB rest frame, the effective action should have spatial invariance. From the gauge and spatial invariance, the effective quadratic action should have the following operators: \[c_{EE}|{\bf E}|^{2}+c_{BB}|{\bf B}|^{2}+c_{EB}{\bf E}\cdot{\bf B}\;, \tag{3}\] and operators with derivative(s) on \({\bf E}\) and \({\bf B}\), where the coefficients are functions of the cosmic time \(t_{c}\). Here, the electric field \({\bf E}\) and the magnetic field \({\bf B}\) are defined in the CMB rest frame. The functions \(c_{EE}\) and \(c_{BB}\) correspond to an electric permittivity and a magnetic permeability, which deviate from those in the vacuum due to the presence of the cosmological medium. Now, we also have the parity-violating term \({\bf E}\cdot{\bf B}\) in the action. Since the coefficient \(c_{EB}\) is a function of the cosmic time \(t_{c}\), it is not reduced to the total derivative term. In appendices A and B, we show that the operator \({\bf E}\cdot{\bf B}\) actually induces the ICB. The operators with the derivative(s) give a frequency-dependent ICB angle, which is inconsistent with observations [7]. Therefore, we do not consider them further. This argument indicates that any operator relevant to the reported ICB should be reduced to the \({\bf E}\cdot{\bf B}\) term in a cosmological background. It has not been proved yet that only the CS-type operator leads to the frequency-independent ICB. There might be operators other than \(F_{\mu\nu}\tilde{F}^{\mu\nu}\) that reduce to \({\bf E}\cdot{\bf B}\) in a cosmological background. First, we consider parity-violating operators of the form, \[J_{\alpha\beta\mu\nu}F^{\alpha\beta}\tilde{F}^{\mu\nu}\,, \tag{4}\] where \(J_{\alpha\beta\mu\nu}\) is a tensor with even parity written in terms of the metric and matter fields. From the cosmological principle, the cosmological background of \(J_{\alpha\beta\mu\nu}\) should be written in terms of the metric \(g_{\mu\nu}\) and the unit four-vector \(u_{\mu}\propto\nabla_{\mu}t_{c}\) for the cosmic time \(t_{c}\) because only these two tensors are parity-even and invariant under the spatial rotation in the CMB rest frame where \(u_{\mu}\propto\delta^{0}_{\mu}\). If \(J_{\alpha\beta\mu\nu}\) does not contain \(g_{\mu\nu}\), \(J_{\alpha\beta\mu\nu}\) should be proportional to \(u_{\alpha}u_{\beta}u_{\mu}u_{\nu}\) and the operator (4) vanishes. 
Therefore, \(J_{\alpha\beta\mu\nu}\) should have the following form, \[J_{\alpha\beta\mu\nu}=\frac{1}{4}(g_{\alpha\mu}J_{\beta\nu}+g_{\beta\nu}J_{ \alpha\mu}-g_{\beta\mu}J_{\alpha\nu}-g_{\alpha\nu}J_{\beta\mu})\,, \tag{5}\] and the operator (4) reduces to \[J_{\alpha\beta}F^{\alpha\mu}\tilde{F}^{\beta}{}_{\mu}\,, \tag{6}\] in the cosmological background. We can show that this operator is rewritten in the CS form using the identity [63], \[F_{\alpha\mu}\tilde{F}^{\beta\mu}=\frac{\delta^{\beta}_{\alpha}}{4}\,F_{\mu \nu}\tilde{F}^{\mu\nu}\;. \tag{7}\] In terms of \(J_{\alpha\beta\mu\nu}\), the resultant CS-type operator is given by \[\tilde{\cal O}F_{\mu\nu}\tilde{F}^{\mu\nu}\;;\quad\tilde{\cal O}=\frac{J_{ \mu}{}^{\mu}}{4}=\frac{J_{\alpha\beta}{}^{\alpha\beta}}{6}\,. \tag{8}\] Therefore, it would be enough to study the CS-type operators for the Lorentz scalar \(\tilde{\cal O}=J_{\alpha\beta}{}^{\alpha\beta}/6\). Another possibility is \[J_{\mu}K^{\mu}\;;\ K^{\mu}\equiv 2A_{\nu}\tilde{F}^{\mu\nu}\;, \tag{9}\] for the CS current \(K^{\mu}\) (\(\nabla_{\mu}K^{\mu}=F_{\mu\nu}\tilde{F}^{\mu\nu}\)) and a matter field \(J_{\mu}\). This operator is reduced to \(c_{\rm EB}{\bf E}\cdot{\bf B}\) with \(\dot{c}_{\rm EB}=J_{0}/4\). However, it does not respect \(U(1)_{\rm EM}\) in general. To achieve the invariance under the gauge transformation \(A_{\mu}\to A_{\mu}+\nabla_{\mu}\alpha\), the current \(J_{\mu}\) should identically satisfy the integrability condition \(\nabla_{[\mu}J_{\nu]}=0\), _i.e.,_\(J_{\mu}=\nabla_{\mu}\tilde{\cal O}\) for a Lorentz scalar \(\tilde{\cal O}\). Then, \(J_{\mu}K^{\mu}\) can be rewritten in the CS form \(\tilde{\cal O}F_{\mu\nu}\tilde{F}^{\mu\nu}\) with a partial integration. Note that \(J_{\mu}K^{\mu}\) can be embedded into the gauge invariant operator \[\bar{\psi}\gamma_{\mu}D_{\nu}\psi\tilde{F}^{\mu\nu}\;, \tag{10}\] with \(J_{\mu}=\bar{\psi}\gamma_{\mu}\psi\) for a fermion field \(\psi\) and \(D_{\mu}\) its covariant derivative. However, the fermion field should be electrically charged and this option is excluded from the neutrality of the background. In general, to get the interaction (9), our model should contain a field playing the role of a Stueckelberg field or Nambu-Goldstone (NG) field such as the phase of an electrically-charged field [60]. In our setup, a candidate for a Stueckelberg field or NG field is absent for \(A_{\mu}\). Therefore, we do not need to consider the interaction (9) (see also the discussion in section V). Finally, we consider an operator in the form \[J_{\mu\nu}F^{\mu\nu}\,. \tag{11}\] This operator vanishes when we replace the matter field \(J^{\mu\nu}\) by the cosmological background, which should be written in terms of \(g_{\mu\nu}\) and \(u_{\mu}\). However, it can affect the propagation of CMB photons when their backreaction to the cosmological background is taken into account. Schematically, we can write the backreaction term as \[\delta J_{\mu\nu}=\hat{K}_{\mu\nu\alpha\beta}F^{\alpha\beta}\,, \tag{12}\] by separating \(J_{\mu\nu}\) into the background and the deviation induced by a propagating photon, \(\delta J_{\mu\nu}\). Here, the response function \(\hat{K}_{\mu\nu\alpha\beta}\) is a non-local operator in general. Then, substituting Eq. (12) to \(J_{\mu\nu}F^{\mu\nu}\), we obtain the effective operator \[F^{\mu\nu}\hat{K}_{\mu\nu\alpha\beta}F^{\alpha\beta}\,. 
\tag{13}\] If the response function \(\hat{K}_{\mu\nu\alpha\beta}\) has a component \[\hat{K}_{\mu\nu\alpha\beta}\supset\hat{\mathcal{O}}_{\epsilon}\epsilon_{\mu\nu\alpha\beta}/2\,, \tag{14}\] the operator (13) results in an operator \[F_{\mu\nu}\hat{\mathcal{O}}_{\epsilon}\tilde{F}^{\mu\nu}\,. \tag{15}\] In general, \(\hat{\mathcal{O}}_{\epsilon}\) is a non-local operator and thus the operator (15) induces a frequency-dependent ICB angle. In appendix C, we will show that the resultant ICB angle actually depends on the frequency in the framework of SMEFT/LEFT. Therefore, we do not need to consider the interaction (11).4 Footnote 4: Ref. [64] has developed a model of dipole interaction between CMB photons and dark matter and has also shown that it is hard to explain the observed ICB angle. The above discussion does not apply in the presence of a cosmological magnetic field \(\bar{\mathbf{B}}\) because it breaks the isotropy down to the axial symmetry along its direction. As we have done in the isotropic case of Eq. (3), it is convenient to work in the CMB rest frame with \(u^{\mu}\propto\delta^{\mu}_{0}\) to write down the most general operators. Here, the homogeneity requires that \(|\bar{\mathbf{B}}|\) should be a function of the cosmic time \(t_{c}\). Then, the kinetic action is given by quadratic terms in the propagating photon fields, \(\mathbf{E}_{\parallel}\), \(\mathbf{E}_{\perp}\), \(\mathbf{B}_{\parallel}\) and \(\mathbf{B}_{\perp}\), where the subscripts \(\parallel\) and \(\perp\) respectively denote the components parallel and orthogonal to the background magnetic field. Since the background magnetic field is an axial vector, the parallel and orthogonal components respectively have the opposite and same parity as the original vector. Thus, the kinetic action consists of the parity-violating operators \[\mathbf{E}_{\parallel}\cdot\mathbf{B}_{\parallel}\,,\quad\mathbf{E}_{\perp}\cdot\mathbf{B}_{\perp}\,, \tag{16}\] as well as the parity-conserving operators \[\mathbf{E}_{\parallel}\cdot\mathbf{E}_{\parallel}\,,\quad\mathbf{E}_{\perp}\cdot\mathbf{E}_{\perp}\,,\quad\mathbf{B}_{\parallel}\cdot\mathbf{B}_{\parallel}\,,\quad\mathbf{B}_{\perp}\cdot\mathbf{B}_{\perp}\,. \tag{17}\] The latter parity-conserving terms can be interpreted as the permittivity and permeability tensors becoming non-diagonal due to the anisotropic medium. As for the parity-violating terms, in addition to the CS form \(F_{\mu\nu}\tilde{F}^{\mu\nu}\propto\mathbf{E}\cdot\mathbf{B}\), a new independent parity-violating operator \(\mathbf{E}_{\parallel}\cdot\mathbf{B}_{\parallel}\) can appear (an \(\mathbf{E}_{\perp}\cdot\mathbf{B}_{\perp}\) term can be rewritten in terms of \(F_{\mu\nu}\tilde{F}^{\mu\nu}\) and \(\mathbf{E}_{\parallel}\cdot\mathbf{B}_{\parallel}\)). As we prove in appendix B, this new operator generates the anisotropic cosmic birefringence (ACB). It should also be noted that the parity-conserving operators (17) can cause a cosmic birefringence in the anisotropic background because the cosmological magnetic field \(\bar{\mathbf{B}}\) can modify the dispersion relations for the polarization modes parallel and orthogonal to \(\bar{\mathbf{B}}\). However, as expected from the fact that the modification depends on the relative angle of the propagating direction with \(\bar{\mathbf{B}}\), the resultant cosmic birefringence is anisotropic (see ref. [65]). Moreover, it depends on the frequency, which is not consistent with the report in ref. [7].
Therefore, we conclude that only the CS-type operators \(\tilde{\mathcal{O}}F_{\mu\nu}\tilde{F}^{\mu\nu}\) can generate the reported frequency-independent ICB. ## III CS-type operators in SMEFT/LEFT We now list up CS-type operators in the SMEFT or LEFT, \[\mathcal{L}_{\text{CS}}=\frac{\alpha}{8\pi}\sum_{a}\frac{\tilde{\mathcal{O}}_{a}}{\Lambda_{a}^{n}}\,F_{\mu\nu}\tilde{F}^{\mu\nu}\, \tag{18}\] where the subscript \(a\) denotes the operator species, \(\Lambda_{a}\) is some mass scale and the power \(n\) is given by the dimension of the operator \(\tilde{\mathcal{O}}_{a}\). We have factored out the electromagnetic constant \(\alpha/(8\pi)\) as a convention. As shown in the previous section, any particle processes that might lead to the ICB should be described by Eq. (18) as an effective Lagrangian. We henceforth list possible \(\tilde{\mathcal{O}}_{a}\) of each dimension in the SMEFT/LEFT. **Dimension 2** - Let us first write down effective operators \(\tilde{\mathcal{O}}_{a}\) of dimension \(2\). In the SMEFT, the operators \(\tilde{\mathcal{O}}_{a}\) should be Lorentz scalars and singlets of the SM symmetry \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\). Their building blocks are the Higgs field \(H\) (dimension \(1\)), the covariant derivative \(D\) (dimension \(1\)), the SM fermion \(\psi\) (dimension \(3/2\)), and the SM gauge field strength tensor \(X\) (dimension \(2\)). Due to the Lorentz symmetry, the only possibilities of the dimension-two operators \(\tilde{\mathcal{O}}_{a}\) are thus two classes, \(H^{2}\) and \(D^{2}\). All independent operators of those classes have been listed in ref. [66]. The operators \(\tilde{\mathcal{O}}_{a}\) are exhausted by \(\tilde{\mathcal{O}}_{H}\equiv H^{\dagger}H\). Therefore, the dimension-two CS-type operator is given by \[\frac{\alpha}{8\pi}\frac{H^{\dagger}H}{\Lambda_{H}^{2}}\,F_{\mu\nu}\tilde{F}^{\mu\nu}. \tag{19}\] **Dimension 3** - We next write down effective operators \(\tilde{\cal O}_{a}\) of dimension 3. The Lorentz symmetry restricts the candidates to three classes such as \(H^{3}\), \(HD^{2}\) and \(\psi^{2}\). However, the \(SU(2)_{L}\) symmetry requires that the operators should contain an even number of \(H\). This rules out all bosonic candidates. Since there is no hypercharge singlet of the SM fermion bilinear \(\psi^{2}\), we can conclude that the SMEFT does not contain dimension-three CS-type operators [67]. However, the operators of the \(\psi^{2}\) type can arise in the LEFT respecting \(SU(3)_{C}\times U(1)_{\rm EM}\). It has been shown in ref. [68] that the CS-type operators of the \(\psi^{2}\) type are generated by loop effects. In the LEFT, there are four-fermion interactions \((\bar{\psi}\Gamma\psi)(\bar{\psi}_{c}\Gamma\psi_{c})\) (\(\Gamma=1,\gamma_{\mu},\sigma_{\mu\nu}\)) with charged particles \(\psi_{c}\). Among them, the scalar-type interaction \((\bar{\psi}\psi)(\bar{\psi}_{c}\psi_{c})\) generates the CS-type operators. The Lagrangian (18) is thus given by \[\sum_{\psi=e,\nu,d,u}\frac{\alpha}{8\pi}\frac{\tilde{\cal O}_{\psi}}{\Lambda_{\psi}^{3}}F_{\mu\nu}\tilde{F}^{\mu\nu}\;, \tag{20}\] for three generations of charged leptons \(e_{i}\,(i=1,2,3)\), neutrinos \(\nu_{i}\) and down-type quarks \(d_{i}\), and two generations of up-type quarks \(u_{i}\).
Here, when the neutrinos are assumed to be Dirac fermions, the dimension-3 operators \(\tilde{\cal O}_{a}\) are composed of the following fermion bilinears, \[\tilde{\cal O}_{e}\equiv\tilde{\cal C}_{e}^{ij}\bar{e}^{i}P_{L}e^{j}+{\rm h.c.}\,, \tag{21a}\] \[\tilde{\cal O}_{\nu}\equiv\tilde{\cal C}_{\nu}^{ij}\bar{\nu}^{i}P_{L}\nu^{j}+{\rm h.c.}\,, \tag{21b}\] \[\tilde{\cal O}_{d}\equiv\tilde{\cal C}_{d}^{ij}\bar{d}^{i}P_{L}d^{j}+{\rm h.c.}\,, \tag{21c}\] \[\tilde{\cal O}_{u}\equiv\tilde{\cal C}_{u}^{ij}\bar{u}^{i}P_{L}u^{j}+{\rm h.c.}\,, \tag{21d}\] where \(\tilde{\cal C}_{e,\nu,d,u}\) denote dimensionless coupling matrices, \(\bar{f}\equiv f^{\dagger}\gamma^{0}\) and \(P_{L}\equiv(1-\gamma^{5})/2\). For Majorana neutrinos, we can define a similar operator \(\tilde{\cal O}_{\nu}\) but an extra factor of \(1/2\) should be included for \(i=j\). In this case, the operator violates the lepton number by two units. When the interaction (20) is matched to the SMEFT, the energy scale \(\Lambda_{\psi}\) is related to a characteristic energy scale in the SMEFT, \(\Lambda_{\rm SMEFT}\), as [68] \[\frac{1}{\Lambda_{\psi}^{3}}\sim\frac{v}{4\pi m_{c}}\frac{1}{\Lambda_{\rm SMEFT}^{3}}\,, \tag{22}\] with the Higgs vacuum expectation value (VEV) \(v\) and the mass \(m_{c}\) of a charged particle in the loop. The vector-type interaction \((\bar{\psi}\gamma_{\mu}\psi)(\bar{\psi}_{c}\gamma^{\mu}\psi_{c})\), such as the Fermi interactions in the SM, does not generate a CS-type operator. Instead, it could only generate an operator of the form: \[(\bar{\psi}\gamma_{\mu}\psi)K^{\mu}\;;\ K^{\mu}\equiv A_{\nu}\tilde{F}^{\mu\nu}\,, \tag{23}\] because of its tensor structure. As noted below Eq. (9), the operator (23) is not invariant under \(U(1)_{\rm EM}\) unless \(\bar{\psi}\gamma_{\mu}\psi=\nabla_{\mu}\tilde{\cal O}\) for a Lorentz scalar \(\tilde{\cal O}\). However, this is not the case because the current \(\bar{\psi}\gamma_{\mu}\psi\) has transverse components. Therefore, we conclude that the operator (23) is forbidden by \(U(1)_{\rm EM}\) (see also ref. [69]). **Dimension 4** - The operators such as \(H^{4}\) and \(H\psi^{2}\) in the SM Lagrangian can appear in effective operators \(\tilde{\cal O}_{a}\) of dimension 4. However, they are composed of the building blocks that have already appeared in the dimension-two/three operators. Then, we only consider novel candidates, \[\sum_{X=F,Z,W,G}\frac{\alpha}{8\pi}\left(\frac{X_{\alpha\beta}X^{\alpha\beta}}{\Lambda_{X}^{4}}+\frac{X_{\alpha\beta}\tilde{X}^{\alpha\beta}}{\Lambda_{X}^{4}}\right)F_{\mu\nu}\tilde{F}^{\mu\nu}\;, \tag{24}\] where \(X,\tilde{X}\) denote a field-strength tensor and its dual, and \(Z,W,G\) are the \(Z,W\) bosons and gluon, respectively. On the CMB scale \(T_{\rm LSS}\sim 0.3\,\)eV, these types of operators with \(X_{\mu\nu}=F_{\mu\nu}\) emerge from the electron loop [70; 71]: \[{\cal L}_{\rm EH}\supset\frac{7\alpha^{2}}{360m_{e}^{4}}\left(F_{\mu\nu}\tilde{F}^{\mu\nu}\right)^{2}\;, \tag{25}\] with the electron mass \(m_{e}\). This interaction is known as the Euler-Heisenberg Lagrangian, which is one example of non-linear electrodynamics. **Dimension \(n\,(>4)\)** - Much higher dimensional operators \(\tilde{\cal O}_{a}\) do not contain new building blocks and will give subdominant effects. Therefore, we do not consider such operators any further. ## IV Isotropic cosmic birefringence Let us discuss whether the listed CS-type operator of each dimension is able to induce the reported nonzero ICB with the corresponding cosmological background.
The Lagrangian of interest is given by \[{\cal L}_{A}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4}\tilde{\cal O}F_{\mu\nu} \tilde{F}^{\mu\nu}\;. \tag{26}\] On a cosmological background \(\phi_{\tilde{\cal O}}\equiv\langle\tilde{\cal O}\rangle\), the CS-type operator \(\tilde{\cal O}F_{\mu\nu}\tilde{F}^{\mu\nu}/4\) reduces to \(-\phi_{\cal O}{\bf E}\cdot{\bf B}\). Then, the induced ICB angle \(\beta\) is estimated as (see appendix A for the derivation) [24] \[\beta=\frac{1}{2}\int_{t_{\rm LSS}}^{t_{0}}dt\,\frac{\partial\phi_{\cal O}}{ \partial t}=\frac{1}{2}\left[\phi_{\cal O}(t_{0})-\phi_{\tilde{\cal O}}(t_{\rm LSS })\right]\,, \tag{27}\] where \(t_{0},t_{\rm LSS}\) denote the present time and the time at the last scattering surface (LSS), respectively. Note that there can be the terms, \(c_{EE}|{\bf E}|^{2}\) and \(c_{BB}|{\bf B}|^{2}\) with time-dependent coefficients in the SMEFT/LEFT. We assume that the time dependence of these terms is tuned to be small not to contradict the constraints on the time variation of the fine structure constant [72]. **Dimension 2** - First, we discuss the CS-type operator (19), \[\frac{\alpha}{8\pi}\frac{H^{\dagger}H}{\Lambda_{H}^{2}}\,F_{\mu\nu}\tilde{F}^{ \mu\nu}\;. \tag{28}\] After the electroweak phase transition, the Higgs field gets a VEV \(v\). We can neglect excitations from the vacuum because they are unstable and decay quickly. In the standard scenario, the VEV neither contributes to the ICB because it leads to the CS-type operator with a time-independent coefficient. If the VEV \(v\) depended on the time, it could induce the ICB. However, it would simultaneously induce the time variation of the electron mass \(m_{e}\) from the LSS to today, which is constrained to be \(\Delta m_{e}/m_{e}=(4\pm 11)\times 10^{-3}(68\%\) C.L.)[72]. This means that the time variation of the VEV is at most a few percent of the electroweak scale. On the other hand, \(\Lambda_{H}\) should be larger than the TeV scale from collider constraints [73]. Therefore, we conclude that the operator (19) cannot explain the reported nonzero ICB. **Dimension 3** - We next discuss whether each term of the CS-type operators (20), \[\sum_{\psi=e,\nu,d,u}\frac{\alpha}{8\pi}\frac{\tilde{\mathcal{O}}_{\psi}}{ \Lambda_{\psi}^{3}}F_{\mu\nu}\tilde{F}^{\mu\nu}\;, \tag{29}\] with the fermion bilinears of Eqs. (21a), (21b), (21c), (21d), can induce the reported ICB. The cosmic electron background is excluded because of the neutrality as we discussed in section II. The quark bilinear condensate after the QCD transition neither contributes to the ICB because the condensate is independent of time and thus the resultant CS term becomes a total derivative. Therefore, the cosmic neutrino background (C\(\nu\)B) seems to be the most relevant to the ICB and potentially able to explain the reported angle. The C\(\nu\)B is predicted to be generated from the thermal bath in the early Universe [74]. In the standard Big-Bang cosmology, its number density is comparable to that of the CMB photons. While the C\(\nu\)B has not been directly detected yet, its contribution to the radiation density has been detected by the WMAP 5-year observation [75] and later confirmed via the Planck observation [4]. Hence, the CS-type interaction (20) with the neutrino bilinear (21b) may induce the reported ICB during the photon propagation through the C\(\nu\)B. 
We can rewrite the neutrino bilinear operator (21b) as \[\tilde{\mathcal{O}}_{\nu}=\frac{(\tilde{\mathcal{C}}_{\nu}^{\dagger}+\tilde{ \mathcal{C}}_{\nu})^{ij}}{2}\bar{\nu}^{i}\nu^{j}+\frac{(\tilde{\mathcal{C}}_{ \nu}^{\dagger}-\tilde{\mathcal{C}}_{\nu})^{ij}}{2}\bar{\nu}^{i}\gamma^{5}\nu^ {j}\,. \tag{30}\] Since we are now interested in the evolution of the photon field in the presence of the C\(\nu\)B, the neutrino bilinear operator (30) is replaced with its background value that is given by the expectation value \(\langle\tilde{\mathcal{O}}_{\nu}\rangle\) with regard to a state of fixed neutrino and anti-neutrino number densities. As calculated in appendix D, we obtain \(\langle\bar{\nu}^{i}\gamma^{5}\nu^{j}\rangle=0\) and \[\langle\bar{\nu}^{i}\nu^{j}\rangle =\delta^{ij}\mathcal{F}(t)\,, \tag{31}\] \[\mathcal{F}(t) \equiv\int\frac{d^{3}p}{(2\pi)^{3}}\frac{m_{i}}{E_{\mathbf{p}}} \left[n^{i}(p,t)+\bar{n}^{i}(p,t)\right], \tag{32}\] where \(n^{i},\bar{n}^{i}\) denote the phase-space number densities of the \(i\)-th neutrino and anti-neutrino, respectively, and \(m_{i}\) is the neutrino mass. As noted at the beginning of Sec. II, we neglect small effects from the cosmic expansion (as well as those from the spacetime curvature) in the present calculations. Thus, the function \(\phi_{\tilde{\mathcal{O}}}\) in Eq. (27) is given by \[\phi_{\tilde{\mathcal{O}}}(t)=\frac{\alpha}{4\pi}\frac{\text{tr}[(\tilde{ \mathcal{C}}_{\nu}+\tilde{\mathcal{C}}_{\nu}^{\dagger})\mathcal{F}(t)]}{ \Lambda_{\nu}^{3}}\,, \tag{33}\] and we put an extra factor \(1/2\) for the case of Majorana neutrinos as noted below Eq. (21d). Since \(\phi_{\tilde{\mathcal{O}}}\) redshifts due to the cosmic expansion, we can well approximate the ICB angle as \(\beta\simeq-\phi_{\tilde{\mathcal{O}}}(t_{\text{LSS}})/2\). At the time of the last scattering, the temperature of the Universe is \(T_{\text{LSS}}\sim 0.3\,\text{eV}\). Assuming \(m_{i}\ll T_{\text{LSS}}\) at the last scattering for some or all of the neutrino species, we can analytically evaluate \(\mathcal{F}\) in Eq. (32), \[\mathcal{F}(t_{\text{LSS}})\simeq 0.5\,\frac{m_{i}}{T_{\text{LSS}}}\left(N^{i}+ \bar{N}^{i}\right),\;\;m_{i}\ll T_{\text{LSS}}\,, \tag{34}\] where \(N^{i}\) and \(\bar{N}^{i}\) are the number densities of the neutrino and anti-neutrino, respectively, at the LSS. Therefore, we obtain \[\beta\simeq-0.008\,^{\circ}\frac{\alpha}{137^{-1}}\sum_{i}\frac{m_{i}}{T_{ \text{LSS}}}(\tilde{\mathcal{C}}_{\nu}+\tilde{\mathcal{C}}_{\nu}^{\dagger})^{ ii}\frac{N^{i}+\bar{N}^{i}}{\Lambda_{\nu}^{3}}\;. \tag{35}\] Here, the neutrino number density at the last scattering is estimated to be \(N_{i}^{1/3}=\mathcal{O}(10^{-10})\,\text{GeV}\) in the natural unit. The CS-type neutrino interaction is constrained by various experiments measuring coherent elastic neutrino-nucleus scattering, deep inelastic neutrino-nucleon scattering and solar neutrino scattering, as well as collider searches [76]. The resulting lower bound on the mass scale \(\Lambda_{\nu}\) ranges from \(\mathcal{O}(10^{-2})\,\text{GeV}\) to \(\mathcal{O}(10^{2})\,\text{GeV}\). Then, we find that the ICB angle (35) would be much smaller than the observed value. **Dimension 4** - Finally, we discuss the CS-type operator (24). To see the propagation of a photon in the cosmological magnetic field, we separate the field strength into the background and propagating-photon parts \(F_{\mu\nu}=F_{\mu\nu}^{\text{(bg)}}+F_{\mu\nu}^{\text{(p)}}\). 
When the background \(F_{\mu\nu}^{\text{(bg)}}\) is pure magnetic field, the operators (24) give \[(F_{\alpha\beta}^{\text{(bg)}}F^{\text{(p)}\alpha\beta})(F_{\mu\nu}^{\text{(bg )}}\tilde{F}^{\text{(p)}\mu\nu})\;,\;(F_{\mu\nu}^{\text{(bg)}}\tilde{F}^{\text{( p)}\mu\nu})^{2}\;, \tag{36}\] as well as the CS-type term \[(F_{\alpha\beta}^{\text{(bg)}}F^{\text{(bg)}\alpha\beta})(F_{\mu\nu}^{\text{( p)}}\tilde{F}^{\text{(p)}\mu\nu})\;, \tag{37}\] to the quadratic action of the propagating-photon field. Here, their coefficients are of the same order. The operator (37) gives the CS-type term \({\bf E}\cdot{\bf B}\). Meanwhile, the operators (36) reduce to \({\bf E}_{\parallel}\cdot{\bf B}_{\parallel}\) in Eq. (16) and \({\bf E}_{\parallel}\cdot{\bf E}_{\parallel}\) in Eq. (17). Therefore, the operators (24) inevitably cause unwanted ACB as well as the ICB (see the last paragraph in section II). As for the gauge fields \(X=Z,W,G\), the weak gauge bosons are excluded because they are unstable and decay quickly. The gluon forms a nonzero condensate and contributes to the ICB. However, it would be negligibly small because the energy scale of the condensate is the QCD scale while the scale \(\Lambda_{X}\) is constrained from collider experiments as \(\Lambda_{X}\gtrsim 1\,\)TeV [77]. It is also notable that there are other operators with the same symmetry, _e.g.,_\((X_{\alpha\beta}F^{\alpha\beta})(X_{\mu\nu}\tilde{F}^{\mu\nu})\), which would induce the ACB and need to be suppressed. ## V Beyond SMEFT/LEFT We can extend our arguments to narrow down possible new particles that are able to explain the reported nonzero ICB. Unless a particle is a SM singlet like the ALP, the leading CS-type operator is given by \[\begin{split}&\frac{\alpha}{8\pi}\frac{\Phi^{\dagger}\Phi}{ \Lambda^{2}}F_{\mu\nu}\tilde{F}^{\mu\nu}\quad\text{(for a scalar $\Phi$)}\;,\\ &\frac{\alpha}{8\pi}\frac{\bar{\chi}\chi}{\Lambda^{3}}F_{\mu\nu} \tilde{F}^{\mu\nu}\quad\text{(for a fermion $\chi$)}\;.\end{split} \tag{38}\] For the operator to induce the ICB, \(\Phi^{\dagger}\Phi\) or \(\bar{\chi}\chi\) should have a time-dependent background value. There would be three possibilities: the cosmological background of \(\Phi^{\dagger}\Phi\) or \(\bar{\chi}\chi\) is composed of (i) classical fields, (ii) pair condensates, or (iii) particles. The case (i) is similar to the ALP case. In the case (ii), the pair condensates would also effectively work as axion-like fields. To discuss this possibility, we need to elaborate a model with interaction to form appropriate time-dependent condensates and its cosmological consequences, which is left for future studies. In the following, we focus on the possibility (iii). Since the one-particle energy \(E_{\bf p}\) is always larger than the mass \(m\), the cosmological background of \(\Phi^{\dagger}\Phi\) or \(\bar{\chi}\chi\) is bounded from above by the energy density \(\rho\) as \[\langle\Phi^{\dagger}\Phi\rangle\lesssim\rho/m^{2}\;,\quad\langle\bar{\chi} \chi\rangle\lesssim\rho/m\;, \tag{39}\] respectively. In the epoch after the LSS, the energy density is conservatively bounded by the critical density at the LSS, \(\rho_{\rm c,LSS}\simeq(3\times 10^{-13}{\rm TeV})^{4}\) (natural unit). Substituting these results to Eq. 
(27), we find \[\begin{split}& m\lesssim 10^{-14}\,{\rm eV}\left(\frac{|\beta|}{ 0.3\,^{\circ}}\right)^{-1/2}\left(\frac{\Lambda}{{\rm TeV}}\right)^{-1}\;({ \rm scalar})\;,\\ & m\lesssim 10^{-40}\,{\rm eV}\left(\frac{|\beta|}{0.3\,^{ \circ}}\right)^{-1}\left(\frac{\Lambda}{{\rm TeV}}\right)^{-3}\;({\rm fermion })\;.\end{split} \tag{40}\] The interactions (38) can be probed by collider experiments through production processes, \(\gamma\to\gamma\Phi\Phi\), \(\gamma\chi\chi\). As shown for the fermion \(\chi\) in ref. [78], the energy scale \(\Lambda\) should be roughly larger than the TeV scale due to the absence of such a process provided that it is kinematically allowed, _i.e.,_ the mass is smaller than the TeV scale. Then, the conditions (40) indicate that the mass of the particle should be extremely small. In addition to potential missing energy carried by such light particles in electromagnetic signals, this scenario has the same theoretical problem as the ALP: the existence of such ultra-light particles with the CS-type interactions (38) has tension with simple Grand Unified models because interactions with gluons give a large contribution to the mass. It is also noted that the particles should never be thermalized. Otherwise, we could apply the argument similar to the neutrino case, and thus the induced ICB angle would become negligibly small. This requirement restricts possible interactions with the SM particles. Moreover, the maximum temperature of the Universe, denoted as \(T_{\rm max}\), has an upper limit so that the particles are never thermalized through the interactions (38). Roughly estimating the interaction rate as \(\Gamma\sim(\alpha/8\pi)^{2}T^{5}/\Lambda^{4}\) (scalar) or \((\alpha/8\pi)^{2}T^{7}/\Lambda^{6}\) (fermion), we can find the upper limit from the Gamow's criterion \(\Gamma<H\) as \[\begin{split}& T_{\rm max}\lesssim 1\,{\rm MeV}\left(\frac{ \Lambda}{3\,{\rm GeV}}\right)^{4/3}\;({\rm scalar})\;,\\ & T_{\rm max}\lesssim 1\,{\rm MeV}\left(\frac{\Lambda}{0.2\,{\rm GeV }}\right)^{6/5}\;({\rm fermion})\;.\end{split} \tag{41}\] This argument implies that the energy scale \(\Lambda\) should be roughly larger than the GeV scale because \(T_{\rm max}\) must be larger than the temperature of the Big Bang Nucleosynthesis, \(T_{\rm BBN}\sim 1\,\)MeV. A similar argument can be applied to a dark vector field \(V_{\mu}\) such as a dark photon. In the massless case, the leading CS-type operator is given by Eq. (24) with the field strength \(X_{\alpha\beta}\) for the vector field \(V_{\mu}\). The ICB angle induced by this operator is much suppressed because the field strength is bounded by the energy density as \(|X_{\alpha\beta}X^{\alpha\beta}|<4\rho\). In the massive case, the CS-type operator \((V_{\alpha}V^{\alpha})F_{\mu\nu}\tilde{F}^{\mu\nu}\) is also allowed. When this operator gives the leading contribution to the ICB angle, the argument is parallel to the scalar case: the mass should be extremely small. Another possibility is that the \(U(1)_{\rm EM}\) symmetry is only realized through a Stueckelberg field so that the operator (9) is allowed. As mentioned above, this operator is reduced to \(c_{\rm EB}{\bf E}\cdot{\bf B}\) with \(\hat{c}_{\rm EB}=J_{0}\). 
Roughly estimating \(\hat{c}_{\rm EB}\sim Hc_{\rm EB}\) with the Hubble parameter \(H\), we see that the induced ICB angle \(\beta\) could be comparable to the reported value \(\beta\sim{\cal O}(0.1\,^{\circ})\) for the neutrino background \(J_{0}\sim n_{\nu}\) due to the significant enhancement factor \(H^{-1}\)[60; 61; 79]. However, in this scenario, the symmetry allows the photon mass in the vacuum, whose experimental upper bound is \({\cal O}(10^{-18})\,\)eV [80; 81], and we encounter a severe fine-tuning problem. ## VI Conclusion In the present paper, we have investigated the interpretation of the reported nonzero ICB angle. Adopting the EFT approach, it is concluded that (1) only a CS-type operator could produce such a parity-violating ICB effect in the presence of a cosmic background that is assumed to be homogeneous and neutral, and (2) no SM particles can explain the reported value under the standard cosmological evolution. Among all SM contributions, the C\(\nu\)B could be the most promising, but its contribution has been calculated explicitly and found to be too small to explain the reported ICB. Our result would indicate the existence of a new particle lighter than the electroweak scale or some exotic cosmological scenarios if the reported value of ICB should be confirmed. We have provided a guideline on searching for physics beyond the SM through the ICB apart from the ALP. Some constraints on a dark sector particle and the maximum temperature of the Universe have been discussed by assuming that the dark sector is responsible for the ICB, which might be helpful in identifying the dark matter. ###### Acknowledgements. We would like to thank Eiichiro Komatsu and Satoshi Shirai for discussions. This work is supported by Natural Science Foundation of China No. 12150610465 (YN), the RIKEN Incentive Research Project grant (RN), and JSPS Overseas Research Fellowship / JSPS KAKENHI No. JP20H05859 and 19K14702 (IO), No. 19H01891, No. 20H05860 (RS). ## Appendix A Derivation of the ICB angle \(\beta\) In this appendix, we give a derivation of Eq. (27), given (26), for the ICB angle \(\beta\) induced by a CS-type operator, \[-\frac{\phi_{\mathcal{O}}}{4}F_{\mu\nu}\tilde{F}^{\mu\nu}=\phi_{\tilde{ \mathcal{O}}}\mathbf{E}\cdot\mathbf{B}\, \tag{30}\] following the discussion in ref. [82]. We assume that a photon propagates in the homogeneous and isotropic background, \(ds^{2}=a(\eta)(-d\eta^{2}+d\vec{x}^{2})\), where \(a(\eta)\) is the scale factor and \(\eta\) is the conformal time. Since it can be conformally transformed into the Minkowski spacetime, we evaluate every quantity on the Minkowski spacetime in this appendix. In the presence of the CS-type operator (30), Maxwell equations are modified. We first write down general Maxwell equations for the spatially-invariant operators (3). In the momentum space, \(\mathbf{E}(x)\rightarrow\tilde{\mathbf{E}}(k)e^{ik\cdot x}\) and \(\mathbf{B}(x)\rightarrow\tilde{\mathbf{B}}(k)e^{ik\cdot x}\),5 they can be written as Footnote 5: This 4-vector Fourier transformation is justified for slow variations of the source terms compared to the oscillation time scale due to \(\omega\) and \(\mathbf{k}\), which is consistent with our underlying assumption in considering ICB. 
\[i\mathbf{k}\cdot\tilde{\mathbf{E}} =\rho_{\text{ext}}+\rho_{\text{ind}}\, \tag{31a}\] \[\mathbf{k}\cdot\tilde{\mathbf{B}} =0\,\] (31b) \[\mathbf{k}\times\tilde{\mathbf{E}} =-\omega\tilde{\mathbf{B}}\,\] (31c) \[i\mathbf{k}\times\tilde{\mathbf{B}} =\mathbf{j}_{\text{ext}}+\mathbf{j}_{\text{ind}}+i\omega\tilde{ \mathbf{E}}\, \tag{31d}\] where \(\omega\equiv k_{0}\). In addition to the external sources \(\rho_{\text{ext}}\) and \(\mathbf{j}_{\text{ext}}\), the source terms \(\rho_{\text{ind}}=\rho_{\text{ind}}(\tilde{\mathbf{E}},\tilde{\mathbf{B}})\) and \(\mathbf{j}_{\text{ind}}=\mathbf{j}_{\text{ind}}(\tilde{\mathbf{E}},\tilde{ \mathbf{B}})\) are induced by the operators (3) other than the standard one. These terms can be rewritten only in terms of the electric field \(\tilde{\mathbf{E}}\) by using the Maxwell equation (31c): \(\rho_{\text{ind}}=\rho_{\text{ind}}(\tilde{\mathbf{E}})\) and \(\mathbf{j}_{\text{ind}}=\mathbf{j}_{\text{ind}}(\tilde{\mathbf{E}})\). From Eq. (31c) and Eq. (31d), we have \[\mathbf{j}_{\text{ext}}+\mathbf{j}_{\text{ind}}=-i\omega\left\{\tilde{\mathbf{ E}}-\frac{|\mathbf{k}|^{2}}{\omega^{2}}\left[\tilde{\mathbf{E}}-\hat{\mathbf{k}} (\hat{\mathbf{k}}\cdot\tilde{\mathbf{E}})\right]\right\}\, \tag{32}\] where \(\hat{\mathbf{k}}\equiv\mathbf{k}/|\mathbf{k}|\). Decomposing \(\tilde{\mathbf{E}}\) into the components parallel and transverse to the wave vector \(\mathbf{k}\) as \[\tilde{\mathbf{E}}_{\text{l}}=\hat{\mathbf{k}}(\hat{\mathbf{k}}\cdot\tilde{ \mathbf{E}})\,\quad\tilde{\mathbf{E}}_{\text{t}}=\tilde{\mathbf{E}}-\tilde{\mathbf{E}}_{ \text{l}}\, \tag{33}\] we can rewrite Eq. (32) as \[\mathbf{j}_{\text{ext}}+\mathbf{j}_{\text{ind}}=-i\omega\left[\tilde{\mathbf{ E}}_{\text{l}}+\left(1-\frac{|\mathbf{k}|^{2}}{\omega^{2}}\right)\tilde{ \mathbf{E}}_{\text{t}}\right]. \tag{34}\] Since the operators in (3) are quadratic in the electromagnetic fields, the induced current \(\mathbf{j}_{\text{ind}}=\mathbf{j}_{\text{ind}}(\tilde{\mathbf{E}})\) is linear in \(\tilde{\mathbf{E}}\). The three spatial vectors \(\tilde{\mathbf{E}}_{\text{t}}\), \(\tilde{\mathbf{E}}_{\text{l}}\), and \(\hat{\mathbf{k}}\times\tilde{\mathbf{E}}=\hat{\mathbf{k}}\times\tilde{ \mathbf{E}}_{\text{t}}\) span the three-dimensional space, provided that neither \(\tilde{\mathbf{E}}_{\text{l}}\) nor \(\tilde{\mathbf{E}}_{\text{t}}\) is null. Thus, in general, we can parameterize the induced current as \[\mathbf{j}_{\text{ind}}=-i\omega\left[(1-\varepsilon_{\text{l}})\tilde{ \mathbf{E}}_{\text{l}}+(1-\varepsilon_{\text{t}})\,\tilde{\mathbf{E}}_{\text{ t}}-i\varepsilon_{\text{p}}\hat{\mathbf{k}}\times\tilde{\mathbf{E}}\right]. \tag{35}\] Here, the three parameters \(\varepsilon_{\text{l}}\), \(\varepsilon_{\text{t}}\), and \(\varepsilon_{\text{p}}\) consist of \(\omega\) and three arbitrary functions \(c_{\text{EE}}\), \(c_{\text{BB}}\), and \(c_{\text{EB}}\) in Eq. (3). As understood from the parity, the \(\varepsilon_{\text{p}}\) term is generated by the CS-type operator (30). 
Taking into account the assumption that \(\phi_{\tilde{\mathcal{O}}}\) is a function only of the time \(\eta\), the induced current from the CS-type operator (30) is given by \[\mathbf{j}_{\text{ind}}=\phi^{\prime}_{\tilde{\mathcal{O}}}(\eta)\tilde{ \mathbf{B}}=-\phi^{\prime}_{\tilde{\mathcal{O}}}(\eta)\,\frac{\mathbf{k} \times\tilde{\mathbf{E}}}{\omega}\,, \tag{36}\] where a prime denotes a derivative with respect to \(\eta\), and \(\varepsilon_{\text{p}}\) can be read off as \[\varepsilon_{\text{p}}=\phi^{\prime}_{\mathcal{O}}(\eta)\,\frac{|\mathbf{k}|}{ \omega^{2}}. \tag{37}\] Here, we have assumed that \(\phi^{\prime}_{\tilde{\cal O}}\) is approximately constant, _i.e.,_\(|\phi^{\prime}_{\tilde{\cal O}}/\phi_{\tilde{\cal O}}|\ll\omega,|{\bf k}|\), which is valid for the CMB photons propagating in the cosmological background. We now consider an electromagnetic (EM) wave propagating through media only with the induced charge current, \({\bf j}_{\rm ext}=0\). Let us define \[{\bf E}_{\pm}=\frac{1}{2}\left(\tilde{\bf E}_{\rm t}\pm i\hat{\bf k}\times \tilde{\bf E}\right)\;. \tag{10}\] In the absence of external source, i.e. \(\rho_{\rm ext}=0\), with \(\phi_{\tilde{\cal O}}\) having only time dependence, which gives \(\rho_{\rm ind}=0\), Eqs. (11a) and (11) tell us \(\tilde{\bf E}_{\rm l}=0\), which we set hereafter.6 Then, we can rewrite the wave equation (10) by inserting Eq. (10) as Footnote 6: In this case, we would in principle have to replace \(\tilde{\bf E}_{\rm l}\) in (10) by a term proportional to \(\hat{k}\) in order to span the 3-D space; however, one can trivially see that this term should be zero. \[0=i\omega\bigg{[}\left(\varepsilon_{\rm t}+\varepsilon_{\rm p}-\frac{|{\bf k} |^{2}}{\omega^{2}}\right)\tilde{\bf E}_{+}+\left(\varepsilon_{\rm t}- \varepsilon_{\rm p}-\frac{|{\bf k}|^{2}}{\omega^{2}}\right)\tilde{\bf E}_{-} \bigg{]}\;, \tag{11}\] and the following dispersion relations are satisfied: \[\varepsilon_{\rm t}\pm\varepsilon_{\rm p}=|{\bf k}|^{2}/\omega^{2}\;, \tag{12}\] where \(\pm\) denotes the 2 transverse helicity modes. As we mention in the main text, we only consider the CS-like operator (10). Then, we focus on the case with \(\varepsilon_{\rm t}=1\) and \(\varepsilon_{\rm p}=\phi^{\prime}_{\tilde{\cal O}}(\eta)|{\bf k}|/\omega^{2}\) [see Eq. (11)]. The dispersion relations (12) become \[\omega_{\pm}=|{\bf k}|\sqrt{1\mp\frac{\phi^{\prime}_{\tilde{\cal O}}}{|{\bf k }|}}\approx|{\bf k}|\mp\frac{1}{2}\phi^{\prime}_{\cal O}\;, \tag{13}\] for the \(\pm\)-helicity modes, where we have assumed that \(|\phi^{\prime}_{\cal O}|\ll|{\bf k}|\) in the second equation. The phases for the \(\pm\)-helicity modes are estimated as \[\theta_{\pm}=\bar{\theta}\pm\beta\,;\quad\beta\equiv\frac{1}{2}\int\phi^{ \prime}_{\cal O}(\eta){\rm d}\eta\,, \tag{14}\] where \(\bar{\theta}\equiv|{\bf k}|(-\eta+\hat{\bf k}\cdot{\bf x})\). Thus, from Eq. (10), the monochromatic EM waves at LSS and today are related as \[{\bf E}(\eta_{0}) ={\bf E}_{+}(\eta_{\rm LSS})e^{i\theta_{+}}+{\bf E}_{-}(\eta_{\rm LSS })e^{i\theta_{-}}\] \[=e^{i\bar{\theta}}\left[{\bf E}(\eta_{\rm LSS})\cos\beta-\hat{\bf k }\times{\bf E}(\eta_{\rm LSS})\sin\beta\right]\;, \tag{15}\] where the phases are defined by taking the integration from \(\eta_{\rm LSS}\) to \(\eta_{0}\) in Eq. (14). This equation shows that the polarization direction rotates clockwise by the angle \(\beta\) with respect to the line-of-sight direction \({\bf n}_{\rm LOS}\equiv-\hat{\bf k}\). 
Therefore, we can conclude that the ICB angle is given by \[\beta=\frac{1}{2}\int_{\eta_{\rm LSS}}^{\eta_{0}}\phi^{\prime}_{\tilde{\cal O} }(\eta)d\eta=\frac{1}{2}\left[\phi_{\tilde{\cal O}}(t_{0})-\phi_{\tilde{\cal O} }(t_{\rm LSS})\right]\;, \tag{16}\] where we have replaced the conformal time with the cosmic time after the integration. Therefore, we have derived Eq. (27). Note that the birefringence effect is only generated by \(\varepsilon_{\rm p}\), which is only induced by a CS-type operator. ## Appendix B Polarization tensors in medium Let us revisit the argument in section II from the viewpoint of the photon propagation in a medium, extending the analysis presented in ref. [82]. We explicitly show that the operator \({\bf E}\cdot{\bf B}\) (\({\bf E}_{\parallel}\cdot{\bf B}_{\parallel}\)) generates the isotropic (anisotropic) cosmic birefringence. The effective kinetic action of the photon field \(A_{\mu}\) is given by \[S_{\rm kin}=\frac{1}{2}\iint{\rm d}^{4}x\,{\rm d}^{4}y\left[A_{\mu}(x)(D^{-1}) ^{\mu\nu}(x,y)A_{\nu}(y)\right]\;, \tag{17}\] where \(D_{\mu\nu}\) denotes the propagator in the presence of a cosmological background field \(J\): \[\langle TA_{\mu}(x)A_{\nu}(y)\rangle_{J}=iD_{\mu\nu}(x,y)\;, \tag{18}\] for the time ordering \(T\). Introducing the self energy \(\Pi^{\mu\nu}\), we can write \((D^{-1})^{\mu\nu}\) as \[(D^{-1})^{\mu\nu}=(\Delta^{-1})^{\mu\nu}+\Pi^{\mu\nu}\;, \tag{19}\] where \((\Delta^{-1})^{\mu\nu}\) is the tree-level part, \[(\Delta^{-1})^{\mu\nu}\equiv-i(g^{\mu\nu}\nabla^{2}-\nabla^{\mu}\nabla^{\nu}) \delta^{4}(x-y)\;. \tag{20}\] In the Fourier space, the self-energy term in the kinetic action (17) is written as \[\frac{1}{2}\iint{\rm d}^{4}k_{1}{\rm d}^{4}k_{2}\left[A_{\mu}(k_{1})\Pi^{\mu \nu}(k_{1},k_{2})A_{\nu}(k_{2})\right]\;, \tag{21}\] and its gauge invariance requires that \(\Pi^{\mu\nu}\) should satisfy \[k_{1}^{\mu}\Pi_{\mu\nu}=0\,,\quad k_{2}^{\nu}\Pi_{\mu\nu}=0\,. \tag{22}\] Moreover, \[\Pi^{\mu\nu}(k_{1},k_{2})=\Pi^{\nu\mu}(k_{2},k_{1})\,, \tag{23}\] from the Bose-Einstein statistics and \[L^{\mu}{}_{\alpha}L^{\nu}{}_{\beta}\Pi^{\alpha\beta}(k_{1},k_{2};J)=\Pi^{\mu \nu}(L\cdot k_{1},L\cdot k_{2};L\cdot J)\,, \tag{24}\] for a Lorentz transformation \(L\). Here, \((L\cdot k_{1})^{\mu}\equiv L^{\mu}{}_{\alpha}k_{1}^{\alpha}\) and so on. In the condition (24), we have explicitly written the dependence on the background field \(J\), which is not necessarily invariant under the Lorentz transformation. In the vacuum without background, the four-momentum of the photon is conserved: \(k_{1}^{\mu}+k_{2}^{\mu}=0\). Thus, the independent tensors that can appear in \(\Pi^{\mu\nu}\) are \(g^{\mu\nu}\), \(\epsilon^{\mu\nu\alpha\beta}\) and \(k_{1}^{\mu}\) (\(=-k_{2}^{\mu}\)). The tensor structure of \(\Pi^{\mu\nu}\) can be extracted as \[\Pi^{\mu\nu}(k_{1},k_{2})\stackrel{{\rm vac}}{{=}}|k_{1}|^{2} \Pi(|k_{1}|^{2}){\cal P}_{1}{}^{\mu\nu}\delta^{4}(k_{1}-k_{2})\;, \tag{101}\] where we have introduced the projection operator \[{\cal P}_{a}{}^{\mu\nu}\equiv g^{\mu\nu}-\frac{k_{a}^{\mu}k_{a}^{\nu}}{|k_{a}|^ {2}}\quad(a=1,2)\,. \tag{102}\] This shows that the structure of the kinetic term is kept, \({\cal L}_{\rm kin}\sim F_{\mu\nu}F^{\mu\nu}\). That is, the ICB is not generated in the vacuum. In the presence of a cosmological background, a background tensor \(J\) should be added to the building blocks. The cosmological principle requires that \(J\) is a function of the cosmic time \(t_{c}\). 
Any tensor indices of \(J\) and its derivatives should be determined by the unit-four vector \(u^{\mu}\propto\nabla^{\mu}t_{c}\) as well as \(g^{\mu\nu}\) and \(\epsilon^{\mu\nu\alpha\beta}\). Moreover, the four-momentum conservation is violated by the cosmological background as \(k_{1}^{\mu}+k_{2}^{\mu}\propto u^{\mu}\). Then, the building blocks of \(\Pi^{\mu\nu}\) are \(g^{\mu\nu}\), \(\epsilon^{\mu\nu\alpha\beta}\), \(k_{1}^{\mu}\) and \(u^{\mu}\). The possible tensor structures of \(\Pi^{\mu\nu}\) with the properties (100)-(101) are \[{\cal P}_{1}{}^{\mu}{}_{\alpha}{\cal P}_{2}{}^{\alpha\nu}\,,\quad({\cal P}_{1 }\cdot k_{2})^{\mu}({\cal P}_{2}\cdot k_{1})^{\nu}\,,\quad\epsilon^{\mu\alpha \nu\beta}k_{1\alpha}k_{2\beta}\,, \tag{103}\] with \(k_{1}^{\mu}+k_{2}^{\mu}\propto u^{\mu}\), where \(({\cal P}_{a}\cdot k_{b})^{\mu}\equiv{\cal P}_{a}{}^{\mu\nu}k_{b\nu}\) (\(a,b=1,2\)). These three tensors give all independent components in \(\Pi^{\mu\nu}\). We can see this fact from the viewpoint of the photon propagation in the medium, which illuminates the role of the homogeneity and isotropy. As we can see from \(k_{1}^{\mu}+k_{2}^{\mu}\propto u^{\mu}\), the homogeneity ensures that the momentum \({\bf k}\) is conserved in the CMB rest frame. If we introduce the helicity basis with respect to \({\bf k}\), \[\{\epsilon_{L}{}^{\mu}({\bf k})\,,\epsilon_{+}{}^{\mu}({\bf k})\,,\epsilon_{- }{}^{\mu}({\bf k})\}\quad(\epsilon_{a}{}^{\mu}u_{\mu}=0)\,, \tag{104}\] the isotropy ensures that they are not mixed with each other in the propagation. Here, \(\epsilon_{L}{}^{\mu}\) and \(\epsilon_{\pm}{}^{\mu}\) respectively represent the longitudinal mode and the transverse modes with the helicity \(\sigma=\pm\) in the sense that \(k_{\mu}\epsilon_{\pm}^{\mu}=0\neq k_{\mu}\epsilon_{L}^{\mu}\). Taking also into account that the temporal components are determined by the gauge condition (100), we see that the independent components are three diagonal elements for the helicity basis (104): \[\Pi^{\mu\nu}=\sum_{a=L,\pm}\Pi_{a}\epsilon^{\star}{}_{a}{}^{\mu}\epsilon_{a}{ }^{\nu}+\left(\mbox{temporal components}\right), \tag{105}\] where \(\epsilon^{\star}{}_{a}{}^{\mu}\) is the complex conjugate of \(\epsilon_{a}{}^{\mu}\) (\(\epsilon^{\star}{}_{\pm}{}^{\mu}=\epsilon_{\mp}{}^{\mu}\)) and \(\Pi_{a}\) is a scalar quantity constructed from \(k_{1}^{\mu}\), \(k_{2}^{\mu}\) and \(u^{\mu}\). Therefore, the tensor structure of \(\Pi^{\mu\nu}\) can be determined by three independent tensors. It is straightforward to show that the tensor structure of the operator (3) is written as a linear combination of the three tensors (103) by using the covariant form of \({\bf E}\) and \({\bf B}\), \[E^{\mu}=u_{\nu}F^{\mu\nu}\,,\quad B^{\mu}=u_{\nu}\tilde{F}^{\mu\nu}\,. \tag{106}\] As we can understand from the parity, the first two tensors in (103) correspond to \(|{\bf E}|^{2}\) and \(|{\bf B}|^{2}\). For example, the tensor structure of the operator \(|{\bf E}|^{2}-|{\bf B}|^{2}=F_{\mu\nu}F^{\mu\nu}/2\) is given by \[\left[(k_{1}\cdot k_{2}){\cal P}_{1}{}^{\mu}{}_{\alpha}{\cal P}_{2}{}^{\alpha \nu}-({\cal P}_{1}\cdot k_{2})^{\mu}({\cal P}_{2}\cdot k_{1})^{\nu}\right]A_{ \mu}A_{\nu}\,. \tag{107}\] The third tensor in (103) corresponds to \({\bf E}\cdot{\bf B}\propto F_{\mu\nu}\tilde{F}^{\mu\nu}\): \[\epsilon^{\mu\alpha\nu\beta}k_{1\alpha}k_{2\beta}A_{\mu}(k_{1})A_{\nu}(k_{2}) \propto F_{\mu\nu}(k_{1})\tilde{F}^{\mu\nu}(k_{2})\,. 
\tag{108}\] Therefore, any parity-violating operator should be given by the CS-type operator in the cosmological background. Now, we see that the term \(c_{\rm EB}{\bf E}\cdot{\bf B}\) in (3) generates the ICB. Using \(k_{1}^{\mu}+k_{2}^{\mu}\propto u^{\mu}\), we can write the four-momenta \(k_{a}^{\mu}\) (\(a=1,2\)) as \[k_{a}^{\mu}=\omega_{a}u^{\mu}+\sigma_{a}|{\bf k}|\epsilon_{L}{}^{\mu}({\bf k}) \quad(\sigma_{a}=\pm 1)\,, \tag{109}\] and thus \[\epsilon^{\mu\alpha\nu\beta}k_{1\alpha}k_{2\beta} \propto\epsilon^{\mu\alpha\nu\beta}u_{\alpha}\epsilon_{L\beta}\] \[\propto\epsilon^{\star}{}_{+}{}^{\mu}\epsilon_{+}{}^{\nu}- \epsilon^{\star}{}_{-}{}^{\mu}\epsilon_{-}{}^{\nu}\,. \tag{110}\] The coefficient is a function of the cosmic time \(t_{c}\). Therefore, it causes the isotropic difference in propagation between the left- and right-handed photons in the CMB rest frame. We can perform a similar argument to show that the operator \({\bf E}_{\parallel}\cdot{\bf B}_{\parallel}\) in Eq. (16) generates the ACB. Recovering \(u^{\mu}\) and \(\bar{B}^{\mu}\), we can write it in a covariant way as \[(\bar{B}_{\mu}u_{\nu}F^{\mu\nu})(\bar{B}_{\alpha}u_{\beta}\tilde{F}^{\alpha \beta})\,. \tag{111}\] Using the decomposition, \[\bar{B}^{\mu}=\bar{B}_{L}\epsilon_{L}{}^{\mu}({\bf k})+\bar{B}_{+}\epsilon_{+}{}^{ \mu}({\bf k})+\bar{B}_{-}\epsilon_{-}{}^{\mu}({\bf k})\,, \tag{112}\] we find \[\bar{B}_{\mu}u_{\nu}F^{\mu\nu} \propto\left[(\bar{B}\cdot k)u^{\mu}-(u\cdot k)\bar{B}^{\mu} \right]A_{\mu}\,, \tag{113}\] \[\bar{B}_{\alpha}u_{\beta}\tilde{F}^{\alpha\beta} \propto\left[\bar{B}_{+}\epsilon_{+}{}^{\alpha}({\bf k})-\bar{B}_{-} \epsilon_{-}{}^{\alpha}({\bf k})\right]A_{\alpha}\,, \tag{114}\] and thus the operator of (111) contains the term that causes the birefringence, \[\bar{B}_{+}\bar{B}_{-}\left(\epsilon^{\star}{}_{+}{}^{\mu}\epsilon_{+}{}^{\nu}- \epsilon^{\star}{}_{-}^{\mu}\epsilon_{-}{}^{\nu}\right)A_{\mu}A_{\nu}\,. \tag{115}\] The amplitude \(\bar{B}_{+}\bar{B}_{-}\) represents the components of the magnetic field orthogonal to \({\bf k}\propto{\bf n}_{\rm LOS}\) with the line-of-sight direction \({\bf n}_{\rm LoS}\). Therefore, the resultant birefringence angle depends on a particular direction. ## Appendix C Dipole moment interactions We here show that the operator (11) \[J_{\mu\nu}F^{\mu\nu}\,, \tag{116}\] induce the frequency-dependent ICB angle in the framework of the SMEFT/LEFT. In the SMEFT/LEFT, the only possibility is given by [76, 83, 84] \[J_{\mu\nu}=\bar{\nu}^{i}\sigma_{\mu\nu}\lambda^{ij}\nu^{j}\;;\;\lambda^{ij}\equiv \mu^{ij}+i\varepsilon^{ij}\gamma^{5}\,, \tag{100}\] for three generations of neutrinos \(\nu^{i}\) (\(i=1,2,3\)) with \(\sigma_{\mu\nu}\equiv(i/2)[\gamma^{\mu},\gamma^{\nu}]\), which corresponds to the magnetic (\(\mu\)) and electric (\(\varepsilon\)) dipole moment interactions of neutrinos. We assume that the neutrinos are Dirac fermions. For Majorana neutrinos, an extra factor of \(1/2\) should be included. To see how the operator (11) affects the propagation of a CMB photon, let us first write down the Maxwell equation: \[\partial_{\mu}F^{\mu\nu}=-\partial_{\mu}J^{\mu\nu}\,. \tag{101}\] At the background level, the source term \(\partial_{\mu}J^{\mu\nu}\) is independent of the electromagnetic field \(A_{\mu}\) and thus does not modify its dispersion relation. Therefore, we need to consider the backreaction of \(A_{\mu}\) to the neutrino field \(\nu\). 
The Dirac equations of the neutrino fields in the mass eigenstates are modified as \[(i\not{\partial}-m_{i})\nu^{i}=(\sigma_{\mu\nu}\lambda^{ij}\nu^{j})F^{\mu\nu}\,. \tag{102}\] These equations can be formally solved as \[\nu^{i}=(\nu^{i})^{(\text{bg})}+(i\not{\partial}-m_{i})^{-1}[(\sigma_{\mu\nu} \lambda^{ij}\nu^{j})F^{\mu\nu}]\,, \tag{103}\] where \((i\not{\partial}-m_{i})^{-1}\) is the inverse of \((i\not{\partial}-m_{i})\). Here, \((\nu^{i})^{(\text{bg})}\) is the homogeneous solution. It corresponds to the solution in the absence of \(A_{\mu}\) and thus the background solution. Perturbatively expanding the solution (103) in terms of \(A_{\mu}\), we obtain \[\nu^{i}=(\nu^{i})^{(\text{bg})}+\hat{\psi}^{i}_{\mu\nu}F^{\mu\nu}+\mathcal{O} (A_{\mu}^{2})\,, \tag{104}\] where \[\hat{\psi}^{i}_{\mu\nu}\equiv(i\not{\partial}-m_{i})^{-1}\sigma_{\mu\nu} \lambda^{ij}(\nu^{j})^{(\text{bg})}\,. \tag{105}\] Hereafter, a quantity with a hat denotes an operator on the electromagnetic field \(F^{\mu\nu}\). Substituting this solution into (100), we find \[\delta J_{\mu\nu}=\hat{K}_{\mu\nu\alpha\beta}F^{\alpha\beta}\,, \tag{106}\] where \[\hat{K}_{\mu\nu\alpha\beta}\equiv(\bar{\nu}^{i})^{(\text{bg})}(\sigma_{\mu \nu}\lambda^{ij})\hat{\psi}^{j}_{\alpha\beta}+\text{h.c.}\,. \tag{107}\] In terms of \(\hat{K}_{\mu\nu\alpha\beta}\), the source term in the Maxwell equation is written as \[-\partial_{\mu}(\delta J^{\mu\nu})=-\partial_{\mu}(\hat{K}^{\mu\nu}{}_{\alpha \beta}F^{\alpha\beta})\,, \tag{108}\] and can be derived from the following interaction in the Lagrangian density \[\mathcal{L}_{\text{eff}}=-\frac{1}{4}F^{\mu\nu}\hat{K}_{\mu\nu\alpha\beta}F^{ \alpha\beta}\,. \tag{109}\] In the action, \(\hat{K}_{\mu\nu\alpha\beta}\) can be replaced as \[\begin{split}&\hat{K}_{\mu\nu\alpha\beta}\to\\ &\quad 2(\bar{\nu}^{i})^{(\text{bg})}(\sigma_{\mu\nu}\lambda^{ij})(i \not{\partial}-m_{j})^{-1}(\sigma_{\alpha\beta}\lambda^{jk})(\nu^{k})^{( \text{bg})}\,.\end{split} \tag{110}\] Hereafter, we will use this expression for \(\hat{K}_{\mu\nu\alpha\beta}\). To pick up the CS-type operator, we would like to extract the following components, \[\hat{K}_{\mu\nu\alpha\beta}\supset\frac{\hat{\mathcal{O}}_{\epsilon}\epsilon_{ \mu\nu\alpha\beta}}{2}\,, \tag{111}\] which results in the CS-type operator \[-\frac{1}{4}F_{\mu\nu}\hat{\mathcal{O}}_{\epsilon}\tilde{F}^{\mu\nu}\,. \tag{112}\] From the background symmetry, the interaction should be written as \[\int\text{d}^{3}x\ F_{\mu\nu}\hat{\mathcal{O}}_{\epsilon}\tilde{F}^{\mu\nu}= \int\frac{\text{d}^{3}p_{\gamma}}{(2\pi)^{3}}\tilde{\mathcal{O}}_{\epsilon}( \omega)F_{\mu\nu}(-p_{\gamma})\tilde{F}^{\mu\nu}(p_{\gamma})\,, \tag{113}\] where \(\omega\) is the frequency of the CMB photon: \(p_{\gamma}=(\omega,\omega\mathbf{n})\). Here, we have assumed that the cosmic expansion is adiabatic, which is appropriate for the CMB photons. In the following, we will derive the expression of \(\tilde{\mathcal{O}}_{\epsilon}(\omega)\). We can extract the component (111) by contracting \(\hat{K}_{\mu\nu\alpha\beta}\) with \(\epsilon^{\mu\nu\alpha\beta}\): \[\hat{\mathcal{O}}_{\epsilon}=-\frac{\epsilon^{\mu\nu\alpha\beta}\hat{K}_{\mu\nu \alpha\beta}}{12}\,. 
\tag{114}\] Using the identities \(\epsilon^{\mu\nu\alpha\beta}\sigma_{\alpha\beta}=-2i\gamma^{5}\sigma^{\mu\nu}\), \(\sigma_{\mu\nu}\sigma^{\mu\nu}=12\) and \(\sigma_{\mu\nu}\gamma^{\alpha}\sigma^{\mu\nu}=0\), \(\hat{\mathcal{O}}_{\epsilon}\) can be computed as \[\tilde{\mathcal{O}}_{\epsilon}=-4im_{j}(\bar{\nu}^{i})^{(\text{bg})}\lambda^{ ij}\gamma^{5}(\partial^{2}+m_{j}^{2})^{-1}\lambda^{jk}(\nu^{k})^{(\text{bg})}\,. \tag{115}\] Replacing the background neutrino bilinears in the momentum space (\(p_{\nu}=(E_{\nu},\mathbf{p}_{\nu})\)) with the expectation values as \((\bar{\nu}^{i})^{(\text{bg})}(p_{\nu}^{\prime})\gamma^{5}(\nu^{k})^{(\text{bg} )}(p_{\nu})\to 0\) and \[(\bar{\nu}^{i})^{(\text{bg})}(p_{\nu}^{\prime})(\nu^{k})^{(\text {bg})}(p_{\nu})\to\] \[\delta^{ik}(m_{i}/E_{\nu})[n_{\nu}^{i}(p_{\nu})+\bar{n}_{\nu}^{i} (p_{\nu})](2\pi)^{3}\delta^{(3)}(\mathbf{p}_{\nu}^{\prime}-\mathbf{p}_{\nu}) \tag{116}\] (see appendix D), we can read \[\tilde{\mathcal{O}}_{\epsilon}(\omega) =-2(\mu^{ij}\varepsilon^{ji}+\varepsilon^{ij}\mu^{ji})m_{j}\] \[\quad\times\int\frac{\text{d}^{3}p_{\nu}}{(2\pi)^{3}}\frac{m_{i} }{E_{\nu}}\frac{m_{i}}{(p_{\nu}+p_{\gamma})^{2}-m_{j}^{2}}\,. \tag{117}\] Taking into account that \(p_{\nu}\) is the momentum of the \(i\)-th neutrino, we can explicitly write the denominator in the integrand as, \[(p_{\nu}+p_{\gamma})^{2}-m_{j}^{2}=2\omega(E_{\nu}-\mathbf{n}\cdot\mathbf{p}_ {\nu})+m_{i}^{2}-m_{j}^{2}\,. \tag{118}\] Since the neutrino number density quickly decays due to the cosmic expansion, we can well approximate the ICB angle by \(-\bar{\mathcal{O}}_{\epsilon}(\omega)/2\) at the time of last scattering. Thus, \(\omega\) is estimated as \(\omega\sim T_{\rm LSS}\sim 0.3\)eV and the first term is dominant in (188): \[(p_{\nu}+p_{\gamma})^{2}-m_{j}^{2}\simeq 2\omega(E_{\nu}-\mathbf{n}\cdot\mathbf{ p}_{\nu})\,. \tag{189}\] In conclusion, we find \[\beta\simeq-\left.\frac{\bar{\mathcal{O}}_{\epsilon}(\omega)}{2}\right|_{t=t_ {\rm LSS}}\propto\frac{1}{\omega}\,, \tag{190}\] and thus the operator (170) induces the frequency-dependent ICB angle. ## Appendix D Background neutrinos We here consider the cosmic background neutrino \(\nu\) as a free Dirac fermion with mass \(m\), satisfying the Dirac equation, \((i\not{\partial}-m)\nu=0\). The cosmic expansion is adiabatic, and hence the (gravitational) particle production is a sub-leading effect. Quantization of the Dirac field gives \[\nu(x) =\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{\sqrt{2E_{\mathbf{p}}}} \sum_{s}\left[a_{\mathbf{p}}^{s}u^{s}(p)e^{-ipx}+b_{\mathbf{p}}^{s\dagger}v^{s }(p)e^{ipx}\right], \tag{191}\] \[\bar{\nu}(x) =\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{\sqrt{2E_{\mathbf{p}}}} \sum_{s}\left[b_{\mathbf{p}}^{s}\bar{v}^{s}(p)e^{-ipx}+a_{\mathbf{p}}^{s\dagger }\bar{u}^{s}(p)e^{ipx}\right]. \tag{192}\] Here, \(a_{\mathbf{p}}^{s},b_{\mathbf{p}}^{s}\) (\(s=1,2\)) denote the operator coefficients. They satisfy the anti-commutation relations, \(\{a_{\mathbf{p}}^{r},a_{\mathbf{q}}^{s\dagger}\}=\{b_{\mathbf{p}}^{r},b_{ \mathbf{q}}^{s\dagger}\}=(2\pi)^{3}\delta^{(3)}(\mathbf{p}-\mathbf{q})\delta^ {rs}\) and all the other anti-commutators are equal to zero. 
Spinor functions \(u,v\) are given by solutions of the Dirac equation, \[u^{s}(p)=\begin{pmatrix}\sqrt{p\cdot\beta}\,\xi^{s}\\ \sqrt{p\cdot\bar{\beta}}\,\xi^{s}\end{pmatrix},\quad v^{s}(p)=\begin{pmatrix}\sqrt{p\cdot\beta}\,\eta^{s}\\ -\sqrt{p\cdot\bar{\beta}}\,\eta^{s}\end{pmatrix}, \tag{193}\] where \(\beta^{\mu}\equiv(1,\mathbf{\beta})\) and \(\bar{\beta}^{\mu}\equiv(1,-\mathbf{\beta})\) with Pauli matrices \(\mathbf{\beta}\). The vectors \(\xi\) and \(\eta\) are both two-component spinors normalized as \(\xi^{\dagger}\xi=\eta^{\dagger}\eta=1\). Then, the expectation value of \(\bar{\nu}\nu\) with regard to a state of fixed neutrino and anti-neutrino number densities is obtained as \[\langle\bar{\nu}\nu\rangle=\int\frac{d^{3}p}{(2\pi)^{3}}\frac{m}{E_{\mathbf{p}}}\left[n(p,t)+\bar{n}(p,t)\right], \tag{194}\] where \(n,\bar{n}\) denote the number densities of the neutrino and anti-neutrino, respectively, and we have used \(\bar{u}^{s}u^{r}=2m\delta^{sr}\) and \(\bar{v}^{s}v^{r}=-2m\delta^{sr}\). On the other hand, using \(\bar{u}^{s}\gamma^{5}u^{r}=0\) and \(\bar{v}^{s}\gamma^{5}v^{r}=0\), we find \(\langle\bar{\nu}\gamma^{5}\nu\rangle=0\). For a Majorana neutrino, one can simply take \(a_{\mathbf{p}}=b_{\mathbf{p}}\) and similar expectation values are derived.
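As a closing numerical illustration of the order-of-magnitude argument around Eq. (35), the following minimal Python sketch plugs representative inputs into that estimate. All specific numbers below are assumptions made here for illustration -- a neutrino mass of \(0.05\,\)eV, an \(\mathcal{O}(1)\) coupling, the weakest quoted bound \(\Lambda_{\nu}\sim 10^{-2}\,\)GeV, and the number-density estimate \(N^{1/3}=\mathcal{O}(10^{-10})\,\)GeV quoted in section IV -- only the \(0.008^{\circ}\) prefactor and the scalings are taken from Eq. (35).

```python
# Order-of-magnitude check of the CnuB-induced ICB angle, Eq. (35).
# All numerical inputs are representative values assumed for illustration.

alpha = 1.0 / 137.0      # fine-structure constant
T_LSS = 0.3e-9           # temperature at last scattering ~0.3 eV, in GeV
m_nu = 0.05e-9           # assumed neutrino mass ~0.05 eV, in GeV
C_ii = 1.0               # assumed O(1) diagonal coupling (C + C^dagger)^{ii}
Lambda_nu = 1e-2         # weakest quoted lower bound on Lambda_nu, in GeV
N_nu = (1e-10) ** 3      # neutrino (or anti-neutrino) number density at the LSS, in GeV^3

# |beta| ~ 0.008 deg * (alpha/137^-1) * sum_i (m_i/T_LSS) (C+C^dag)^{ii} (N^i+Nbar^i)/Lambda_nu^3
beta_deg = 0.008 * (alpha * 137.0) * 3 * (m_nu / T_LSS) * C_ii * (2.0 * N_nu) / Lambda_nu**3

print(f"|beta| ~ {beta_deg:.1e} degrees")  # of order 1e-26 degrees
```

Even with the weakest bound on \(\Lambda_{\nu}\), the result sits more than twenty orders of magnitude below the reported \(0.3^{\circ}\), in line with the conclusion of section IV.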
2305.04818
Incompleteness Theorems for Observables in General Relativity
The quest for complete observables in general relativity has been a longstanding open problem. We employ methods from descriptive set theory to show that no complete observable on rich enough collections of spacetimes is Borel definable. In fact, we show that it is consistent with the Zermelo-Fraenkel and Dependent Choice axioms that no complete observable for rich collections of spacetimes exists whatsoever. In a nutshell, this implies that the Problem of Observables is to 'analysis' what the Delian Problem was to 'straightedge and compass'. Our results remain true even after restricting the space of solutions to vacuum solutions. In other words, the issue can be traced to the presence of local degrees of freedom. We discuss the next steps in a research program that aims to further uncover this novel connection between theoretical physics and descriptive set theory.
Aristotelis Panagiotopoulos, George Sparling, Marios Christodoulou
2023-05-08T16:18:11Z
http://arxiv.org/abs/2305.04818v2
# Incompleteness Theorems for Observables in General Relativity ###### Abstract The quest for complete observables in general relativity has been a longstanding open problem. We employ methods from descriptive set theory to show that no complete observable is Borel definable. In fact, we show that it is consistent with the Zermelo-Fraenkel and Dependent Choice axioms that no complete observable exists whatsoever. In a nutshell, this implies that the Problem of Observables is to 'analysis' what the Delian Problem was to'straightedge and compass'. Our results remain true even after restricting the space of solutions to vacuum solutions. In other words, the issue can be traced to the presence of local degrees of freedom in general relativity. From Einstein's century old hole-argument-paradox [1; 2; 3], to the contemporary programs for quantizing gravity [4; 5; 6; 7], the problem of deciding which 'functions' of the metric components do not depend on the choice of coordinates has been at the core of both technical and epistemological difficulties of general relativity (GR). This issue has become known as _the problem of observables_. The quest for complete observables -- observables which can discern between any pair of diffeomorphically inequivalent spacetimes -- begins in the 1950s [8; 9; 10; 11; 4; 12], and is ongoing [13; 14; 15; 16; 17; 18]. The state of affairs is dire: while one can occasionally tailor observables for special families of spacetimes [19; 20; 21], no non-trivial (non-constant) observable supported on the collection of all spacetimes has been reported. This, despite the seven-decades-long search since Bergmann famously stated the issue [8; 9; 10; 11]. The question arises: why this state of affairs? Notwithstanding some interesting but partial negative results in the Hamiltonian formulation of the problem [22; 23], a conclusive result which identifies the root of the issue has remained elusive. In this letter, we employ methods from descriptive set theory to prove a rather conclusive negative result for the problem of observables: _there is no constructive way to build complete observables for full general relativity_. We trace the root cause of this incompleteness phenomenon to a certain ergodic-theoretic behaviour that general covariance exhibits on any 'rich enough' space of solutions. The precise statements are given in Theorems 1 and 2. Both theorems hold for any space of solutions \(\mathcal{S}\) that is _rich_ --a technical term that we define below. In particular, they both hold when \(\mathcal{S}\) is the space of all solutions for general relativity. **Theorem 1**: _No concrete observable \(f\colon\mathcal{S}\to R\) is both complete and Borel definable._ The terms appearing in the statement of Theorem 1 will be defined below. In plain language: completeness requires that \(f\) differentiates between any two diffeomorphically inequivalent spacetimes by assigning to them different values; concreteness requires that \(f\) takes concrete objects as values, e.g. real numbers, invariant scalars, etc; Borel definability requires that \(f\) is given by some formula expressible in the language of analysis. Theorem 1 shows that it is as futile to seek an analytic description for a complete observable, as it is trying to construct \(\sqrt[3]{2}\) using straightedge and compass. This is not to say that complete observables do not 'exist'. In the extremely abstract sense allowed when utilising the Axiom of Choice (AC), complete observables do 'exist'. 
However, for a mathematical object to be useful to the physicist, it is not enough to merely exist; it must also be amenable to some kind of description with analytic tools. In a sense, when an object exists only by the power of AC, then as far as physics is concerned it is as useful as if it did not exist. From this point of view, the following is even more troubling. **Theorem 2**: _The statement "no complete concrete observable for \(\mathcal{S}\) exists" is consistent with \(\mathrm{ZF}+\mathrm{DC}\)._ Here ZF stands for the usual Zermelo-Fraenkel axioms of set theory and DC stands for the axiom of Dependent Choice: a 'fragment' of AC that is needed even for basic real analysis on Euclidean space. Theorem 2 is proved in ZF+AC (ZFC) and it highlights the non-constructive nature of complete observables: any mathematical proof of the statement that complete observables merely 'exist' has to make use of the 'full' strength of AC. Importantly, _both theorems above hold even if we restrict \(\mathcal{S}\) to be the family of vacuum solutions on \(\mathbb{R}^{4}\)_. That is, the problem can already be traced to the local degrees of freedom present in the vacuum theory. This is in sharp contrast to the vacuum theory on \(\mathbb{R}^{3}\), which admits the constant map as a complete observable, as its only geodesically complete solution is the Minkowski spacetime. Another important point is that the above results immediately extend to incompleteness theorems for any _countable family_ of concrete and definable observables. Theorems 1 and 2 follow from Lemma 4 below. The latter is a stronger but more technical version of Theorem 1. All three results are proved for an arbitrary space of solutions \(\mathcal{S}\) which is 'rich', see further below. We show that the family of gravitational plane waves is rich, which implies that the vacuum sector of solutions is rich, see Theorem 3. The proof of Theorem 3 is given in the Appendix, where we also show that the family of Robertson-Walker spacetimes is rich. In closing, we discuss the reach of these incompleteness results and speculate on strategies for circumventing the issue. Our conclusions indicate the need for a new research program which employs descriptive set theory for measuring the intrinsic complexity of general covariance and for identifying which quantization procedures can or cannot be implemented constructively. _The problem of observables--_ Originating in the work of Bergmann [8; 10; 11], the problem of observables refers to the problem of identifying those "_functions (or functionals) of field variables that are invariant with respect to coordinate transformations_" [12]. Formally, an observable for a collection \(\mathcal{S}\) of metric component fields is any function \(f\colon\mathcal{S}\to R\) to a set \(R\), so that for all \(g_{\mu\nu},\widetilde{g}_{\rho\sigma}\in\mathcal{S}\) \[g_{\mu\nu}\simeq_{\mathrm{diff}}\widetilde{g}_{\rho\sigma}\implies f(g_{\mu\nu})=f(\widetilde{g}_{\rho\sigma}). \tag{1}\] We write \(\widetilde{g}_{\rho\sigma}\simeq_{\mathrm{diff}}g_{\mu\nu}\) whenever there exists a smooth change of coordinates \(\widetilde{x}^{\xi}=\widetilde{x}^{\xi}(x^{\eta})\) so that \[g_{\mu\nu}(x^{\eta})=\frac{\partial\widetilde{x}^{\rho}}{\partial x^{\mu}}\frac{\partial\widetilde{x}^{\sigma}}{\partial x^{\nu}}\widetilde{g}_{\rho\sigma}(\widetilde{x}^{\xi}). \tag{2}\] The goal in Bergmann's program was to piece together a complete family of observables. 
That is, enough observables to tell apart different geometries represented in \(\mathcal{S}\), similarly to how Komar mass [24] classifies Schwarzschild spacetimes. Since the notions of concreteness and definability below are closed under countable products, we can always replace a list \(f_{1},\ldots,f_{n},\ldots\) of observables with a single observable \(f=\otimes_{n}f_{n}\). Hence, it suffices to consider completeness in the context of a single observable. _Completeness--_ An observable \(f\colon\mathcal{S}\to R\) is complete for \(\mathcal{S}\) if, for all \(g_{\mu\nu},\widetilde{g}_{\rho\sigma}\in\mathcal{S}\), we can strengthen (1) to \[g_{\mu\nu}\simeq_{\mathrm{diff}}\widetilde{g}_{\rho\sigma}\iff f(g_{\mu\nu})=f(\widetilde{g}_{\rho\sigma}). \tag{3}\] Without imposing any further restrictions on the 'concreteness' of the range \(R\) and the 'definability' of \(f\), any space of solutions \(\mathcal{S}\) admits a complete observable. For example, one can always take \(R\) to be the 'abstract' collection of all equivalence classes represented in \(\mathcal{S}\) \[[g_{\mu\nu}]_{\mathrm{diff}}:=\{\widetilde{g}_{\rho\sigma}\in\mathcal{S}\colon\widetilde{g}_{\rho\sigma}\simeq_{\mathrm{diff}}g_{\mu\nu}\}, \tag{4}\] and consider the complete observable that is given by the assignment \(g_{\mu\nu}\mapsto[g_{\mu\nu}]_{\mathrm{diff}}\). Or, one can take \(R=\mathbb{R}\) to be the more 'concrete' space of all real numbers, and use AC to build a complete \(\mathbb{R}\)-valued observable. To rule out such extreme 'solutions' to the problem of observables, we will next require that observables are concrete and definable. For these notions, as well as for a few more technical points later on, we will need some nomenclature from descriptive set theory [25]. _Elements of descriptive set theory--_ Let \(X\) be a topological space and let \(A\subseteq X\). Then, \(A\) is nowhere dense if the complement of its closure is dense in \(X\); meager if it is a countable union of nowhere dense sets; comeager if its complement is meager; Borel if it is in the smallest \(\sigma\)-algebra of subsets of \(X\) that contains the open sets; Baire-measurable if it is in the smallest \(\sigma\)-algebra of subsets of \(X\) that contains both the open and the nowhere dense subsets of \(X\). A map \(f\colon X\to Y\) between topological spaces is Borel -- respectively, Baire-measurable -- if so is \(f^{-1}(U)\), for every open \(U\subseteq Y\). We are particularly interested in Polish spaces, where these notions are well behaved. A Polish space is a topological space \(X\) whose topology is separable and completely metrizable. _Topology and Borel sets on \(\mathcal{S}\)--_ Let \(\mathrm{Ein}(M)\) denote the collection of all smooth spacetimes supported on a smooth manifold \(M\). In what follows, we assume that \(\mathcal{S}\) is a subset of \(\mathrm{Ein}(M)\), for some fixed \(M\). We endow \(\mathcal{S}\) with the \(C^{\infty}\) compact-open topology. Specifically, let \(C^{\infty}(M,N)\) be the Polish space of all smooth maps \(M\to N\) between two manifolds \(M,N\) endowed with the \(C^{\infty}\) compact-open topology. A basic open \(U_{f,K,n,\varepsilon}\subseteq C^{\infty}(M,N)\) consists of all \(g\in C^{\infty}(M,N)\) whose derivatives up to degree \(n\) on the compact \(K\subseteq M\) are \(\varepsilon\)-close to those of \(f\) [26]. With the usual identifications we view \(\mathcal{S}\) as a subset of \(C^{\infty}(M,N)\), where \(N:=(TM\otimes TM)^{*}\). 
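To give a feel for the basic open sets \(U_{f,K,n,\varepsilon}\) just described, here is a toy one-dimensional Python sketch (maps \(\mathbb{R}\to\mathbb{R}\) in place of metric fields). It is only an assumed numerical stand-in for the definition, using finite differences on a grid instead of exact derivatives.

```python
import numpy as np

def in_basic_open(g, f, K, n, eps, num=2001):
    """Toy 1-D stand-in for the basic open set U_{f,K,n,eps}: g belongs to it if
    its derivatives up to order n stay eps-close to those of f on K = [a, b].
    Derivatives are estimated by finite differences, purely for illustration."""
    a, b = K
    u = np.linspace(a, b, num)
    gf, ff = g(u), f(u)
    for _ in range(n + 1):
        if np.max(np.abs(gf - ff)) >= eps:
            return False
        gf, ff = np.gradient(gf, u), np.gradient(ff, u)
    return True

f = np.sin
g = lambda u: np.sin(u) + 1e-3 * np.cos(3 * u)      # small, slowly varying perturbation
print(in_basic_open(g, f, (0.0, 1.0), n=2, eps=0.1))  # True
# A rapidly oscillating perturbation of tiny amplitude leaves the set,
# because closeness of the *derivatives* is also required:
h = lambda u: np.sin(u) + 1e-3 * np.sin(200 * u)
print(in_basic_open(h, f, (0.0, 1.0), n=2, eps=0.1))  # False
```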
Then \(\mathcal{S}\) inherits from \(C^{\infty}(M,N)\) the \(C^{\infty}\) compact-open topology. This topology induces on \(\mathcal{S}\) a \(\sigma\)-algebra of Borel sets on which we base the notion of definable observables below. This choice of Borel structure on \(\mathcal{S}\) is implicit in the statement of Theorem 1. However, note that Theorem 2 is agnostic about both the choice of topology and the choice of Borel structure on \(\mathcal{S}\). _Concreteness--_ An observable \(f\colon\mathcal{S}\to R\) is concrete if it takes values in a Polish space. Restricting \(R\) to be a Polish space is a generic requirement. For instance, setting \(R\) to be either of the Polish spaces \(\mathbb{R}\) or \(C^{\infty}(M,\mathbb{R})\) we recover classical definitions of observables [11; 12]. However, our definition of concreteness allows observables to take values in much more general spaces, as Polish spaces include spaces of distributions, separable Banach spaces, as well as a vast array of more 'exotic' objects like the Cantor set. Restricting to Polish \(R\) is natural from the viewpoint of descriptive set theory as well, which considers Polish spaces to be 'well behaved' incarnations of uncountable sets. This is because their points are controlled by a countable dense subset, similarly to how the rationals control the reals. _Definability--_ A concrete observable \(f\colon\mathcal{S}\to R\) is _Borel definable_ if it is a Borel map when \(\mathcal{S}\) is endowed with the \(C^{\infty}\) compact-open topology. These are exactly those observables which admit a description by an explicit formula in the language of analysis, in the following sense. The descriptive power of analysis is rooted in its ability to implement limiting procedures. For example, defining an ADM observable requires 'taking limits' at least once. Indeed, the value \(f(g_{\mu\nu})\) of an ADM observable \(f\) is given as the limit of integrals taken over a sequence \((K_{n})\) of compact regions of the underlying manifold [19]. Maps whose definition relies on limiting procedures of length two can already be surprisingly complex. For instance, the characteristic map \(\chi_{\mathbb{Q}}\colon\mathbb{R}\to\mathbb{R}\) of the rationals can be expressed as \(\chi_{\mathbb{Q}}(x)=\lim_{n}\lim_{m}\cos^{2m}(\pi n!x)\). Borel maps are precisely those maps which are attained by allowing iterations of such limiting procedures for any 'number' \(\xi\) of times, where \(\xi\) ranges over the set \(\omega_{1}\) of all countable ordinals [25, Theorem 24.3]. _Rich Families--_ Theorems 1 and 2 concern any family of solutions \(\mathcal{S}\) which is _rich_. For the definition of this notion we recall a few more elements from _invariant_ descriptive set theory [27, 28]. A Polish group \(G\) is a topological group whose topology is Polish. A Polish \(G\)-space is a continuous action \(G\curvearrowright X\) of the Polish group \(G\) on a Polish space \(X\). The associated _orbit equivalence relation_ \(\simeq_{G}\) on \(X\) is given by setting \(x\simeq_{G}y\) if and only if \(x,y\) are in the same orbit, i.e., if \(G\cdot x=G\cdot y\). We say that \(G\curvearrowright X\) is _generically ergodic_ if: (1) there is \(x\in X\) whose orbit \(G\cdot x\) is dense in \(X\); (2) for every \(x\in X\), the orbit \(G\cdot x\) is meager in \(X\). 
A family \(\mathcal{S}\) of spacetimes is called _rich_, if there exists a generically ergodic Polish \(G\)-space \(G\curvearrowright X\) together with a _Borel reduction_ \(r\) from \((X,\simeq_{G})\) to \((\mathcal{S},\simeq_{\mathrm{diff}})\). That is, a Borel map \(r\colon X\to\mathcal{S}\) so that for all \(\alpha,\beta\in X\) we have \[\alpha\simeq_{G}\beta\iff r(\alpha)\simeq_{\mathrm{diff}}r(\beta). \tag{5}\] One way for \(\mathcal{S}\) to be rich is if the action \(\mathrm{Diff}(M)\curvearrowright\mathcal{S}\) of the diffeomorphism group, implementing (2), is itself generically ergodic. In this case, it is very difficult to tell different orbits apart as any open set in the space of solutions will be intersected by almost every orbit, and the mental picture which depicts orbits as 'curves' has to be replaced with that of a knotted ball of yarn. That being said, for \(\mathcal{S}\) to be rich it is enough for this tangling between orbits to occur just in some 'corner' of \(\mathcal{S}\). _The family of vacuum solutions--_ Before we turn to the proofs of Theorems 1 and 2 we would like to establish that rich families of solutions exist and hence that these theorems are not vacant. To the reader familiar with these arguments, it is probably not that surprising that rich families exist. Indeed, without imposing any restrictions on the stress-energy tensors of the members of \(\mathcal{S}\), one can simply concoct rich families of energy-momentum distributions that generate ergodic behaviour within \(\mathcal{S}\). An example which illustrates this strategy can be found in the Appendix, where we show that the family of all cosmological Robertson-Walker spacetimes is rich. Perhaps what is more surprising is that the problem is already present in the vacuum sector, making these incompleteness phenomena rather inexorable: **Theorem 3**: _Vacuum solutions on \(\mathbb{R}^{4}\) form a rich family._ Theorem 3 implies that any space of solutions which contains the vacuum solutions on \(\mathbb{R}^{4}\) is rich. It follows that the space of all solutions is rich. We now sketch the proof of Theorem 3, which is detailed in the Appendix. Consider the family GPW of all gravitational plane waves on \(\mathbb{R}^{4}\). These are all spacetimes which can be written in Brinkmann form [29] as \[H(u,x,y)du^{2}+dudv+dx^{2}+dy^{2}, \tag{6}\] where \(H\) is a smooth map that is quadratic in \(x,y\) and satisfies \(H_{xx}+H_{yy}=0\). Since members of GPW are vacuum solutions [30], it suffices to see that GPW is rich. As a model of generic ergodicity we will use the Bernoulli shift \(\mathbb{Z}\curvearrowright X\), where \(X:=\{0,1\}^{\mathbb{Z}}\) is the space of all integer-indexed sequences of \(0,1\) endowed with the product topology. The action \(\mathbb{Z}\curvearrowright X\) is implemented by \((k,\alpha)\mapsto k\cdot\alpha\), where \((k\cdot\alpha)(n)=\alpha(n-k)\). Hence, \[\alpha\simeq_{\mathbb{Z}}\beta\iff\exists k\in\mathbb{Z}\;\forall n\in\mathbb{Z}\;\;\alpha(n-k)=\beta(n). \tag{7}\] To see that \(\mathbb{Z}\curvearrowright X\) is generically ergodic, notice that its orbits are countable and that a random \(\alpha\in X\), in the sense of the coin-flip measure, admits a dense orbit. We can now associate a smooth map \(W_{\alpha}\colon\mathbb{R}\to\mathbb{R}\) to each \(\alpha\in X\), so that \(W_{\alpha}\) reflects the distribution of \(0,1\)'s in the sequence \(\alpha\); see Figure 1. 
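To make the flavour of this construction concrete, the following minimal Python sketch encodes a finite window of a sequence \(\alpha\in\{0,1\}^{\mathbb{Z}}\) into a smooth profile by placing a smooth bump scaled by \(\alpha(n)\) around each integer \(n\). This is only an assumed illustration in the spirit of Figure 1, not the specific \(W_{\alpha}\) used in the paper, and it only illustrates the easy (\(\Longrightarrow\)) direction of (5).

```python
import math

def bump(t):
    """A standard C-infinity bump function supported on (-1, 1)."""
    if abs(t) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - t * t))

def W_alpha(u, alpha):
    """Smooth profile encoding the bits of alpha: a bump scaled by alpha(n) near each n.
    Here `alpha` is a finite window {n: bit}; bits outside the window are ignored."""
    return sum(bit * bump(u - n) for n, bit in alpha.items())

def H(u, x, y, alpha):
    """Brinkmann wave profile H = W_alpha(u) * x * y, as in the metric (6)."""
    return W_alpha(u, alpha) * x * y

# A finite window of a sequence alpha: ... 0 1 1 0 1 ...
window = {-2: 0, -1: 1, 0: 1, 1: 0, 2: 1}
print(W_alpha(0.0, window), H(0.0, 1.0, 2.0, window))

# Shifting alpha (the Bernoulli shift) just translates W_alpha, i.e. it amounts
# to the coordinate change u -> u - k, so shifted sequences map to
# diffeomorphic plane waves: the (==>) direction of (5).
shifted = {n + 1: bit for n, bit in window.items()}
assert abs(W_alpha(1.5, shifted) - W_alpha(0.5, window)) < 1e-12
```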
We define a Borel reduction \(r\colon X\to\mathrm{GPW}\) by setting \(r(\alpha)\) to be the metric with \(H(u,x,y):=W_{\alpha}(u)xy\) in (6). The map \(r\) is in fact continuous since compact regions of \(r(\alpha)\) are determined by finite regions of \(\alpha\). It is straightforward to check that \(r\) satisfies the (\(\Longrightarrow\)) direction of (5). The (\(\Longleftarrow\)) direction of (5) also holds and is given in the appendix. This part of the proof is more technical; it relies on the theory of Lie symmetries of plane waves from [30, 31]. In summary, we just showed that the highly-entangled orbit structure of the Bernoulli shift is also present in the orbit structure of general covariance, even after restricting to the vacuum sector. Our incompleteness theorems are a mere consequence of this complex orbit structure.

Figure 1: To each \(\alpha\in X\) we associate a smooth \(W_{\alpha}\)

_Incompleteness of Observables in General Relativity--_ We are now ready to see how Theorems 1 and 2 come about. The proofs follow from standard arguments used in invariant descriptive set theory [27; 28]. We sketch these arguments here for completeness. Let \(\mathcal{S}\) be a rich family and fix \(r\colon X\to\mathcal{S}\) as in (5). Let now \(f\colon\mathcal{S}\to R\) be any complete concrete observable and precompose it with \(r\) to get a map \(\widehat{f}:=f\circ r\colon X\to R\). By (3) and (5), for every \(\alpha,\beta\in X\) we have that: \[\alpha\simeq_{G}\beta\iff\widehat{f}(\alpha)=\widehat{f}(\beta). \tag{8}\] We will make use of the following classical result. For the proof of its first part, see [27, Theorem 3.2] or [28]. The second part of the statement follows directly from the first, as the comeager \(C\subseteq X\) cannot be covered by a union of finitely many meager orbits, see e.g. [25].

**Lemma 4**: _Let \(\widehat{f}\colon X\to R\) be a Baire-measurable map which satisfies (8). Then, there exists a comeager set \(C\subseteq X\) on which \(\widehat{f}\) is constant. In particular, there are \(\alpha,\beta\in X\) with \(\alpha\not\simeq_{G}\beta\) and \(\widehat{f}(\alpha)=\widehat{f}(\beta)\)._

Theorem 1 follows from Lemma 4. Indeed, assume that the complete observable \(f\colon\mathcal{S}\to R\) is Borel definable. It follows that the associated \(\widehat{f}\) above is Baire-measurable and hence, by Lemma 4, we have \(\alpha,\beta\in X\) with \(\alpha\not\simeq_{G}\beta\) and \(\widehat{f}(\alpha)=\widehat{f}(\beta)\). But this contradicts (8). Theorem 2 follows from Lemma 4 and the fact that there exists, provably from ZFC, a model of ZF+DC in which every map \(\widehat{f}\colon X\to R\) is Baire-measurable [32; 33]. Since Lemma 4 is provable in \(\operatorname{ZF}+\operatorname{DC}\), all maps in this model satisfy the last statement of Lemma 4, and hence, they have to fail (8). Notice the resemblance of this proof with the usual consistency proof of the first four axioms of Euclid with the negation of the parallel postulate, which uses Euclidean geometry to construct a model of non-Euclidean geometry such as the Poincaré disc.

_Hamiltonian formulation--_ In this work, we focused on the problem of observables in the manifestly covariant formulation of general relativity. However, notice that both the statements and proofs of Theorems 1 and 2 hold verbatim also for the Hamiltonian \((3+1)\) formulation of the problem, where \(\mathcal{S}\) is any 'rich' family of initial data \((\gamma_{ab},K_{ab})\).
In order to complete the line of argument, as we did above for the manifestly covariant formulation, one needs to ensure that the theorems are not vacant by showing that physically interesting sectors of the space of initial data is rich. Preliminary results not provided here show that this is indeed the case. The problem of identifying rich families of initial data is in fact a very interesting one and is deferred to a future work, as some of the techniques needed differ significantly from the ones we used here. _Discussion--_ Similar to Godel's first incompleteness theorem which shows that no 'rich enough' fragment of arithmetic admits a consistent extension that is both complete and computable, Theorems 1 and 2 show that no 'rich enough' collection of spacetimes admits an observable that is both complete and definable. Even worse, in view of Theorem 3, and the important role of GPW solutions for the theory of general relativity [34], it is hard to imagine a way out by attempting to 'excise' the 'troublesome' solutions while keeping the main physical content of the theory. Given the central role that observables play in quantization procedures [4; 5; 6; 7], the pertinent question becomes: how much of Bergmann's program for "_the identification and systematic exploitation of the observables_" [35] can be salvaged, and in what precise form? Some first attempts to resolve the issue can be immediately ruled out, merely on the basis of how flexible is the notion of 'concrete observable' in Theorems 1 and 2. For example, one strategy often proposed to circumvent the explicit construction of observables is the use of a gauge-fixing. However, since \(\operatorname{Ein}(M)\) is a Polish space, Theorems 1 and 2 show that there is no definable way \(s\colon\mathcal{S}\to\mathcal{S}\subseteq\operatorname{Ein}(M)\) to select a single representative \(s(g_{\mu\nu})\in[g_{\mu\nu}]_{\mathrm{diff}}\) from each class (4). In short, gauge-fixings uniformly defined across all orbits cannot be implemented in a definable fashion. Another possibility is to consider families of observables \(\mathcal{F}=\{f_{i}\colon i\in I\}\) instead of a single observable. This does not seem to be a good way out either. Theorems 1 and 2 immediately apply to countable families of observables. Indeed, since the notions of concreteness and definability are closed under countable products, a countable \(\mathcal{F}\) can be replaced by the single observable \(\otimes_{i}f_{i}\). In fact, a technical elaboration shows that these incompleteness results extend to uncountable families of observables, so long as the parametrisation \(i\mapsto f_{i}\) is 'definable enough'. Finally, one could also try to circumvent these issues by endowing \(\mathcal{S}\) with a different topology. This would have to be a topology so 'fine' that can admit a Borel definable complete observable \(f\colon\mathcal{S}\to\mathbb{R}\). While this is certainly doable -- for example one may consider the discrete topology -- no such topology would be amenable to computations. This a consequence of Theorem 2, whose statement is agnostic on the choice of topology on \(\mathcal{S}\). Perhaps a way out is to abandon fully invariant forms of quantization, in favor of 'equivariant' forms. That is, quantization procedures which seek to promote, not the classical fully-invariant observables, but rather 'equivariant observables' to operators on a Hilbert space. For such forms of quantization, it is natural to generalize our notion of observables as follows. 
Let \(G\) be a Polish group. A \(G\)-_observable for \(\mathcal{S}\)_ is a Polish \(G\)-space \(G\curvearrowright R\) together with a map \(f\colon\mathcal{S}\to R\) so that for all \(g_{\mu\nu},\widetilde{g}_{\rho\sigma}\in\mathcal{S}\) \[g_{\mu\nu}\simeq_{\rm diff}\widetilde{g}_{\rho\sigma}\iff f(g_{\mu\nu})\simeq_ {G}f(\widetilde{g}_{\rho\sigma}) \tag{9}\] This notion of \(G\)-observables is also in line with some modern criticisms on Bergmann's program [36], which maintain the use of scalars \(R=C^{\infty}(\mathbb{R}^{4},\mathbb{R})\) as potential values for observables, but replace the equality in the right-hand side of (1) with covariance \(\simeq_{\rm diff}\) of scalars. In the context of \(G\)-observables, a natural question arises: **Problem 5**: _For which Polish groups \(G\) does there exist a definable and complete \(G\)-observable for \(\operatorname{Ein}(\mathbb{R}^{4})\)?_ For this question to be pertinent for quantization, it is reasonable to consider restricting to groups \(G\) which admit'sufficiently nice' representation-theoretic properties. Interestingly, an elaboration on Theorem 1 shows that \(G\) cannot be a compact group, see [28, Exercise 5.4.5]. But, could \(G\) be locally-compact? More generally, could \(G\) be the unitary group of a separable \(C^{*}\)-algebra? Given the recent breakthroughs [37; 38; 39; 40] in our understanding of how dynamics of Polish groups interact with the complexity of various classification problems, Problem 5 suggests that Theorems 1 and 2 do not mark the end, but rather the beginning of a program to bridge invariant descriptive set theory and quantum gravity. A program whose goal would be to compute the intrinsic complexity that general covariance exhibits on various spaces of solutions. This would be the first step in identifying which types of quantization recipes for gravity can be implemented constructively. ###### Acknowledgements. We are grateful to Jonathan Holland for sharing with us his intuitions on the symmetries of plane waves. We thank Apoorv Tiwari for many valuable discussions in the early stages of this project. We thank Hans Halvorson for the phrase 'when an object exists only by the power of AC then for what concerns physics it is as useful as if it did not exist'. Last but not least, we would like to thank Clemmie Murdock III for introducing AP to GS. This research was supported by the NSF Grant DMS-2154258: "Dynamics Beyond Turbulence and Obstructions to Classification". MC acknowledges support from the ID# 61466 and ID# 62312 grants from the John Templeton Foundation, as part of the "Quantum Information Structure of Spacetime (QISS)" project (qiss.fr). ## Appendix Recall the relation \(\simeq_{\mathbb{Z}}\) on \(X:=\{0,1\}^{\mathbb{Z}}\) given by (7). Here we start by providing the exact definition of the map \(r\colon X\to\operatorname{GPW}\) and the details as to why it is a Borel reduction from \((X,\simeq_{\mathbb{Z}})\) to \((\operatorname{GPW},\simeq_{\rm diff})\). We then sketch how this argument adapts for showing that the family of all Robertson-Walker spacetimes is also rich. _Gravitational Plane Waves--_ Let \(W\colon[0,1]\to[0,1]\) be any smooth map so that: \(W\) and all its derivatives vanish at \(x=0\) and \(x=1\); and \(W(x)=1\) for some unique point in \([0,1]\) with \(x<1/2\). 
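The text only prescribes the qualitative properties of \(W\) (its shape is shown in Figure 2). For concreteness, here is one standard bump-function construction satisfying the three stated requirements; the particular formula and constants below are our own illustrative choice, not the authors'.

```python
import numpy as np

def bump(x, a=1.0, b=4.0):
    """One concrete choice of the profile W: [0,1] -> [0,1].

    exp(-a/x - b/(1-x)) is smooth on (0,1), extends smoothly by 0 at the
    endpoints (all derivatives vanish there), and has a unique maximum at
    x* = sqrt(a) / (sqrt(a) + sqrt(b)); with a=1, b=4 this gives x* = 1/3 < 1/2.
    Dividing by the value at x* makes the maximum equal to 1.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > 0) & (x < 1)
    xi = x[inside]
    out[inside] = np.exp(-a / xi - b / (1.0 - xi))
    x_star = np.sqrt(a) / (np.sqrt(a) + np.sqrt(b))
    peak = np.exp(-a / x_star - b / (1.0 - x_star))
    return out / peak

xs = np.linspace(0.0, 1.0, 11)
print(np.round(bump(xs), 4))   # vanishes at the endpoints, peaks near x = 1/3
```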
For every element \(\alpha\colon\mathbb{Z}\to\{0,1\}\) of \(X\) we define a smooth map \(W_{\alpha}\colon\mathbb{R}\to\mathbb{R}\) by setting: \[W_{\alpha}(x)=\begin{cases}2-W(x-\lfloor x\rfloor)&\text{if }\alpha(\lfloor x \rfloor)=0\\ 2+W(x-\lfloor x\rfloor)&\text{if }\alpha(\lfloor x\rfloor)=1\end{cases} \tag{10}\] For the graphs of \(W_{\alpha}\) and \(W\) see Figures 1 and 2, respectively. Recall that for every \(\alpha\in X\), the spacetime \(r(\alpha)\) is defined by setting \(H(u,x,y):=W_{\alpha}(u)xy\) in (6). To see that \(r\) is continuous, and hence Borel, notice that compact regions of \(r(\alpha)\) are determined by finite regions of \(\alpha\), and that basic open sets in \(X\) are of the form: \[U_{p}:=\{\alpha\in X\text{ with }\alpha\upharpoonright\mathrm{dom}(p)=p\}, \tag{11}\] where \(p\colon A\to\{0,1\}\) is a map with finite \(A\subseteq\mathbb{Z}\). It is also straightforward to see that \(r\) satisfies the (\(\Longrightarrow\)) direction of (5). Indeed, if \(\alpha\simeq_{\mathbb{Z}}\beta\) then there is some \(k\in\mathbb{Z}\) so that for all \(u\in\mathbb{R}\) we have that \(W_{\alpha}(u-k)=W_{\beta}(u)\). But then, the change of coordinates \(u\mapsto u-k\), \(v,x,y\mapsto v,x,y\) witnesses that \(r(\alpha)\simeq_{\rm diff}r(\beta)\). To prove that the (\(\Longleftarrow\)) direction of (5) holds as well, we will need to recall some basic facts regarding Killing symmetries; see for instance [41, Chapter 8]. For any spacetime \(g_{\mu\nu}\), let \(\mathfrak{iso}_{\mathcal{L}}(g_{\mu\nu})\) be the Lie algebra of all the vector fields \(\boldsymbol{V}\) which satisfy Killing's equation: \[\mathcal{L}_{\boldsymbol{V}}g_{\mu\nu}:=g_{\mu\nu,\alpha}\,V^{\alpha}+g_{\alpha\mu}\,V^{\alpha}_{\ ,\nu}+g_{\alpha\nu}\,V^{\alpha}_{\ ,\mu}=0 \tag{12}\] We will make repeated use of the standard fact that, if \(\widetilde{x}^{\xi}=\widetilde{x}^{\xi}(x^{\eta})\) is a smooth change of coordinates witnessing \(g_{\mu\nu}\simeq_{\rm diff}\widetilde{g}_{\rho\sigma}\) via (2), then it also induces an isomorphism \(i\colon\mathfrak{iso}_{\mathcal{L}}(g_{\mu\nu})\to\mathfrak{iso}_{\mathcal{L}}(\widetilde{g}_{\rho\sigma})\), with \(i(\boldsymbol{V})=\widetilde{\boldsymbol{V}}\), where \[V^{\alpha}(x^{\eta})=\frac{\partial x^{\alpha}}{\partial\widetilde{x}^{\beta}}\widetilde{V}^{\beta}(\widetilde{x}^{\xi}). \tag{13}\] We can now prove that the (\(\Longleftarrow\)) direction of (5) holds:

**Lemma 6**: _If \(r(\alpha)\simeq_{\rm diff}r(\beta)\) holds then so does \(\alpha\simeq_{\mathbb{Z}}\beta\)._

Figure 2: The “bell curve” \(W\)

To see this, one starts by solving (12) for an arbitrary vector field \(\boldsymbol{V}\) and for any \(g_{\mu\nu}\) of the form (6). Solutions for this can be found in [30, Chapter 4.3], for the case \(H_{xx}-H_{yy}=0\) in which we are interested; and in [31], for the general \(H(u,x,y)\). The additional requirement that \(H(u,x,y)=W_{\alpha}(u)xy\), for some \(\alpha\in X\), implies that \(\mathfrak{iso}_{\mathcal{L}}(g_{\mu\nu})\) is the 5-dimensional Heisenberg algebra \(\mathfrak{h}(2)\); see [30, 4.3.16] or [31, Table II.10]. In particular, the center of \(\mathfrak{iso}_{\mathcal{L}}(g_{\mu\nu})\) is spanned by \(\frac{\partial}{\partial v}\). As a consequence, if (2) holds for some \(\widetilde{x}^{\xi}=\widetilde{x}^{\xi}(x^{\eta})\) and \(g_{\mu\nu}:=r(\alpha)\), \(\widetilde{g}_{\rho\sigma}:=r(\beta)\), then it should map \(\partial/\partial\widetilde{v}\) to a constant multiple of \(\partial/\partial v\).
In particular, \(\partial u/\partial\widetilde{v}=0\), and hence, \(\widetilde{x}^{\xi}=\widetilde{x}^{\xi}(x^{\eta})\) is of the following form; see [31, (A.1)] or [30, 4.3.1]: \[\begin{aligned}\widetilde{u}&=\frac{u+a}{c}\\ \widetilde{x}&=x\cos(b)+y\sin(b)+F(u)\\ \widetilde{y}&=-x\sin(b)+y\cos(b)+G(u)\\ \widetilde{v}&=c\big{[}v-x\big{(}\cos(b)F^{\prime}(u)-\sin(b)G^{\prime}(u)\big{)}-y\big{(}\sin(b)F^{\prime}(u)-\cos(b)G^{\prime}(u)\big{)}-I(u)\big{]},\end{aligned} \tag{14}\] for some constants \(a,b,c\), with \(c\neq 0\), and some smooth maps \(F(u),G(u),I(u)\). Plugging (6) and (14) in (2) and solving for the coefficients of \(xydu^{2}\) we get: \[W_{\alpha}(u)=\frac{\cos^{2}(b)-\sin^{2}(b)}{c^{2}}\cdot W_{\beta}(\frac{u+a}{c})\] From the structure of minima and maxima of \(W_{\alpha}\) and \(W_{\beta}\), and since \(W\) in Figure 2 is not symmetric under any vertical axis, it follows that \(W_{\alpha}(u)=W_{\beta}(u+a)\) holds for some \(a\in\mathbb{Z}\). Thus, \(\alpha(n)=\beta(n-k)\) for \(k:=-a\).

_Robertson-Walker cosmological spacetimes--_ For any \(d>0\), let \(\operatorname{RW}(\mathbb{R}^{d+1})\) be the collection of all \((d+1)\)-dimensional Robertson-Walker spacetimes: \[-dt^{2}+J(t)\big{(}(dx^{1})^{2}+\cdots+(dx^{d})^{2}\big{)}, \tag{15}\] where \(J\) is a smooth map with \(J(t)>0\); see, e.g., [42, Chapter 8]. By an argument similar to the above one may show that \(\operatorname{RW}(\mathbb{R}^{d+1})\) is rich, and hence, it satisfies the conclusions of our Incompleteness Theorems 1 and 2. Indeed, consider the map \(r\colon X\to\operatorname{RW}(\mathbb{R}^{1+d})\) given by setting \(r(\alpha)\) to be of the form (15), with \(J(t):=W_{\alpha}(t)\). As in the case of GPW, one easily shows that \(r\) is continuous and that it satisfies the (\(\Longrightarrow\)) direction of (5). The analogue of Lemma 6 is proved in a similar fashion. One starts by solving (12); the additional requirement that \(J(t)=W_{\alpha}(t)\) implies that \(\mathfrak{iso}_{\mathcal{L}}(g_{\mu\nu})\) is the special Euclidean algebra \(\mathfrak{iso}(d)\) corresponding to the isometries of the spacelike surfaces \(t=\text{constant}\). Assuming that (2) holds for \(\widetilde{x}^{\xi}=\widetilde{x}^{\xi}(x^{\eta})\), \(g_{\mu\nu}:=r(\alpha)\), \(\widetilde{g}_{\rho\sigma}:=r(\beta)\), one gets an isomorphism between \(\mathfrak{iso}_{\mathcal{L}}(g_{\mu\nu})\) and \(\mathfrak{iso}_{\mathcal{L}}(\widetilde{g}_{\rho\sigma})\), which restricts to an isomorphism between the subalgebras \(\langle\partial/\partial x^{\eta}\colon\eta\neq 0\rangle\) and \(\langle\partial/\partial\widetilde{x}^{\xi}\colon\xi\neq 0\rangle\). It follows that there are smooth maps \(F^{0},\ldots,F^{d}\) and constants \(c^{\xi}_{\,\eta}\) with: \[\widetilde{x}^{0}=F^{0}(x^{0}),\ \ \text{and}\ \ \widetilde{x}^{\xi}=F^{\xi}(x^{0})+\sum_{\eta>0}(c^{\xi}_{\,\eta}\cdot x^{\eta}). \tag{16}\] Plugging (16) in (2) and solving for the coefficients of each combination \(dx^{\eta}dx^{\eta^{\prime}}\) we get \(\partial F^{0}/\partial x^{0}=\pm 1\). From the structure of minima and maxima of \(W_{\alpha}\) and \(W_{\beta}\), and since \(W\) in Figure 2 is not symmetric under any vertical axis, we have that \(F^{0}\colon\mathbb{R}\to\mathbb{R}\) is the shift map \(t\mapsto t-k\) for some \(k\in\mathbb{Z}\). Hence, \(\alpha(n)=\beta(n-k)\) for some \(k\in\mathbb{Z}\).
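To make the reduction fully concrete, here is a small numerical sketch (ours, not part of the paper) of \(W_{\alpha}\) from (10), built on a scalar bump profile with the properties required of \(W\), together with a spot check of the (\(\Longrightarrow\)) direction of (5) at the level of the profiles: if \(\beta\) is the shift of \(\alpha\) by \(k\), then \(W_{\beta}(u)=W_{\alpha}(u-k)\). The particular bump formula and the sample sequence \(\alpha\) are arbitrary choices.

```python
import math

def W(x, a=1.0, b=4.0):
    """A scalar bump on [0,1] with the properties required of W (illustrative choice)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    x_star = math.sqrt(a) / (math.sqrt(a) + math.sqrt(b))
    peak = math.exp(-a / x_star - b / (1.0 - x_star))
    return math.exp(-a / x - b / (1.0 - x)) / peak

def W_alpha(alpha, x):
    """Equation (10): W_alpha(x) = 2 -/+ W(x - floor(x)), sign chosen by alpha(floor(x))."""
    n = math.floor(x)
    frac = x - n
    return 2.0 + W(frac) if alpha(n) == 1 else 2.0 - W(frac)

# A sample sequence alpha and its shift by k = 3, i.e. beta(n) = alpha(n - 3).
alpha = lambda n: (n * n + n) % 5 < 2          # an arbitrary 0/1 pattern on Z
beta = lambda n: alpha(n - 3)

# (==>) direction of (5) at the level of profiles: W_beta(u) = W_alpha(u - 3).
for u in [-2.25, 0.5, 1.9, 4.33]:
    assert abs(W_alpha(beta, u) - W_alpha(alpha, u - 3)) < 1e-12
print("W_beta(u) == W_alpha(u - 3) checked on sample points")
```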
2306.02939
Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm
This paper presents a new generalization error analysis for Decentralized Stochastic Gradient Descent (D-SGD) based on algorithmic stability. The obtained results overhaul a series of recent works that suggested an increased instability due to decentralization and a detrimental impact of poorly-connected communication graphs on generalization. On the contrary, we show, for convex, strongly convex and non-convex functions, that D-SGD can always recover generalization bounds analogous to those of classical SGD, suggesting that the choice of graph does not matter. We then argue that this result is coming from a worst-case analysis, and we provide a refined optimization-dependent generalization bound for general convex functions. This new bound reveals that the choice of graph can in fact improve the worst-case bound in certain regimes, and that surprisingly, a poorly-connected graph can even be beneficial for generalization.
Batiste Le Bars, Aurélien Bellet, Marc Tommasi, Kevin Scaman, Giovanni Neglia
2023-06-05T15:03:01Z
http://arxiv.org/abs/2306.02939v4
# Improved Stability and Generalization Analysis of the Decentralized SGD Algorithm ###### Abstract This paper presents a new generalization error analysis for the Decentralized Stochastic Gradient Descent (D-SGD) algorithm based on algorithmic stability. The obtained results largely improve upon state-of-the-art results, and even invalidate their claims that the communication graph has a detrimental effect on generalization. For instance, we show that in convex settings, D-SGD has the same generalization bounds as the classical SGD algorithm, no matter the choice of graph. We exhibit that this counter-intuitive result comes from considering the average of local parameters, which hides a final global averaging step incompatible with the decentralized scenario. In light of this observation, we advocate to analyze the supremum over local parameters and show that in this case, the graph does have an impact on the generalization. Unlike prior results, our analysis yields non-vacuous bounds even for non-connected graphs. ## 1 Introduction Studying the ability of machine learning models to generalize to unseen data is a fundamental and long-standing problem. Among the several approaches that have been proposed to bound generalization errors, the most prominent ones are based on the complexity of the hypothesis class like the Vapnik-Chervonenkis dimension or Rademacher complexity (Bousquet et al., 2004), algorithmic stability Bousquet and Elisseeff (2002), PAC-Bayesian bounds (Shawe-Taylor and Williamson, 1997; McAllester, 1998; Catoni, 2007; Alquier, 2021), or more recently information-theoretic generalization bounds (Xu and Raginsky, 2017). Over the last few years, a substantial amount of work has been dedicated to the study of the generalization properties of _optimization algorithms_, more specifically gradient-based methods (Lin et al., 2016; London, 2017; Zhou et al., 2018; Amir et al., 2021; Neu et al., 2021). In particular, since the seminal work of Hardt et al. (2016), approaches based on _algorithmic stability_ have encountered a large success as they allow to shed light on the implicit regularization brought by (stochastic) gradient methods (Kuzborskij and Lampert, 2018; Bassily et al., 2020; Lei and Ying, 2020; Schliserman and Koren, 2022). However, this large amount of work is mostly focusing on _centralized_ gradient-based algorithms. Decentralized learning algorithms, such as the celebrated Decentralized Stochastic Gradient Descent (D-SGD) algorithm (Lian et al., 2017), allow several agents to train models on their local data by exchanging model updates rather than the data itself. In D-SGD, agents solve an empirical risk minimization task by alternating between local gradient steps and averaging model parameters with their neighbors in a communication graph, which encodes which pairs of agents exchange information. A sparser graph thus reduces the per-iteration communication cost but tends to increase the number of iterations needed to converge. Most theoretical analyses of D-SGD and is variants focus on understanding the _optimization error_ by characterizing the convergence rate to the empirical risk minimizer. They notably highlight the impact of the communication graph and the heterogeneity of data across agents (Koloskova et al., 2020; Ying et al., 2021; Le Bars et al., 2023). In contrast, the _generalization error_ of decentralized learning algorithms is far less understood. To the best of our knowledge, only the recent work of Sun et al. (2021), Zhu et al. 
(2022) and Taheri and Thrampoulidis (2023) investigate this question for D-SGD, under the framework of algorithmic stability. These work mainly differ in their technical assumptions: Sun et al. (2021) consider Lipschitz and smooth loss functions, Zhu et al. (2022) focus on smooth and convex losses, and Taheri and Thrampoulidis (2023) assume a self-boundedness property for the gradients and a separability condition. Despite these differences, they all come to the same conclusion: "decentralization has a negative impact on generalization". Specifically, their generalization bounds for D-SGD get worse as the graph becomes sparser, and eventually become vacuous for non-connected graphs. Worse still, the generalization gap does not even tend to zero as the number of local samples increases. In this work, we demonstrate that these results are not tight and do not reflect the true impact of decentralization on the generalization error. Our contributions, summarized in Table 1, are twofold: **(1)** Placing ourselves in the framework of the above prior work, which analyzes the generalization properties of the _average of local parameters_ produced by D-SGD, we first show (Section 3) that the **generalization bounds are not impacted by decentralization**. In convex and strongly-convex cases, we even recover the exact same generalization bounds as those obtained by Hardt et al. (2016) for centralized SGD, regardless of the choice of communication graph. Our results therefore invalidate the conclusions of the previous state of the art. **(2)** After arguing that these surprising results are made possible by an implicit global averaging step which is incompatible with the constraints of decentralization, we propose (Section 4) a more suitable framework to analyze the generalization of decentralized algorithms. More specifically, we show that **the worst-case generalization error across the local parameters of agents is impacted by decentralization**. Our results reveal that the choice of communication graph nicely interpolates between the generalization of purely local SGD (no edge) and the generalization of centralized SGD (complete graph), are non-vacuous for non-connected graphs, and remain consistent with the increasing number of local data points. Before moving to our main contributions, the following section gives important reminders on Decentralized SGD, on the relationship between algorithmic stability and generalization, and discuss the assumptions considered throughout the paper. \begin{table} \begin{tabular}{|c||c||c|c|c|} \hline & SGD [H] & \multicolumn{4}{c|}{Decentralized SGD} \\ \cline{3-5} & & On-avg [S] & On-avg [**ours**] & Worst [**ours**] \\ \hline Convex & \(\mathcal{O}(\frac{T}{mn})\) & \(\mathcal{O}(\frac{T}{mn}+\frac{1}{\rho})\) & \(\mathcal{O}(\frac{T}{mn})\) & \(\mathcal{O}\Big{(}\frac{T}{n}(\frac{1}{m}+1-\rho)\Big{)}\) \\ \hline Strongly convex & \(\mathcal{O}(\frac{1}{mn})\) & \(\mathcal{O}(\frac{1}{mn}+\frac{1}{\rho})\) & \(\mathcal{O}(\frac{1}{mn})\) & \(\mathcal{O}\Big{(}\frac{1}{mn}+\frac{1-\rho}{n}\Big{)}\) \\ \hline Non-convex & \(\mathcal{O}\Big{(}\frac{T^{a}}{mn}\Big{)}\) & \(\mathcal{O}(\frac{T^{a}}{mn}+\frac{T^{a}}{\rho})\) & \(\mathcal{O}\Big{(}\frac{T^{a}}{m^{1-a}n}\Big{)}\) & \(\mathcal{O}\Big{(}\frac{T^{a}}{n}(\frac{1}{m}+1-\rho)^{1-a}\Big{)}\) \\ \hline \end{tabular} \end{table} Table 1: Simplified generalization bounds for (D)-SGD with Lipschitz and smooth loss functions. [H] indicates the results of Hardt et al. (2016), and [S] those of Sun et al. (2021). 
For simplicity, we omit constant factors. \(T\) represents the number of iterations, \(m\) the number of agents and \(n\) the number of local data points. We also have \(a\in(0,1)\) a constant that depends on the model parameters, and \(\rho\in[0,1]\) the spectral gap of the communication graph. For centralized SGD, we consider that the algorithm is run over \(mn\) data points. For D-SGD, ‘On-avg’ refers to the generalization error of the average of local models, and ‘Worst’ refers to the worst-case generalization error across the local parameters. Background ### Stability and generalization We consider the general setting of statistical learning, adapted to a decentralized framework with \(m\) agents. We consider that agent \(j\) observes examples drawn from a local data-distribution \(\mathcal{D}_{j}\) with support \(\mathcal{Z}\). The objective is to find a global model \(\theta\in\mathbb{R}^{d}\) minimizing the _population risk_ defined by: \[R(\theta)\triangleq\frac{1}{m}\sum_{j=1}^{m}\mathbb{E}_{Z\sim\mathcal{D}_{j}}[ \ell(\theta;Z)]\;,\] where \(\ell\) corresponds to a loss function. Although we cannot evaluate the population risk \(R(\theta)\), we usually have access to an empirical counterpart, computed over \(m\) local datasets \(S\triangleq(S_{1},\ldots,S_{m})\) where \(S_{j}=\{Z_{1j},\ldots,Z_{nj}\}\) the dataset of agent \(j\) with \(Z_{ij}\sim\mathcal{D}_{j}\). Note that for simplicity we consider that all local data sets are of same size \(n\), but our analysis can be extended to the case of different sizes. The resulting _empirical risk_ is given by: \[R_{S}(\theta)\triangleq\frac{1}{m}\sum_{j=1}^{m}R_{S_{j}}(\theta)\triangleq \frac{1}{mn}\sum_{j=1}^{m}\sum_{i=1}^{n}\ell(\theta;Z_{ij})\;.\] One of the most famous and studied estimator is the empirical risk minimizer, denoted by \(\widehat{\theta}_{\text{ERM}}=\arg\min_{\theta}R_{S}(\theta)\). However, in most situations this estimator cannot be directly computed and one has often access to a potentially randomized _optimization_ algorithm \(A\), taking as input the full data set \(S\), and outputting an approximate minimizer \(A(S)\in\mathbb{R}^{d}\) of the empirical risk \(R_{S}(\theta)\). In this setting, we can upper-bound the expected _excess risk_\(R(A(S))-R(\theta^{\star})\) based on two types of errors, namely the (expected) _generalization_ error, and the (expected) _optimization error_: \[\mathbb{E}_{A,S}[R(A(S))-R(\theta^{\star})]\] \[=\underbrace{\mathbb{E}_{A,S}[R(A(S))-R_{S}(A(S))]}_{\epsilon_{ gen}}+\underbrace{\mathbb{E}_{A,S}[R_{S}(A(S))-R_{S}(\widehat{\theta}_{ \text{ERM}})]}_{\epsilon_{opt}}+\underbrace{\mathbb{E}_{A,S}[R_{S}(\widehat{ \theta}_{\text{ERM}})-R(\theta^{\star})]}_{\leq 0}\;.\] The present work focuses on the control of the expected generalization error \(\epsilon_{gen}\), for which a popular approach is based on the stability analysis of the algorithm \(A\).1 Below, we recall the notion of _uniform stability_ that is widely considered for this kind of analysis. Footnote 1: Note that while we focus here on the _expected_ version of the generalization error, these tools are also well-adapted to provide _high-probability_ generalization bounds (Feldman and Vondrak, 2019). **Definition 1**.: (Uniform Stability)_. A randomized algorithm \(A\) is \(\varepsilon\)-uniformly stable if for all training datasets \(S\), \(S^{\prime}\in\mathcal{Z}^{nm}\) that only differ in one example, we have:_ \[\sup_{z\in\mathcal{Z}}\mathbb{E}_{A}[\ell(A(S);z)-\ell(A(S^{\prime});z)]\leq \varepsilon\;. 
\tag{1}\] One can then derive the following renowned lemma, linking generalization and uniform stability (Bousquet and Elisseeff, 2002; Shalev-Shwartz et al., 2010). **Lemma 1**.: (Generalization via uniform stability)_. Let \(A\) be \(\varepsilon\)-uniformly stable. Then,_ \[|\mathbb{E}_{A,S}[R(A(S))-R_{S}(A(S))]|\leq\varepsilon\;. \tag{2}\] Thanks to this lemma, it suffices to control the uniform stability of \(A\) in order to get the desired generalization bound. In our analysis, we also rely on the notion of _on-average model stability_(Lei and Ying, 2020) which has the advantage to be less restrictive than uniform stability. Below, we recall this notion, with a slight adaptation to the decentralized setting. **Definition 2**.: (On-average model stability). _Let \(S=(S_{1},\ldots,S_{m})\) with \(S_{j}=\{Z_{1j},\ldots,Z_{nj}\}\) and \(\tilde{S}=(\tilde{S}_{1},\ldots,\tilde{S}_{m})\) with \(\tilde{S}_{j}=\{\tilde{Z}_{1j},\ldots,\tilde{Z}_{nj}\}\) be two independent copies such that \(Z_{ij}\sim\mathcal{D}_{j}\) and \(\tilde{Z}_{ij}\sim\mathcal{D}_{j}\). For any \(i=1,\ldots,n\) and \(j=1,\ldots,m\), denote \(S^{(ij)}=(S_{1},\ldots,S_{j-1},S_{j}^{(i)},S_{j-1},\ldots,S_{m})\) with \(S_{j}^{(i)}=\{Z_{1j},\ldots,Z_{i-1j},\tilde{Z}_{ij},Z_{i+1j},\ldots,Z_{nj}\}\) as the data set formed from \(S\) by replacing the \(i\)-th element of the \(j\)-th agent's data set by \(\tilde{Z}_{ij}\). A randomized algorithm \(A\) is said to be on-average model \(\varepsilon\)-stable if_ \[\mathbb{E}_{S,\tilde{S},A}\Big{[}\frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}||A(S )-A(S^{(ij)})||_{2}\Big{]}\leq\varepsilon\;. \tag{3}\] ### Decentralized SGD Throughout this paper, we focus on the popular Decentralized Stochastic Gradient Descent (D-SGD) algorithm (Lian et al., 2017), which aims to find minimizers (or saddle points) of the empirical risk \(R_{S}(\theta)\) in a fully decentralized fashion. This algorithm is based on peer-to-peer communications between agents, where a graph is used to encode which pairs of agents (referred as nodes) can interact together. More specifically, this _communication graph_ is represented by a weight matrix \(W\in[0,1]^{m\times m}\), where \(W_{jk}>0\) gives the weight that agent \(j\) gives to messages received from agent \(k\), while \(W_{jk}=0\) (no edge) means that \(j\) does not receive messages from \(k\). ``` 0: Initialize \(\forall j\), \(\theta_{j}^{(0)}=\theta^{(0)}\in\mathbb{R}^{d}\), iterations \(T\), stepsizes \(\{\eta_{t}\}_{t=0}^{T-1}\), weight matrix \(W\). for\(t=0,\ldots,T-1\)do for each node \(j=1,\ldots,m\) (in parallel) do Sample \(I_{j}^{t}\sim\mathcal{U}\{1,\ldots,n\}\)\(\triangleright\) Random sample selection \(\theta_{j}^{(t+\frac{1}{2})}\leftarrow\theta_{j}^{(t)}-\eta_{t}\nabla\ell( \theta_{j}^{(t)};Z_{I_{j}^{t}j})\)\(\triangleright\) Stochastic gradient step \(\theta_{j}^{(t+1)}\leftarrow\sum_{k=1}^{m}W_{jk}\theta_{k}^{(t+\frac{1}{2})}\)\(\triangleright\) Neighborhood averaging step endfor endfor ``` **Algorithm 1** Decentralized SGD (Lian et al., 2017) D-SGD is summarized in Algorithm 1. At iteration \(t\), each agent \(j\) first updates its local estimate \(\theta_{j}^{(t)}\) based on \(\nabla\ell(\theta_{j}^{(t)};Z_{I_{j}^{t}j})\), the stochastic gradient of \(\ell\) evaluated at \(\theta_{j}^{(t)}\) with \(I_{j}^{t}\sim\mathcal{U}\{1,\ldots,n\}\) the index of the data point uniformly selected by agent \(j\) from its local dataset \(S_{j}\) at iteration \(t\). 
Then, each agent aggregates its current parameter value with its neighbors according to the weight matrix \(W\). Notice that at the end of the \(T\) iterations, each agent has its own model \(\theta_{j}^{(T)}\). ### Assumptions We focus on the classic setup of Hardt et al. (2016), also considered by Sun et al. (2021) in prior work on the generalization analysis of D-SGD. These works rely on the standard assumptions of \(L\)-Lipschitzness and \(\beta\)-Smoothness of the loss function. **Assumption 1**.: (\(L\)-Lipschitzness). _We assume that the loss function \(\ell\) is differentiable w.r.t \(\theta\) and uniformly Lipschitz i.e. \(\exists L>0\) such that \(\forall\theta,\theta^{\prime}\in\mathbb{R}^{d},z\in\mathcal{Z}\), \(|\ell(\theta;z)-\ell(\theta^{\prime};z)|\leq L\|\theta-\theta^{\prime}\|_{2}\), or equivalently, \(\|\nabla\ell(\theta;z)\|_{2}\leq L\)._ **Assumption 2**.: (\(\beta\)-Smoothness). _The loss function \(\ell\) is also \(\beta\)-smooth i.e. \(\exists\beta>0\) such that \(\forall\theta,\theta^{\prime}\in\mathbb{R}^{d},z\in\mathcal{Z}\), \(\|\nabla\ell(\theta;z)-\nabla\ell(\theta^{\prime};z)\|_{2}\leq\beta\|\theta- \theta^{\prime}\|_{2}\)._ **Remark 1**.: _Although standard, Assumptions 1 and 2 could be relaxed. The \(L\)-Lipschitz condition can be avoided by considering an analysis similar to the one proposed in Lei and Ying (2020), and more generally they can both be relaxed by imposing instead a self-bounding property to the gradients (Schliserman and Koren, 2022; Taheri and Thrampoulidis, 2023). We expect the conclusions of this paper to be the same for the analysis with relaxed hypotheses, and keep it for future investigations._ An interesting property of the \(L\)-Lipschitz condition is that it links the _on-average model stability_ (Def. 2) directly to the generalization error (Lei and Ying, 2020). **Lemma 2**.: (Generalization via on-average model stability). _Let \(A\) be on-average model \(\varepsilon\)-stable. Then, under Assumption 1, \(|\mathbb{E}_{A,S}[R(A(S))-R_{S}(A(S))]|\leq L\varepsilon\)._ Our last assumption concerns the weight matrix \(W\). It is again very standard and used extensively in the literature of decentralized optimization (see e.g., Lian et al., 2017; Koloskova et al., 2020). **Assumption 3**.: (Mixing matrix). \(W\) _is doubly stochastic, i.e., \(\mathbf{1}^{T}W=W\mathbf{1}=\mathbf{1}\) where \(\mathbf{1}\) is the vector (of size \(m\)) that contains only ones._ Note that contrary to what is usually considered by the literature, the mixing matrix \(W\) does not have to be connected. As an example, we allow \(W\) to be the identity matrix, which would reduce D-SGD to \(m\) independent local SGD. ## 3 On-average generalization error This section presents our first main contribution. Following prior work (Sun et al., 2021; Zhu et al., 2022; Taheri and Thrampoulidis, 2023), we prove generalization bounds for the _average of final iterates_\(\bar{\theta}^{(T)}=\frac{1}{m}\sum_{j=1}^{m}\theta_{j}^{(T)}\) of D-SGD.2 Our bounds contradict (and improve upon) the recent results of Sun et al. (2021), who claimed that the generalization error of D-SGD was badly impacted by sparse communication graphs and obtained vacuous bounds for non-connected graphs. Remarkably, our results demonstrate that the generalization error of D-SGD is in fact _not_ impacted by the choice of communication graph, or by decentralization at all, as we essentially recover the bounds proved by Hardt et al. (2016) for _centralized_ SGD. 
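Before stating the bounds, it may help to have a concrete picture of how the average iterate \(\bar{\theta}^{(T)}\) is produced. The following NumPy sketch of Algorithm 1 is our own illustration, not the authors' code; the toy data, step size and mixing matrix are arbitrary choices made only to have a runnable example.

```python
import numpy as np

def dsgd(grad, data, W, T, eta, theta0, rng):
    """Minimal sketch of Algorithm 1 (D-SGD).

    grad(theta, z): gradient of the loss at parameter theta on sample z
    data: list of m local datasets (one list of samples per agent)
    W:    (m, m) doubly stochastic mixing matrix (Assumption 3)
    eta:  callable t -> step size eta_t
    Returns the m local models theta_j^(T) and their average.
    """
    m = len(data)
    theta = np.tile(theta0, (m, 1)).astype(float)        # one row per agent
    for t in range(T):
        half = np.empty_like(theta)
        for j in range(m):
            i = rng.integers(len(data[j]))               # I_j^t ~ U{1, ..., n}
            half[j] = theta[j] - eta(t) * grad(theta[j], data[j][i])
        theta = W @ half                                  # neighborhood averaging
    return theta, theta.mean(axis=0)

# Toy run: least-squares regression, m = 4 agents on a ring, n = 50 samples each.
rng = np.random.default_rng(0)
m, n, d = 4, 50, 3
w_true = rng.normal(size=d)
data = []
for _ in range(m):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    data.append(list(zip(X, y)))

def grad(theta, z):
    x_i, y_i = z
    return (x_i @ theta - y_i) * x_i                      # squared-loss gradient

W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])                  # doubly stochastic ring

local_models, avg_model = dsgd(grad, data, W, T=500,
                               eta=lambda t: 0.05, theta0=np.zeros(d), rng=rng)
print(np.round(avg_model - w_true, 2))                    # entries close to zero
```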
Footnote 2: Considering the average of final iterates is also common when analyzing the optimization error of D-SGD, see for instance (Koloskova et al., 2020; Ying et al., 2021) Throughout this section, we define \(A(S)=\bar{\theta}^{(T)}\). ### Convex loss functions **Theorem 1**.: _Assume that the loss function \(\ell(\cdot;z)\) is convex, \(L\)-lipschitz (Assumption 1) and \(\beta\)-smooth (Assumption 2). Let \(A(S)\) be the average of final iterates of D-SGD run for \(T\) iterations and with \(\eta_{t}\leq 2/\beta\). Then, D-SGD has a bounded expected generalization error:_ \[|\mathbb{E}_{A,S}[R(A(S))-R_{S}(A(S))]|\leq\frac{2L^{2}\sum_{t=0}^{T-1}\eta_{t }}{mn}\;. \tag{4}\] Sketch of proof (see Appendix B.1 for details).: Prior results (Sun et al., 2021; Taheri and Thrampoulidis, 2023) are suboptimal because they try to mimic state-of-the-art _optimization error_ analyses (Kong et al., 2021) which require to control a _consensus distance_ term \(\sum_{k}\|\theta_{k}^{(t)}-\bar{\theta}^{(t)}\|\). This term, important to ensure the minimization of the empirical risk, is small when all local parameters are close to each other, which is the case only if the communication graph is sufficiently connected. In the following proof, we provide a tighter (and simpler) analysis, which does not require the control of such consensus distance term. Denote by \(\bar{\theta}^{(T)}=\frac{1}{m}\sum_{k=1}^{m}\theta_{k}^{(T)}\), and \(\bar{\bar{\theta}}^{(T)}=\frac{1}{m}\sum_{k=1}^{m}\tilde{\theta}_{k}^{(T)}\), the average of final iterates for D-SGD run over two data sets \(S\) and \(S^{\prime}\) that differ in only one sample. Noticing that \(\|\bar{\theta}^{(t+1)}-\bar{\bar{\theta}}^{(t+1)}\|_{2}\leq\bar{\delta}_{t+1 }\triangleq\frac{1}{m}\sum_{k=1}^{m}\|\theta_{k}^{(t+1)}-\tilde{\theta}_{k}^ {(t+1)}\|_{2}\), the objective of the proof is to find a recurrence relation over the iterates of \(\mathbb{E}[\bar{\delta}_{t}]\) in order to prove that \(\mathbb{E}[\bar{\delta}_{T}]\leq\frac{2L\sum_{t=0}^{T-1}\eta_{t}}{mn}\) and finally use Lemma 2 to conclude the proof. Using triangle inequalities and the double stochasticity of \(W\) (Assumption 3), we show that \(\bar{\delta}_{t+1}\leq\frac{1}{m}\sum_{k=1}^{m}\|G_{k}^{(t)}(\theta_{k}^{(t)} )-\tilde{G}_{k}^{(t)}(\tilde{\theta}_{k}^{(t)})\|_{2}\), where \(G_{k}^{(t)}\) (respectively \(\tilde{G}_{k}^{(t)}\)) is the standard stochastic gradient update rule (Def. 4 in Appendix A) of agent \(k\) at time \(t\), computed over data set \(S\) (respectively \(S^{\prime}\)). Importantly, these update rules do not depend on \(W\). Moreover, since only one sample is different between \(S\) ans \(S^{\prime}\), only one agent has different update rules. At this point of the proof, we are in a scheme similar to that of Hardt et al. (2016) who proposes different results to bound the difference \(\|G_{k}^{(t)}(\theta_{k}^{(t)})-\tilde{G}_{k}^{(t)}(\tilde{\theta}_{k}^{(t)} )\|_{2}\) (Appendix A). Thanks to them, we show that with probability \(1-\frac{1}{n}\), \(\bar{\delta}_{t+1}\leq\bar{\delta}_{t}\), and with probability \(\frac{1}{n}\), \(\bar{\delta}_{t+1}\leq\bar{\delta}_{t}+\frac{2L\eta_{t}}{m}\). Using the fact that \(\bar{\delta}_{0}=0\), it follows that: \[\mathbb{E}[\bar{\delta}_{T}]\leq\mathbb{E}[\bar{\delta}_{T}]+\frac{2L\eta_{t} }{mn}\leq\ldots\leq 0+\frac{2L\sum_{t=0}^{T-1}\eta_{t}}{mn}\;,\] which concludes the proof. Theorem 1 shows that for convex functions, we recover the exact same generalization bound for D-SGD as the one obtained by Hardt et al. 
(2016) for centralized SGD, a result known to be optimal (Zhang et al., 2022). This contradicts the recent results of Sun et al. (2021), who obtained bounds of order \(\mathcal{O}(\frac{T}{mn}+\frac{1}{\rho})\) for D-SGD, where \(\rho\in[0,1]\) is the spectral gap of \(W\). This former result was therefore claiming that the generalization error is strongly impacted by the connectivity of \(W\) (tending to infinity as the graph becomes sparser), which is in fact not true as demonstrated by our result. Strikingly, their bound is not even consistent in the sense that it does not tend to \(0\) as \(n\) grows. The fact that our generalization bound for D-SGD matches the one of centralized SGD may seem surprising at first, but it simply comes from considering the average final iterate \(\bar{\theta}^{(T)}=\frac{1}{m}\sum_{j=1}^{m}\theta_{j}^{(T)}\). Indeed, this makes the algorithm more stable, erasing the impact of the communication graph \(W\) in a somewhat artificial way (since a global averaging step is incompatible with the decentralized scenario). In Section 4, we will introduce what we think is a more suitable notion of generalization error for D-SGD, and show that this quantity does depend on the choice of communication graph. ### Strongly convex loss functions We now consider strongly convex functions. As such functions cannot be Lipschitz (Assumption 1) over \(\mathbb{R}^{d}\), we restrict our analysis to the optimization over a convex compact set \(\Theta\) as done by Hardt et al. (2016). Denoting by \(\Pi_{\Theta}(\bar{\theta})=\arg\min_{\theta\in\Theta}\|\bar{\theta}-\theta\|\) the Euclidean projection onto \(\Theta\), we consider the _projected_ extension of the D-SGD algorithm, which replaces the stochastic gradient update step from Algorithm 1 by: \[\theta_{j}^{(t+\frac{1}{2})}\leftarrow\Pi_{\Theta}\Big{(}\theta_{j}^{(t)}- \eta_{t}\nabla\ell(\theta_{j}^{(t)};Z_{I_{j}^{t}j})\Big{)}\;.\] Before moving to our generalization result, we note that this algorithm is well-suited to solving Tikhonov regularization problems, which makes it quite natural to consider in practice. **Theorem 2**.: _Assume that the loss function \(\ell(\cdot;z)\) is \(\mu\)-strongly convex, \(L\)-Lipschitz over \(\Theta\) (Assumption 1) and \(\beta\)-smooth (Assumption 2). Let \(A(S)\) be the average of final iterates of the projected D-SGD run for \(T\) iterations and with constant stepsize \(\eta\leq 1/\beta\). Then, (projected) D-SGD has a bounded expected generalization error:_ \[|\mathbb{E}_{A,S}[R(A(S))-R_{S}(A(S))]|\leq\frac{4L^{2}}{\mu mn}\;. \tag{5}\] The proof of Theorem 2, provided in Appendix B.2, essentially follows the same scheme as the one derived above for convex functions. Once again, the bound matches the optimal one obtained for centralized SGD with strongly convex functions in Hardt et al. (2016). Note that contrary to the general convex case, the generalization bound for strongly convex functions is independent of the number of iterations \(T\), which makes these problems more stable and less likely to overfit. In the work of Sun et al. (2021), the authors have an additional error term in \(\mathcal{O}(\frac{1}{\mu\rho})\). Their generalization bound is therefore _not_ tending to \(0\) as the number of samples increases and is vacuous when the communication graph is not connected (\(\rho=0\)). This again illustrates the suboptimality of these previous results and the major gain brought by ours. 
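As a concrete instance of the projection step used by the projected variant of D-SGD above, the following sketch (ours) takes \(\Theta\) to be a Euclidean ball, for which the projection is a simple rescaling; any convex compact \(\Theta\) with a computable projection works equally well.

```python
import numpy as np

def project_ball(theta, radius):
    """Euclidean projection onto Theta = {theta : ||theta||_2 <= radius}."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else (radius / norm) * theta

def projected_local_step(theta_j, g_j, eta, radius):
    """One local step of projected D-SGD: Pi_Theta(theta_j - eta * g_j)."""
    return project_ball(theta_j - eta * g_j, radius)

print(projected_local_step(np.array([3.0, 4.0]), np.zeros(2), eta=0.1, radius=1.0))
# [0.6 0.8]: the point is rescaled back onto the unit ball
```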
### Non-convex loss functions Hereafter we provide a generalization error bound for _bounded_ non-convex functions. The proof is deferred to Appendix B.3. **Theorem 3**.: _Assume that \(\ell(\cdot;z)\in[0,1]\) is an \(L\)-Lipschitz (Assumption 1) and \(\beta\)-smooth (Assumption 2) loss function for every \(z\). Let \(A(S)\) be the average of final iterates of D-SGD run for \(T\) iterations and with monotonically non-increasing step sizes \(\eta_{t}\leq\frac{c}{t+1}\), \(c>0\). Then, D-SGD has a bounded expected generalization error:_ \[|\mathbb{E}_{A,S}[R(A(S))-R_{S}(A(S))]|\leq(1+\frac{1}{\beta c})(2cL^{2})^{ \frac{1}{\beta c+1}}\frac{T^{\frac{\beta c}{Sc+1}}}{m^{\frac{1}{\beta c+1}}n}\;. \tag{6}\] When omitting constant factors in \(\beta\), \(c\) and \(L\), the non-convex generalization bound becomes of order \(\mathcal{O}(T^{\frac{\beta c}{Sc+1}}/nm^{\frac{1}{\beta c+1}})\). Several comments are in order. First, contrary to the convex cases, our bound does not exactly match the one of Hardt et al. (2016). Indeed, when centralized SGD is run over \(mn\) data points, Hardt et al. (2016) obtain a bound of order \(\mathcal{O}(T^{\frac{\beta c}{Sc+1}}/nm)\) which is strictly better than our bound. This comes from the fact that the proof technique relies on characterizing the number of steps that occur before the algorithm picks the data point that differs in \(S\) and \(S^{\prime}\). In centralized SGD, the probability to pick this point is \(1/mn\) at each iteration, while it is only \(1/n\) for D-SGD. Importantly, this means that the weaker bound is not directly due to decentralization, but rather to the fact that D-SGD selects \(m\) samples at each iteration (instead of only one for SGD). A fairer comparison would thus be to compare D-SGD to centralized SGD with batch size \(m\). On a more positive note, our generalization bound is still independent of the choice of graph and is tending towards \(0\) as \(n\) and \(m\) increase. This significantly improves the results obtained for D-SGD in prior work, notably the one of Sun et al. (2021), who obtained a bound of order \(\mathcal{O}(T^{\frac{\beta c}{Sc+1}}(1/nm+C_{\rho}))\), where \(C_{\rho}\) tends to infinity as the spectral gap \(\rho\) of \(W\) tends to \(0\) (e.g., as the communication graph gets sparser or as \(m\) goes to infinity). **Remark 2**.: _It has been recently shown (Zhang et al., 2022) that the proof of Hardt et al. (2016) is not tight in all regimes. However, using the refined proof techniques from Zhang et al. (2022) would not impact our main conclusions regarding the impact of the communication graph._ From on-average to worst-case generalization error In the previous section, we saw that if we consider the output of D-SGD to be the average of the \(m\) final iterates, the generalization error is not impacted by decentralization. Nevertheless, this implicitly assumes that a global averaging step is performed at the end of the algorithm, which is not compatible with the constraints of decentralization and would be very costly when the number of agents is large. In this section, we propose a new notion of generalization error that is better-suited to decentralized algorithms, which output \(m\) different models (one per agent). To this aim, we denote by \(A_{1}(S),\ldots,A_{m}(S)\) the \(m\) different outputs generated by a decentralized algorithm \(A\) and we propose to control the _worst-case expected generalization error given by_ \[\sup_{j\in\{1,\ldots,m\}}|\mathbb{E}_{A,S}[R(A_{j}(S))-R_{S}(A_{j}(S))]|\;. 
\tag{7}\] Similarly to classical analyses, we can propose a new notion of stability adapted to the decentralized framework and sufficient to control the above worst-case generalization error. **Definition 3**.: (Worst-model stability). _Consider the notations of Def. 2. A decentralized algorithm \(A\) with \(m\) outputs is said worst-model \(\varepsilon\)-stable if_ \[\mathbb{E}_{S,\tilde{S},A}\Big{[}\frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}\sup _{k\in\{1,\ldots,m\}}\|A_{k}(S)-A_{k}(S^{(ij)})\|_{2}\Big{]}\leq\varepsilon\;. \tag{8}\] **Lemma 3**.: (Worst-case generalization via worst-model stability)_. Let \(A\) be worst-model \(\varepsilon\)-stable. Then, under Assumption 1, \(\sup_{j\in\{1,\ldots,m\}}|\mathbb{E}_{A,S}[R(A_{j}(S))-R_{S}(A_{j}(S))]|\leq L\varepsilon\)._ This lemma is a key element of our analysis as it states that it is sufficient to control the worst-model stability in order to control the worst-case generalization error. Its proof, analogous to the one of Lemma 2, can be found in Appendix C.1. In the rest of this section, we provide bounds controlling the worst-case generalization error for convex, non-convex and strongly convex loss functions. ### Convex loss functions **Theorem 4**.: _Assume that the loss function \(\ell(\cdot;z)\) is convex, \(L\)-Lipschitz (Assumption 1) and \(\beta\)-smooth (Assumption 2). Let \(A_{1}(S),\ldots,A_{m}(S)\) be the \(m\) final iterates of D-SGD run for \(T\) iterations, with communication graph \(W\) satisfying Assumption 3 and with \(\eta_{t}\leq 2/\beta\). Then, D-SGD has a bounded worst-case expected generalization error:_ \[\sup_{j\in\{1,\ldots,m\}}|\mathbb{E}_{A,S}[R(A_{j}(S))-R_{S}(A_{j}(S))]|\leq \frac{2L^{2}\sum_{t=0}^{T-1}\eta_{t}}{mn}\times\sum_{j=1}^{m}\sup_{k\in\{1, \ldots,m\}}W_{kj}\;. \tag{9}\] Sketch of proof (see Appendix C.2 for details).: The proof is analogous to the one obtained for the averaged model (Theorem 1). The main difference resides in the fact that we now control \(\delta_{t+1}^{\sup}\triangleq\sup_{k=1,\ldots,m}\|\theta_{k}^{(t+1)}-\tilde{ \theta}_{k}^{(t+1)}\|_{2}\) instead of \(\bar{\delta}_{t+1}=\frac{1}{m}\sum_{k=1}^{m}\|\theta_{k}^{(t+1)}-\tilde{ \theta}_{k}^{(t+1)}\|_{2}\). Let \((i,j)\) denote the \(i\)-th sample of agent \(j\) that has been replaced by another one drawn from the same distribution. Our goal is then equivalent to show that \(\mathbb{E}[\delta_{T}^{\sup}]\leq\frac{2L\sum_{t}\eta_{t}}{n}\sup_{k}W_{kj}\). Indeed, in that case it suffices to sum this quantity over \(i\) and \(j\) and divide it by \(mn\) to show that D-SGD is worst-model \(\varepsilon\)-stable (Def. 3) with \(\varepsilon=\frac{2L^{2}\sum_{t=0}^{T-1}\eta_{t}}{mn}\sum_{j=1}^{m}\sup_{k}W_{kj}\). A direct use of Lemma 3 will conclude the proof. Again, the analysis is split over two probabilistic events. With probability \(1-\frac{1}{n}\), the swapped sample \((i,j)\) is not selected by agent \(j\), and it can be proved using tools from Theorem 1 that \(\delta_{t+1}^{\sup}\leq\delta_{t}^{\sup}\). With probability \(\frac{1}{n}\) however, sample \((i,j)\) is selected and in that case we show that \(\delta_{t+1}^{\sup}\leq\delta_{t}^{\sup}+2L\eta_{k}\sup_{k}W_{kj}\). In the end we have \(\mathbb{E}[\delta_{T}^{\sup}]\leq\mathbb{E}[\delta_{T-1}^{\sup}]+\frac{2L\eta _{T-1}}{n}\sup_{k}W_{kj}\) which can be unraveled until \(\delta_{0}^{\sup}=0\) to obtain the desired result. In contrast to the results of Section 3, the bound (9) shows that the worst-case generalization error over local models is impacted by the communication graph. 
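To get a feel for the graph-dependent factor \(\sum_{j=1}^{m}\sup_{k}W_{kj}\) appearing in (9), the following snippet (our own illustration) evaluates it, together with the spectral gap used in Corollary 1 below, for a few standard mixing matrices.

```python
import numpy as np

def connectivity_factor(W):
    """The graph-dependent factor sum_j max_k W_kj from Theorem 4."""
    return W.max(axis=0).sum()

def spectral_gap(W):
    """rho = 1 - |lambda_2(W)| for a doubly stochastic W."""
    eig = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eig[1]

m = 8
complete = np.full((m, m), 1.0 / m)                   # global averaging at each step
identity = np.eye(m)                                  # no communication between agents
ring = sum(np.roll(np.eye(m), s, axis=1) for s in (-1, 0, 1)) / 3.0

for name, W in [("complete", complete), ("ring", ring), ("identity", identity)]:
    print(name, connectivity_factor(W), round(spectral_gap(W), 3))
# complete graph: factor 1, recovering the centralized bound of Theorem 1
# identity matrix: factor m, recovering the bound of SGD run over n samples
```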
As expected, we observe that when the graph is complete with uniform weights \(1/m\) (global averaging at each step), we recover the exact same generalization bound as the one of Theorem 1. On the other hand, when we take \(W\) to be the identity matrix (no interaction between the agents), the bound becomes \(2L^{2}\sum_{t=0}^{T-1}\eta_{t}/n\), which corresponds to the one obtained by Hardt et al. (2016) for centralized SGD run over a dataset of size \(n\). The quantity \(m^{-1}\sum_{j=1}^{m}\sup_{k\in\{1,\ldots,m\}}W_{kj}\) measures the connectivity of the graph \(W\). We can relate it to the classical notion of _spectral gap_, which appears in most of the optimization error bounds (Lian et al., 2017; Ying et al., 2021). Recall that the spectral gap \(\rho\in[0,1]\) of a doubly stochastic matrix \(W\) is \(\rho=1-|\lambda_{2}(W)|\), \(\lambda_{2}\) being the second-largest eigenvalue of \(W\) in module. The following corollary gives a slightly looser bound that explicitly depends on \(\rho\). **Corollary 1**.: _Under the conditions of Theorem 4, we also have:_ \[\sup_{j\in\{1,\ldots,m\}}|\mathbb{E}_{A,S}[R(A_{j}(S))-R_{S}(A_{j}(S))]|\leq \Big{(}\frac{1}{mn}+\frac{1-\rho}{n}\Big{)}2L^{2}\sum_{t=0}^{T-1}\eta_{t}\;. \tag{10}\] Proof.: Let \(\widetilde{W}=W-\frac{\mathbf{1}\mathbf{1}^{T}}{m}\) with \(\mathbf{1}\) the unit vector of size \(m\). Recall that the largest eigenvalue value \(\lambda_{1}(W)\) of \(W\) is equal to \(1\) in module and is associated to the eigenvector \(\frac{1}{\sqrt{m}}\mathbf{1}\). Hence, the largest eigenvalue of \(\widetilde{W}\) in module is equal to \(\lambda_{1}(\widetilde{W})=\lambda_{2}(W)\). We first notice that \(\sum_{j=1}^{m}\sup_{k}W_{kj}\leq m\|W\|_{\max}\) with \(\|W\|_{\max}=\sup_{kj}\{W_{kj}\}\) the max norm. Then, a simple use of triangular inequality and matrix norm equivalence gives: \[\|W\|_{\max}=\Big{\|}\widetilde{W}+\frac{\mathbf{1}\mathbf{1}^{T}}{m}\Big{\|} _{\max}\leq\Big{\|}\frac{\mathbf{1}\mathbf{1}^{T}}{m}\Big{\|}_{\max}+\| \widetilde{W}\|_{\max}\leq\frac{1}{m}+\|\widetilde{W}\|_{2}=\frac{1}{m}+ \lambda_{2}(W)\] By definition of \(\rho\) we can conclude that \(\sum_{j=1}^{m}\sup_{k}W_{kj}\leq 1+m(1-\rho)\). Plugging this result into Equation (9) finishes the proof. Again, when \(W\) is the complete graph with uniform weights, we have \(\rho=1\) and we recover the result of Theorem 1. On the other hand, when \(W\) is the identity matrix, \(\rho=0\) and we recover the rate \(\mathcal{O}(\sum_{t=0}^{T-1}\eta_{t}/n)\) of centralized SGD run over \(n\) samples. Importantly, the theorem shows that even in the "worst-case" setting, the bound does not become vacuous for non-connected communication graphs. ### Strongly-convex loss functions **Theorem 5**.: _Assume that the loss function \(\ell(\cdot;z)\) is \(\mu\)-strongly convex, \(L\)-Lipschitz over \(\Theta\) (Assumption 1) and \(\beta\)-smooth (Assumption 2). Let \(A_{1}(S),\ldots,A_{m}(S)\) be the \(m\) final iterates of D-SGD run _for \(T\) iterations, with communication graph \(W\) satisfying Assumption 3 and with constant stepsize \(\eta\leq 1/\beta\). Then, (projected) D-SGD has a bounded worst-case expected generalization error:_ \[\sup_{j\in\{1,\ldots,m\}}|\mathbb{E}_{A,S}[R(A_{j}(S))-R_{S}(A_{j}(S))]|\leq \frac{4L^{2}}{\mu mn}\times\sum_{j=1}^{m}\sup_{k\in\{1,\ldots,m\}}W_{kj}\;. \tag{11}\] The proof of Theorem 5 can be found in Appendix C.3. 
The same way we proved Corollary 1, we can obtain a looser bound involving the spectral gap \(\rho\) of \(W\), of order \(\mathcal{O}\Big{(}\frac{1}{mn}+\frac{1-\rho}{n}\Big{)}\). Like for the convex case, the more (uniformly) connected is the graph, the closer to \(1\) is \(\rho\) and the better is the generalization bound, tending towards the one obtained in Theorem 2. Contrary to what can be found in the literature Sun et al. (2021); Taheri and Thrampoulidis (2023), the bound is tending to \(0\) as \(n\) increases and is not overly impacted by poorly connected graphs. ### Non-convex loss functions **Theorem 6**.: _Assume that \(\ell(\cdot;z)\in[0,1]\) is an \(L\)-Lipschitz (Assumption 1) and \(\beta\)-smooth (Assumption 2) loss function for every \(z\). Let \(A_{1}(S),\ldots,A_{m}(S)\) be the \(m\) final iterates of D-SGD run for \(T\) iterations, with communication graph \(W\) satisfying Assumption 3 and with monotonically non-increasing step sizes \(\eta_{t}\leq\frac{c}{t+1}\), \(c>0\). Then, D-SGD has a bounded worst-case expected generalization error:_ \[\sup_{j\in\{1,\ldots,m\}}|\mathbb{E}_{A,S}[R(A_{j}(S))-R_{S}(A_{j}(S))]|\leq \tilde{C}\cdot\frac{T^{\frac{\beta c}{\beta c+1}}}{n}\times\|W\|_{\max}^{ \frac{1}{\beta c+1}}\;, \tag{12}\] _where \(\|W\|_{\max}=\sup_{kj}\{W_{kj}\}\) is the max norm and \(\tilde{C}=(1+\frac{1}{\beta c})(2cL^{2})^{\frac{1}{\beta c+1}}\)._ The proof of Theorem 6 is detailed in Appendix C.4. Based on the proof of Corollary 1, we can again obtain a looser bound of order \(\mathcal{O}\Big{(}\frac{T^{n}}{n}(\frac{1}{m}+1-\rho)^{1-a}\Big{)}\) with \(a=\frac{\beta c}{\beta c+1}<1\). We can also recover, in the same way as for the convex cases of the previous sections, the result of Theorem 3 or the bound for centralized SGD with \(n\) data points. Finally, we can note that the rate for non-convex functions is slightly more impacted by the choice of graph \(W\). Indeed, since \(\|W\|_{\max}\leq 1\) (Assumption 3) and \(1/(\beta c+1)<1\), we have \(m^{-1}\sum_{j=1}^{m}\sup_{k\in\{1,\ldots,m\}}W_{kj}\leq\|W\|_{\max}\leq\|W\|_{ \max}^{\frac{1}{\beta c+1}}\). ## 5 Conclusion In this paper, we provide a new generalization error analysis for D-SGD with Lipschitz and smooth loss functions based on algorithm stability. Our results show that previous analyses were very loose and led to incorrect conclusions regarding the impact of the communication graph (and, more generally, of decentralization) on generalization. Our bounds improve upon previous results and address their inconsistencies, such as generalization errors that do not decay with the number of local data points. We then present a new notion of generalization error that is better suited to the decentralized setting. Rather than looking at the generalization performance of the averaged model, we propose to control the worst error made by an agent with its local model. This new approach sheds light on the true impact of decentralization on generalization, which is not as dramatic as what was suggested by previous work. Overall, our bounds match those obtained for centralized SGD, and the simplicity of the proposed proofs should make them adaptable to other centralized proof techniques. Our results, however, should not be completely dissociated from the optimization error: what is the point of generalizing well, if the empirical risk is badly minimized? In particular, **although generalization is mildly affected by the choice of the graph, the optimization error remains **heavily impacted by it**. 
Future investigations could therefore seek to provide a better understanding of the generalization-optimization trade-offs in decentralized learning. More generally, possible future work includes the relaxation of certain assumptions, the construction of lower bounds and the development of data-dependent generalization bounds that can highlight the impact of data heterogeneity between agents. ## Acknowledgments This work was supported in part by the French National Research Agency (ANR) through grant ANR-20-CE23-0015 (Project PRIDE), and by the Groupe La Poste, sponsor of the Inria Foundation, in the framework of the FedMalin Inria Challenge. Batiste Le Bars is supported by an Inria-EPFL fellowship.
2304.14922
Supervised and Unsupervised Deep Learning Approaches for EEG Seizure Prediction
Epilepsy affects more than 50 million people worldwide, making it one of the world's most prevalent neurological diseases. The main symptom of epilepsy is seizures, which occur abruptly and can cause serious injury or death. The ability to predict the occurrence of an epileptic seizure could alleviate many risks and stresses people with epilepsy face. We formulate the problem of detecting the preictal (or pre-seizure) state with reference to normal EEG as a precursor to an incoming seizure. To this end, we developed several supervised deep learning approaches to identify preictal EEG from normal EEG. We further develop novel unsupervised deep learning approaches to train the models on only normal EEG and detect pre-seizure EEG as an anomalous event. These deep learning models were trained and evaluated on two large EEG seizure datasets in a person-specific manner. We found that both supervised and unsupervised approaches are feasible; however, their performance varies depending on the patient, approach and architecture. This new line of research has the potential to develop therapeutic interventions and save human lives.
Zakary Georgis-Yap, Milos R. Popovic, Shehroz S. Khan
2023-04-24T05:21:10Z
http://arxiv.org/abs/2304.14922v3
# Supervised and Unsupervised Deep Learning Approaches for EEG Seizure Prediction ###### Abstract Epilepsy affects more than 50 million people worldwide, making it one of the world's most prevalent neurological diseases. The main symptom of epilepsy is seizures, which occur abruptly and can cause serious injury or death. The ability to predict the occurrence of an epileptic seizure could alleviate many risks and stresses people with epilepsy face. Most of the previous work has focused on seizure detection; we pivot our focus to the seizure prediction problem. We formulate the problem of detecting the preictal (or pre-seizure) state with reference to normal EEG as a precursor to an incoming seizure. To this end, we developed several supervised deep learning approaches to identify preictal EEG from normal EEG. We further develop novel unsupervised deep learning approaches to train the models on only normal EEG and detect pre-seizure EEG as an anomalous event. These deep learning models were trained and evaluated on two large EEG seizure datasets in a person-specific manner. We found that both supervised and unsupervised approaches are feasible; however, their performance varies depending on the patient, approach and architecture. This new line of research has the potential to develop therapeutic interventions and save human lives. **Keywords:** deep learning, intracranial EEG, seizure prediction, signal processing ## 1 Introduction Epilepsy is one of the most prevalent neurological disorders in the world, affecting approximately 1% of the world's population [1, 2, 3]. Epilepsy is characterized by spontaneously occurring seizures, which could lead to bodily injuries, fractures, burns [4], and death in many cases [5]. People with epilepsy are mostly concerned with the fear of incoming seizures [6]. Therefore, there is a dire need to reduce the unpredictability of seizures to reduce the risk of injuries and improve their quality of life. Electroencephalography (EEG) is normally used to analyze brain activity pertaining to seizures [7]. Brain activity in people with epilepsy can be separated into four states: regular brain activity (interictal), brain activity before the seizure (preictal), brain activity during the seizure (ictal), and brain activity immediately after a seizure (postictal). The preictal state can contain observable physiological changes prior to the onset of a seizure [8] that can be used to predict an incoming seizure. The capability to predict an epileptic seizure could alleviate the risks patients face [9]; it would give patients the time to get help and greatly reduce the risk of injury. However, the biggest challenge in designing seizure prediction approaches is that there is no universally agreed-upon preictal period length (PPL). Bandarabadi et al. [10] investigated the optimal PPL for seizure prediction using statistical analysis and found that the optimal PPL varies for each patient and for each seizure within a patient [10]. Most of the work in this area is around seizure detection [11], which involves detecting a seizure after its occurrence. Although this is important, contemporary work must aim to predict seizures before their onset, as it can save patients' lives and improve their quality of life. Our main hypothesis is that the correct detection of the preictal state against normal brain activity (through supervised or unsupervised approaches) can be a strong indicator of an incoming epileptic seizure.
In the supervised setting, a binary classifier can be trained between interictal and preictal periods, whereas in the unsupervised setting, a classifier can be trained on only normal (interictal) EEG and the preictal state can be identified as an anomaly. Our main contributions are:

* Presented supervised and new unsupervised deep learning approaches to predict epileptic seizures.
* Experimentally determined the PPL and window size, rather than relying on heuristics or domain knowledge.
* Performed leave-one-seizure-out cross-validation for better generalization of results.
* Performed all the experiments in a patient-specific manner to avoid data leakage and overestimation of results, and to emphasize individualized outcomes.

Our results showed that the unsupervised approaches were able to obtain comparable results to supervised seizure prediction in many patients. However, across all implementations there was not one clear best-performing model. This paper is an extension of our preliminary work [12] that introduced a supervised convolutional neural network (CNN) on the SWEC-ETHZ dataset [13]. In this paper, we present two new supervised approaches, CNN-Long Short-Term Memory (CNN-LSTM) and Temporal Convolutional Network (TCN), and three new unsupervised approaches (CNN, CNN-LSTM, and TCN autoencoders). We developed new seizure prediction baselines for the SWEC-ETHZ dataset [13] and included an additional dataset, CHB-MIT [14]. ## 2 Related Work Seizure prediction using supervised machine learning has been used to distinguish the interictal and preictal states [15]. Typical supervised machine learning seizure prediction approaches involve signal pre-processing, extracting and selecting features, followed by a classifier [15]. Common signal processing techniques include high-pass, low-pass, or band-pass filtering, as well as artifact removal techniques [15]. Feature extraction is typically done by a bio-signals or epilepsy expert examining a patient's EEG and deciding appropriate features for separating the preictal and interictal states [15]. These features are often patient-specific and include statistical, non-linear, frequency domain, and time-frequency domain features [15, 16]. Common classifier choices include support vector machines (SVM), k-nearest neighbour and random forest [15]. Machine learning approaches may have limitations in terms of extracting hand-crafted features, which could be sub-optimal and time consuming. Deep learning approaches can overcome some of these challenges by being able to learn features from data with little to no pre-processing, generate high-level representations of data, and learn complex functions [17]. An overview of preictal-interictal classification seizure prediction methods (on human subjects) using deep learning is shown in Table 1. Many of the reviewed deep learning methods performed some type of pre-processing of the EEG before passing it on to the classifier, typically through filtering [1, 21], artifact removal [35], or time-frequency analysis [21, 22]. Common deep learning architectures used for seizure prediction include CNNs [1, 22], LSTM networks [29, 32], and feed-forward multilayer perceptrons (MLP) [18]. We observed that the majority of the studies use CNNs, LSTMs and/or their combinations to benefit from learning spatial and temporal features. The window size (the fixed duration of data to analyze) and PPL were kept fixed in most of the studies, and they varied even when working on the same dataset and patients.
This is an issue in building classifiers to predict seizures because the optimal PPL varies across patients (as concluded by Bandarabadi et al. [10]). Only four of the studies reported experimenting with the PPL [18, 21, 29, 35], while the others did not present any rationale for their choice. Some of the studies (e.g., [14]) also found different PPL sizes, showing that the optimal PPL varies depending on the method's implementation. These studies show that it is better to determine the PPL empirically at a patient-specific level, rather than using a generic or pre-determined average over a population. We extend the existing supervised methods by obtaining the PPL and window size using a leave-one-seizure-out (LOSO) evaluation, and we introduce a new supervised TCN classifier for this task. There is no known work on using unsupervised deep learning for seizure prediction using EEG. For the first time, we introduce three different autoencoder models and study their performance for this problem. ## 3 Supervised Seizure Prediction Preictal-interictal classification for seizure prediction is performed with three different architectures: a convolutional neural network (CNN), used in our previous work [12], and two new architectures, CNN-LSTM and TCN. We briefly discuss them below. ### CNN The CNN model takes in EEG samples that have been time-frequency transformed using an STFT [22] (see Section 5.3). This helps the model in extracting time and frequency features and puts the data into a suitable format for 2D convolutions [22]. The CNN architecture takes advantage of spatial information in the data to learn relevant features. Each sample was converted into a 2D matrix \(F\times T\) using an STFT, where \(F\) was the number of sample frequencies used and \(T\) was the number of segment times used. The matrix was then resized to a \(128\times 128\) "image" using bilinear interpolation so that image sizes were consistent regardless of the window size. The time-frequency transform was done independently for each channel, resulting in each sample being of dimensions \(C\times 128\times 128\), where \(C\) is the total number of channels. The samples were then passed to the CNN model, which is made up of three convolutional blocks (see Figures 1 and 2), followed by three fully connected layers with ReLU activation functions. Table 2 shows the model hyperparameters used for the CNN.
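To make the CNN classifier described above more concrete, here is a minimal PyTorch sketch of such a preictal-interictal model operating on per-channel STFT "images" of size \(128\times 128\); the channel counts, kernel sizes and fully connected widths are illustrative assumptions rather than the exact values of Table 2.

```python
import torch
import torch.nn as nn

class SeizureCNN(nn.Module):
    """Illustrative preictal-vs-interictal classifier over (C, 128, 128) STFT images."""

    def __init__(self, in_channels: int, n_classes: int = 2):
        super().__init__()

        def block(c_in, c_out):
            # one convolutional block: convolution -> ReLU -> spatial down-sampling
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )

        # three convolutional blocks reduce the 128x128 maps to 16x16
        self.features = nn.Sequential(block(in_channels, 16), block(16, 32), block(32, 64))
        # three fully connected layers with ReLU activations produce the class scores
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 8 samples from a recording with 32 EEG channels.
model = SeizureCNN(in_channels=32)
scores = model(torch.randn(8, 32, 128, 128))
print(scores.shape)  # torch.Size([8, 2])
```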
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline
**Citation** & **Window Size** & **PPL** & **Pre-processing** & **DL Architecture** \\
 & (Seconds) & (Minutes) & & \\ \hline
[18] & 5 & 10, 20, 30, 40 & Various fixed features & MLP \\ \hline
[19] & 10 & Unknown & Fractional FT & MLP \\ \hline
[20] & 300 & 50 & Handcrafted & CNN \\ \hline
[21] & 1 & 10 & CWT & CNN \\ \hline
[22] & 30 & 60 & STFT & CNN \\ \hline
[23] & 15 & 60 & None & CNN \\ \hline
[24] & 5 & 30 & Common spatial pattern & CNN \\ \hline
[25] & 30 & 60 & Fast FT & Multi-view CNN \\ \hline
[26] & 10 & 60 & MFCC & CNN \\ \hline
[27] & 1, 2, 3, 8 & 10 & None & CNN \\ \hline
[12] & 5, 10, 15, 30, 60 & 30, 60, 120 & STFT & CNN \\ \hline
[28] & 20 & 30, 60 & None & CNN \\ \hline
[29] & 5 & 15 & Handcrafted & LSTM \\ \hline
[1] & 10 & 30 & STFT & CNN + LSTM \\ \hline
[30] & 4 & 60 & None & CNN-AE + BiLSTM \\ \hline
[31] & 5 & 60 & None & MLP, CNN, CNN + LSTM \\ \hline
[32] & 10 & 30 & Image conversion & CNN + LSTM \\ \hline
[33] & 30 & 60 & STFT & CNN + SVM \\ \hline
[34] & 30 & 30 & None & 1D CNN + GRU \\ \hline
[35] & Unknown & 60 & Image conversion & 3D CNN \\ \hline
[36] & 4 & 30 & Various fixed features & 3D CNN \\ \hline
[37] & 28 & 30 & STFT & Convolutional GAN \\ \hline
\end{tabular} \end{table} Table 1: Overview of deep learning EEG seizure prediction methods.

### CNN-LSTM The CNN-LSTM architecture combines the spatial feature extraction of the CNN with the LSTM's propensity to work well with temporal data. The CNN-LSTM model takes in STFT images, similar to the CNN model. The input is a consecutive series of images treated as one sample. The input sequence is divided into smaller sub-sequences, which are independently time-frequency transformed and resized into \(64\times 64\) images, leading to dimensions \(C\times n\times 64\times 64\), where \(n\) is the number of sub-sequences in a sample and is equal to the sequence length divided by the sub-sequence length. Each sub-window is passed into a CNN model with two convolutional blocks that outputs a feature vector. Then, each feature vector is concatenated into a sequence and passed into a 2-layer LSTM, whose outputs are passed to a fully connected layer that outputs the final scores. An overview of the CNN-LSTM architecture and its hyperparameters is shown in Figure 3 and Table 3(a). ### TCN The TCN model takes in scaled sequences of size \(C\times S/4\), where \(S\) is the sequence length and the sequences were down-sampled by a factor of 4. The TCN model [38] consists of TCN blocks (see Figure 1). Each TCN block is made up of two consecutive sub-blocks, each containing a causal 1D convolution layer with a dilation, a weight normalization layer, a ReLU activation function, and a dropout layer [38]. The TCN blocks have skip connections, where the input to the block is added to the output [38]. The model contained 6 TCN blocks with 32 channels each, followed by a 1D convolution layer and a fully connected layer. The dilation factor of each block was \(2^{(n-1)}\), where \(n\) is the layer number. Figure 4 and Table 3(b) show the TCN architecture and hyperparameters.

Figure 3: CNN-LSTM architecture showing time-frequency (TF) transform, CNN layers, LSTM and fully connected (FC) layer.
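As a rough illustration of the TCN block described above, the following PyTorch sketch implements one residual block made of two dilated causal convolution sub-blocks with weight normalization, ReLU and dropout; the channel count, kernel size and dropout rate are assumptions for illustration and need not match the exact configuration of Table 3(b).

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

class Chomp1d(nn.Module):
    """Trim the extra right-hand padding so that the convolution remains causal."""
    def __init__(self, chomp_size: int):
        super().__init__()
        self.chomp_size = chomp_size

    def forward(self, x):
        return x[:, :, :-self.chomp_size] if self.chomp_size > 0 else x

class TCNBlock(nn.Module):
    """Residual TCN block: two (causal dilated conv -> weight norm -> ReLU -> dropout) sub-blocks."""
    def __init__(self, c_in: int, c_out: int, kernel_size: int = 5, dilation: int = 1, dropout: float = 0.2):
        super().__init__()
        pad = (kernel_size - 1) * dilation

        def sub_block(ci, co):
            return nn.Sequential(
                weight_norm(nn.Conv1d(ci, co, kernel_size, padding=pad, dilation=dilation)),
                Chomp1d(pad),
                nn.ReLU(),
                nn.Dropout(dropout),
            )

        self.net = nn.Sequential(sub_block(c_in, c_out), sub_block(c_out, c_out))
        # 1x1 convolution matches channel counts for the skip connection when needed
        self.skip = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return torch.relu(self.net(x) + self.skip(x))

# Example: six stacked blocks with dilation 2^(n-1), applied to (batch, channels, time) sequences.
blocks = nn.Sequential(*[TCNBlock(18 if n == 1 else 32, 32, dilation=2 ** (n - 1)) for n in range(1, 7)])
out = blocks(torch.randn(4, 18, 1024))  # e.g. 18 EEG channels, down-sampled sequence
print(out.shape)                        # torch.Size([4, 32, 1024])
```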
\begin{table} \begin{tabular}{|c|c|} \hline **CNN kernel size** & 5 \\ \hline **CNN filter sizes** & 8, 16 \\ \hline **LSTM feature vector size** & 32 \\ \hline **LSTM hidden size** & 16 \\ \hline **Fully connected size** & 96 \\ \hline **Dropout** & 0.5 \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline **TCN kernel size** & 5 \\ \hline **TCN filter sizes** & 32, 32, 32, 32, 32, 32, 32, 32 \\ \hline **Fully connected size** & 64 \\ \hline **Dropout** & 0.2 \\ \hline \end{tabular} \end{table} Table 3: Hyperparameters of (a) CNN-LSTM, and (b) TCN models Figure 4: (a) TCN block description. TCN: temporal convolutional network. ReLU: rectified linear unit. (b) Supervised TCN architecture overview. TCN: temporal convolutional network. FC layer: fully connected layer. ## 4 Unsupervised Seizure Prediction The reliance on preictal data for supervised seizure prediction methods remains a challenge. Preictal data is typically scarce, and deep learning methods require a considerable amount of data from both classes to work well. Preictal-interictal classification methods cannot be used effectively on patients with little preictal data, and class imbalance still remains an impending problem. An unsupervised approach (anomaly detection) to seizure prediction could remedy these problems. Anomaly detection for seizure prediction would require only interictal (and no preictal data) to train, making it easier to be more accessible to a larger population. Autoencoders (AEs) and its variants are apt to be used within this framework, with reconstruction error used as an anomaly score. To our knowledge, this is the first seizure prediction work that uses unsupervised deep learning approach for epileptic seizure prediction. We implemented the following autoencoder approaches for this task. * CNN autoencoder [39]. Similar to the supervised CNN, it takes STFT images as input. The encoder is made up of three convolutional blocks followed by a fully connected layer which generates an embedding state of size 64. The decoder is a mirrored version of the encoder (see Figure 4(a)). * CNN-LSTM autoencoder [40]. Similar to the supervised CNN-LSTM, the input sequence was divided into smaller sub-sequences and then an STFT was performed on each sub-sequence. The encoder consisted of an individual CNN encoder for each sub-sequence followed by an LSTM that generated an embedding state of size 64. The decoder has the reverse architecture to the encoder. (see Figure 4(b)). * TCN autoencoder [41]. It takes in raw scaled sequences as is the case with the supervised TCN. The encoder was a TCN with three layers, each with 16 channels followed by a 1d convolution and a fully connected layer. The size of the embedding state was 64. The decoder was an exact mirror of the encoder (see Figure 4(c)). ## 5 Data Processing ### Datasets We used two EEG Epilepsy seizure datasets, the Sleep-Wake Epilepsy Centre ETH Zurich (SWEC-ETHZ) dataset [13] and the Children's Hospital Boston Massachusetts Institute of Technology (CHB-MIT) dataset [14]. Both datsets are publicly available, easy to access, and contain human raw EEG recordings, where no seizure states have been pre-selected. This is important so we can define and experiment with different preictal and interictal regions. The SWEC-ETHZ dataset is an iEEG dataset containing over \(2,500\) hours of recordings across 18 patients with a sampling rate of either 512Hz or 1024Hz [13]. 
The CHB-MIT dataset contains scalp EEG recordings from 22 patients sampled at 256Hz with at least 22 EEG electrodes [14]. We define a "lead seizure" as any seizure that occurs at least 30 minutes after a preceding seizure [22]. Only preictal periods from lead seizures were considered because of the lack of interictal and preictal data to train models. Patients with fewer than three lead seizures were withheld from the experiments because at least three lead seizures were required to perform test partitioning combined with an internal leave-one-seizure-out (LOSO) cross-validation step (see Figure 6(b)). Six out of the 18 patients in the SWEC-ETHZ dataset were not considered for this work due to this condition. All patients in the CHB-MIT dataset had at least three lead seizures. A description of dataset attributes for all patients used from both the SWEC-ETHZ and CHB-MIT datasets is shown in Tables 4 and 5, respectively.

**Fig. 5:** Autoencoders: (a) CNN, (b) CNN-LSTM, (c) TCN

### Data Preprocessing The length and location of the preictal period are defined by the PPL and the intervention time (IT). The IT is the time between the preictal state and the seizure onset. Interictal data is defined as any data that is not preictal, ictal, or postictal, and is a distance \(d\) away from the preictal state, as shown in Figure 6(a). The data was divided into samples of a fixed window size, which were labelled as either interictal or preictal. We set \(d=0\) to evaluate the model's ability to classify interictal and preictal samples in close temporal proximity to actual seizures. The IT was set to 0; increasing it can be a future experiment after generating a baseline. In the SWEC-ETHZ dataset, interictal samples were randomly selected with a down-sampling factor of 8 because interictal data was overly abundant and the classes were significantly imbalanced (patients ID04, ID09, and ID10 used a down-sampling factor of 2 instead because there was less interictal data). The number of preictal samples was artificially increased by using 50% overlapping windows. The size of each sample was \(sf\times C\), where \(s\) was the window size, \(f\) was the sampling rate, and \(C\) was the number of EEG electrodes. The dataset was partitioned into a training set and a testing set using LOSO partitioning. We used the last lead seizure's preictal data as the test set, while all other preictal data was part of the training set. As shown in Figure 6(b), LOSO partitioning is a better way to evaluate a model's ability to generalize to a new seizure's preictal data. Standard test partitioning, where samples are randomly assigned to the training or test set, may lead to an overestimation of the actual performance of the classifier.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline
**Patient ID** & **Hours of data** & **Seizures** & **Lead seizures** & **Electrodes** \\ \hline
ID03 & 158 & 4 & 4 & 64 \\ \hline
ID04 & 41 & 14 & 14 & 32 \\ \hline
ID05 & 110 & 4 & 4 & 128 \\ \hline
ID06 & 146 & 8 & 8 & 32 \\ \hline
ID07 & 69 & 4 & 4 & 75 \\ \hline
ID08 & 144 & 4 & 4 & 61 \\ \hline
ID09 & 41 & 23 & 14 & 48 \\ \hline
ID10 & 42 & 17 & 15 & 32 \\ \hline
ID12 & 191 & 9 & 9 & 56 \\ \hline
ID13 & 104 & 7 & 7 & 64 \\ \hline
ID16 & 177 & 5 & 5 & 34 \\ \hline
ID18 & 205 & 5 & 5 & 42 \\ \hline
\end{tabular} \end{table} Table 4: SWEC-ETHZ dataset patient description [13]

### Time-Frequency Transform We transformed the EEG data from a time-series input into the time-frequency domain [42, 43] using the short-time Fourier transform (STFT).
It converts a one-dimensional time-series signal into a two-dimensional matrix of values with axes of time and frequency [44]. The STFT splits the signal into a series of smaller sequences and then performs a Fourier transform on each one individually, providing a way to see changes in the frequency domain at various points in time [45]. In the CNN-based models used in this work, an STFT was used to pre-process the input before passing samples to the model. Other time-frequency analysis methods, such as the continuous wavelet transform [21] and phase-amplitude coupling [46], were experimented with in our preliminary work but did not provide better results.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline
**Patient ID** & **Hours of data** & **Seizures** & **Lead seizures** & **Electrodes** \\ \hline
[MISSING_PAGE_POST]
\end{tabular} \end{table} Table 5: CHB-MIT dataset patient description [14]

## 6 Experimental Setting and Results A grid search was performed to find the optimal window size and PPL for each patient. We ran the model with varying window size (5, 10, 15, 30, 60 seconds) and PPL (30, 60, 120 minutes) values. We used an internal LOSO cross-validation to tune the parameters without looking at test data. This was done by dividing the training set into folds, where each fold was a different seizure's preictal and interictal data. One fold was the validation set while the others were used for training. Each fold in the set was used as the validation set once, and the performance across all runs in a patient was averaged. An example of the cross-validation method used is shown in Figure 7. The area under the Receiver Operating Characteristic curve (AUC ROC) [47] was used as a performance metric for hyperparameter tuning. The test set was completely withheld from this process.

Figure 6: (a) Labelling of the preictal and interictal periods with parameters. (b) Simplified visualization of LOSO test partitioning by withholding the last seizure.

Figure 7: LOSO cross-validation example with four seizures. One seizure is used for validation while the others are used for model training.

All the models were trained using an NVIDIA V100S-PCIe GPU with 32GB memory. A class-weighted cross-entropy loss function (class weights vary per patient) was used with the Adam optimizer, and each model was trained for 100 epochs with a batch size of 128 and a learning rate of 0.0001. All implementations were done in the PyTorch framework [48]. After the final parameters for a model were set, it was evaluated on the test set using the AUC ROC and the area under the precision-recall curve (AUC PR). AUC PR is more appropriate for imbalanced classification problems [49].
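The grid search with internal LOSO cross-validation described above can be summarised by the following sketch; it is only an outline, and the `train_model`, `score_model` and `build_samples` helpers are hypothetical placeholders standing in for the actual training, scoring and windowing code.

```python
from itertools import product
import numpy as np
from sklearn.metrics import roc_auc_score

WINDOW_SIZES = [5, 10, 15, 30, 60]   # seconds
PPLS = [30, 60, 120]                 # minutes

def loso_grid_search(train_seizures, train_model, score_model, build_samples):
    """Select (window size, PPL) by leave-one-seizure-out cross-validation on the training seizures."""
    best_params, best_auc = None, -np.inf
    for window, ppl in product(WINDOW_SIZES, PPLS):
        fold_aucs = []
        for held_out in train_seizures:                    # each fold = one lead seizure
            fit_seizures = [s for s in train_seizures if s is not held_out]
            X_tr, y_tr = build_samples(fit_seizures, window, ppl)
            X_val, y_val = build_samples([held_out], window, ppl)
            model = train_model(X_tr, y_tr)
            fold_aucs.append(roc_auc_score(y_val, score_model(model, X_val)))
        mean_auc = float(np.mean(fold_aucs))               # average AUC ROC over folds
        if mean_auc > best_auc:
            best_params, best_auc = (window, ppl), mean_auc
    return best_params, best_auc

# Usage (schematic): the last lead seizure is withheld as the test set, and the
# grid search is run only on the remaining seizures of the same patient.
# (window, ppl), cv_auc = loso_grid_search(train_seizures, train_model, score_model, build_samples)
```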
#### 6.1.1 Comparison with Fixed Parameters We implemented a preictal-interictal classification model with a fixed window size of 30 seconds and PPL of 1 hour to compare to our tuned hyperparameter model. The model architecture is a CNN identical to the optimized parameter implementation. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Patient ID** & **Window size** & **PPL** & **Validation AUC ROC** & **Test AUC ROC** \\ \hline ID03 & 30 & 1800 & 0.793 & **0.939** \\ \hline ID04 & 60 & 3600 & 0.708 & 0.509 \\ \hline ID05 & 60 & 7200 & 0.953 & **0.918** \\ \hline ID06 & 60 & 7200 & 0.704 & **0.948** \\ \hline ID07 & 30 & 7200 & 0.722 & 0.713 \\ \hline ID08 & 15 & 7200 & 0.722 & 0.454 \\ \hline ID09 & 30 & 7200 & 0.901 & **0.944** \\ \hline ID10 & 60 & 3600 & 0.807 & 0.574 \\ \hline ID12 & 60 & 1800 & 0.981 & **0.798** \\ \hline ID13 & 60 & 1800 & 0.721 & 0.499 \\ \hline ID16 & 10 & 1800 & 0.719 & 0.423 \\ \hline ID18 & 15 & 1800 & 0.832 & **0.850** \\ \hline \end{tabular} \end{table} Table 6: Validation and test results for preictal-interictal classification with optimized hyperparameters on the SWEC-ETHZ dataset. This was done to explore the benefits of optimizing hyperparameters for seizure prediction. Figures 8 show the comparison of the two methods on the SWEC-ETHZ and CHB-MIT dataset. In general for the SWEC-ETHZ dataset, the optimized hyperparameter implementation performed slightly better than the fixed parameter. In patient ID09, the optimized hyperparameter implementation performed much better than the fixed parameter implementation. For patient ID09, the hyperparameter tuning found a window size of 30 seconds and a PPL of 2 hours. It is likely that there was additional preictal information in the extra hour of data not used in the fixed parameter implementation. For the CHB-MIT dataset, most patients had similar results for both the fixed and optimized hyperparameter implementations. There were a few patients (ID 5, 16, 17, 18) that had much better results with the optimized model. However, there were also patients (ID 9, 22, 23) who performed better with a fixed hyperparameter implementation. For these patients, the last seizure's optimal hyperparameters were likely different from the optimal hyperparameters for the preceding seizures in the patient's dataset. Figures 10 and 11 show the comparison between the optimized and fixed hyperparameter implementations for the SWEC-ETHZ and CHB-MIT datasets \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Patient ID** & **Window size** & **PPL** & **Validation AUC ROC** & **Test AUC ROC** \\ \hline [MISSING_PAGE_POST] \end{tabular} \end{table} Table 7: Validation and test results for preictal-interictal classification with optimized hyperparameters on the CHB-MIT dataset. respectively using AUC PR instead. It can be observed that the optimized implementation generally performs better on the SWEC-ETHZ dataset in both metrics and that the difference is marginal in the CHB-MIT dataset. These experiments indicate that hyperparameter tuning can potentially improve the performance in comparison to fixed paramters. Figure 8: AUC ROC Comparison of CNN models using optimized hyperparameters vs fixed hyperparameters on the SWEC-ETHZ dataset. Figure 9: AUC ROC Comparison of CNN models using optimized hyperparameters vs fixed hyperparameters on the CHB-MIT dataset. 
Figure 10: PR AUC Comparison of CNN models using optimized hyperparameters vs fixed hyperparameters on the SWEC-ETHZ dataset.

Figure 11: PR AUC Comparison of CNN models using optimized hyperparameters vs fixed hyperparameters on the CHB-MIT dataset.

#### 6.1.2 Comparison with Other Architectures Using a fixed hyperparameter implementation, CNN-LSTM and TCN models were trained using a window size of 30 seconds and a PPL of 1 hour. A comparison of the AUC PR for all models is shown in Figures 12 and 13. In the SWEC-ETHZ dataset, the CNN, CNN-LSTM and TCN were the best-performing model for 3, 5 and 4 patients, respectively. The CNN and CNN-LSTM performed comparably well, and in each patient the results were fairly similar. The TCN results were more variable, with the TCN performing well on some patients where the other models performed poorly. In the CHB-MIT dataset, the CNN, CNN-LSTM and TCN were the best-performing model for 7, 6 and 10 patients, respectively. The TCN model performed much better in the CHB-MIT dataset compared to the SWEC-ETHZ dataset. Overall, the CHB-MIT results were very variable, with PR AUC values varying considerably even within the same patient.

Figure 12: Comparison of CNN, CNN-LSTM, and TCN implementations for preictal-interictal classification on the SWEC-ETHZ dataset.

Figure 13: Comparison of CNN, CNN-LSTM, and TCN implementations for preictal-interictal classification on the CHB-MIT dataset.

### Unsupervised Prediction In the unsupervised approach, the training set only contained interictal data. For these experiments, the hyperparameters were fixed, with a window size of 30 seconds and a PPL of 60 minutes. The models were trained for 500 epochs with a batch size of 128 and a learning rate of 0.0005. After training, the models were evaluated on the test set, which contained both interictal and preictal samples. Both AUC ROC and PR AUC were used to evaluate performance. #### 6.2.1 Comparison of Architectures Figures 14, 15a, and 15b show the anomaly detection seizure prediction AUC PR results for the CNN, CNN-LSTM, and TCN AEs on the SWEC-ETHZ dataset and CHB-MIT dataset, respectively. We also show the supervised CNN with fixed hyperparameters for comparison. It can be observed that the performance varies significantly across different architectures and patients. For the SWEC-ETHZ dataset, the CNN AE performed the worst across most patients, while the CNN-LSTM and TCN AEs performed relatively better and even surpassed the supervised implementation in some patients. In the CHB-MIT dataset, the results vary even more, with no clear winner.

Figure 14: Comparison of unsupervised seizure prediction using different model architectures on the SWEC-ETHZ dataset. U: unsupervised, S: supervised.

Figure 15: Comparison of unsupervised seizure prediction using different model architectures on the CHB-MIT dataset (a) patients 1 to 11, (b) patients 12 to 23. U: unsupervised, S: supervised.

### Best Implementations Tables 8 and 9 show the best-performing implementation (from all experiments with supervised and unsupervised approaches) for each patient in the SWEC-ETHZ and CHB-MIT datasets and its corresponding AUC PR. For the SWEC-ETHZ dataset, an unsupervised approach was the best-performing implementation for 7 out of 12 patients. For the CHB-MIT dataset, supervised approaches performed better for 16 out of 23 patients. In particular, the supervised CNN performed the best for 8 patients - the most of any model.
Figure 16 shows that using the CNN-LSTM was the most effective for the most patients with best performance in 16 of the 35 patients. ### Discussion We found that it is important to tune the window size and PPL to maximize performance. Preictal-interictal classification performed slightly better in both datasets when using an optimized hyperparameter implementation. However, in the CHB-MIT dataset, this difference was marginal. This is likely because of the size of the dataset. The CHB-MIT dataset has less data per patient compared to the SWEC-ETHZ dataset, so it is harder to properly tune hyperparameters that will generalize to new seizures. The CNN and CNN-LSTM architectures performed similarly in most experiments. This is likely because both use time-frequency transforms followed by two-dimensional convolutions for spatial feature extraction. Even though the architectures are not exactly the same, it is likely that both are capturing similar underlying patterns in the data. The TCN performed fairly well and was able to get good results in some patients when the other two models failed. Although there was not one consistently high-performing model, it is encouraging that different architectures were able to perform well for different patients in different datasets. The prediction results vary considerably across datasets, patients, and implementations. This demonstrates the variable nature of preictal and interictal data. To account \begin{table} \begin{tabular}{|c|c|c|} \hline **Patient ID** & **Best Implementation** & **PR AUC** \\ \hline 1 & Supervised CNN & 0.966 \\ \hline 2 & Supervised CNN & 0.737 \\ \hline 3 & Supervised CNN & 1.000 \\ \hline 4 & Supervised TCN & 0.064 \\ \hline 5 & Unsupervised CNN-LSTM & 0.612 \\ \hline 6 & Supervised CNN & 0.878 \\ \hline 7 & Unsupervised CNN-LSTM & 0.115 \\ \hline 8 & Supervised TCN & 0.704 \\ \hline 9 & Supervised CNN & 0.660 \\ \hline 10 & Supervised CNN-LSTM & 0.733 \\ \hline 11 & Supervised CNN & 0.878 \\ \hline 12 & Unsupervised TCN & 0.928 \\ \hline 13 & Supervised CNN & 0.979 \\ \hline 14 & Supervised TCN & 0.742 \\ \hline 15 & Unsupervised CNN-LSTM & 0.720 \\ \hline 16 & Unsupervised CNN-LSTM & 0.319 \\ \hline 17 & Unsupervised CNN-LSTM & 0.414 \\ \hline 18 & Supervised CNN-LSTM & 0.104 \\ \hline 19 & Supervised CNN-LSTM & 0.733 \\ \hline 20 & Supervised CNN-LSTM & 0.706 \\ \hline 21 & Supervised CNN-LSTM & 0.477 \\ \hline 22 & Unsupervised CNN-LSTM & 0.308 \\ \hline 23 & Supervised TCN & 0.953 \\ \hline \end{tabular} \end{table} Table 9: Best-performing implementation for each patient in the CHB-MIT dataset. for this, it is important to have as many lead seizures data in a patient as possible since preictal data is typically scarce. A limitation of our work is that a patient requires three lead seizures in their data to work with this method. It may not always be feasible for a patient's data to have at least three lead seizures, especially considering the difficulty of data acquisition. Anomaly detection seizure prediction performance varied significantly across different architectures. Although supervised preictal-interictal classification performed better overall, there were many patients where an unsupervised approach was the best implementation. Additionally, in the SWEC-ETHZ dataset, an unsupervised approach was the best implementation for the majority of patients. This is likely because the SWEC-ETHZ dataset had a much larger recording duration and interictal-preictal ratio. 
Anomaly detection seizure prediction shows promise, and it may not be necessary to have access to substantial preictal data to predict a seizure. Figure 17 shows the average performance in terms of AUC PR across all patients. It can be observed that the supervised CNN and the supervised CNN-LSTM performed the best on average. However, the difference in performance across models is not large, and with a large standard deviation, it is impossible to make a statistical claim on the best performing model. In general, it can be observed that the supervised approaches performed better than the unsupervised approaches with results varying across individual patients. Our results also showed the potential of using unsupervised approaches for seizure prediction. A major advantage is that it only uses unlabelled normal EEG data, which is easier to acquire and is not dependent on an expert to annotate. Figure 17: Average performance of each implementation. S: supervised. U: unsupervised. ## 7 Conclusions and Future Directions We developed several supervised approaches and introduced new unsupervised deep learning approaches for predicting epileptic seizures. In each approach, the main goal was to identify a preictal state (either as a class or anomaly) to predict the onset of an incoming seizure. We accounted for the variability of EEG and the preictal period by tuning the window size and PPL using a grid search. We trained personalized models and tuned hyper-parameter using LOSO approach for better generalization of results. This method has achieved good results on more than half of the patients. We experimented with different supervised and unsupervised deep learning architectures on two large EEG datasets. Our results vary across different implementations depending on the patient. The advantage of unsupervised methods is that they do not require preictal data to train, alleviating the challenges around data acquisition, and effort and time spent in labelling. We found that in many cases, an unsupervised approach was able to get similar or even better performance than a supervised approach; however, there was no single best performing model. Our extensive experiments show the feasibility of supervised and unsupervised deep learning approaches for seizure prediction. However, the amount of preictal data per patient appears to be a crucial factor in training generalized models. A future extension would be to experiment with a larger range for the hyper-parameters. These parameters can also vary across implementations, so optimized hyperparameter implementations with the CNN-LSTM or TCN architecture as the base could be valuable. Another extension would be to try different signal processing methods and advanced CNN and sequential models, including Resnet and Transformers. A breakthrough in reducing intervention time before the onset of seizure would lead to development of therapeutic interventions that can empower epilepsy patients to live without the fear or adversarial outcomes.
2310.19341
Skywork: A More Open Bilingual Foundation Model
In this technical report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLMs of comparable size to date. We introduce a two-stage training methodology using a segmented corpus, targeting general purpose training and then domain-specific enhancement training, respectively. We show that our model not only excels on popular benchmarks, but also achieves \emph{state of the art} performance in Chinese language modeling on diverse domains. Furthermore, we propose a novel leakage detection method, demonstrating that test data contamination is a pressing issue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with checkpoints obtained during intermediate stages of the training process. We are also releasing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high quality open Chinese pre-training corpus to date. We hope Skywork-13B and our open corpus will serve as a valuable open-source resource to democratize access to high-quality LLMs.
Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou
2023-10-30T08:31:47Z
http://arxiv.org/abs/2310.19341v1
# Skywork: A More Open Bilingual Foundation Model ###### Abstract In this technical report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLMs of comparable size to date. We introduce a two-stage training methodology using a segmented corpus, targeting general purpose training and then domain-specific enhancement training, respectively. We show that our model not only excels on popular benchmarks, but also achieves _state of the art_ performance in Chinese language modeling on diverse domains. Furthermore, we propose a novel leakage detection method, demonstrating that data contamination is a pressing issue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with checkpoints obtained during intermediate stages of the training process. We are also releasing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high quality open Chinese pre-training corpus to date. We hope Skywork-13B and our open corpus will serve as a valuable open-source resource to democratize access to high-quality LLMs. ## 1 Introduction Natural Language Processing (NLP), a vital branch of artificial intelligence, has experienced a transformative surge in recent years. Pivotal to this revolution has been the advent and advancement of large language models (LLMs) (Ouyang et al., 2022; OpenAI, 2023; Bubeck et al., 2023; Chowdhery et al., 2022; Anil et al., 2023; Touvron et al., 2023, 2023). These complex computational structures, composed of billions of parameters, are capable of understanding, generating, and translating human language with an unprecedented degree of accuracy and sophistication. However, the proliferation of these models has also been accompanied by a growing trend towards commercialization and a lack of transparency, a phenomenon that is increasingly influencing the dynamics of the open-source community. Historically, the open-source community has thrived on the principles of collaboration, transparency, and unrestricted sharing of ideas. However, as the commercial potential of LLMs has been recognized, this openness has begun to diminish. The reality is that many organizations only make model checkpoints publicly accessible, while withholding vital information on model reproduction. This practice significantly hampers the progress of the field. In an effort to revive the spirit of the open-source community and contribute to the ongoing dialogue about transparency in AI, we present Skywork-13B: a family of bilingual large language models with 13 billion parameters, trained on a colossal corpus of more than 3.2 trillion tokens drawn from both English and Chinese texts. To our knowledge, our Skywork-13B is the most thoroughly trained family of open LLMs of comparable size to date. In this technical report, we offer a comprehensive disclosure of the Skywork-13B developmental journey. We detail the composition of our training data, provide insights into the evolutionary trajectory of the model's abilities during training, and share methodologies that could be employed to enhance model ability in specific domains. 
We believe that such an open approach not only aids in the reproducibility of our work but also provides a valuable resource for other researchers seeking to explore and expand the capabilities of large language models. This technical report is also a call to action for renewed transparency in the field of NLP. Through it, we hope to inspire a return to a more collaborative, open-source community, where progress is not hampered by commercial considerations but propelled by collective intelligence and shared wisdom. Our contributions are the following: * We release Skywork-13B1, a family of LLMs that is the most extensively trained and openly published LLMs of comparable size to date. Our Skywork-13B family includes 1) Skywork-13B-Base, a strong foundation model with _state of the art_ Chinese language modeling capability, and 2) Skywork-13B-Chat, a fined-tuned version optimized for conversation2. Footnote 1: Github repository: [https://github.com/SkyworkAI/Skywork](https://github.com/SkyworkAI/Skywork). * We disclose detailed information on the training process and data composition. We also release intermediate checkpoints, which provide a valuable resource for understanding how the model's capabilities develop over the course of training. It enables other researchers to leverage these checkpoints for their specific use-cases. * We release a portion of our high quality training corpus, totaling more than 150 billion tokens. To our knowledge, this is the largest open Chinese corpus for language model pre-training to date. * We develop a novel method that detects the level of in-domain data usage during the training stage. To facilitate reproduction of the experiments presented in this report, we have released the relevant data. ## 2 Methodology ### Two Pre-training Stages In order to train Skywork-13B, we constructed SkyPile (see Section 3.1), a massive training corpus primarily constituted by publicly accessible web pages. We identified a small subset of SkyPile, encompassing exercises and solutions that span a broad spectrum of subjects from primary to graduate school. This includes coding problems, national exam questions, textbook exercises, and others. Given the majority of these exercises are STEM-related, we henceforth refer to this subset and its complement as SkyPile-STEM and SkyPile-Main, respectively. Rather than training the Skywork-13B foundation model directly on SkyPile as a whole, we adopted a two-stage training approach. The first stage, which constitutes the primary pre-training phase, involves training the model from scratch on SkyPile-Main. In the second stage, our Skywork-13B is enriched with STEM-related domain knowledge and problem-solving skills through continual pre-training on SkyPile-STEM. To circumvent the potential issue of catastrophic forgetting, this continual pre-training is performed on a mix of SkyPile-STEM and SkyPile-Main, rather than exclusively on SkyPile-STEM. The decision to segregate Stage-1 and Stage-2 pre-training serves a dual purpose. Firstly, we acknowledge that a significant proportion of the samples from SkyPile-STEM are, by their nature, supervised data. Those data are closely related to popular benchmarks such as CEVAL (Huang et al., 2023), MMLU (Hendrycks et al., 2021) and GSM8K (Cobbe et al., 2021), and can be utilized in a supervised fine-tuning (SFT) process to directly enhance model performance on related downstream tasks. 
In this context, the separation between Stage-1 and Stage-2 training enables us to more effectively assess the impacts of general-purpose pre-training (on web texts) and targeted pre-training (on in-domain/supervised data). Such insights could inform future data collection and compilation strategies for foundational model training. Secondly, by restricting first stage pre-training to general-purpose data, we are able to produce a version of foundation model as an alternative to the one with targeted enhancement. While the latter demonstrates superior performance on certain downstream tasks, it is less capable in language modeling of natural texts. We posit that this alternative is a valuable contribution to the community, given its potential to excel in applications that do not require STEM-related competencies. ### Training Progress Monitoring It is of vital importance to monitor and assess progress made during pre-training in real-time. Existing methods such as monitoring training loss and benchmark results on intermediate checkpoints, however, have their limitations. The main issue of monitoring training loss lies in that its effectiveness comes into question when considering the potential of overfitting. The training loss is equivalent to validation loss only if the training data is utilized exactly once (i.e., in one epoch). Yet, in practical scenarios of training LLMs, high-quality data often go through the training process multiple times (Taylor et al., 2022; Touvron et al., 2023; Roziere et al., 2023; Gunasekar et al., 2023; Li et al., 2023). Besides, even after explicit de-duplication, there may still exist significant amount of duplicated data in the training set (Soboleva et al., 2023; Abbas et al., 2023). In either cases, solely relying on training loss can lead to overlooking the issue of overfitting, thereby producing overly optimistic estimates of model performance. The top left subplot in Figure 3 illustrates the trajectory of the pre-training loss for our Skywork-13B model. Consistent with findings reported in (Touvron et al., 2023, 2023), the loss demonstrates a steady decline throughout the training process. However, an observation not disclosed in these cited works is the behavior of the validation loss on held-out sets. From the figure it can be clearly seen that the validation losses seem to level off as training approaches its final stages. Benchmarking based on intermediate checkpoints is another common monitoring approach (Touvron et al., 2023; Baichuan Inc., 2023). Nevertheless, it presents several challenges. Firstly, there is a high variance in benchmark results, which can lead to unstable and unreliable assessments of training progress. Secondly, benchmark results are not sensitive to minor progress in training. This insensitivity makes it difficult to accurately track gradual improvements during the training process. Besides, weaker models do not follow instructions well. Hence benchmark results may not accurately reflect their true learning progress or potential. Finally, an inconvenience posed by most benchmarks is the necessity for model generation. This process is notably resource-intensive, demanding substantial computational power. During the pre-training of Skywork-13B, we embrace the method of monitoring the language modeling loss across numerous reserved validation sets, each reflecting a distinct data distribution. 
More specifically, we have created separate validation sets for code, academic publications, social media posts, web texts in Chinese and English, among others. Conventional monitoring metrics are also utilized, but they serve merely as supplementary tools. In Figure 1 we plot the curve of language model validation loss on English web texts against the average metric of several English downstream tasks. It is apparent that there is a very high correlation between the two quantities, showing that validation loss can serve as a valid proxy metric for downstream task performance. In the context of LLM pre-training, this approach also yields several other benefits:

* Ease of construction: Crafting multiple validation sets is a relatively effortless task. This enables the evaluation of a model's language modeling performance across varied domains.
* Simplicity in computation: Calculation of validation loss is straightforward, significantly reducing the computational and logistical overhead associated with tracking model training.
* High sensitivity to training progress: Validation loss is finely attuned to the progression of training, thereby offering a more detailed perspective on how models evolve and improve over time.
* Model-agnosticism: Validation loss is indifferent to the composition of the training corpus or the model architecture. It allows for comparison not only between different checkpoints produced within a single training session, but also across varied models from the community. This ensures a consistent and equitable basis for model comparison.

Figure 1: Validation loss on English web texts vs. average task metric during the pre-training of Skywork-13B. The tasks include BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2019), Winogrande (Sakaguchi et al., 2021), TriviaQA (Joshi et al., 2017) and RACE (Lai et al., 2017).

Note that monitoring the validation loss on a held-out set sharing the same distribution as the training set is a ubiquitous practice in machine learning. However, the observation of validation loss across multiple held-out sets, each with a deliberate, unique distribution, is not common. We also note that the perspective asserting the primacy of language modeling loss as the paramount performance metric for models is not a recent revelation. This principle has been either explicitly or implicitly adopted in a number of research studies, as exemplified in Kaplan et al. (2020); Hoffmann et al. (2022); Anil et al. (2023); Xia et al. (2023); Deletang et al. (2023). ## 3 Pre-training ### SkyPile Corpus In order to train Skywork-13B, we build SkyPile, a vast, high-quality corpus comprising more than 6 trillion tokens. A segment of the corpus, comprising over 150 billion tokens of web text, has been open sourced to facilitate research and training on Chinese LLMs3. Footnote 3: huggingface.co/datasets/Skywork/SkyPile-150B Our SkyPile is an amalgamation of several sources, the overwhelming majority of which is gleaned from publicly accessible channels. Numerous prior research works, exemplified by initiatives such as LLaMA (Touvron et al., 2023) and RefinedWeb (Penedo et al., 2023), have substantiated the notion that publicly accessible web data can yield exceptionally high-quality LLMs. In alignment with this empirical evidence, we subscribe to the premise of leveraging publicly accessible webpages as our primary source for training data.
The construction of SkyPile is characterized by a dedicated emphasis on two primary dimensions: text quality and information distribution. Our data processing pipeline, inspired by Wenzek et al. (2020); Touvron et al. (2023); Penedo et al. (2023), incorporates the following stages: * **Structural Extraction:** Due to the predominant source of our dataset being publicly accessible web pages, the objective of the first stage is the extraction of pertinent content while concurrently expunging extraneous textual elements that are deemed non-contributory to the training of our language model, e.g. these superfluous components include navigational bars, site-specific contact information, disjunctive title texts devoid of substantive content, etc. Subsequent to this culling process, the retained information predominantly consists of contiguous, medium to long-form textual passages. * **Distribution Filtering:** In the pursuit of cultivating a profoundly adept LLM, the model's exposure must encompass a diverse array of content spanning an extensive spectrum of domains. Prior endeavors within the field have entailed the task of assigning categorical labels to each individual document or webpage, thereby manually dictating the composition of the training corpus. However, we posit that the corpus employed for LLM training has burgeoned to such an extent that the knowledge it encapsulates can not be compartmentalized discretely. Consequently, eschewing a label-centric approach, our methodology centers on benchmarking the semantic affinities existing between textual segments, thereby identifying and omitting those text blocks characterized by an exceedingly high recurrence rate. * **Deduplication:** Deduplication has demonstrated its remarkable efficacy in enhancing the overall quality of a training corpus, and it has found extensive application in virtually all prominent datasets Hernandez et al. (2022); Kandpal et al. (2022); Abbas et al. (2023); Lee et al. (2022). Within the framework of SkyPile, we regard deduplication as an integral component of the Distribution Filtering process. When considering the broader perspective, it becomes evident that duplication constitutes a paramount factor influencing the semantic distribution of a corpus. Consequently, the techniques and strategies we employed during the distribution filtering phase autonomously eliminated a substantial portion of duplicated content. * **Quality Filtering:** In this phase, we deploy the CCNet (Wenzek et al., 2020) pipeline to perform two critical filtration tasks: the elimination of content of inferior quality and the exclusion of pages that are neither in English nor Chinese. We trained a binary classifier that predicts the likelihood that a given webpage is suitable for inclusion as a reference within the Wikipedia corpus. The outcome of this stage is organized into distinct quality-based categories, and we retain exclusively the high quality groups, opting to discard the remaining groups in its entirety. Above we described our pre-processing pipeline for natural text. As for Github content, we employ an approach that is similar to (Together Computer, 2023). We have devised a collection of straightforward yet efficacious heuristics, encompassing criteria such as line length filtration and alphanumeric thresholds, designed to discern and exclude content of low quality. Our criteria are specifically oriented toward enhancing content quality, as opposed to merely curbing its volume. 
Notably, in contrast to prevailing practices that involve the wholesale removal of a significant portion of json, xml, yaml, and html content, we have made a deliberate choice to retain a judiciously proportionate representation of these data formats. Note that in pursuit of harmonizing the model's proficiency in both English and Chinese, we include in SkyPile a curated high-quality parallel corpora. This data is meticulously structured to pair a complete English paragraph with its corresponding Chinese counterpart, ensuring a seamless alignment of linguistic capabilities between the two languages. ### Training Data Composition Our Skywork-13B is pre-trained for 3.2 trillion tokens, sampled from SkyPile. Texts from certain sources are deemed as of high quality, e.g. Wikipedia, hence have undergone upsampling. However, we generally stick to the rule that the number of repetition does not exceed five, as is recommended by recent studies (Taylor et al., 2022; Muennighoff et al., 2023). We report in Table 1 a breakdown of the constituent components of the training tokens during Stage-1 pre-training. The training tokens are primarily composed of English and Chinese texts, constituting 49.8% and 39.6% of the data, respectively. Code contributes 8.0% to the total, with texts in other languages accounting for the remaining 2.4%. The category labeled as "miscellany" encompasses a diverse range of texts, including but not limited to, legal articles, court documents, company annual reports, and classical literature. ### Tokenizer We tokenize the data using byte-pair encoding (BPE) as implemented in SentencePiece (Kudo and Richardson, 2018), following the approach of LLaMA (Touvron et al., 2023). Since our model is intended to be English-Chinese bilingual, we extend the original vocabulary of LLaMA, which primarily consists of latin-based words and subwords, with frequently used Chinese characters and words. Specifically, we add 8000 single-character tokens from BERT's vocabulary (Devlin et al., 2019) to LLaMA's vocabulary. We further expand the vocabulary with 25k frequent Chinese multi-character words. This results in a total vocabulary size of 65,536 tokens, of which 17 are reserved as \begin{table} \begin{tabular}{c|l|c} \hline \hline & **Category** & **Percentage** \\ \hline \multirow{4}{*}{**English**} & Webpages & 39.8\% \\ & Books & 3.6\% \\ & Academic Papers & 3.0\% \\ & Encyclopedia & 0.5\% \\ & Miscellany & 2.9\% \\ \hline \multirow{4}{*}{**Chinese**} & Webpages & 30.4\% \\ & Social Media & 5.5\% \\ & Encyclopedia & 0.8\% \\ & Miscellany & 3.1\% \\ \hline **Other Lang.** & Encyclopedia & 2.4\% \\ \hline **Code** & Github & 8.0\% \\ \hline \hline \end{tabular} \end{table} Table 1: Breakdown of training data in Stage-1 pre-training of Skywork-13B. special symbols. As in LLaMA, we split all numbers into individual digits, and fall back to bytes to decompose unknown UTF-8 characters. ### Architecture Our Skywork-13B is based on the transformer architecture (Vaswani et al., 2017), consisting of stacks of transformer-decoder layers. In contrast to the original transformer model, we have incorporated several modifications, inspired by LLaMA (Touvron et al., 2023, 20). Our preliminary experiments, as illustrated in Figure 2, validate these changes, demonstrating the improved performance they confer. Details on this experiment can be found in Appendix A. 
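As a quick check that the deeper-but-narrower layout discussed next stays within roughly the same parameter budget as LLaMA2-13B, the following back-of-the-envelope sketch counts weights for the two configurations listed in Table 3 below. It uses the standard LLaMA-style block composition (\(4h^{2}\) attention projections plus \(3\,h\,d_{ffn}\) for the SwiGLU MLP) and ignores the small normalization terms, so the totals are approximate.

```python
# Back-of-the-envelope parameter count for the two configurations compared in
# Table 3, illustrating that the deeper-but-narrower Skywork-13B layout stays
# within roughly the same parameter budget as LLaMA2-13B. The count ignores
# norm weights and biases (LLaMA-style blocks have no biases), so it is only
# an approximation.

def approx_params(vocab, hidden, ffn, layers, tie_embeddings=False):
    attn = 4 * hidden * hidden          # Q, K, V and output projections
    mlp = 3 * hidden * ffn              # SwiGLU: gate, up and down projections
    block = attn + mlp
    emb = vocab * hidden * (1 if tie_embeddings else 2)  # input + output head
    return layers * block + emb

configs = {
    "LLaMA2-13B":  dict(vocab=32_000, hidden=5_120, ffn=13_696, layers=40),
    "Skywork-13B": dict(vocab=65_536, hidden=4_608, ffn=12_288, layers=52),
}
for name, cfg in configs.items():
    print(f"{name}: ~{approx_params(**cfg) / 1e9:.2f}B parameters")
```

Both configurations come out in the 13–14 billion parameter range, consistent with depth having been traded for width at roughly constant model size.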
While our network architecture takes after the LLaMA model to a great extent, there exists a notable difference in our preference for a deeper, yet narrower, network. A comparative exploration of the Skywork-13B and LLaMA2-13B network configurations is presented in Table 3. The specific modifications made are described in detail below. * **Positional Embedding:** We use Rotary Positional Embedding (RoPE) (Su et al., 2022), that was motivated by its extensive adoption in various prominent large language models, such as LLaMA and PaLM, as well as its demonstrated effectiveness in extending the length of context windows, as evidenced by recent studies (Chen et al., 2023; Roziere et al., 2023; Xiong et al., 2023). * **Layer Normalization:** We replaced the conventional layer normalization with RMSNorm (Zhang and Sennrich, 2019). Additionally, we adopted pre-normalization in each layer instead of post-normalization, which has been shown to enhance the training stability of transformer models. * **Activation:** We employed the SwiGLU activation function (Shazeer, 2020). In line with established conventions in prior studies, we reduced the dimension of the feed-forward network (FFN) from four times the hidden size to eight-thirds of the hidden size. This adjustment was made to maintain parity between the total parameters in a layer and those in the vanilla transformer layer. ### Infrastructure Our Skywork-13B is trained on a cluster of 64 NVIDIA-HGX-A800 nodes, a total of 512 A800-80G SXM GPUs. Each node in the cluster is outfitted with high-speed 400GB/s NVLinks \begin{table} \begin{tabular}{r l} \hline \hline \multicolumn{1}{c}{} & Category & Size \\ \hline Latin based words \& subwords & 32000 \\ Chinese characters \& Unicode symbols & 8000 \\ Chinese words & 25519 \\ Reserved symbols & 17 \\ \hline \multicolumn{1}{c}{} & **Total** & **65536** \\ \hline \hline \end{tabular} \end{table} Table 2: Breakdown of the vocabulary used in Skywork-13B. \begin{table} \begin{tabular}{r|r l} \hline \hline & LLaMA2-13B & Skywork-13B \\ \hline Vocab. Size & 32,000 & 65,536 \\ Hidden Dim. & 5,120 & 4,608 \\ FFN Dim. & 13,696 & 12,288 \\ Head Dim. & 128 & 128 \\ Num. Heads & 40 & 36 \\ Num. Layers & 40 & 52 \\ \hline Seq. Len. & 4,096 & 4,096 \\ \#Tokens per Batch & 4M & 16M \\ Peak LR & 3e-4 & 6e-4 \\ Minimum LR & 3e-5 & 6e-5 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparisons in architecture and important hyper-parameters of Skywork-13B and LLaMA2-13B. Figure 2: Preliminary Experiments: Comparison of conventional GPT architecture and more recent LLaMA architecture. For each of the two transformer variants, a model with 7 billion parameters is trained from Scratch on 200 Billion Tokens. The plot clearly shows that the LLaMA architecture achieves a lower training loss than GPT, demonstrating the former’s superiority. for intra-node communication and an 800Gb/s RoCE network for inter-node connectivity. Our training framework is based on Megatron-LM (Shoeybi et al., 2020) library, designed to support the stable, prolonged training of large-scale models, accommodating thousands of GPUs and model sizes in the order of hundreds of billions parameters. Considering the relatively moderate size of our Skywork-13B model, we have avoided the use of GPU memory optimization techniques and parallel schemes that could impede speed. 
These include Tensor Model Parallelism (Shoeybi et al., 2020), Sequence Parallelism (Korthikanti et al., 2022), ZeRO-Stage2 (Rajbhandari et al., 2020), and Checkpointing (Chen et al., 2016). Instead, we have leveraged Data Parallelism (DP) with ZeRO-1 (Rajbhandari et al., 2020) and Pipeline Parallelism (PP) (Narayanan et al., 2021) as the primary parallelization strategies for training Skywork-13B. ZeRO-1 substantially diminishes the GPU memory footprint of the Adam optimizer state without increasing the burden on intercommunication. Pipeline Parallelism offers memory optimization at a minimal communication overhead, which decreases as the gradient accumulation step increases, thereby mitigating the slowdown of all-reduce as DP Size increases. Regarding operator optimization, we adopted Flash Attention V2 (Dao et al., 2022; Dao, 2023), a strategy that both optimizes GPU memory and expedites the training process. Upon extensive preliminary experiments, we have decided to adopt the combination of DP256, PP2, and ZeRO-1 as our distributed training strategy for Skywork-13B. With this configuration, we achieved a token throughput of 1873 per GPU per second and a model flops utilization (MFU) of 56.5%. An overview of these experiments is provided in Appendix B. The training process of Skywork-13B spanned a total of 39 days. ### Training Details As outlined in Section 2.1, the pre-training of Skywork-13B is executed in two stages: * **Stage-1:** General purpose pre-training on SkyPile-Main. * **Stage-2:** STEM-oriented continual pre-training on SkyPile-STEM. In both stages, the model is trained using the standard auto-regressive language modeling objective, with context lengths fixed at 4096 tokens. The AdamW optimizer (Loshchilov and Hutter, 2019), applied for the training process, uses \(\beta_{1}\) and \(\beta_{2}\) values of 0.9 and 0.95, respectively. Throughout the pre-training, we applied a weight decay of 0.1 and gradient clipping of 1.0. Our model was trained with bfloat16 mixed precision. #### 3.6.1 Stage-1 Pre-training In the first stage, our Skywork-13B model is trained from scratch on SkyPile-Main for over three trillion tokens. This stage consists of two sequential training sessions, covering the first \(0\sim 2\)T tokens and the subsequent \(2\sim 3\)T tokens, respectively. Our initial plan was to train Skywork-13B for two trillion tokens. We launched a training session accordingly, with a cosine learning rate schedule that gradually decays from a peak learning rate of 6e\(-\)4 to a final learning rate of 6e\(-\)5. In Figure. 3, we report in red curves the evolution of language modeling losses and several benchmark results of our Skywork-13B during this session. It is evident that by the end of this session, the model had not reached saturation. We hypothesized that the model could further benefit from additional pre-training, prompting us to launch a secondary training session targeting an additional one trillion tokens. The second training session utilized a slightly different composition of training data compared to the initial \(0\sim 2\)T session, as data from certain sources had been depleted and fresh sources were introduced. Owing to the shift in the training distribution, we meticulously tuned the learning rate parameter, eventually deciding on a constant learning rate of 6e-5 for the \(2\sim 3\)T session. In Figure. 4, we illustrate the model losses under varying learning rate conditions. 
Results indicate that a higher learning rate leads to escalations in training loss which we deem too costly to reverse. The impact of the second training session is depicted in the blue curves of Fig. 3. The enhancement in the model's performance continues, albeit at a decelerating pace. Interestingly, although our Skywork-13B trails in the realm of English language modeling, it significantly surpasses all other comparable open LLMs in Chinese language modeling. In Section 4.3, we will confirm that the superiority of our Skywork-13B in Chinese language modeling is not only true on our validation set, it also holds true on a number of test sets sourced from diverse domains. More results can be found in the Appendix (see Figure 6).

Figure 3: Trajectory of important monitoring metrics during Stage-1 pre-training. Top Left: Training loss. Top Middle and Right: Validation loss on English and Chinese held-out sets of web texts. The horizontal dashed lines in the middle and right plots correspond to the evaluated language modeling loss for several similar-sized open LLMs. Bottom: Benchmark results on CEVAL, MMLU and GSM8K respectively. Stage-1 pre-training consists of two sequential training sessions, represented by different colors in the loss curves (red for session \(0\sim 2\)T and blue for session \(2\sim 3\)T).

#### 3.6.2 Stage-2 Pre-training

The primary aim of Stage-2 pre-training is to augment the model with capabilities pertinent to STEM disciplines. The data utilized in this stage comprises an approximate 20% from SkyPile-STEM and 80% from SkyPile-Main, amassing a total of roughly 130 billion tokens. A constant learning rate of 6e\(-\)5 is adopted, maintaining parity with the terminal learning rate used in Stage-1 pre-training.

Consequent to the data distribution shift from Stage-1 to Stage-2, it becomes crucial to meticulously calibrate the sampling ratio between the different data sources. Initial experiments revealed that a gradual increment in the SkyPile-STEM ratio yielded the most effective results. Therefore, for the actual Stage-2 pre-training phase, we implemented a sampling plan that commenced with 10% of SkyPile-STEM initially, gradually escalating to a peak of 40% towards the conclusion of the training. This training strategy proved successful in maintaining the stability of the model's language modeling validation loss while enabling an optimum transfer of STEM knowledge. The extended training period ensures a comprehensive assimilation of STEM-related knowledge into the model without causing significant disturbance to the pre-existing learned information. The impact of Stage-2 pre-training is illustrated in Figure 5, which presents the progression of the CEVAL benchmark score. The evolution of scores on other STEM-related benchmarks, such as GSM8K, mirrors a similar trend. Improvements in individual subjects of the CEVAL can be found in Table 12 (see appendix).

## 4 Evaluation

### Baselines

We compare the performance of our Skywork-13B with open models that are similar in size, including LLaMA-13B (Touvron et al., 2023a), LLaMA2-13B (Touvron et al., 2023b), Baichuan-13B, Baichuan2-13B (Baichuan Inc., 2023), Xverse-13B (Xverse-AI, 2023), InternLM-20B (InternLM Team, 2023). A summary of these models can be found in Table 4.

### Benchmark Evaluation

We focus on the following popular benchmarks:

* MMLU (Hendrycks et al., 2021): MMLU is a benchmark designed to measure knowledge acquired during pre-training.
The benchmark covers 57 subjects across STEM, the humanities, the social sciences, and more, ranging in difficulty from an elementary level to an advanced professional level. It tests both world knowledge and problem solving ability. * CEVAL (Huang et al., 2023) and CMMLU (Li et al., 2023a): Those are Chinese benchmarks that mimick MMLU. CEVAL consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. CMMLU covers 67 disciplines that span from elementary to advanced professional levels. * GSM8K (Cobbe et al., 2021): This dataset consists of 8500 high-quality grade school math word problems created by human writers. These multi-step problems require between 2 and 8 steps to solve. GSM8K is usually used in benchmarking multi-step mathematical reasoning ability of LLMs. In Table 5 we present a comparison of performance results from different models on these benchmarks. The metrics for CEVAL, CMMLU and MMLU are 5-shot accuracy, while for GSM8K it is 8-shot accuracy. Higher numbers indicate better performance. It can be seen that our Skywork-13B achieves the highest score on both the CEVAL and MMLU and \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & \#**Tokens** & **Language** \\ \hline OpenLLaMA-13B & 1.0T & English \\ LLaMA-13B & 1.0T & English \\ LLaMA2-13B & 2.0T & English \\ Baichuan-13B & 1.4T & English \& Chinese \\ Baichuan2-13B & 2.6T & English \& Chinese \\ Xverse-13B & 1.4T & English \& Chinese \\ InternLM-20B & 2.3T & English \& Chinese \\ \hline Skywork-13B & 3.2T & English \& Chinese \\ \hline \hline \end{tabular} \end{table} Table 4: Details of various models. The column labeled "#Tokens" indicates the quantity of training tokens used by each model, whereas the "Language" column specifies the primary languages supported by each model. Figure 4: Test runs for tuning the learning rate of the \(2\sim 3\)T training session. It can be seen that 6-5, which is the terminal learning rate from \(0\sim 2\)T training session, yields the best result. Figure 5: Evolution of CEVAL score during Stage-2 pre-training. GSM8K benchmarks, with scores of 60.6 and 62.1 and 55.8 respectively. On the CMMLU benchmark, Baichuan2-13B achieves the highest performance with a score of 62.0. In summary, our Skywork model has demonstrated exceptional performance across a diverse range of comprehensive benchmark tests. Results of individual subjects of the CEVAL can be found in Table 12. Results of other benchmarks can be found in Appendix C. ### Language Modeling Results #### 4.3.1 LM as a solution to benchmark overfitting Conventional benchmarks for evaluating LLMs often rely on static datasets of human-annotated examples. A core issue with this approach is that updating the test samples regularly is difficult and costly. Over time, the static test sets tend to be overfitted, producing misleading benchmark results. We propose language modeling evaluations as a compelling alternative. Perplexity in language modeling acts as a proxy metric strongly linked to performance on diverse downstream tasks (see Figure 1). Since language modeling solely requires unlabeled natural text, it eliminates the need for expensive human annotation. Constructing and revising language modeling test sets is low-cost, as new data can be readily sampled from newly published content. Additionally, if a test set becomes compromised, fresh test data can quickly be sampled as a replacement. 
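As a concrete illustration of this evaluation protocol, the sketch below computes the token-weighted average loss (and the corresponding perplexity) of a causal LM over a list of test documents. The model name and the example documents are placeholders; in practice the documents would be drawn from a freshly collected, domain-specific test set.

```python
# Minimal sketch of the language-modeling evaluation described above: average
# per-token cross-entropy (and perplexity) of a causal LM over a set of test
# documents. The model name and the documents are placeholders.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Skywork/Skywork-13B-base"  # placeholder; any causal LM works
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, trust_remote_code=True).to(device)
model.eval()

def domain_loss(documents, max_length=4096):
    """Token-weighted average LM loss over a list of raw-text documents."""
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for doc in documents:
            ids = tokenizer(doc, return_tensors="pt",
                            truncation=True, max_length=max_length).input_ids.to(device)
            if ids.size(1) < 2:
                continue
            out = model(ids, labels=ids)
            n = ids.size(1) - 1              # number of predicted tokens
            total_nll += out.loss.item() * n
            total_tokens += n
    return total_nll / max(total_tokens, 1)

docs = ["Example document sampled from a technology news site...",
        "Example document sampled from a finance column..."]
loss = domain_loss(docs)
print(f"avg loss = {loss:.3f}, perplexity = {math.exp(loss):.2f}")
```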
#### 4.3.2 Construction of diverse LM testsets We compare the language modeling capabilities of various language models with our Skywork-13B, focusing on Chinese language. To conduct a robust evaluation of language modeling capability, we have separately collected a diverse corpus of texts from a myriad of websites, each labeled according to its respective domain. The domains we cover span a wide spectrum, encompassing areas such as technology, movies, finance, to name a few. These domain-specific evaluation datasets have also been open-sourced for public access4. Footnote 4: Github: [https://github.com/SkyworkAI/Skywork/tree/main/data/eval_loss](https://github.com/SkyworkAI/Skywork/tree/main/data/eval_loss) We ensure that every test sample consists of documents or user posts published _after_ September 1, 2023. This cut-off date guarantees that no test sample was inadvertently included during the pre-training of any evaluated language model. Specifically, SkyPile's cut-off date is June 30, 2023, and the majority of models under evaluation were released prior to August 31. Note that while the held-out validation set used to monitor the training progress (as shown in Figure 3) of our model can also serve this purpose, it has the same distribution (web texts) as the bulk of the training corpus, thus may lead to overly optimistic estimate of the actual language modeling capability of the model. More details on the sources of the test samples and the underlying data collection pipeline can be found in Appendix D. #### 4.3.3 Results The results of our language modeling evaluation are presented in Table 6, where results from ChatGLM3-6B (THUDM, 2023), MOSS-7B (Sun and Qiu, 2023), Baichuan2-7B (Baichuan Inc., 2023), Qwen-7B (Qwen Team, 2023), InternLM-7B (InternLM Team, 2023) and Aquilla2-34B are also included. It can be seen that our Skywork-13B model shows the best performance overall, obtaining the lowest average perplexity score of 9.42. It also exhibits the best performance across individual domains, achieving the lowest perplexity scores in tech (11.58), movie (21.84), government (4.76), and finance (4.92) domains. It excels not only in surpassing the performance of models of a similar size, but also in outperforming significantly larger models such as InternLM-20B and Aquila2-34B. We attribute the excellent language modeling performance of our Skywork-13B to the quality of our training corpus. Details on rigorous data filtering pipeline are described in Section 3.1. ## 5 Discussion In this section, we delve into the benefits and associated risks of pre-training on the in-domain data5 of benchmark tasks. ### Effect of pre-training on in-domain data Pre-trained language models, or foundation models, are intended to be used in transfer learning as a general purpose backbone. As a foundation model in itself has little usage other than sentence completion, the quality of a foundation model is typically evaluated in terms of its performance in those tasks. Apparently, when it comes to improve a foundation model's quality as measured by its task performance, it is always far more efficient to train the model on in-domain data of that task Hernandez et al. (2021); Chung et al. (2022), as compared to general-purpose data (web texts). We have shown that Stage-2 pre-training significantly amplifies our Skywork-13B's STEM related capabilities, leading to a substantial improvement in performance on STEM-related tasks. 
Now we show that it is even possible to enhance a much weaker base model, i.e., an intermediate checkpoint, using only a fraction of the data and compute used in Stage-2 pre-training. Table 7 presents the CEVAL and GSM8K scores before and after pre-training on in-domain data, utilizing a relatively weak model checkpoint that has only undergone 0.5T pre-training. The results indicate that after pre-training with merely 1B tokens of in-domain \begin{table} \begin{tabular}{l|c c c c c|c|c} \hline \hline & **Tech** & **Movie** & **Gov.** & **Game** & **Finance** & **General** & **Average** \\ \hline ChatGLM3-6B & 12.48 & 23.48 & 5.07 & 18.45 & 5.67 & 7.47 & 10.25 \\ MOSS-7B & 20.83 & 39.66 & 11.08 & 31.24 & 10.59 & 13.25 & 18.50 \\ InternLM-7B & 13.43 & 24.9 & 5.88 & 19.78 & 6.17 & 8.10 & 11.17 \\ Qwen-7B & 13.39 & 25.16 & 5.55 & 19.26 & 5.76 & 7.78 & 10.83 \\ Baichuan2-7B & 12.89 & 23.26 & 5.34 & 18.36 & 5.68 & 7.62 & 10.41 \\ \hline LLaMA2-13B & 23.26 & 50.66 & 18.09 & 32.52 & 14.85 & 16.55 & 23.54 \\ Xverse-13B & 12.55 & 23.49 & 5.20 & 17.69 & 5.54 & 7.46 & 10.19 \\ Baichuan-13B & 12.38 & 22.46 & 5.21 & 17.59 & 5.42 & 7.37 & 10.03 \\ Baichuan2-13B & 12.14 & 21.85 & 5.05 & 17.15 & 5.35 & 7.24 & 9.81 \\ Qwen-14B & 11.90 & 22.43 & 4.89 & 16.94 & 5.24 & 7.03 & 9.67 \\ InternLM-20B & 12.34 & 22.06 & 5.75 & 17.45 & 5.73 & 7.78 & 10.34 \\ Aquila2-34B & 14.62 & 29.09 & 5.72 & 21.78 & 5.83 & 8.45 & 11.73 \\ \hline Skywork-13B & 11.58 & 21.84 & 4.76 & 17.28 & 4.92 & 6.82 & 9.42 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparative analysis of language modeling capabilities across diverse domains. Performance is measured using perplexity (lower values is better). Underlined figures correspond to the best result in each column. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **CEVAL** & **CMMLU** & **MMLU** & **GSM8K** \\ \hline OpenLLaMA-13B & 27.1 & 26.7 & 42.7 & 12.4 \\ LLaMA-13B & 35.5 & 31.2 & 46.9 & 17.8 \\ LLaMA-2-13B & 36.5 & 36.6 & 54.8 & 28.7 \\ Baichuan-13B & 52.4 & 55.3 & 51.6 & 26.6 \\ Baichuan2-13B & 58.1 & 62.0 & 59.2 & 52.8 \\ XVERSE-13B & 54.7 & - & 55.1 & - \\ InternLM-20B & 58.8 & - & 62.0 & 52.6 \\ \hline Skywork-13B & 60.6 & 61.8 & 62.1 & 55.8 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of results on popular benchmarks. Best result in each column is underlined. It can be seen that our Skywork-13B consistently perform well across the different benchmarks, indicating its overall robustness. data, a weak model, initially performing only slightly better than random at CEVAL and GSM8K, can surpass the performance of our strongest Skywork-13B (3T) backbone without in-domain pre-training. However, this comes at the cost of significant degradation in language modeling performance, as evidenced by the higher loss on both tasks, shown in the two rightmost columns of the table. ### Pre-training on in-domain data: a common practice? It is of interest to explore whether popular foundational models are pre-trained on in-domain data. In pursuit of this, we delve into the GSM8K datasets, equipped with official train/test splits and comprehensive solutions. We evaluate an LLM's language modeling loss on three datasets drawn from the same distribution: 1) The official GSM8K training set, 2) The official GSM8K test set, 3) A set composed of GSM8K-like samples generated by GPT-4. The corresponding losses are denoted as \(L_{train}\), \(L_{test}\), and \(L_{ref}\), respectively. 
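A sketch of this probe is given below; it reuses the `domain_loss` routine (and the loaded model and tokenizer) from the earlier language-modeling example, and the path of the GPT-4-generated reference file is hypothetical.

```python
# Sketch of the probe described above: compare LM loss on the GSM8K train
# split, test split, and a GPT-4-generated look-alike set. `domain_loss` is
# the per-document loss routine from the previous sketch; the reference-file
# path below is hypothetical, not an actual released artifact.
import json
from datasets import load_dataset

def gsm8k_samples(split):
    ds = load_dataset("gsm8k", "main", split=split)
    # A sample is the concatenation of question and answer, as in the paper.
    return [ex["question"] + "\n" + ex["answer"] for ex in ds]

train_docs = gsm8k_samples("train")
test_docs = gsm8k_samples("test")
with open("gsm8k_gpt4_reference.jsonl") as f:          # hypothetical path
    ref_docs = [json.loads(line)["text"] for line in f]

L_train = domain_loss(train_docs)
L_test = domain_loss(test_docs)
L_ref = domain_loss(ref_docs)

print(f"L_train={L_train:.3f}  L_test={L_test:.3f}  L_ref={L_ref:.3f}")
print(f"L_test - L_ref   = {L_test - L_ref:+.3f}   # strongly negative -> possible test leakage")
print(f"L_test - L_train = {L_test - L_train:+.3f}  # strongly positive -> overfitting on the train split")
```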
Theoretically, if a language model has not been exposed to any of the three datasets during pre-training, the three losses \(L_{train}\), \(L_{test}\), and \(L_{ref}\) should be approximately equivalent. However, if the model has been pre-trained on the training set or if the test data has been inadvertently exposed during the pre-training process, we would anticipate a notable discrepancy between \(L_{train}\), \(L_{test}\), and \(L_{ref}\). Our results are outlined in Table 8, which also reports the differences in losses \(\Delta_{1}=L_{test}-L_{ref}\) and \(\Delta_{2}=L_{test}-L_{train}\). Notably, the \(\Delta_{2}\) column reveals that for most models, the language modeling loss on the GSM8K training and test splits are almost identical. However, models such as ChatGLM3-6B, Baichuan2-13B, Qwen-7B/14B, and Aquila2-34B display markedly lower loss on the training split than on the test split. Consequently, we postulate that these models may have been considerably pre-trained on GSM8K training split or similar data. Moreover, we notice one particular anomaly in the \(\Delta_{1}\) column, indicating the significantly lower \(L_{test}\) loss compared to \(L_{ref}\), which is interesting to further study for better understanding. ### Pre-Training or Supervised Fine-Tuning? In the era preceding the advent of LLMs such as GPT-4 (Bubeck et al., 2023; OpenAI, 2023) and Claude (Bai et al., 2022), supervised data for NLP tasks was generally scarce. This was because the process of data collection and annotation was both time-consuming and costly. Due to the scarcity of supervised data, NLP researchers rely on unsupervised pre-training techniques (Mikolov et al., 2013; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) to improve downstream task performance via transfer learning, where supervised data is to be used only in the fine-tuning stage. In this context, pre-training on in-domain (supervised) data was pointless, as it would defeat the purpose of pre-training itself (transfer learning). This reality has significantly shifted, however, with the emergence of powerful LLMs. This is because procuring large amounts of high quality supervised/in-domain data is now as simple as making a few API requests to these LLMs, and it is comparatively low-cost (Wang et al., 2023; Taori et al., 2023). This new reality blurs the boundary between pre-training and supervised fine-tuning, making it feasible to incorporate substantial amounts of supervised data into the pre-training phase (Gunasekar et al., 2023; Li et al., 2023). After all, curated in-domain data, whether written by human annotators or generated by LLM, are all form of human knowledge, and there is good reason for this knowledge to be absorbed into a foundation model. That said, we believe that there is valid risk on the practice of targeted pre-training, in that it compromise fairness in benchmarking. While through pre-training on in-domain data a model \begin{table} \begin{tabular}{l|c c|c c} \hline \hline & CEVAL & GSM8K & En Loss & Zh Loss \\ \hline Before & 28.3 & 6.9 & 1.86 & 2.08 \\ After & 50.8 & 40.7 & 2.09 & 2.21 \\ \hline \(\Delta\) & +22.5 & +33.8 & +0.23 & +0.13 \\ \hline \hline \end{tabular} \end{table} Table 7: The impact of pre-training on a 0.5T checkpoint of Skywork-13B using only 1B tokens. The training data is sourced from a subset of our SkyPile-STEM corpus. The columns “En Loss” and “Zh Loss” show the model’s validation loss on held-out sets of English and Chinese web texts, respectively. 
may excel at specific tasks, it remains uncertain how well it would perform on unseen tasks. Its capabilities may be overestimated based on the benchmark alone, which can lead to unfair comparisons between models and mislead users or stakeholders about the true capabilities of the model. ## 6 Limitation Our pre-training approach for Skywork-13B involved a two-stage process: general purpose pre-training followed by domain-specific enhancement pre-training. However, it remains unclear whether this methodology can produce a model on par with, or superior to, a model trained in one stage on a mixed corpus. Further investigation is needed to determine the comparative effectiveness of these pre-training approaches. Additionally, we have proposed using language modeling loss or perplexity as proxy metrics for monitoring and evaluating large language models. A limitation is that language modeling evaluation relies on the specific distribution used to sample test data, of which there are infinite possibilities. While language modeling perplexity over a given data distribution may predict performance on some tasks, it may not translate to other tasks. The correlation between language modeling and downstream performance could vary across different distributions and tasks. ## 7 Conclusion Our work on Skywork-13B represents a significant leap forward in the development of open large language models. We believe that our comprehensive and transparent approach to the model's development will be a valuable resource for researchers in the field, fostering collaboration and open-source principles. Our two-stage training methodology, leveraging a segmented corpus, offers a novel approach for enhancing model capability in specific domain, while our method of monitoring the training progress provides a practical solution to the challenges of tracking the improvement of these models over time. However, our work is more than just the creation of a new LLM. It is a call to action for the broader NLP community, urging a return to \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \hline & \(L_{test}\) & \(L_{train}\) & \(L_{ref}\) & \(\Delta_{1}\) & \(\Delta_{2}\) \\ \hline ChatGLM3-6B & 0.99 & 0.78 & 0.99 & 0.0 & 0.21 \\ MOSS-7B & 1.51 & 1.52 & 1.49 & 0.02 & \(-\)0.01 \\ InternLM-7B & 1.21 & 1.12 & 1.27 & -0.06 & 0.09 \\ Qwen-7B & 1.07 & 0.64 & 1.10 & -0.03 & 0.43 \\ Baichuan2-7B & 1.41 & 1.42 & 1.36 & 0.05 & \(-\)0.01 \\ \hline LLaMA-13B & 1.41 & 1.42 & 1.36 & 0.05 & \(-\)0.01 \\ LLaMA2-13B & 1.36 & 1.38 & 1.33 & 0.03 & \(-\)0.01 \\ Xverse-13B & 1.42 & 1.43 & 1.39 & 0.03 & \(-\)0.01 \\ Baichuan-13B & 1.41 & 1.42 & 1.37 & 0.04 & \(-\)0.01 \\ Baichuan2-13B & 1.09 & 0.72 & 1.12 & -0.03 & 0.37 \\ Qwen-14B & 1.03 & 0.42 & 1.14 & -0.11 & 0.61 \\ InternLM-20B & 1.20 & 1.09 & 1.19 & 0.01 & 0.11 \\ Aquila2-34B & 0.78 & 0.39 & 1.29 & 0.51 & 0.39 \\ \hline Skywork-13B & 1.01 & 0.97 & 1.00 & 0.01 & 0.04 \\ \hline \hline \end{tabular} \end{table} Table 8: We evaluate the language modeling (LM) loss on samples (a sample is a concatenation of question and answer) from GSM8K dataset for several foundation models. For each LLM, we compare LM loss on the training split (\(L_{train}\)), the test split (\(L_{test}\)), and a specially curated reference set (\(L_{ref}\)), generated by GPT-4, designed to mimic the GSM8K dataset. 
We also report two key metrics: \(\Delta_{1}=L_{test}-L_{ref}\), serving as an indicator of potential test data leakage during the training of the LLM, i.e., a lower value suggests possible leakage; and \(\Delta_{2}=L_{test}-L_{train}\), which measures the degree of overfitting on the training split of the dataset. A higher value of \(\Delta_{2}\) implies excessive overfitting. Outliers for both \(\Delta_{1}\) and \(\Delta_{2}\) are highlighted in gray.

the principles of fairness, transparency, and the sharing of ideas that have historically fueled progress in the field. We hope that Skywork-13B will not only serve as a powerful tool for a wide range of applications but also inspire a renewed commitment to openness and cooperation in the development of future models.
2306.04390
Gain assisted controllable fast light generation in cavity magnomechanics
We study the controllable output field generation from a cavity magnomechanical resonator system that consists of two coupled microwave resonators. The first cavity interacts with a ferromagnetic yttrium iron garnet (YIG) sphere providing the magnon-photon coupling. Under the passive cavities configuration, the system displays high absorption, prohibiting output transmission even though the dispersive response is anomalous. We replace the second passive cavity with an active one to overcome high absorption, producing an effective gain in the system. We show that the deformation of the YIG sphere retains the anomalous dispersion. Further, tuning the exchange interaction strength between the two resonators leads to the system's effective gain and dispersive response. As a result, the advancement associated with the amplification of the probe pulse can be controlled in the close vicinity of the magnomechanical resonance. Furthermore, we find the existence of an upper bound for the intensity amplification and the advancement of the probe pulse that comes from the stability condition. These findings may find potential applications for controlling light propagation in cavity magnomechanics.
Sanket Das, Subhadeep Chakraborty, Tarak N. Dey
2023-06-07T12:42:26Z
http://arxiv.org/abs/2306.04390v1
# Gain assisted controllable fast light generation in cavity magnomechanics ###### Abstract We study the controllable output field generation from a cavity magnomechanical resonator system that consists of two coupled microwave resonators. The first cavity interacts with a ferromagnetic yttrium iron garnet (YIG) sphere providing the magnon-photon coupling. Under passive cavities configuration, the system displays high absorption, prohibiting output transmission even though the dispersive response is anomalous. We replace the second passive cavity with an active one to overcome high absorption, producing an effective gain in the system. We show that the deformation of the YIG sphere retains the anomalous dispersion. Further, tuning the exchange interaction strength between the two resonators leads to the system's effective gain and dispersive response. As a result, the advancement associated with the amplification of the probe pulse can be controlled in the close vicinity of the magnomechanical resonance. Furthermore, we find the existence of an upper bound for the intensity amplification and the advancement of the probe pulse that comes from the stability condition. These findings may find potential applications for controlling light propagation in cavity magnomechanics. ## I Introduction Cavity magnonics [1; 2], has become an actively pursued field of research due to its potential application in quantum information processing [3; 4]. The key constituent to such systems is a ferrimagnetic insulator with high spin density and low damping rate. It also supports quantized magnetization modes, namely, the magnons [5; 6]. With strongly coupled magnon-photon modes, cavity magnonics is an excellent platform for studying all the strong-coupling cavity QED effects [7]. Besides originating from the shape deformation of the YIG, the magnon can also couple to a vibrational or phonon mode [5]. This combined setup of magnon-photon-phonon modes, namely the cavity magnomechanics, has already demonstrated magnomechanically induced transparency [5], magnon-induced dynamical backaction [8], magnon-photon-phonon entanglement [9; 10], squeezed state generation [11], magnomechanical storage and retrieval of a quantum state [12]. Recently, \(\mathcal{PT}\)-symmetry drew extensive attention to elucidate the dynamics of a coupled system characterized by gain and loss [13; 14]. Here, \(\mathcal{P}\) stands for the parity operation, that results in an interchange between the two constituent modes of the system. The time reversal operator \(\mathcal{T}\) takes \(i\) to \(-i\). \(\mathcal{PT}\)-symmetry demands the Hamiltonian is commutative with the joint \(\mathcal{PT}\) operators _i.e.,_\([H,PT]=0\). This system possesses a spectrum of entirely real and imaginary eigenvalues that retain distinguishable characteristics [15]. The point separating these two eigenvalues is the exceptional point (EP) [16] where the two eigenvalues coalesce, and the system degenerates. A natural testbed for \(\mathcal{PT}\)-symmetric Hamiltonian is optical as well as quantum optical systems [17; 18; 19] which already led to the demonstration of some of the exotic phenomena, like nonreciprocal light propagation [20], unidirectional invisibility [21; 22], optical sensing and light stopping [23]. Very recently, a tremendous effort has been initiated to explore non-Hermitian physics in magnon assisted hybrid quantum systems. 
The second-order exceptional point is detected in a two-mode cavity-magnonic system, where the gain of the cavity mode is achieved by using the idea of coherent perfect absorption [24]. The concept of Anti-\(\mathcal{PT}\) symmetry has been realized experimentally [25], where the adiabatic elimination of the cavity field produces dissipative coupling between two magnon modes. Beyond the unique spectral responses, these non-Hermitian systems can manipulate the output microwave field transmission [26; 27]. The underlying mechanism behind such an application is magnetically induced transparency [5; 28], where the strong magnon-photon coupling produces a narrow spectral hole inside the probe absorption spectrum. Further studies in this direction establish the importance of the weak magnon-phonon coupling to create double transmission windows separated by an absorption peak. Moreover, manipulating the absorption spectrum is also possible by varying the amplitude and phase of the applied magnetic field [29]. It is well established over the past decade that optomechanically induced transparency (OMIT) [30; 31; 32] is an essential tool for investigating slow light [33] and light storage [34; 35] in cavities. In addition, incorporating \(\mathcal{PT}\)-symmetry in optomechanical systems provides better controllability of light transmission [36; 37] and produces subluminal to superluminal light conversion. Nonetheless, such proposals may face experimental challenges, as the gain of the auxiliary cavity can lead the whole system to instability [38]. An eminent advantage of the magnomechanical system over the optomechanical system is that it offers strong hybridization between the magnon and photon modes. The magnomechanical systems offer better tunability as an external magnetic field can vary the magnon frequency. Exploiting these advantages, a \(\mathcal{PT}\)-symmetry-like magnomechanical system can be constructed by resonantly driving the YIG sphere to an active magnon mode [39]. Controllable sideband generation with a tunable group delay is then feasible by changing the power of the control field. This paper investigates a controllable advancement and transmission of the microwave field from a coupled cavity magnomechanical system. Optical coupling between a passive cavity resonator containing a YIG sphere and a gain-assisted auxiliary cavity forms a coupled cavity resonator. An external drive has been used to deform the YIG sphere's shape, resulting in the magnon-phonon interaction in the passive cavity. We show how the gain of the auxiliary cavity helps to overcome absorptive behaviour in our hybrid system. As a result, the output microwave field amplifies at the resonance condition. Moreover, the weak magnon-phonon interaction exhibits anomalous dispersion accompanied by a gain spectrum, demonstrating superluminal light. We also examine how the slope of the dispersion curve can be controlled by tuning the photon hopping interaction strength between the two cavities. The paper is organized as follows. In Section II, a theoretical model for the compound cavity magnomechanical system with a \(\mathcal{PT}\)-symmetric resonator is described. The Heisenberg equations of motion that govern the expectation values of the operators of each subsystem are derived in this section. In Section III.1, we analyse the stability criteria of the model system and examine the effect of the auxiliary cavity gain on the absorptive and dispersive response of the system in Section III.2.
Section III.3 discusses the output probe field transmission. Further, the group velocity of the optical probe pulse has been studied analytically and verified numerically in Section III.4. Finally, we draw our conclusions in Section IV. ## II Theoretical model Recently, there has been a growing interest in realizing a gain in different components of cavity magnonics systems [24, 39]. In this work, we investigate the effect of medium gain on the probe response and its transmission. The system under consideration is a hybrid cavity magnomechanical system that consists of two coupled microwave cavity resonators. One of the resonators is passive and contains a YIG sphere inside it. We refer to this resonator as a cavity magnomechanical (CMM) resonator. Applying a uniform bias magnetic field to the YIG sphere excites the magnon mode. The magnon mode, in turn, couples with the cavity field by the magnetic-dipole interaction. Nonetheless, the external bias magnetic field results in shape deformation of the YIG sphere, leading to the magnon-phonon interaction. The second resonator (degenerate with the first one) is coupled to the first resonator via optical tunnelling at a rate \(J\). Two input fields drive the first resonator. The amplitude of the control, \(\varepsilon_{l}\), and probe fields, \(\varepsilon_{p}\), are given by \(\varepsilon_{i}=\sqrt{P_{i}/\hbar\omega_{i}},(i\in l,p)\) with \(P_{i}\) and \(\omega_{i}\) being the power and frequency of the respective input fields. The Hamiltonian of the combined system can be written as \[H=\hbar\omega_{e}a_{1}^{\dagger}a_{1}+\hbar\omega_{e}a_{2}^{ \dagger}a_{2}+\hbar\omega_{m}m^{\dagger}m+\hbar\omega_{b}b^{\dagger}b\] \[+\hbar J(a_{1}^{\dagger}a_{2}+a_{2}^{\dagger}a_{1})+\hbar g_{ma}( a_{1}^{\dagger}m+a_{1}m^{\dagger})\] \[+\hbar g_{mb}m^{\dagger}m(b^{\dagger}+b)+i\hbar\sqrt{2\eta_{a} \kappa_{1}}\varepsilon_{l}(a_{1}^{\dagger}e^{-i\omega_{l}t}-a_{1}e^{i\omega_{ l}t})\] \[+i\hbar\sqrt{2\eta_{a}\kappa_{1}}\varepsilon_{p}(a_{1}^{\dagger }e^{-i\omega_{p}t}-a_{1}e^{i\omega_{p}t}), \tag{1}\] where the first four terms of the Hamiltonian describe the free energy associated with each system's constituents. The constituents of our model are characterized by their respective resonance frequencies: \(\omega_{c}\) for the cavity mode, \(\omega_{m}\) for the magnon mode, \(\omega_{b}\) for the phonon mode. The annihilation operators for the cavity, magnon and phonon modes are represented by \(a_{i}\), \((i=1,2)\), \(m\) and \(b\), respectively. The fifth term signifies the photon exchange interaction between the two cavities with strength, \(J\). The sixth term of the Hamiltonian corresponds to the interaction between the magnon and photon modes, characterized by a coupling rate \(g_{ma}\). The interaction between the magnon and phonon modes is described by the seventh term of the Hamiltonian and the coupling rate between magnon and phonon mode is \(g_{mb}\). Finally, the last two terms arise due to the interaction between the cavity field and two input fields. The cavity, magnon and phonon decay rates are characterized by \(\kappa_{1},\kappa_{m}\) and \(\kappa_{b}\), respectively. The coupling between the CMM resonator and the output port is given by \(\eta_{a}=\kappa_{c_{1}}/2\kappa_{1}\), where Figure 1: The schematic diagram of a hybrid cavity magnomechanical system. The system consists of two coupled microwave cavities. One of them is passive, and another one is active. The passive cavity contains a ferromagnetic YIG sphere inside it. 
The applied bias magnetic field produces the magnetostrictive interaction between magnon and phonon. The coupling rates between the magnon-photon and magnon-phonon are \(g_{ma}\) and \(g_{mb}\), respectively. Strong control field of frequency \(\omega_{l}\) and a weak probe field of frequency \(\omega_{p}\) are applied to the passive cavity. \(\kappa_{c_{1}}\) is the cavity external decay rate. In particular, we will consider the CMM resonator to be working in the critical-coupling regime where \(\eta_{a}\) is \(1/2\). At this point, it is convenient to move to a frame rotating at \(\omega_{l}\). Following the transformation \(H_{rot}=RHR^{\dagger}+i\hbar\left(\partial R/\partial t\right)R^{\dagger}\) with \(R=e^{i\omega_{l}(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}+m^{\dagger}m)t}\), the Hamiltonian in Eq. (1) can be rewritten as \[H_{rot} =\hbar\Delta_{a}(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2})+\hbar \Delta_{m}m^{\dagger}m+\hbar\omega_{b}b^{\dagger}b\] \[+\hbar J(a_{1}^{\dagger}a_{2}+a_{2}^{\dagger}a_{1})+\hbar g_{ma} (a_{1}^{\dagger}m+a_{1}m^{\dagger})\] \[+\hbar g_{mb}m^{\dagger}m(b^{\dagger}+b)+i\hbar\sqrt{2\eta_{a} \kappa_{1}}\varepsilon_{l}(a_{1}^{\dagger}-a_{1})\] \[+i\hbar\sqrt{2\eta_{a}\kappa_{1}}\varepsilon_{p}(a_{1}^{\dagger }e^{-i\delta t}-h.c), \tag{2}\] where \(\Delta_{a}=\omega_{c}-\omega_{l}\) (\(\Delta_{m}=\omega_{m}-\omega_{l}\)) and \(\delta=\omega_{p}-\omega_{l}\) are, respectively, the cavity (magnon) and probe detuning. The mean response of the system can be obtained by the Heisenberg - Langevin equation as \(\langle\hat{\mathcal{O}}\rangle=i/\hbar\langle[H_{rot},\mathcal{O}]\rangle+ \langle N\rangle\). Further, we consider the quantum fluctuations (\(N\)) as white noise. Then starting form Eq. 2, the equations of motion of the system can be expressed as \[\langle a_{1}\rangle =(-i\Delta_{a}-\kappa_{1})\langle a_{1}\rangle-ig_{ma}\langle m \rangle-iJ\langle a_{2}\rangle\] \[+\sqrt{2\eta_{a}\kappa_{1}}\varepsilon_{l}+\sqrt{2\eta_{a}\kappa _{1}}\varepsilon_{p}e^{-i\delta t},\] \[\langle\dot{m}\rangle =(-i\Delta_{m}-\kappa_{m})\langle m\rangle-ig_{ma}\langle a_{1}\rangle\] \[-ig_{mb}\langle m\rangle(\langle b^{\dagger}\rangle+\langle b \rangle),\] \[\langle\dot{b}\rangle =(-i\omega_{b}-\kappa_{b})\langle b\rangle-ig_{mb}\langle m^{ \dagger}\rangle\langle m\rangle,\] \[\langle\dot{a_{2}}\rangle =(-i\Delta_{a}+\kappa_{2})\langle a_{2}\rangle-iJ\langle a_{1}\rangle, \tag{3}\] where \(\kappa_{2}\) and \(\kappa_{b}\) respectively denote the gain of the second resonator and phonon damping rates. We note that \(\kappa_{2}>0\) corresponds to a coupled passive-active CMM resonators system and \(\kappa_{2}<0\) describes a passive-passive coupled CMM resonators system. Assuming the control field amplitude \(\varepsilon_{l}\) to be larger than the probe field \(\varepsilon_{p}\), each operator expectation values \(\langle\mathcal{O}(t)\rangle\) can be decomposed into its steady-state values \(\mathcal{O}_{s}\) and a small fluctuating term \(\delta\mathcal{O}(t)\). The steady-state values of each operator are \[a_{1s} =\frac{(-i\Delta_{a}+\kappa_{2})(-ig_{ma}m_{s}+\sqrt{2\eta_{a} \kappa_{1}}\varepsilon_{l})}{(i\Delta_{a}+\kappa_{1})(-i\Delta_{a}+\kappa_{2} )-J^{2}}, \tag{4a}\] \[m_{s} =\frac{-ig_{ma}a_{1s}}{i\Delta_{m}^{\prime}+\kappa_{m}},\] (4b) \[b_{s} =\frac{-ig_{mb}|m_{s}|^{2}}{i\omega_{b}+\kappa_{b}},\] (4c) \[a_{2s} =\frac{iJa_{1s}}{(-i\Delta_{a}+\kappa_{2})}. \tag{4d}\] While the fluctuating parts of Eq. 
3 can be expressed as \[\delta\dot{a}_{1} =-\left(i\Delta_{a}+\kappa_{1}\right)\delta a_{1}-iJ\delta a_{2}- ig_{ma}\delta m\] \[+\sqrt{2\eta_{a}\kappa_{1}}\varepsilon_{p}e^{-i\delta t},\] \[\delta\dot{m} =-(i\Delta_{m}^{\prime}+\kappa_{m})\delta m-ig_{ma}\delta a_{1}- iG\delta b-iG\delta b^{\dagger},\] \[\delta\dot{b} =-\left(i\omega_{b}+\kappa_{b}\right)\delta b-iG\delta m^{\dagger }-iG^{*}\delta m,\] \[\delta\dot{a}_{2} =-\left(i\Delta_{a}-\kappa_{2}\right)\delta a_{2}-iJ\delta a_{1}, \tag{5}\] where \(\Delta_{m}^{\prime}=\Delta_{m}+g_{mb}(b_{s}+b_{s}^{*})\) is the effective magnon detuning and \(G=g_{mb}m_{s}\) is the enhanced magnon-phonon coupling strength. For simplicity, we express these fluctuation equations as \[i\frac{d}{dt}|\psi\rangle=H_{eff}|\psi\rangle+F, \tag{6}\] where the fluctuation vector \(|\psi\rangle=(\delta a_{1},\delta a_{1}^{\dagger},\delta a_{2},\delta a_{2}^{ \dagger},\delta b,\delta b^{\dagger},\delta m,\delta m^{\dagger})^{T}\), input field \(F=(\sqrt{2\eta_{a}\kappa_{1}}\varepsilon_{p}e^{-i\delta t},\sqrt{2\eta_{a} \kappa_{1}}\varepsilon_{p}e^{i\delta t},0,0,0,0,0,0)^{T}\). Next, we adopt the following ansatz to solve Eq. 5: \[\delta a_{1}(t) =A_{1+}e^{-i\delta t}+A_{1-}e^{i\delta t},\] \[\delta m(t) =M_{+}e^{-i\delta t}+M_{-}e^{i\delta t}\] \[\delta b(t) =B_{+}e^{-i\delta t}+B_{-}e^{i\delta t},\] \[\delta a_{2}(t) =A_{2+}e^{-i\delta t}+A_{2-}e^{i\delta t}. \tag{7}\] Here \(A_{i+}\) and \(A_{i-}\) correspond to the \(i^{th}\) cavity generated probe field amplitude and the four-wave mixing field amplitude, respectively. By considering \(h_{1}=-i\Delta_{a}+i\delta-\kappa_{1},\ h_{2}=-i\Delta_{a}-i\delta-\kappa_{1}, \ h_{3}=-i\Delta_{a}+i\delta+\kappa_{2},\ h_{4}=-i\Delta_{a}-i\delta+\kappa_{2}, \ h_{5}=-i\omega_{b}+i\delta-\kappa_{b},\ h_{6}=-i\omega_{b}-i\delta-\kappa_{b}, \ h_{7}=-i\Delta_{m}^{\prime}+i\delta-\kappa_{m},\ h_{8}=-i\Delta_{m}^{\prime}-i \delta-\kappa_{m}\), we obtain \(A_{1+}\) which corresponds to the output probe field amplitude from the CMM resonator as \[A_{1+}(\delta)=\frac{C(\delta)}{D(\delta)}, \tag{8}\] where \[C(\delta)=-\sqrt{2\eta_{a}\kappa_{a}}\varepsilon_{p}h_{3}(h_{5} h_{7}h_{6}^{*}(J^{2}h_{8}^{*}+h_{4}^{*}(g_{ma}^{2}+h_{2}^{*}h_{8}^{*}))\] \[+|G|^{2}(h_{5}-h_{6}^{*})(J^{2}(h_{7}-h_{8}^{*})-h_{4}^{*}(g_{ma} ^{2}+h_{2}^{2}(h_{8}^{*}-h_{7}^{*})))),\] \[D(\delta)=h_{5}h_{6}^{*}(g_{ma}^{2}h_{3}+h_{7}(h_{1}h_{3}+J^{2}))\] \[(J^{2}h_{8}^{*}+h_{4}^{*}(g_{ma}^{2}+h_{2}^{*}h_{8}^{*}))+|G|^{2 }(h_{5}-h_{6}^{*})\] \[(J^{2}(g_{ma}^{2}h_{3}+(h_{1}h_{3}+J^{2})(h_{7}-h_{8}^{*}))-h_{4} ^{*}\] \[((h_{1}h_{3}+J^{2})(g_{ma}^{2}-h_{2}^{*}(h_{7}-h_{8}^{*}))-h_{2} ^{*}h_{3}g_{ma}^{2})). \tag{9}\] The output field from the CMM resonator is obtained by the cavity input-output relation \[\varepsilon_{out}=\sqrt{2\eta_{a}\kappa_{1}}\langle a_{1}\rangle- \varepsilon_{l}-\varepsilon_{p}e^{-i\delta t}. \tag{10}\] By substituting Eq. 7 into Eq. 10, we obtain the normalized output probe field intensity from the CMM resonator as \[T=|t_{p}|^{2}=\left|\frac{\sqrt{2\eta_{a}\kappa_{1}}A_{1+}}{ \varepsilon_{p}}-1\right|^{2}. \tag{11}\] In order to numerically simulate the transmitted output probe field spectrum, we use the following experimentally realizable set of parameter values [5; 40]. The degenerate microwave cavities of frequency \(\omega_{c}/2\pi=7.86\) GHz. The decay rate of the first cavity is \(\kappa_{1}/\pi=3.35\) MHz. The spin density \(\rho=4.22\times 10^{27}\) m\({}^{-3}\) and the diameter of the YIG sphere \(D=25\)\(\mu\)m. 
It results in \(3\times 10^{16}\) number of spins (\(N_{m}\)) present in the YIG sphere. The phonon mode has frequency \(\omega_{b}/2\pi=11.42\) MHz with decay rate \(\kappa_{b}/\pi=300\) Hz, and the magnon-phonon coupling strength \(g_{mb}/2\pi\) is 1 Hz. The Kittel mode frequency of the YIG sphere is \(\omega_{m}=\gamma_{e}B_{0,i}\), with gyromagnetic ratio, \(\gamma_{e}/2\pi=28\) GHz/T and \(B_{0,i}\) is the input bias magnetic field amplitude. The magnon decay rate is \(\kappa_{m}=3.52\) MHz. Magnon-photon coupling strength \(g_{ma}=\gamma_{e}B_{vac}\sqrt{5N_{m}}/2\) can be controlled by changing the vacuum magnetic field amplitude as \(B_{vac}=\sqrt{2\pi\hbar\omega_{c}/V}\). ## III Results ### Stability Analysis Initially we consider the two coupled cavities which are operating under a balanced gain-loss condition. The Hamiltonian describing such coupled resonator system (\(g_{ma}=g_{mb}=0\)) can be written as \[H_{cav} = \hbar(\Delta_{a}-i\kappa_{1})\delta a_{1}^{\dagger}\delta a_{1}+ \hbar(\Delta_{a}+i\kappa_{1})\delta a_{2}^{\dagger}\delta a_{2} \tag{12}\] \[+ \hbar J(\delta a_{1}^{\dagger}\delta a_{2}+\delta a_{2}^{\dagger }\delta a_{1}).\] The eigenvalues of \(H_{cav}\) are \(\lambda_{\pm}=\Delta_{a}\pm\sqrt{J^{2}-\kappa_{1}^{2}}\). Note that the above Hamiltonian remains invariant under the simultaneous parity \(\mathcal{P}:a_{1}\leftrightarrow a_{2}\) and time-reversal operation \(\mathcal{T}:i\rightarrow-i\) operations, and, its eigenvalues are entirely real and complex for \(J>\kappa_{1}\) and \(J<\kappa_{2}\). The point \(J=\kappa_{1}\), which marks this transition from \(\mathcal{PT}\) symmetric to the \(\mathcal{PT}\) breaking phase, is known as the exceptional point (EP). One must understand the competitive behaviour between the inter-cavity field coupling and the loss/gain rates to get insight into this transition. For \(J>\kappa_{1}\) the intracavity field amplitudes can be coherently exchanged and thus give rise to a coherent oscillation between the field amplitudes. However, for \(J<\kappa_{1}\) the intracavity field can not be transferred to the other one, resulting in a strong field localization or in other words exponential growth. A quick look at Eq. 4(a) also suggests such gain-induced dynamic instability in \(a_{1}\) at \(J=\kappa_{1}\) for \(\Delta_{a}=0\). This situation becomes more complicated in the presence of magnon-photon coupling. Now, the combined system (\(g_{ma},g_{mb}\neq 0\)) ceases to become \(\mathcal{PT}\) symmetric. However, the effect of an additional gain cavity (\(\kappa_{2}>0\)) can be understood by analyzing the stability diagram of the whole system. In the following, we derive the stability condition by invoking the Routh-Hurwitz criterion which requires all the eigenvalues of \(H_{eff}\) have negative real parts. The magenta region of Fig. 2 suggests that when \(g_{ma}\) is small the instability threshold remains close to the \(J=\kappa_{1}\) (the conventional EP for a binary \(\mathcal{PT}\) symmetric system). However, with increasing \(g_{ma}\) the system reaches instability at a larger exchange interaction \(J\). Such a restriction over the choice of the photon exchange rate parameter \(J\) will be followed throughout this paper. ### Absorption and dispersion spectrum The magnomechanical system under consideration corresponds to the level diagram of Fig. 3. Application of a probe field excites the passive cavity mode and allows the transition between \(|1\rangle\) and \(|2\rangle\). 
The exchange interaction, \(J\), couples two degenerate excited states \(|2\rangle\) and \(|5\rangle\). The presence of the strong control field distributes the population between the two states \(|2\rangle\) and \(|3\rangle\). The magnon-phonon coupling, \(g_{mb}\), couples both the metastable ground states \(|3\rangle\) and \(|4\rangle\). Here, we consider both the microwave cavities to be passive (\(\kappa_{2}<0\)) under a weak magnon-photon coupling strength, \(g_{ma}\). In this situation, the magnon-photon hybridization is insignificant. The absorptive and dispersive response can be quantified by the real and imaginary components of \((t_{p}+1)\) that will be presented as \(\alpha\) and \(\beta\), respectively. In Fig. 4(a), we present the absorptive response of the system as a function of normalized probe detuning. The black-solid curve depicts a broad absorption spectrum of the probe field when the exchange interaction is much weaker than the cavity decay rate. One can explain it by considering the level diagram of Fig. 3, where the initial population stays in the ground state \(|1\rangle\). Applying a probe field transfers the population from the ground state to the excited state, \(|2\rangle\). In addition, the weak magnon Figure 2: The stable and unstable regions are determined as a function of normalized evanescent coupling strength (\(J/\kappa_{1}\)) and the cavity-magnon coupling strength (\(g_{ma}\)) when the loss of the CMM-resonator is perfectly balanced by the gain of the auxiliary cavity (\(\kappa_{1}=\kappa_{2}\)). We consider the control field intensity to be 10 mW. The other parameters are \(\omega_{c}=2\pi\times 7.86\) MHz, \(\omega_{b}=2\pi\times 11.42\) MHz, \(\Delta_{a}=\Delta_{m}^{\prime}=\omega_{b}=2\pi\times 11.42\) MHz, \(\kappa_{1}=\kappa_{2}=\pi\times 3.35\) MHz, \(\kappa_{m}=\pi\times 1.12\) MHz, \(\kappa_{b}=\pi\times 300\) Hz, and \(g_{mb}=2\pi\) Hz. photon coupling (with respect to \(\kappa_{1}\)) restricts a significant transition from \(|2\rangle\) to \(|3\rangle\). As a result, it allows the transfer of a fraction of the excited state's population by invoking the exchange interaction \(J\). The increase in the exchange interaction strength causes a gradual decrease in \(|2\rangle\)'s population. It reduces the absorption coefficient around the resonance condition except for \(\delta=\omega_{b}\). This phenomenon is shown by the red-dashed and the blue dotted-dashed curve of Fig. 4(a). We observe a narrow absorption peak inside the broad absorption peak for \(J=0.4\)\(\kappa_{1}\). The sharp absorption peak, exactly at \(\delta=\omega_{b}\), occurs due to the magnomechanical resonance. Further increasing the exchange interaction virtually cuts off the population distribution from \(|2\rangle\) to \(|3\rangle\). As a result, the effect of magnon-phonon resonance also decreases, and the absorption peak at \(\delta=\omega_{b}\) eventually diminishes. In Fig. 4(b), we present the dispersion spectrum as a function of normalized detuning \(\delta/\omega_{b}\). For the time being, we neglect the effect of magnomechanical coupling and observe the occurrence of anomalous dispersion around \(\delta=\omega_{b}\) for \(J=0.4\)\(\kappa_{1}\). Further, increasing the exchange interaction strength more significant than the cavity decay rate can alter the dispersive response from anomalous to normal, as shown by the red-dashed and blue-dot-dashed curves. In the inset of Fig. 
4(b), we plot the slope of the temporal dispersion \(d\beta/d\delta\) in the close vicinity of the magnon-phonon resonance condition. Positive values of the slope of the temporal dispersion signify anomalous dispersion due to the magnomechanical coupling. However, the steepness of the dispersion curve can be reduced by increasing the exchange interaction strength, as shown by the red-dashed and blue dot-dashed curves. Note that this dispersion curve is accompanied by absorption. Output transmission of the probe field is prohibited in the presence of strong absorption. Therefore, reducing absorption or introducing gain to the system is mandatory for observing the group velocity phenomena. To achieve reasonable transmission at the output, we replace the auxiliary passive cavity with an active one, where the second cavity's gain (\(\kappa_{2}>0\)) completely balances the first cavity's loss. In this scenario, the stability criterion for the hybrid system allows us to consider the exchange interaction strength \(J\) to be greater than \(1.053\)\(\kappa_{1}\) for \(g_{ma}=2\) MHz. We present the absorptive response of the model system in Fig. 5(a). The black solid curve of Fig. 5(a) illustrates the occurrence of a double absorption peak spectrally separated by a broad gain regime. The spectral shape is determined by the roots of \(D(\delta)\), which are, in general, complex. The real parts of the roots determine the spectral peak positions, and the imaginary parts correspond to their widths. To illustrate this, we consider \(J=1.30\)\(\kappa_{1}\) with all other parameters remaining the same as earlier. The real parts of the roots of \(D(\delta)\) give two distinct normal-mode positions at \(\delta/\omega_{b}\) values of \(0.88\) and \(1.12\). The other two normal modes are spectrally located at the same position \(\delta/\omega_{b}=1\). The interference between these two normal modes becomes significant while approaching the stability bound, as depicted by the red-dashed and blue dot-dashed curves of Fig. 5(a). In turn, it reduces the overall gain of the composite system.

Figure 3: Level diagram of the model system. \(|n_{i}\rangle\), \(|m\rangle\) and \(|b\rangle\) represent the photon number state of the \(i^{th}\) cavity, the magnon mode and the phonon mode, respectively. The application of the strong control field to the CMM resonator couples \(|n_{1}+1,n_{2},m,b\rangle\) and \(|n_{1},n_{2},m+1,b\rangle\), whereas the presence of a weak probe field increases the photon number of the CMM resonator by unity. \(g_{mb}\) couples \(|n_{1},n_{2},m+1,b\rangle\) and \(|n_{1},n_{2},m,b+1\rangle\). The hopping interaction between the two cavities directly couples \(|n_{1}+1,n_{2},m,b\rangle\leftrightarrow|n_{1},n_{2}+1,m,b\rangle\).

Figure 4: (a) Absorption and (b) dispersion spectrum of the model system. The slope of the dispersion curve is shown in the inset. Here both microwave cavities are considered passive, with identical decay rates (\(\kappa_{1}=-\kappa_{2}\)). The magnon-photon coupling strength \(g_{ma}\) is taken as 2 MHz. All the other parameters are mentioned earlier.

Further, we investigate the effect of a gain-assisted auxiliary cavity on the medium's dispersive response in Fig. 5(b). For \(J=1.30\ \kappa_{1}\), the two absorption peaks produce two distinct anomalous dispersion regions separated by a broad normal dispersive window.
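For reference, the quantities plotted in these figures, the absorption \(\alpha\), the dispersion \(\beta\), and the slope \(d\beta/d\delta\) shown in the insets, can be extracted from any sampled transmission coefficient as in the brief sketch below; the transmission used there is only a stand-in, since the explicit form of \(t_{p}\) from Eq. 11 is not repeated here.

```python
import numpy as np

def absorption_dispersion(delta, t_p):
    """alpha = Re(t_p + 1), beta = Im(t_p + 1), and the dispersion slope d(beta)/d(delta)."""
    alpha = np.real(t_p + 1)
    beta = np.imag(t_p + 1)
    slope = np.gradient(beta, delta)
    return alpha, beta, slope

# Placeholder transmission (NOT Eq. 11): a narrow feature on top of a broad response,
# loosely mimicking the magnomechanical resonance at delta = omega_b.
omega_b = 2 * np.pi * 11.42e6
delta = np.linspace(0.8, 1.2, 2001) * omega_b
broad = -0.6 / (1j * (delta - omega_b) / (0.1 * omega_b) + 1.0)
narrow = 0.3 / (1j * (delta - omega_b) / (1e-3 * omega_b) + 1.0)

alpha, beta, slope = absorption_dispersion(delta, broad + narrow)
i0 = np.argmin(np.abs(delta - omega_b))
print("alpha, beta, d(beta)/d(delta) at delta = omega_b:", alpha[i0], beta[i0], slope[i0])
```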
Weakening the exchange interaction strength reveals prominent normal dispersion around the resonance condition except for \(\delta=\omega_{b}\), and the window shrinks. In the inset of Fig. 5(b), we present the slope of the dispersive response due to the magnomechanical resonance. The black solid curve of Fig. 5(b) suggests the occurrence of anomalous dispersion at the magnon-phonon resonance condition. Moreover, one can increase the steepness of the dispersion curve by simply approaching the instability threshold, as delineated by the red-dashed and blue dot-dashed curves of the inset of Fig. 5(b).

Figure 5: (a) Absorption and (b) dispersion spectrum of the model system. The slope of the dispersion curve is shown in the inset. Here we consider the second cavity as a gain cavity, with \(\kappa_{2}=\kappa_{1}\). All the other parameters are the same as before.

In the following section, we discuss how the change in the dispersion curve can produce a controllable group velocity of the light pulses through the medium and investigate the role of the exchange interaction. ### Output probe transmission The output probe intensity from the system depends on its absorptive response. Equation 11 dictates the transmission of the probe field, and the result is presented in Fig. 6.

Figure 6: Normalized output probe transmission for different exchange interaction strengths \(J\), plotted as a function of the normalized detuning between the control and the probe field when (a) both cavities are passive, and (b) one is active and the other is passive.

For Fig. 6(a), we consider both microwave cavities as passive ones with identical decay rates, _i.e.,_ \(\kappa_{1}=-\kappa_{2}\). The black solid curve shows a broad absorptive response for \(J=0.40\ \kappa_{1}\). Increasing the exchange interaction strength causes a gradual enhancement in the output probe transmission, as delineated by the red-dashed and blue dot-dashed curves of Fig. 6(a), and the absorption window splits into two parts. A closer look confirms the presence of an extremely weak transmission dip exactly at \(\delta=\omega_{b}\) for all three exchange interaction strengths under consideration. In Fig. 6(b), we present the advantage of using a gain-assisted auxiliary cavity along with a CMM resonator to obtain a controllable amplification of the output probe field. We begin our discussion considering the photon hopping interaction \(J=1.30\ \kappa_{1}\). The black solid curve of Fig. 6(b) gives a normalized probe transmission of \(6.03\). Here the normalization is done with respect to the input probe field intensity. By decreasing the parameter \(J\), we approach the unstable region and observe the occurrence of a double transmission peak separated by a sharp and narrow transmission dip. The amplitude of the double transmission peak corresponds to probe amplification by a factor of 830, as presented by the blue dot-dashed curve. At the resonance condition \(\delta=\omega_{b}\), however, the output probe field is amplified by a factor of 67. The physics behind the probe field amplification can be understood as follows: the gain introduced in the second cavity compensates a portion of the losses in the first cavity through \(J\). This leads to an enhanced field amplitude in the first cavity. In the presence of moderate magnon-photon coupling, this increases the effective magnon-photon coupling strength.
Hence, we observe a higher transmission at the two sidebands but also find a large transmission dip at \(\delta=\omega_{b}\). ### Group delay Controllable group delay has gained much attention due to its potential application in quantum information processing and communication. The dispersive nature of the medium is the key to controlling the group delay of the light pulse under the assumption of low absorption or gain. A pulse of finite temporal width is produced by superposing several independent waves of different frequencies centered around a carrier frequency (\(\omega_{s}\)). The difference in propagation time between free space and the medium over the same length defines the group delay. The analytical expression for the group delay can be constructed by considering the envelope of the optical pulse as \[f(t_{0})=\int_{-\infty}^{\infty}\tilde{f}(\omega)e^{-i\omega t_{0}}d\omega,\] where \(\tilde{f}(\omega)\) corresponds to the envelope function in the frequency domain. Accordingly, the reflected output probe pulse can be expressed as \[f^{R}(t_{0})= \int_{-\infty}^{\infty}t_{p}(\omega)\tilde{f}(\omega)e^{-i\omega t _{0}}d\omega, \tag{13}\] \[= e^{-i\omega_{s}t_{0}}\int_{-\infty}^{\infty}t_{p}(\omega_{s}+ \delta)\tilde{f}(\omega_{s}+\delta)e^{-i\delta t_{0}}d\delta,\] \[= t_{p}(\omega_{s})e^{-i\omega_{s}\tau_{g}}f(t_{0}-\tau_{g}). \tag{14}\] This expression can be obtained by expanding \(t_{p}(\omega_{s}+\delta)\) in the vicinity of \(\omega_{s}\) by a Taylor series and keeping the terms up to first order in \(\delta\). An expression for the group delay is obtained as [41, 31] \[\tau_{g}=\mathrm{Re}\left[\frac{-i}{t_{p}(\omega_{s})}\left(\frac{dt_{p}}{d \omega}\right)\bigg{|}_{\omega_{s}}\right], \tag{15}\] which can be further simplified as \[\tau_{g}=\frac{(\alpha(\omega_{s})-1)\frac{d\beta}{d\omega}\bigg{|}_{\omega_{ s}}-\beta(\omega_{s})\frac{d\alpha}{d\omega}\bigg{|}_{\omega_{s}}}{|t_{p}( \omega_{s})|^{2}}. \tag{16}\] From Eq. 16, the slopes of the medium's absorption and dispersion curves determine the probe pulse delay or advancement. However, Fig. 5(b) suggests that the value of \(\beta\) is negligibly small near the magnomechanical resonance. Hence, the group delay depends on the first term of the numerator of Eq. 16. In Fig. 7, we examine the effect of the photon-photon exchange interaction on the probe pulse propagation delay when both cavities operate under the balanced gain-loss condition. The system produces anomalous dispersion accompanied by a gain response. The black solid curve of Fig. 7 depicts a probe pulse advancement of 2.4 \(\mu\)s for the photon hopping interaction strength \(J=1.3\)\(\kappa_{1}\). Moreover, one can enhance the effective gain and the slope of the anomalous dispersion curve by approaching the instability threshold. That, in turn, brings out the superluminal behaviour of the output probe pulse, characterized by an advancement of 17.9 \(\mu\)s, as shown by the blue dot-dashed curve of Fig. 7.

Figure 7: Time delay of the probe pulse for different evanescent coupling strengths \(J\), plotted against the normalized probe detuning \(\delta/\omega_{b}\); the control power is 10 mW. All other parameters are the same as in Fig. 5.

To verify the above results, we consider a Gaussian probe pulse with a finite width around the resonance condition, _i.e.,_ \(\delta=\omega_{b}\), and numerically integrate it by using Eq. 13. The shape of the input envelope is taken as \[\tilde{f}(\omega)=\frac{\varepsilon_{p}}{\sqrt{\pi\Gamma^{2}}}e^{-\frac{( \omega-\omega_{b})^{2}}{\Gamma^{2}}}, \tag{17}\] where \(\Gamma\) is the spectral width of the optical pulse.
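A minimal numerical sketch of this procedure is shown below: the Gaussian envelope of Eq. (17) is multiplied by a transmission coefficient and transformed back to the time domain as in Eq. (13), and the shift of the output peak relative to the input gives the delay or advancement. The transmission used here is only a placeholder with gain at resonance (the paper's \(t_p\) from Eq. 11 is not reproduced), and the conversion of the quoted 7.17 kHz width to angular frequency is an assumption.

```python
import numpy as np

omega_b = 2 * np.pi * 11.42e6          # phonon frequency (rad/s), as in the text
Gamma   = 2 * np.pi * 7.17e3           # assumed spectral width of the probe pulse (rad/s)
gain_w  = 2 * np.pi * 30e3             # width of a stand-in Lorentzian gain feature (rad/s)

delta = np.linspace(-20, 20, 1601) * Gamma       # detuning grid around omega_b
f_in  = np.exp(-delta**2 / Gamma**2)             # Gaussian envelope, Eq. (17)
# Placeholder transmission with gain at resonance (NOT the paper's Eq. 11):
t_p   = 1.0 + 2.5 * (gain_w / 2) / (1j * delta + gain_w / 2)

t0 = np.linspace(-200e-6, 200e-6, 1001)          # time grid (s)
kernel = np.exp(-1j * np.outer(t0, delta))       # e^{-i delta t0}, Eq. (13)
pulse_in  = np.abs(kernel @ f_in) ** 2
pulse_out = np.abs(kernel @ (t_p * f_in)) ** 2

shift = t0[np.argmax(pulse_out)] - t0[np.argmax(pulse_in)]
print(f"peak shift: {shift * 1e6:.2f} us  (negative = advancement)")
print(f"peak amplification: {pulse_out.max() / pulse_in.max():.2f}")
```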
We consider \(\Gamma\) to be 7.17 kHz, such that the Gaussian envelope is well contained inside the gain window around the resonance condition (\(\delta=\omega_{b}\)), as depicted in Fig. 5(a). The dispersive, absorptive, and gain responses of the system can be demonstrated by examining the effect of the probe transmission coefficient (\(t_{p}\)) on the shape of the input envelope. The gain of the auxiliary cavity manipulates the probe transmission coefficient in such a way that it amplifies the intensity of the output probe pulse. The black solid curve depicts an output probe pulse amplification of 6.2 for a photon-hopping interaction strength \(J=1.30\)\(\kappa_{1}\). A decrease in the \(J\) value gradually enhances the effective gain in the system and amplifies the output probe transmission, as presented by the red-dashed and blue dot-dashed curves of Fig. 8. We observe that the output field amplification can reach a factor of 65.3 for an exchange interaction strength of 1.07 \(\kappa_{1}\). Further decreasing the exchange interaction will lead to dynamical instability in our model system. Interestingly, the temporal width of the probe pulse is almost unaltered during the propagation through the magnon-assisted double-cavity system. This numerical result agrees with our analytical results for the output probe transmission, as shown in Fig. 6(b). Moreover, the influence of the photon-photon exchange interaction on the probe pulse advancement can be observed from the inset of Fig. 8. The peak separation between the input pulse (\(t=0\)) and the output pulse for \(J=1.30\)\(\kappa_{1}\) gives a probe pulse advancement of 2.34 \(\mu\)s. The red-dashed and blue dash-dotted curves of the inset give probe pulse advancements of 8.75 \(\mu\)s and 13.30 \(\mu\)s for \(J=1.10\)\(\kappa_{1}\) and 1.07 \(\kappa_{1}\), respectively. ## IV Conclusion In conclusion, we have theoretically investigated the controllable output field transmission from a critically coupled cavity magnomechanical system. We drive the first cavity, which contains a YIG sphere, establishing the magnon-photon coupling. The photon exchange interaction connects the second microwave cavity with the first. An external magnetic field induces the deformation effect of the YIG sphere. In this study, the interaction between the magnon and photon modes lies in the weak coupling regime. The medium becomes highly absorptive when both cavities are passive, and the output probe transmission is prohibited. We introduce gain to the auxiliary cavity to overcome this limitation. It is noteworthy that the instability threshold must be close to the conventional exceptional point for a binary \(\mathcal{PT}\)-symmetric system. At the magnomechanical resonance, the auxiliary cavity produces an effective gain associated with anomalous dispersion. Further, decreasing the photon exchange interaction strength causes a gradual enhancement of the effective gain and the steepness of the dispersion spectrum. As a result, we observe controllable superluminal microwave pulse propagation associated with amplification by a factor of 67.
By studying the propagation dynamics of a Gaussian probe pulse of spectral width 7.17 kHz, we confirm that the numerical results are consistent with the analytical ones. Our study may find applications in weak-signal sensing and communication in the newly emerging field of cavity magnomechanics.
2308.15902
Photonic time-delayed reservoir computing based on series coupled microring resonators with high memory capacity
On-chip microring resonators (MRRs) have been proposed to construct the time-delayed reservoir computing (RC), which offers promising configurations available for computation with high scalability, high-density computing, and easy fabrication. A single MRR, however, is inadequate to supply enough memory for the computational task with diverse memory requirements. Large memory needs are met by the MRR with optical feedback waveguide, but at the expense of its large footprint. In the structure, the ultra-long optical feedback waveguide substantially limits the scalable photonic RC integrated designs. In this paper, a time-delayed RC is proposed by utilizing a silicon-based nonlinear MRR in conjunction with an array of linear MRRs. These linear MRRs possess a high quality factor, providing sufficient memory capacity for the entire system. We quantitatively analyze and assess the proposed RC structure's performance on three classical tasks with diverse memory requirements, i.e., the Narma 10, Mackey-Glass, and Santa Fe chaotic timeseries prediction tasks. The proposed system exhibits comparable performance to the MRR with an ultra-long optical feedback waveguide-based system when it comes to handling the Narma 10 task, which requires a significant memory capacity. Nevertheless, the overall length of these linear MRRs is significantly smaller, by three orders of magnitude, compared to the ultra-long feedback waveguide in the MRR with optical feedback waveguide-based system. The compactness of this structure has significant implications for the scalability and seamless integration of photonic RC.
Yijia Li, Ming Li, MingYi Gao, Chang-Ling Zou, Chun-Hua Dong, Jin Lu, Yali Qin, XiaoNiu Yang, Qi Xuan, Hongliang Ren
2023-08-30T09:10:39Z
http://arxiv.org/abs/2308.15902v1
Photonic time-delayed reservoir computing based on series coupled microring resonators with high memory capacity ###### Abstract On-chip microring resonators (MRRs) have been proposed to construct the time-delayed reservoir computing (RC), which offers promising configurations available for computation with high scalability, high-density computing, and easy fabrication. A single MRR, however, is inadequate to supply enough memory for the computational task with diverse memory requirements. Large memory needs are met by the MRR with optical feedback waveguide, but at the expense of its large footprint. In the structure, the ultra-long optical feedback waveguide substantially limits the scalable photonic RC integrated designs. In this paper, a time-delayed RC is proposed by utilizing a silicon-based nonlinear MRR in conjunction with an array of linear MRRs. These linear MRRs possess a high quality factor, providing sufficient memory capacity for the entire system. We quantitatively analyze and assess the proposed RC structure's performance on three classical tasks with diverse memory requirements, i.e., the Narma 10, Mackey-Glass, and Santa Fe chaotic timeseries prediction tasks. The proposed system exhibits comparable performance to the MRR with an ultra-long optical feedback waveguide-based system when it comes to handling the Narma 10 task, which requires a significant memory capacity. Nevertheless, the overall length of these linear MRRs is significantly smaller, by three orders of magnitude, compared to the ultra-long feedback waveguide in the MRR with optical feedback waveguide-based system. The compactness of this structure has significant implications for the scalability and seamless integration of photonic RC. ## 1 Introduction Machine learning (ML) is a method that leverages data to improve performance on challenging problems that are often beyond human capabilities [1]. The use of ML has exploded over the past two decades in various areas, including recommendation systems, autonomous driving, and image, speech, and text processing. A class of ML algorithms known as artificial neural networks (ANNs) is based on how the human brain functions. Recurrent neural networks (RNNs) are a special kind of ANN used to handle time-dependent input [2]. However, training RNNs is notoriously challenging due to the vanishing and exploding gradient problems. The objective function of an RNN model with many hyper-parameters is extremely time-consuming to optimize [2,3]. Reservoir computing (RC) is a computational framework, originating from RNN models, that enables high-speed machine learning. It can balance training complexity and performance, and has recently attracted the attention of many researchers [4-6]. In RC, a set of input signals is mapped into a higher-dimensional computational space using a nonlinear dynamical system called a reservoir. By exploiting temporal correlations in the data, RC performs better than traditional feedforward neural networks. Only the nodes in the readout layer are trained, while the reservoir is made up of a network of thousands of nodes sparsely connected by fixed random weights. As a result, the entire training process is linear, allowing RC to maintain good performance with low complexity.
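For readers unfamiliar with the framework, the following minimal echo-state-network sketch (a generic software reservoir, not the photonic system proposed in this paper) illustrates the point that only the linear readout is trained while the recurrent weights stay fixed; the task, sizes and scaling factors are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 2000                        # reservoir size, sequence length
W_in = rng.uniform(-0.5, 0.5, N)        # fixed random input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

u = rng.uniform(0, 0.5, T)              # input sequence
y = np.roll(u, 3)                       # toy target: recall the input 3 steps back

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                      # reservoir update (fixed, never trained)
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Only this readout is trained (ridge regression), which keeps training linear.
lam = 1e-4
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ y)
pred = states @ W_out
print("training NMSE:", np.mean((pred - y) ** 2) / np.var(y))
```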
However, in conventional RC systems, reservoir nonlinearity is frequently provided by a sizable number of nonlinear nodes, which makes hardware implementation extremely challenging and results in complicated systems [7-8]. To solve this problem, RC has been extended to physical systems that are continuous in space and/or time rather than networks in the traditional sense [7-8]. Photonic RC has the advantages of ultra-high operating bandwidth, low power consumption, and parallel computing via signal multiplexing, which provide an ideal foundation for hardware acceleration [9-12]. Overall, photonic RC systems are classified into spatial and time-delayed node reservoirs [13]. The structure of the former is comparable to that of an RNN whose nodes are spatially distributed and interconnected. Therefore, there are numerous photonic hardware requirements [14,15]. The time-delayed node reservoir comprises just one computing-related physical node and a number of time-multiplexed virtual nodes connected to an internal time-delayed feedback loop [16-20]. To implement RC, nonlinear components like photodetectors, semiconductor lasers, and electro-optic modulators have been incorporated into an optoelectronic delayed feedback loop [21-27]. These systems operate at GHz rates and can support thousands of virtual nodes. Photonic RC has achieved considerable success in various applications, including chaotic time series prediction, image and speech recognition, nonlinear channel equalization, and chromatic dispersion compensation in optical communication systems [14,28-31]. To meet the increasing demand for processing capability, time-delayed node reservoirs must greatly increase the number of virtual nodes and the complexity of their connectivity. However, scaling these parameters based on optoelectronic splitting systems becomes increasingly impractical for these time-delayed node architectures [32,33]. As a result, integrated photonic RC designs are the subject of current research. The MRR is one of the fundamental building blocks of integrated photonic devices, and has been employed to construct a time-delayed node reservoir because it exhibits a variety of rich nonlinear dynamical features [34,35]. Using silicon-on-insulator (SOI) MRRs as nonlinear nodes, a 4\(\times\)4 swirl reservoir topology has been theoretically established to perform a traditional nonlinear Boolean problem in Ref. [36]. Through the use of its own linear and nonlinear memory capacities as well as virtual nodes created by time multiplexing, a single silicon-based MRR has also demonstrated its capacity to carry out tasks that need memory [37]. To create time-delayed reservoir computers, a single silicon-based MRR with an external optical feedback waveguide has recently been proposed [38]. The addition of an optical feedback waveguide allows for a large increase in the system's linear memory capacity (MC). To achieve a good memory capacity, the length of the feedback waveguide in the optical feedback system is optimized at about 20 cm. With the optical feedback waveguide, the system is superior to a single silicon-based MRR for tasks requiring large amounts of memory. Nevertheless, this waveguide length is far longer than the microring diameter, and such a large footprint makes scalability and photonic integration of RC systems quite difficult.
In this article, we present a silicon-based main cavity coupled in series to an array of linear cavities to build a time-delayed RC with a large MC. In this approach, the reservoir is emulated by using a single main cavity as the physical nonlinear node; time multiplexing of its response yields a distributed set of virtual nodes along the delayed feedback loop. In this study, the system's MC is greatly improved by means of an array of series-coupled linear cavities. We numerically analyze and assess the proposed RC's performance on three classical tasks that require different compromises between nonlinearity and memory. When the task requires more memory than a single MRR can give, the proposed system dramatically improves computing performance. The article is structured as follows. Section 2 describes the model of the main cavity coupled in series to a number of linear cavities. In section 3, we describe its implementation in a time-delayed RC scheme. In section 4, we examine and explain the numerical results obtained for the Narma 10, Mackey-Glass, and Santa Fe chaotic timeseries prediction challenges. Finally, in section 5, we conduct a tolerance analysis of the proposed single-SCMRRs-10 configuration's performance in the NARMA 10 task. While the issue of resonant wavelength drift is inevitable, current technologies demonstrate a certain degree of adaptability to tolerate specific stochastic errors. It is noteworthy that viable solutions do exist for addressing the resonant wavelength drift arising from manufacturing imperfections. ## 2 Series coupled microring resonators Figure 1 illustrates two configurations of series-coupled microring resonators (SCMRRs) for building time-delayed reservoir computers.

Figure 1: Schematic diagram of two types of SCMRRs designed for constructing time-delayed reservoir computers. (a) Schematic diagram of the single SCMRRs, where only one array of series-coupled MRRs is coupled to the main cavity. (b) Schematic diagram of the bilateral SCMRRs. The structure is composed of two arrays of series-coupled linear cavities, which are interconnected with the main cavity. Within each array the linear cavities share the same resonance wavelength, but the two arrays differ in resonance wavelength.

As an all-pass filter structure, both configurations consist of a waveguide and series-coupled MRRs side coupled to the waveguide. Among these series-coupled MRRs, only the MRR that is directly coupled to the waveguide exhibits nonlinear behavior, while the others manifest only linear behavior. The former is referred to as the main cavity, and the latter are called the linear cavity array in this paper. In the configuration of Fig. 1(a), only one array of series-coupled MRRs is coupled to the main cavity, which is referred to as the single SCMRRs. The structure is excited by the power signal \(E_{in}\) at the entrance to the input waveguide. By leveraging nonlinear coupled mode theory [38], we describe separately the time evolution of the light wave amplitude, free carrier density, and temperature within the main cavity, as follows, \[\frac{dU(t)}{dt}=[i(\omega_{0}(t)-\omega_{p})-\gamma(t)]U(t)+i\mu E_{in}(t)+i \mu U_{1}(t) \tag{1}\] \[\frac{d\mathbf{\Delta}N(t)}{dt}=-\frac{\mathbf{\Delta}N(t)}{\tau_{FC}}+G_{TPA}\left|U(t )\right|^{4} \tag{2}\] \[\frac{d\mathbf{\Delta}T(t)}{dt}=-\frac{\mathbf{\Delta}T(t)}{\tau_{TH}}+\frac{P_{abs}}{ M_{ring}c_{Si}}. \tag{3}\] Here, in Eq. (1), \(U(t)\) represents the light wave amplitude in the main cavity, \(U_{1}(t)\) is the light wave amplitude in the linear MRR adjacent to the main cavity, \(\gamma(t)\) is the total loss rate,
and \(\mu\), \(\mu_{1}\) are the coupling coefficients between the main cavity and the straight waveguide or the linear MRR of its nearest neighbor, respectively. The two coupling coefficients satisfy \(\mu=\sqrt{\gamma_{e1}}\) and \(\mu_{1}=\sqrt{\gamma_{e2}/\Gamma_{rl}}\), where \(\gamma_{e1}\), \(\gamma_{e2}\) represent, respectively, the main cavity's extrinsic loss rates due to coupling to the waveguide and to the linear MRR of its nearest neighbor, and \(\Gamma_{rl}\) is the time it takes for the light to travel one round-trip in the main cavity. The generation of free carriers in the main cavity is attributed to two-photon absorption (TPA); its production rate is denoted by \(G_{TPA}\) and its recombination lifetime by \(\tau_{FC}\). Both the production and the recombination of free carriers are accompanied by phonon emission in the silicon-based waveguide, leading to a consequent change (\(\Delta T\)) in its mode-averaged temperature. Eq. (3) follows from Newton's law of cooling, where \(\tau_{TH}\) represents the thermal decay time due to heat dissipation in the surrounding medium, \(P_{abs}\) is the power absorbed by the heated material, \(M_{ring}\) is the mass of the main cavity, and \(c_{Si}\) is the heat capacity of the silicon-based waveguide. The refractive index of the main cavity is adjusted by thermal and free carrier variations. This in turn changes the parameters \(\omega_{0}(t)\) and \(\gamma(t)\) in Eq. (1). The first term \(\omega_{0}(t)\) is expressed as \(\omega_{0}(t)=\omega_{0}+\delta\omega_{nl}(t)\), where \(\omega_{0}\) is the resonance frequency in the absence of nonlinearity, and \(\delta\omega_{nl}(t)\) is the resonance frequency shift due to the nonlinear contribution. In this system, the nonlinear shift \(\delta\omega_{nl}(t)\) can be written as \[\delta\omega_{nl}(t)=-\frac{\Gamma_{c}\omega_{0}}{n_{Si}}\left(\frac{dn_{Si}}{dT}\,\Delta T(t)+\frac{dn_{Si}}{dN}\,\Delta N(t)\right), \tag{4}\] where \(\Gamma_{c}\) represents the modal confinement factor and \(n_{Si}\) is the refractive index of silicon. The second term \(\gamma(t)\) contains the loss rate \(\gamma_{l}\) in the linear condition and the loss rates due to TPA and free carrier absorption (FCA): \[\gamma(t)=\gamma_{l}+\eta_{FCA}\Delta N(t)+\eta_{TPA}|U(t)|^{2}. \tag{5}\] In Eq. (5), \(\gamma_{l}=\gamma_{rl}+\gamma_{e1}\), where \(\gamma_{rl}\) represents the intrinsic loss rate of the main cavity, and \(\eta_{FCA},\eta_{TPA}\) represent, respectively, the efficiency of FCA and TPA. When the main cavity is operated in a linear state, we have \(\delta\omega_{nl}(t)=0\) and \(\gamma(t)=\gamma_{l}\). Then, the characteristic timescale of the main cavity in the absence of nonlinearity is set by its photon lifetime \(\tau_{ph}=\gamma_{l}^{-1}\). When the nonlinear state is induced by TPA, two additional timescales (\(\tau_{FC}\) and \(\tau_{TH}\)) govern the dynamic evolution of the system. \(\tau_{FC}\) is about two orders of magnitude smaller than \(\tau_{TH}\).
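A minimal sketch of how Eqs. (1)-(5) can be integrated for a single nonlinear main cavity is given below; the linear cavity array is omitted (the \(U_{1}\) feedback term is dropped), and all material constants are placeholder values, since the actual parameter table is in the paper's supplement and is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters for illustration only; see the paper's supplement for real values.
gamma_l  = 1.0 / 97e-12      # total linear loss rate ~ 1/photon lifetime (1/s)
gamma_e1 = 0.5 * gamma_l     # assumed extrinsic coupling rate to the bus waveguide
mu       = np.sqrt(gamma_e1) # waveguide coupling coefficient (paper's convention)
tau_FC, tau_TH = 3e-9, 83.3e-9            # free-carrier and thermal lifetimes (s)
G_TPA, eta_FCA, eta_TPA = 1e8, 1e7, 1e8    # assumed nonlinear coefficients
c_T, c_N = -1e2, 1e8          # assumed lumped thermal / free-carrier frequency-shift factors
p_heat   = 1e10               # assumed P_abs/(M_ring*c_Si) per unit |U|^2
detuning = -2 * np.pi * 1e9   # omega_0 - omega_p (rad/s), assumed
E_in     = 1e5                # constant drive amplitude, assumed

def rhs(t, s):
    """State s = [Re U, Im U, Delta N, Delta T]; Eqs. (1)-(3) with (4)-(5)."""
    U = s[0] + 1j * s[1]
    dN, dT = s[2], s[3]
    d_omega = c_T * dT + c_N * dN                            # Eq. (4), lumped coefficients
    gamma = gamma_l + eta_FCA * dN + eta_TPA * abs(U) ** 2   # Eq. (5)
    dU = (1j * (detuning + d_omega) - gamma) * U + 1j * mu * E_in
    ddN = -dN / tau_FC + G_TPA * abs(U) ** 4
    ddT = -dT / tau_TH + p_heat * abs(U) ** 2                # absorbed power ~ |U|^2 assumed
    return [dU.real, dU.imag, ddN, ddT]

sol = solve_ivp(rhs, (0.0, 20e-9), [0.0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-12)
U_end = sol.y[0, -1] + 1j * sol.y[1, -1]
print("intracavity energy |U|^2 at t = 20 ns:", abs(U_end) ** 2)
```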
When the timescale of the input signal coincides with one of these timescales, only the corresponding nonlinear effects influence the dynamics. In this paper, the input signal is encoded at the timescale of \(\tau_{FC}\), and we emphasize the nonlinear effect activated by the free carriers in the main cavity. In the SCMRRs, the time evolution of the light wave amplitudes in the remaining linear cavities is governed by the following set of coupled differential equations: \[\frac{dU_{1}(t)}{dt}=i(\omega_{1}-\omega_{p})U_{1}(t)+i\mu_{1}U(t)+i\mu_{2}U_{2}(t) \tag{6}\] \[\frac{dU_{M}(t)}{dt}=i(\omega_{1}-\omega_{p})U_{M}(t)+i\mu_{2}U_{M-1}(t) \tag{7}\] \[\frac{dU_{m}(t)}{dt}=i(\omega_{1}-\omega_{p})U_{m}(t)+i\mu_{2}U_{m-1}(t)+i\mu_{2 }U_{m+1}(t), \tag{8}\] where the series-coupled linear cavities are indexed by \(m\) from 1 to \(M\) (\(M\) is the total number of series-coupled linear cavities). For simplicity, we assume that all the linear cavities are identical, and that the coupling coefficients between any two adjacent linear cavities are equal. Here, \(\omega_{1}\) is the resonance frequency of the linear MRR, and \(\mu_{2}=\sqrt{\gamma_{e2}/\Gamma_{i2}}\) is the coupling coefficient between two adjacent linear cavities, where \(\Gamma_{i2}\) is the time it takes for the light to travel one round-trip in the linear MRR. We designate the index of the linear MRR adjacent to the main cavity as \(m\)=1, and the variation of its light wave amplitude is described by Eq. (6). Eq. (7) describes the optical energy amplitude variation within the last linear MRR at \(m\)=\(M\). For the \(m\)th (\(m=2,\ldots,M-1\)) series-coupled MRR located between the first and last linear cavities, the evolution of its light wave amplitude is given by Eq. (8). The output signal \(E_{th}\) through the waveguide can be expressed as \[E_{th}(t)=t_{r}E_{in}(t)+\mu U(t), \tag{9}\] where \(t_{r}\) represents the field transmission from the input port to the through port. Figure 1(b) displays an SCMRRs-based structure that resembles the configuration shown in Fig. 1(a). This structure consists of two arrays of series-coupled linear cavities, each coupled to the main cavity. Within each array, all linear cavities share the same structural and physical parameters, and the coupling coefficients between any two adjacent linear cavities are equal. Consequently, all linear cavities in each array possess the same diameter and resonance wavelength. However, there is a difference in the resonance wavelength between the two arrays. In this paper, this structure is referred to as the bilateral SCMRRs. The detailed coupled mode equations for this configuration are provided in the supplement. To simplify the nomenclature, a system with a given number of series-coupled linear cavities is denoted using an abbreviated version. For instance, if the single SCMRRs system has 2 series-coupled linear cavities, it is referred to as single SCMRRs-2; similarly, if the bilateral SCMRRs system has two arrays of 2 series-coupled linear cavities, it is named bilateral SCMRRs-2. The initial wavelength detuning between the laser wavelength and the main cavity resonance is defined as \(\Delta\lambda_{s}=\lambda_{p}-\lambda_{0}\).
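The chain of Eqs. (6)-(8) is a tridiagonal linear system driven through its first element by the main-cavity field. The sketch below (illustrative parameter values only) assembles that system and integrates it for a short pulse in the main cavity, showing how the field propagates along the chain and acts as a delay line.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's values).
M = 5                              # number of series-coupled linear cavities
delta1 = 0.0                       # omega_1 - omega_p, taken on resonance here
mu1 = 2 * np.pi * 50e6             # main cavity <-> first linear cavity coupling (rad/s)
mu2 = 2 * np.pi * 50e6             # coupling between adjacent linear cavities (rad/s)

# Tridiagonal generator of Eqs. (6)-(8): dU_vec/dt = A @ U_vec + i*mu1*U(t)*e1
A = 1j * delta1 * np.eye(M, dtype=complex)
for m in range(M - 1):
    A[m, m + 1] = A[m + 1, m] = 1j * mu2

def U_main(t):
    """Assumed short pulse in the main cavity that feeds the chain."""
    return np.exp(-((t - 5e-9) / 1e-9) ** 2)

def rhs(t, v):
    U_vec = v[:M] + 1j * v[M:]
    dU = A @ U_vec
    dU[0] += 1j * mu1 * U_main(t)          # drive enters only through cavity m = 1
    return np.concatenate([dU.real, dU.imag])

sol = solve_ivp(rhs, (0.0, 60e-9), np.zeros(2 * M), max_step=0.05e-9)
U_last = sol.y[M - 1] + 1j * sol.y[2 * M - 1]
print("peak |U_M| reached at t =", sol.t[np.argmax(np.abs(U_last))], "s")
```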
The resonance shift caused by nonlinear effects is denoted by \(\Delta\lambda_{0}(t)=\lambda_{0}(t)-\lambda_{0}\), where \(\lambda_{0}=2\pi c/\omega_{0}\) and \(\lambda_{0}(t)=2\pi c/\omega_{0}(t)\). The set of coupled differential Eqs. (1)-(8) is numerically solved using the Runge-Kutta method with an integration time step of 2 ps, which is significantly smaller than the shortest timescale in the system (\(\tau_{ph}\approx 97\) ps). Before solving, these equations are transformed into dimensionless form for convenience (see the supplement for details). In this model, only one-way propagation is considered, and the values of the parameters are provided in the supplement, which includes a detailed table of parameter values. ## 3 SCMRRs in time-delayed RC Figure 2 depicts the schematic of the proposed time-delayed RC system, which comprises an input layer, a reservoir, and an output layer [7]. In the input layer, the time-continuous input signals are first encoded as a sequence of bits, where \(x_{i}\) represents the amplitude of the \(i\)th bit, and \(\tau\) represents the bit period. Subsequently, a bit mask is applied by multiplying the bit stream with a set of random values \(M(t)\). \(M(t)\) is a periodic sequence with period \(\tau\), and it follows a uniform distribution. The resulting signal is then modulated onto the intensity of the optical carrier. The maximum laser input power is indicated by \(P_{M}\). Next, the modulated optical signal enters the single SCMRRs through the input port and propagates within all the MRR waveguides through the coupling between the MRRs. The received optical signal at the through port is converted into an electrical signal by a photodetector (PD). Within one period of duration \(\tau\), the electrical signal is sampled synchronously at the masking sampling interval \(\theta\), resulting in \(N\) sampling points equally spaced in time by \(\theta=\tau/N\). These \(N\) points are defined as \(N\) virtual nodes, and they play a role similar to that of the nodes in a conventional reservoir. In the reservoir layer, the system's nonlinear characteristics are generated by the main cavity and the PD, while the series-coupled linear cavity array significantly enhances its MC. Consequently, the original input signal is nonlinearly transformed by the physical system into a higher-dimensional space with \(N\) virtual nodes. Ultimately, in the output layer, a predicted value \(o_{i}\) corresponding to the input \(x_{i}\) is determined by a linear combination of the responses of the related virtual nodes, as follows, \[o_{i}=\sum_{l=1}^{N}W_{l}N_{l,i}, \tag{10}\] where \(N_{l,i}\) is an element of an \(N\)-dimensional vector containing the responses of the virtual nodes at the \(i\)th period, and \(W_{l}\) is its corresponding readout weight. The weights of the readout layer are trained by a ridge regression method to minimize the normalized mean square error (NMSE) between \(o_{i}\) and the expected value \(y_{i}\), which is expressed as \[NMSE=\frac{\langle\|o_{i}-y_{i}\|^{2}\rangle}{\langle\|y_{i}-\langle y_{i}\rangle\|^{2}\rangle}. \tag{11}\]

Figure 2: Schematic of the time-delayed RC with the single SCMRRs. The input information \(x(t)\) is first masked by a sequence \(M(t)\). The masked signal is then modulated onto the intensity of the optical carrier excited by the laser.
At the through port, the corresponding electrical signal is collected by the PD. The virtual nodes of the reservoir are created through time-multiplexing, and a predicted value \(o_{i}\) is obtained by a linear weighted sum of the responses of the related virtual nodes. During the training process, the optimal output weights are obtained by utilizing the expected values \(y_{i}\) derived from the original dataset.

The main cavity has a quality factor of 1.18\(\times\)10\({}^{5}\), and self-pulsations can occur depending on the free-carrier concentration variation in the microring waveguide. The system's MC can be significantly enhanced by introducing the series-coupled linear cavity array. The masking sampling interval is set to \(\theta\) = 40 ps [38], which is considerably shorter than the three relevant timescales: the photon lifetime (here \(\tau_{ph}\approx 97\) ps), the thermal lifetime (here \(\tau_{TH}\approx 83.3\) ns) and the free carrier lifetime (here \(\tau_{FC}\approx 3\) ns) in the main cavity. Consequently, the system operates in a transient state, and the current internal field in the main cavity does not fully dissipate when the next mask signal arrives. As a result, the main cavity exhibits a short memory, and adjacent virtual nodes are coupled via inertia. The presence of the series-coupled linear cavity array clearly strengthens this inertia. The free carrier nonlinearity has a faster time response, and to achieve GHz computing rates, the bit period is set to \(\tau=1\) ns. In this case, the number of virtual nodes is chosen as \(N\)=25 to ensure compatibility with the delays generated by the series-coupled linear cavity array. When the masked optical signal enters the main cavity, free carriers are generated, leading to a resonance shift \(\Delta\lambda_{FC}\). Simultaneously, the resonance is also shifted by \(\Delta\lambda_{TH}\) due to thermal effects. However, because the signal speed is much faster than these variations, the nonlinear transformation of the optical signal does not occur, and only resonance shifts are induced. The main cavity with high optical power always exhibits self-pulsing dynamics, and thermal effects contribute to the nonlinear transformation. On the one hand, when the bit period \(\tau\) is smaller than 1 ns, the number of virtual nodes is restricted, resulting in a decline in the computational performance of the reservoir. On the other hand, a larger number of virtual nodes can be obtained at \(\tau>1\) ns and the computational performance can be significantly improved, but at the cost of computing speed. Thus, the bit period is selected as \(\tau=1\) ns to strike a balance between computational speed and performance. ## 4 Results Using the numerical methods described earlier, three classical computational tasks, NARMA 10, Mackey-Glass, and Santa Fe, are employed to evaluate the computational performance of the proposed RC systems based on single and bilateral SCMRRs. The NARMA 10 task is a discrete-time 10th-order nonlinear autoregressive moving average (NARMA) system [39]. It is widely employed for testing Echo State Network (ESN) models, which are a type of RC utilizing an RNN with a sparsely connected hidden layer [40]. The NARMA 10 task requires considering at least the previous 10 values to predict the next value, demanding a large amount of MC. The task involves both nonlinear transformation and memory.
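To make the masking and readout procedure of Fig. 2 concrete, the following sketch implements it in software with the values stated above (\(N=25\) virtual nodes, \(\tau=1\) ns, \(\theta=40\) ps, ridge coefficient \(10^{-4}\)); the physical reservoir response is replaced by a simple placeholder nonlinearity and a toy target, since the full coupled-mode simulation is beyond a short example.

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau = 25, 1e-9                   # virtual nodes and bit period (paper's values)
theta = tau / N                     # virtual-node spacing, i.e. 40 ps
mask = rng.uniform(0, 1, N)         # random mask, one value per virtual node

def reservoir_response(masked_bit):
    """Placeholder for the SCMRR response sampled at the N virtual nodes.
    In the real system this comes from integrating the coupled-mode model and
    sampling the photodetected through-port signal every theta."""
    return np.tanh(masked_bit + 0.1 * np.roll(masked_bit, 1))

x = rng.uniform(0, 0.5, 3000)                       # input bit amplitudes
states = np.array([reservoir_response(xi * mask) for xi in x])

# Toy target (a nonlinear function of the current input); replace with the
# NARMA 10 / Mackey-Glass / Santa Fe targets for the actual benchmarks.
y = x ** 2

# Ridge-regression readout, Eq. (10), with the paper's coefficient 1e-4.
train, test = slice(0, 2000), slice(2000, 3000)
lam = 1e-4
S = states[train]
W = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y[train])
o = states[test] @ W
nmse = np.mean((o - y[test]) ** 2) / np.var(y[test])   # Eq. (11)
print("test NMSE on the toy target:", nmse)
```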
The Mackey-Glass time series serves as a standard benchmark for chaotic time series prediction tasks [41]. The Santa Fe laser time series involves one-step-ahead prediction on data acquired by sampling the intensity of a far-infrared laser in a chaotic state [42]. For both the Mackey-Glass and Santa Fe tasks, the next value \(x_{i+1}\) is solely related to the current value \(x_{i}\), requiring a short MC. These tasks have varying demands for signal processing and MC, making their computing performances suitable for evaluating the overall efficiency of a neuromorphic computing system. According to RC theory, nonlinear dynamics play a crucial role, but they might degrade the MC. To achieve better computational performance, the RC system needs to strike a balance between the nonlinear transformation of the input information and the MC. However, it is challenging to determine the extent of nonlinear transformation and the amount of MC required for a specific task. Therefore, it is essential to estimate or adjust the degree of nonlinearity and memory separately. The degree of nonlinearity can be evaluated indirectly by the standard deviation of the resonance wavelength shift \(\sigma(\Delta\lambda_{0}(t))\) in the main cavity. A larger standard deviation indicates stronger nonlinearity, and vice versa. The MC here mainly refers to the linear MC, one of the fundamental requirements of RC; it can be calculated by training the reservoir to reconstruct an input stream of values drawn uniformly from [0, 0.5] and delayed by \(k\) timesteps [43]. The MC can be given by: \[MC=\sum_{k=1}^{l_{max}}MC_{k}, \tag{12}\] \[MC_{k}=\frac{\operatorname{cov}^{2}(x_{i-k},y_{k})}{\operatorname{var}(x_{i}) \operatorname{var}(y_{k})}=1-NMSE, \tag{13}\] where \(MC_{k}\in[0,1]\) is the MC for a \(k\)-bit shift, and \(l_{max}\) sets the upper limit of the summation. On the one hand, \(MC_{k}=1\) reflects a perfect memory of the bit stream \(k\) bits back. On the other hand, \(MC_{k}=0\) means that all memory is lost, indicating no capability to recall past information. In this context, \(l_{max}\) represents the calculated maximum length of memory sequences, and \(\operatorname{var}(\cdot)\), \(\operatorname{cov}^{2}(\cdot)\) denote the variance of a random variable and the squared covariance between two vectors, respectively. Subsequently, we investigate the influence of the critical operational parameters on the performance of the three selected tasks based on the SCMRRs system, including the maximum input laser power \(P_{M}\), the initial wavelength detuning \(\Delta\lambda_{s}\), the ratio (Q\({}_{2}\)/Q\({}_{1}\)) of the linear MRR's quality factor to the main cavity's quality factor, and the total number \(M\) of the series-coupled linear cavities in each array. The main cavity has a radius of 6.75 \(\mu\)m and a quality factor of 1.18\(\times 10^{5}\), while all the linear cavities have a radius of 1.51 \(\mu\)m. These parameters have a significant impact on the nonlinear dynamics in the main cavity, and the number and quality factor of the series-coupled linear cavities greatly affect the system's MC.
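A short sketch of how the linear MC of Eqs. (12)-(13) can be computed from sampled reservoir states is given below; the states used here are a toy shift-register stand-in for the SCMRR virtual-node responses, so the printed value is only illustrative.

```python
import numpy as np

def memory_capacity(states, u, l_max=45, lam=1e-4):
    """Linear MC of Eqs. (12)-(13): train one ridge readout per delay k and
    accumulate the squared correlation between prediction and delayed input."""
    T, N = states.shape
    mc = 0.0
    for k in range(1, l_max + 1):
        S, target = states[k:], u[:-k]          # reconstruct u(i-k) from state(i)
        W = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ target)
        pred = S @ W
        mc_k = np.cov(pred, target)[0, 1] ** 2 / (
            np.var(pred, ddof=1) * np.var(target, ddof=1) + 1e-12)
        mc += mc_k
    return mc

# Toy reservoir states standing in for the sampled SCMRR virtual-node responses:
# a perfect 25-step shift register, so MC should come out close to 25.
rng = np.random.default_rng(2)
u = rng.uniform(0, 0.5, 4000)
states = np.stack([np.roll(u, k) for k in range(1, 26)], axis=1)
print("MC of the toy shift-register reservoir:", memory_capacity(states, u))
```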
In the paper, the maximum input laser power \(P_{M}\) is varied from 0.1 mW to 7 mW, the initial wavelength detuning \(\Delta\lambda_{s}\) is adjusted from -30 pm to 30 pm with a step size of 5 pm to cover the whole main cavity resonance (full width at half maximum _FWHM_=26 pm), and the ratio (Q\({}_{2}\)/Q\({}_{1}\)) of the linear MRR's quality factor to the main cavity's quality factor is adjusted from 10 to 500. Additionally, the maximum total number of these linear cavities is set to 10. For each parameter variation, an initial 1000 data points are first fed into the system to wash out transients induced by the onset of the input. Then, 2000 data points are used for training, and the next 1000 data points for testing; no data are shared between the training and test sets. All the simulations employ the same random mask, and the total number of virtual nodes is 25 by default in this article (\(\tau=1\) ns). The linear readout in the RC's output layer is trained by ridge regression, with the ridge coefficient set to \(10^{-4}\). ### NARMA 10 benchmark test The output of the NARMA 10 system is described as follows: \[y_{i+1}=0.3y_{i}+0.05y_{i}\sum_{k=0}^{9}y_{i-k}+1.5x_{i-9}x_{i}+0.1, \tag{14}\] where \(x_{i}\) is a random input at the \(i\)th moment, generated from a uniform distribution within the range [0, 0.5], and \(y_{i}\) is the corresponding output at the \(i\)th moment. The readout network is trained to predict \(y_{i}\) from the reservoir state and \(x_{i}\). The task requires predicting the next output value based on at least 10 output values (the current one and the previous 9 values), indicating a significant need for memory (a short generation sketch is given below). As previously mentioned, the computing performance depends mainly on four critical parameters: the maximum input laser power \(P_{M}\), the initial wavelength detuning \(\Delta\lambda_{s}\), the ratio (Q\({}_{2}\)/Q\({}_{1}\)) of the linear MRR's quality factor to the main cavity's quality factor, and the total number \(M\) of the series-coupled linear cavities. The first two parameters determine the nonlinear strength of the main cavity, while the latter two parameters are related to the MC of the main cavity. The NARMA 10 task requires a large MC and does not require strong nonlinear signal transformation. For a given quality factor ratio (Q\({}_{2}\)/Q\({}_{1}\)) and total number (\(M\)) of linear cavities in each array, optimal performance occurs at the maximum input laser power \(P_{M}=0.1\) mW and the initial wavelength detuning \(\Delta\lambda_{s}=-10\) pm. The injected laser power and detuning use the same values as those in Ref. [38], which are adopted for the remainder of Section 4.1. In this case, the main cavity operates in the linear state. Figure 3 illustrates the performance of the NARMA 10 benchmark task for the single SCMRRs system. Figs. 3(a) and (b) display, respectively, the NMSE and the MC versus the quality factor ratio (Q\({}_{2}\)/Q\({}_{1}\)) and the total number \(M\) of the linear cavities for the single SCMRRs system. The NMSE achieves its lowest value when the MC reaches its maximum value. As the number or quality factor of the linear MRRs initially increases, the feedback strength of the system becomes stronger and the photon lifetime becomes longer. This results in an extended linear memory.
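For completeness, the NARMA 10 target sequence of Eq. (14) can be generated with the short sketch below (the sequence length and seed are arbitrary choices; for some random input sequences the recursion is known to diverge, in which case the inputs are simply redrawn).

```python
import numpy as np

def narma10(T, seed=0):
    """Generate the NARMA 10 sequence of Eq. (14) from uniform inputs in [0, 0.5]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 0.5, T)
    y = np.zeros(T)
    for i in range(9, T - 1):
        y[i + 1] = (0.3 * y[i]
                    + 0.05 * y[i] * np.sum(y[i - 9:i + 1])
                    + 1.5 * x[i - 9] * x[i]
                    + 0.1)
    return x, y

x, y = narma10(4000)
print("sample targets:", y[1000:1005])
```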
The minimum error \(NMSE_{min}\)=0.169 is found at the quality factor ratio Q\({}_{2}\)/Q\({}_{1}=200\) and the total number of linear cavities \(M=9\) for \(P_{M}=0.1\) mW and an initial wavelength detuning of \(\Delta\lambda_{s}=-10\) pm. Specifically, the main cavity's resonance wavelength is 1549.66 nm in the absence of nonlinearity, while the linear MRR's resonance wavelength is 1549.71 nm. When the number or quality factor of the linear MRRs continues to increase beyond the point of minimum \(NMSE\), the power coupled into the later series-coupled MRRs decreases. Therefore, even if the number of series-coupled MRRs exceeds 9, the effective number of contributing series-coupled MRRs is limited to 9, where the corresponding MC reaches its maximum value. Fig. 3(c) shows the comparison of the memory function \(MC_{k}\) between the single MRR-based RC and the proposed SCMRRs-based RC.

Figure 3: Performance of the NARMA 10 benchmark task for the proposed single SCMRRs-based RC. (a) NMSE and (b) MC versus the quality factor ratio Q\({}_{2}\)/Q\({}_{1}\) and the total number \(M\) of the linear cavities for the single SCMRRs system. (c) MC (memory function \(MC_{k}\), with \(l_{max}=45\)) of a single SCMRRs-based system under different numbers of linear MRRs. (d) The calculated weight values for the task of remembering the previous input value \(x_{i-1}\) based on the single main cavity (red curve) and the proposed SCMRRs system that results in the lowest NMSE (black curve).

The single main cavity has a limited amount of memory, which arises from the inertia between the responses of the first virtual nodes to the current input value \(x_{i}\) and the responses of the last virtual nodes to the last input value \(x_{i-1}\). The single MRR-based RC can only remember the last input value \(x_{i-1}\) from the current input \(x_{i}\) (Fig. 3(c), red curve). Fig. 3(d) displays the computed readout weights for the task of remembering the previous input value \(x_{i-1}\) (red curve). Since the response of the reservoir to the actual input \(x_{i}\) is considered during the training step, the calculated results indicate that only the weight values of the first several virtual nodes contribute appreciably to the computation, due to the limited MC. In contrast, in the single SCMRRs system, these series-coupled linear cavities act as a linear analog shift register. As depicted in Fig. 3(c), their memory storage capacities are significantly enhanced with an increase in the total number of series-coupled linear cavities in each array. The main cavity is initially excited by the optical signal injected from the input waveguide. When the optical signal's frequency is close to the resonant frequency of these series-coupled linear cavities, a portion of the optical signal is coupled gradually from the main cavity to these linear cavities. These signals propagate many round-trips through the linear cavities and are eventually coupled back to the main cavity. Because optical signals are continuously coupled into these series-coupled linear cavities with a high quality factor, the proposed SCMRRs-based RC obtains an extended linear memory. As shown in Fig. 3(d) (black curve), almost all virtual nodes contribute to the task computation, indicating a significant improvement in MC compared to the single MRR-based RC. Figure 4 displays the performance of the NARMA 10 benchmark task for the bilateral SCMRRs system. Figs.
4(a) and (b) display, respectively, the NMSE and the MC versus the quality factor ratio (Q\({}_{2}\)/Q\({}_{1}\)) and the total number \(M\) of the linear cavities in each array for the bilateral SCMRRs system. The NMSE achieves its lowest value when the MC is close to its maximum value for the bilateral SCMRRs system. In this case, the bilateral SCMRRs system works in a linear state at the maximum input laser power \(P_{M}=0.1\) mW and the initial wavelength detuning \(\Delta\lambda_{s}=-20\) pm.

Figure 4: Performance of the NARMA 10 benchmark task for the proposed bilateral SCMRRs-based RC. (a) NMSE and (b) MC versus the quality factor ratio Q\({}_{2}\)/Q\({}_{1}\) and the total number \(M\) of the linear cavities in each array for the bilateral SCMRRs-based system. (c) MC (memory function \(MC_{k}\), with \(l_{max}=45\)) of a bilateral SCMRRs-based system under different numbers of linear cavities in each array. (d) The calculated weight values for the task of remembering the previous input value \(x_{i-1}\) based on the proposed bilateral SCMRRs-1 system (red curve) and the bilateral SCMRRs-8 system that results in the lowest NMSE (black curve).

Specifically, the main cavity's resonance wavelength is 1549.66 nm in the absence of nonlinearity, and the resonance wavelengths of the two linear cavity arrays are 1549.71 nm and 1549.74 nm, respectively. Fig. 4(c) shows the memory function \(MC_{k}\) of the proposed bilateral SCMRRs-based RC. At the quality factor ratio Q\({}_{2}\)/Q\({}_{1}\)=150 and the total number of linear cavities in each array \(M\)=1, the MC is 5.88, which is close to the minimum MC value. This MC corresponds to the maximum NMSE of 0.65. The minimum NMSE of 0.154 appears at Q\({}_{2}\)/Q\({}_{1}\)=150 and \(M\)=8. The corresponding MC is 16.18, which is close to the maximum MC value of 16.46. In the former case, due to the lack of memory, only the first virtual nodes have nonzero weight values that matter for the computation (red curve, Fig. 4(d)). In the latter, with sufficient MC, the system has non-zero weights at almost all virtual nodes (black curve, Fig. 4(d)). As a result, the responses of almost all virtual nodes contribute to the task computation, leading to a low prediction error. In contrast to the single SCMRRs system, the bilateral SCMRRs system possesses two series-coupled linear cavity arrays with different resonance wavelengths. Thus, its MC can be increased in the wavelength dimension compared with the corresponding single SCMRRs. The performance of the proposed SCMRRs-based RC is evaluated against several MRR-based RCs, including the single MRR without optical feedback and the MRR structure with external optical feedback. For the sake of fairness, these comparisons are made under the same conditions, including the MRR's structural and material parameters and the number of virtual nodes in the reservoir. The number of virtual nodes has a large impact on the NMSE of the photonic RC, with a larger number of virtual nodes leading to more accurate results. Therefore, in this paper, the number of virtual nodes is fixed at 25, utilizing the masking sampling interval \(\theta=\) 40 ps and the bit period \(\tau=1\) ns. Table 1 shows the NMSE comparison of the proposed SCMRRs-based RC and several MRR-based RCs for the NARMA 10 task. At \(P_{M}=\) 0.1 mW and \(\Delta\lambda_{s}=\) -20 pm, the bilateral-SCMRRs-8 system achieves the lowest prediction error (NMSE=0.154).
For the NARMA 10 task, both memory and nonlinearity contribute to the task computation. In these photonic RC systems, nonlinearity derives from both the main cavity and the PD. Because of the small input laser power (\(P_{M}=\) 0.1 mW), the main cavity operates in the linear state, making the PD nonlinearity the dominant factor [38]. Large memory is a key factor in improving computational accuracy. Both the proposed SCMRRs-based system and the MRR with optical feedback can provide sufficient memory, resulting in lower NMSE than the others. However, the MRR with optical feedback has a notable drawback, as the length of its feedback waveguide spans approximately 20 centimeters. Such a long waveguide faces many substantial challenges in applications, including device fabrication, transmission loss, temperature control, etc. The proposed SCMRRs system greatly enhances the MC by employing multiple series-coupled MRRs, and achieves computational performance comparable to the MRR with optical feedback. At the same time, the proposed SCMRRs-based system has an ultra-small size due to the use of resonant feedback structures. Furthermore, recent scientific breakthroughs have been made in the preparation of series-coupled MRRs [44-45], making it feasible to fabricate the proposed SCMRRs using existing fabrication techniques. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model & single MRR & MRR with feedback & single-SCMRRs-9 & bilateral-SCMRRs-8 \\ \hline NMSE & 0.481 & 0.187 & 0.169 & 0.154 \\ \hline \end{tabular} \end{table} Table 1: The NMSE comparison of the proposed SCMRRs-based RC and several MRR-based RCs for the NARMA 10 task. ### Mackey-Glass benchmark test The Mackey-Glass chaotic time series is defined by the following differential equation: \[\frac{dy(t)}{dt}=\frac{0.2\,y(t-\tau)}{1+y(t-\tau)^{10}}-0.1\,y(t) \tag{15}\] where \(y(t)\) is the output at time \(t\), and \(\tau\) is the time delay. The reservoir computing task is to predict the value \(\delta\) steps ahead for a time series stemming from the Mackey-Glass delay equation (Eq. 15) with \(\tau=17\). We solve Eq. (15) numerically using the fourth-order Runge-Kutta method with an integration step of 0.1, so that the series exhibits moderate chaotic dynamics [38]. After solving the differential equation, we obtain a continuous time series. To perform the RC task, the continuous time series is downsampled with a fixed time interval of \(t_{s}\)=3 to obtain a discrete time series \(y_{k}\). This discrete time series is then used for the prediction task (a short generation sketch is given below). The dataset of 3000 values was separated into 2000 samples for training and 1000 samples for testing. Figure 5 displays the performance of the Mackey-Glass benchmark task for the single SCMRRs-based system. As shown in Fig. 5(a), the lowest NMSE of the predicted value occurs at the quality factor ratio Q\({}_{2}\)/Q\({}_{1}\)=20 and the total number of linear cavities \(M\)=10 for an input laser power of \(P_{M}=6\) mW and an initial wavelength detuning of \(\Delta\lambda_{s}=-30\) pm. On the whole, system nonlinearity plays a key role in the Mackey-Glass benchmark test, which is quite distinct from the NARMA 10 task. The nonlinear response of the system is proportional to the intracavity power, which is assessed by the standard deviation of the main cavity's resonance shift \(\sigma(\Delta\lambda_{0}(t))\).
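The referenced generation sketch is given below: it integrates Eq. (15) with a fourth-order Runge-Kutta scheme at step 0.1 and downsamples at \(t_s=3\); holding the delayed term fixed over each step is a simplification commonly used for this benchmark, and the initial history value is an arbitrary choice.

```python
import numpy as np

def mackey_glass(T_steps, tau=17.0, h=0.1, y0=1.2):
    """Integrate Eq. (15) with 4th-order Runge-Kutta (step h = 0.1); the delayed
    term y(t - tau) is read from the stored history (tau/h samples back)."""
    d = int(round(tau / h))                 # delay expressed in integration steps
    y = np.zeros(T_steps + d)
    y[:d] = y0                              # constant initial history (assumed)
    f = lambda y_t, y_d: 0.2 * y_d / (1.0 + y_d ** 10) - 0.1 * y_t
    for n in range(d, T_steps + d - 1):
        y_d = y[n - d]                      # delayed value, held fixed over the step
        k1 = f(y[n], y_d)
        k2 = f(y[n] + 0.5 * h * k1, y_d)
        k3 = f(y[n] + 0.5 * h * k2, y_d)
        k4 = f(y[n] + h * k3, y_d)
        y[n + 1] = y[n] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y[d:]

y = mackey_glass(90000)
series = y[::30]    # downsampling interval t_s = 3  ->  every 30 steps of h = 0.1
print("discrete series length:", len(series))
```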
Obviously, the initial resonance shift should be limited to a certain range so that the input power can be injected into the resonance of the main cavity.

Figure 5: Performance of the Mackey-Glass benchmark task for the proposed SCMRRs based RC. (a) NMSE and (b) \(\sigma(\Delta\lambda_{0}(t))\) versus the quality factor’s ratio Q\({}_{2}\)/Q\({}_{1}\) and the total number \(M\) of these linear cavities for the single SCMRRs based system. (c) The resonance shift and the bit error versus time during the task for the two extreme NMSE values (maximum and minimum): the black curve corresponds to the lowest NMSE (black circle in (a)) and the red curve corresponds to the largest NMSE (red circle in (a)). (d) Dynamical evolution of the single-SCMRRs based system exhibiting the self-pulsation phenomenon: light is coupled into (path 1, upper) these linear cavities or does not enter (path 2, lower) these cavities.

The Mackey-Glass task does not require too much MC, and the single MRR without optical feedback achieves relatively good performance (NMSE=0.016) under the same conditions. In our system, the main cavity contributes to the nonlinearity, whereas the coupled linear cavity array serves as the memory provider. The memory is effective as long as the resonances of these cavities match each other. Meanwhile, the nonlinear response of the main cavity shifts its resonance, which is also influenced by the SCMRRs. Therefore, there exists a trade-off between the nonlinearity and the MC. As shown in Fig. 5 (a), when the quality factor of the coupled linear cavity array is rather large, the single SCMRRs-based system shows NMSE performance similar to that of the single MRR without optical feedback. As the quality factors of the reservoir cavity array increase, the narrow cavity resonance cannot fall into the broad resonance of the nonlinear cavity, resulting in a limited MC. Compared with the single MRR without optical feedback, the single SCMRRs based system then does not remarkably improve its MC. When the quality factor of the linear cavity array becomes small, the broad cavity resonance ensures significant coupling between the two MRRs, which modifies the optical power in the processing cavity and ultimately determines its nonlinearity. As shown in Figs. 5 (a) and (b), at Q\({}_{2}\)/Q\({}_{1}\)=20 and \(M\)=2, the NMSE is highest, which corresponds to the largest \(\sigma(\mathbf{\Delta}\lambda_{0}(t))\). In this case, the series-coupled linear cavity array leads to high power in the main cavity, which further results in a large value of \(\sigma(\mathbf{\Delta}\lambda_{0}(t))\) and high system nonlinearity. The SCMRRs based system is in a seriously detuned state, and its NMSE performance is severely degraded. Fig. 5 (c) shows the resonance shift \(\mathbf{\Delta}\lambda_{0}(t)\) versus time when operating the computation. At Q\({}_{2}\)/Q\({}_{1}\)=20 and \(M\)=2, the main cavity's resonance wavelength shift versus time is indicated by the red curve in Fig. 5 (c). Multiple-peaked bursts appear in the curve; in these time periods, the system generates too much detuning between the resonance wavelength and the input wavelength. The self-pulsation phenomenon takes place along with a thermal warming-up step and then a thermal cool-down step [46-47]. As shown in Fig. 5 (d) (path 2), the light signals mainly propagate through the main cavity during these time intervals and are not coupled into the series-coupled linear cavity arrays.
Consequently, the proposed system has higher nonlinearity, but loses a lot of MC. In this way, the proposed system eventually produces a relatively large NMSE error. For comparison, we also find the lowest NMSE value at Q\({}_{2}\)/Q\({}_{1}\)=20 and \(M\)=10, which is indicated by the black circle in Fig. 5 (a). Its resonance wavelength shift versus time is indicated by the black curve in Fig. 5 (c). The value of \(\mathbf{\triangle}\dot{\lambda}_{0}(t)\) is changed slightly with a small detuning with respect to the input wavelength. The self-pulsation phenomenon does not occur, and the light signal is coupled into series-coupled linear cavity array in the time interval (Fig. 5 (d), path 1). Table 2 shows the NMSE comparison of the proposed SCMRRs-based RC and several MRR-based RCs for the Mackey-Glass task. The single SCMRRs-10 system achieves the minimum NMSE of 0.002, and the bilateral SCMRRs-8 system achieves the minimum NMSE of 0.0018. Both of them are far lower than the minimum NMSE of 0.016 obtained by the single MRR without optical feedback. The optimal NMSE is slightly larger than the result of the MRR \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Glass task.} \\ \hline & single MRR & MRR with feedback & single-SCMRRs-10 & bilateral-SCMRRs-8 \\ \cline{2-5} Model & & & & \\ \hline NMSE & 0.016 & 0.0014 & 0.002 & 0.0018 \\ \hline \end{tabular} \end{table} Table 2: The NMSE comparison of the proposed SCMRRs-based RC and several MRR-based RCs for the Mackey-Glass task. with optical feedback (NMSE = 0.0014) [38]. The results show clearly that the Mackey-Glass task requires the combination of nonlinear transformation and MC for optimal NMSE performance. By combining the main cavity's nonlinearity with the memory provided by the series-coupled linear cavity array, the proposed system with small sizes achieves almost the same minimum NMSE with the MRR with optical feedback. ### Santa Fe benchmark test In the Santa Fe prediction task, the goal is to predict only the future step, and the memory provided by the single MRR without optical feedback is sufficient [38]. Additionally, this task also requires moderate system nonlinearity. Compared with the Narma 10 and the Mackey-Glass timeseries task, the available Santa Fe dataset contains experimental noise in its values, this noise introduces additional challenges in accurately predicting the future step. Figure 6 displays the calculated performance of the Santa Fe task in the single SCMRRs based system. As shown in Fig. 6 (a), the minimum NMSE of 0.0173 is found at \(P_{M}=2\)mW and \(\mathbf{\Delta}\lambda_{s}=0\)pm for Q\({}_{2}\)/Q\({}_{1}\)=150 and \(M\)=7, which is indicated by the black circle. Fig. 6 (b) shows the value of \(\sigma\big{(}\mathbf{\Delta}\lambda_{0}(t)\big{)}\) versus the input laser power \(P_{M}\) and the initial wavelength detuning \(\mathbf{\Delta}\lambda_{s}\) for the single SCMRRs based system. Correspondingly, \(\sigma\big{(}\mathbf{\Delta}\lambda_{0}(t)\big{)}\) at the minimum NMSE shows a moderate value, which verifies that the system needs moderate nonlinearity for a low NMSE. As shown in Figs. 6 (c) and (d), we also separately discuss the influence of the quality factor's ratio Q\({}_{2}\)/Q\({}_{1}\) and the total number \(M\) of these linear cavities on the performance at \(P_{M}=2\)mW and \(\mathbf{\Delta}\lambda_{s}=0\)pm. Under these conditions, the single SCMRRs system still achieves the minimum NMSE value at Q\({}_{2}\)/Q\({}_{1}\)=150 and \(M\)=7. 
The NMSE performance then changes only slightly with the quality factor's ratio and the number of linear MRRs. It is noted that \(\mathbf{\Delta}\lambda_{0}(t)\) oscillates with the change of the linear MRR's number at Q\({}_{2}\)/Q\({}_{1}\)\(<\)100, which is similar to the behavior of the single SCMRRs based system for the Mackey-Glass task under the same conditions. In Fig. 6 (d), \(\sigma(\mathbf{\Delta}\lambda_{0}(t))\) [...]

Figure 7: Performance of ten sets of linear cavities within the single SCMRRs-10 system on the NARMA 10 task. (a) MC and (b) NMSE versus the index of a random choice in different error ranges. Operated at \(P_{M}=0.1\,\text{mW}\), \(\Delta\lambda_{s}=-10\) pm, \(Q_{2}/Q_{1}=300\).

By the same token, this confirms the feasibility of compensating for the resonant wavelength perturbations caused by manufacturing imperfections.

## 6 Conclusion

In this paper, we conducted a numerical investigation of a series-coupled MRRs system as a versatile computational platform for time-delay RC. Compared with the previous work based on waveguides of tens of centimeters [38], our scheme introduces a series-coupled linear cavity array with a micrometer-scale footprint to provide enough MC. For on-chip waveguides tens of centimeters long, not only is the fabrication challenging, but the optical loss must also be considered. Our scheme thus offers higher practicality and scalability. To evaluate its computational performance, we computed three typical tasks that have different memory requirements. For the NARMA 10 task, the proposed system achieves better performance than the MRR with optical feedback due to the large MC provided by the series-coupled MRRs. For the Mackey-Glass prediction task, because both the system nonlinearity and the linear MC requirements are satisfied, the proposed system obtains almost the same lowest prediction error as the MRR with optical feedback. Finally, because the Santa Fe task does not need much MC, the proposed system achieves slightly better performance than the MRR with optical feedback [38]. All simulation results are calculated under the same bit period (\(\tau=1\,\text{ns}\)) and the same number of virtual nodes (N=25). The proposed SCMRRs-based system exhibited nearly the same computational properties as the MRR with optical feedback, but with a significantly smaller footprint. With existing fabrication techniques, this proposed system provides a route to scalable photonic RC integrated chips.
## Funding

National Natural Science Foundation of China (NSFC) (60907032, 61675183, 61675184); Natural Science Foundation of Zhejiang Province (LY16F050009, LY20F050009); Open Fund of the State Key Laboratory of Advanced Optical Communication Systems and Networks, China (2020GZKF013); Horizontal projects of public institution (KY-H-20221007, KYY-HX-20210893).

**Acknowledgment.** This work was partially carried out at the USTC Center for Micro and Nanoscale Research and Fabrication. The authors thank Dr. G. Donati (IFISC Institute for Cross-Disciplinary Physics and Complex Systems (CSIC-UIB), Spain) for the fruitful discussions.

**Disclosures.** The authors declare no conflicts of interest.

## Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2306.03414
DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views
Synthesizing novel view images from a few views is a challenging but practical problem. Existing methods often struggle with producing high-quality results or necessitate per-object optimization in such few-view settings due to the insufficient information provided. In this work, we explore leveraging the strong 2D priors in pre-trained diffusion models for synthesizing novel view images. 2D diffusion models, nevertheless, lack 3D awareness, leading to distorted image synthesis and compromising the identity. To address these problems, we propose DreamSparse, a framework that enables the frozen pre-trained diffusion model to generate geometry and identity-consistent novel view image. Specifically, DreamSparse incorporates a geometry module designed to capture 3D features from sparse views as a 3D prior. Subsequently, a spatial guidance model is introduced to convert these 3D feature maps into spatial information for the generative process. This information is then used to guide the pre-trained diffusion model, enabling it to generate geometrically consistent images without tuning it. Leveraging the strong image priors in the pre-trained diffusion models, DreamSparse is capable of synthesizing high-quality novel views for both object and scene-level images and generalising to open-set images. Experimental results demonstrate that our framework can effectively synthesize novel view images from sparse views and outperforms baselines in both trained and open-set category images. More results can be found on our project page: https://sites.google.com/view/dreamsparse-webpage.
Paul Yoo, Jiaxian Guo, Yutaka Matsuo, Shixiang Shane Gu
2023-06-06T05:26:26Z
http://arxiv.org/abs/2306.03414v4
# DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views ###### Abstract Synthesizing novel view images from a few views is a challenging but practical problem. Existing methods often struggle with producing high-quality results or necessitate per-object optimization in such few-view settings due to the insufficient information provided. In this work, we explore leveraging the strong 2D priors in pre-trained diffusion models for synthesizing novel view images. 2D diffusion models, nevertheless, lack 3D awareness, leading to distorted image synthesis and compromising the identity. To address these problems, we propose _DreamSparse_, a framework that enables the frozen pre-trained diffusion model to generate geometry and identity-consistent novel view image. Specifically, DreamSparse incorporates a geometry module designed to capture 3D features from sparse views as a 3D prior. Subsequently, a spatial guidance model is introduced to convert these 3D feature maps into spatial information for the generative process. This information is then used to guide the pre-trained diffusion model, enabling it to generate geometrically consistent images without tuning it. Leveraging the strong image priors in the pre-trained diffusion models, DreamSparse is capable of synthesizing high-quality novel views for both object and scene-level images and generalising to open-set images. Experimental results demonstrate that our framework can effectively synthesize novel view images from sparse views and outperforms baselines in both trained and open-set category images. More results can be found on our project page: [https://sites.google.com/view/dreamsparse-webpage](https://sites.google.com/view/dreamsparse-webpage). Figure 1: Qualitative results on novel view synthesis of real-world objects from the CO3D dataset. Introduction "How could they see anything but the shadows if they were never allowed to move their heads?" - Plato's Allegory of the Cave Plato's Allegory of the Cave raises a thought-provoking question about our perception of reality. Human perception of 3D objects is often limited to the projection of the world as 2D observations. We rely on our prior experiences and imagination abilities to infer the unseen views of objects from these 2D observations. As such, perception is to some degreee a creative process retrieving from imagination. Recently, Neural Radiance Fields (NeRF) [25] exhibited impressive results on novel view synthesis by utilizing implicit functions to represent volumetric density and color data. However, NeRF requires a large amount of images from different camera poses and additional optimizations to model the underlying 3D structure and synthesize an object from a novel view, limiting its use in real-world applications such as AR/VR and autonomous driving. In most practical applications, typically only a few views are available for each object, in which case leads NeRF to output degenerate solutions with distorted geometry [29, 67, 70]. Then recent works [77, 29, 67, 70, 13, 6, 64, 16] started to explore sparse-view novel view synthesis, specifically focusing on generating novel views from a limited number of input images (typically 2-3) with known camera poses. Some of them [29, 67, 70, 13, 6] tried to introduce additional priors into NeRF, _e.g._ depth information, to enhance the understanding of 3D structures in sparse-view scenarios. 
However, due to the limited information available in few-view settings, these methods struggle to generate clear novel images for unobserved regions. To address this issue, SparseFusion [77] and GenNVS[3] propose learning a diffusion model as an image synthesizer for inferring high-quality novel-view images and leveraging prior information from other images within the same category. Nevertheless, since the diffusion model is only trained within a single category, it faces difficulties in generating objects in unseen categories and needs further distillation for each object, rendering it still impractical. In this paper, we investigate the utilization of 2D image priors from pre-trained diffusion models, such as Stable Diffusion [37], for generalizable novel view synthesis **without** further per-object training based on sparse views. However, since pre-trained diffusion models are not designed for 3D structures, directly applying them can result in geometrically and textually inconsistent images, compromising the object's identity in Figure 6. To address this issue, we introduce _DreamSparse_, a framework designed to leverage the 2D image prior from pre-trained diffusion models for novel view image synthesis using a few (2) views. In order to inject 3D information into the pre-trained diffusion model and enable it to synthesize images with consistent geometry and texture, we initially employ a geometry module [54] as a 3D geometry prior inspired by previous geometry-based works [36, 35, 27, 19, 67, 54, 16], which is capable of aggregating feature maps across multi-view context images and learning to infer the 3D features for the novel view image synthesise. This 3D prior allows us to render an estimate from a previously unseen viewpoint while maintaining accurate geometry. However, due to the modality gap, the extracted 3D features cannot be directly used as the input to the pre-trained diffusion model for synthesizing geometry-consistent novel view images. Alternatively, we propose a spatial guidance module which is able to convert the 3D features into meaningful guidance to change the spatial features [72, 60, 2] in the pre-trained diffusion model, thus enabling the pre-trained diffusion model to generate geometric consistency novel view image [72, 60, 2] without altering its parameters. Nevertheless, the spatial guidance from 3D features alone cannot completely overcome the hallucination problem of the pre-trained models, as the information encoded in 3D features is limited. This means it cannot guarantee identity consistency in synthesised novel view images. To overcome the limitation, we further propose a noise perturbation method, where we denoise the result with the pre-trained diffusion model from the noise added from the blurry novel estimate instead of random ones, so that we can further utilize the identity information from the estimate in 3D geometry model. In this way, the frozen pre-trained diffusion model is able to both effective synthesis high-quality novel view images with consistent geometry and identity. With the strong image synthesis capacity of the frozen pre-trained diffusion model, our approach offers several benefits: 1) The ability to infer unseen regions of objects without additional training, as pre-trained diffusion models already possess strong image priors learned from large-scale image-text datasets. 
2) A strong generalization capability, allowing the generation of images across various categories and even in-the-wild images using the strong image priors in the pre-trained diffusion models. 3) The ability to synthesize high-quality and even scene-level images without additional per-object optimization. 4) Since we do not modify the parameters or replace the textual embedding [20] of the pre-trained text-to-image diffusion model, the textual control capability of the pre-existing model is preserved. This allows us to alter the style/texture of the synthesized novel view image with textual control. The comparisons with other methods are given in Table 1. In our experiments, we applied our framework to the real-world CO3D dataset [33]. The extensive qualitative and quantitative results demonstrated that our approach outperformed baselines in both object-level and scene-level novel view synthesis settings by a large margin (about 50% in FID and 20% in LPIPS). Specifically, the results in open-set categories of DreamSparse can even achieve competitive performance with those of the baselines in training domains, demonstrating the advantage of exploiting prior from pre-trained 2D diffusion model in open-set generalization. ## 2 Related Works Geometry-based Novel View Synthesis.Prior research on Novel View Synthesis (NVS) largely focuses on recovering the 3D structure of a scene. This is achieved by estimating the parameters of the input images' camera and subsequently applying a multi-view stereo (MVS) technique, as indicated by several studies [52; 44; 9; 1]. These methods use explicit geometry proxies to facilitate NVS. However, it often fails to synthesize novel views that are both photo-realistic and comprehensive, particularly in the case of occluded areas. In order to address this issue, recent strategies [35; 36] have attempted to integrate the 3D geometry derived from an MVS pipeline with NVS approaches based on deep learning. Despite its progress, the overall quality may deteriorate if the MVS pipeline encounters failures. The utilization of other explicit geometric representations has also been explored by various recent NVS techniques. These include the usage of depth maps [8; 59], multi-plane images [7; 75], or voxels [49; 21]. Sparse-view 3D Reconstruction.Novel View Synthesis (NVS) from fewer views aims to generate a new image from a novel viewpoint using a limited number of 2D images [56]. Because of the limited information available in this setting, most works [57; 76; 51; 14; 47; 55] need the per-object or per-category test-time optimization, which makes them impractical. [59; 30; 55; 67; 58; 5; 64; 16; 50; 39; 34; 13; 54] tried to encode observation into incorporate 3D information, by using 3D information, _e.g._, depth and volume or stronger neural backbone, _e.g._, transformer [62]. In order to synthesise high-quality novel view images, several recent approaches have utilized diffusion priors, such as 3DiM [65], SparseFusion [77], NeRDi [6], Zero-1-to-3 [20] and GenNVS [3]. Diffusion Model for 3D ReconstructionIn order to achieve high-quality novel view synthesis, recent works tried to introduce diffusion model [12; 53; 37; 41; 22; 26; 61; 69; 74; 6; 17; 18; 24; 66; 32; 28; 26] in this area. In the context of novel view synthesis, 3DiM[65] performs novel view synthesis only conditioned on input images and poses without 3D information, so it is hard to generate 3D-consistent images. 
Then SparseFusion [77] and NVS-Fusion [3] proposed to integrate additional geometric information as training conditions for the diffusion model, thereby enabling the generation of 3D-consistent images. However, due to the absence of strong 2D prior information \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & & \\ \hline 1) Sparse-Views & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\ 2) 3D consistent & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ \\ 3) Generate Unseen & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ \\ 4) Open-Set Generalization & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ \\ 5) Train-Free for NVS & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ \\ 6) Textual Control & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparisons with prior works on 1) works with sparse (2-6) input views, 2) generates geometrically consistent views, 3) hallucinates unseen regions, 4) generalizes to instances in unseen categories because of the pre-trained backbones, and 5) free of training during inference time for novel view synthesis. 6) The ability to edit with textual control. in the diffusion models they employed, these approaches are challenging to generalize to objects in open-set categories. In contrast, our approach utilizes a frozen diffusion model pre-trained on a large-scale dataset [45], enhancing its generalization ability for objects in open-set categories. ## 3 Method Given a few context images \(\{C_{i}^{inputs}\}_{i=1}^{N}\) and their poses \(\boldsymbol{\pi}_{i}\), we aim to leverage the 2D image prior from pre-trained diffusion models to synthesize a novel view image at a target pose \(\boldsymbol{\pi}_{target}\). Because pre-trained diffusion models are not 3D-aware, we first employ a geometry-aware module as a 3D prior to extract features in a 3D volume encoded from given context images. In order to leverage 3D features for the pre-trained diffusion model, we further propose a spatial guidance model to convert geometry-grounded features into spatial features [60; 2] with consistent shape in diffusion models, guiding it to synthesize novel view images with correct geometry. However, we discover that relying solely on the diffusion model to sample an accurate reconstruction from random noise is inadequate at maintaining the object's identity due to the hallucination problem [15; 32] as shown in 6. Therefore, we propose a noise perturbation method to alleviate it, guiding the diffusion model to synthesise a novel view image with correct geometry and identity. The overall pipeline is illustrated in Fig. 2. ### 3D Geometry Module In order to infuse 3D awareness into 2D diffusion models, we propose a geometry model to extract 3D features with geometric information for 2D diffusion models. In order to obtain geometry grounding, the process begins by casting a query ray from \(\boldsymbol{\pi}^{target}\) and sampling uniformly spaced points along the ray. For each 3D point, we aim to learn density weighting for computing a weighted linear combination of features along the query ray. Subsequently, this per-ray feature is aggregated across multiple context images, yielding a unified perspective on the 3D structure we aim to reconstruct. Lastly, we render a feature map at \(\boldsymbol{\pi}^{target}\) by raycasting from the target view. Next, we will present the details of this model. 
**Point-wise density weighting for each context image.** For each input context image \(C_{i}^{inputs}\), our geometry model first extracts semantic features using a ResNet50 [10] backbone and then reshapes the encoded feature into a 4 dimensional volumetric representation \(V_{i}\in\mathbb{R}^{c\times d\times h\times w}\), where \(h\) and \(w\) are the height and width of the feature volume, respectively, \(d\) is the depth resolution, and \(c\) is the feature dimension. We pixel-align the spatial dimensions of the volume to that of the original input image via bilinear upsampling. To derive benefit from multi-scale feature representation, we draw feature maps from the first three blocks of the backbone and reshape them into volumetric representations capturing the same underlying 3D space. Given a 3D query point \(\boldsymbol{p}_{j}\) along a query ray \(\boldsymbol{r}^{i}\), we sample feature vectors from all three scales of feature volumes using trilinear interpolation concatenating them together. To calculate the point-wise density weighting, we employ a transformer [62] with a linear projection layer at last followed by a softmax operation to determine a weighted linear combination of point features, resulting in a per-ray feature vector. Further implementation details are reserved for Appendix. Figure 2: The illustration of the method. The first stage involves utilizing a 3D geometry module to estimate 3D structure and aggregate features from context views.In the next stage, a pre-trained 2D diffusion model conditioned on the aggregate features is leveraged to learn a spatial guidance model that guides the diffusion process for accurate synthesis of the underlying object. Aggregate features from different context images.To understand the unified structure of the \(3\)D object, we consolidate information from all given context images. More specifically, we employ an extra transformer, enabling us to dynamically consolidate ray features from a varying number of context images that correlate with each query ray. The final feature map rendering at a query view is constructed by raycasting from the query view and computing per-ray feature vector for each ray. We render the feature map \(\mathbf{F}\) at a resolution of \(\mathbb{R}^{32\times 32}\), compositing features sampled from a 3D volume with geometry awareness with respect to the target view. We denote \(g\) as the feature map rendering function and \(\mathbf{F}\) as the resulting aggregate feature map. \[\mathbf{F}=g_{\phi}(\mathbf{\pi}^{inputs},\mathbf{C}^{inputs},\mathbf{\pi}^{target}) \tag{1}\] where \(\mathbf{F}\in\mathbb{R}^{d\times 32\times 32}\) with \(d=256\), and \(\phi\) is trainable parameters. **Color Estimation** To enforce geometric consistency, we directly obtain aggregation weights from the transformer outputs and linearly combine RGB color values drawn from the context images to render a coarse color estimate \(E\) at the query view. \[E=g_{\phi,color}(\mathbf{\pi}^{inputs},\mathbf{C}^{inputs},\mathbf{\pi}^{target}) \tag{2}\] We impose a color reconstruction loss on the coarse image against the ground-truth image. \[\mathcal{L}_{recon}=\sum_{\mathbf{\pi}^{target}}\left\|g_{\phi,color}(\mathbf{\pi}^{ inputs},\mathbf{C}^{inputs},\mathbf{\pi}^{target})-C^{target}\right\|^{2} \tag{3}\] ### Spatial Guidance Module Because of the modality gap between the 3D features \(\mathbf{F}\) and the input of the pre-trained diffusion model, 3D features cannot be directly used as the input of the pre-trained diffusion model. 
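Before turning to the spatial guidance module, the per-ray aggregation of Eqs. (1)-(3) can be made concrete with a short PyTorch-style sketch. It is a simplified toy version: the feature dimensions, the transformer depths, and the way the coarse color estimate reuses the point weights are illustrative assumptions, not the exact architecture.

```python
import torch
import torch.nn as nn

class RayAggregator(nn.Module):
    """Toy sketch of the per-ray aggregation in Eqs. (1)-(3): softmax weights over
    points along a ray give a per-ray feature (and a coarse color estimate E),
    then per-ray features from several context views are fused by a second
    transformer."""
    def __init__(self, feat_dim=256):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True)
        self.point_tf = nn.TransformerEncoder(enc_layer, num_layers=4)  # along the ray
        self.view_tf = nn.TransformerEncoder(enc_layer, num_layers=4)   # across views
        self.to_weight = nn.Linear(feat_dim, 1)

    def forward(self, point_feats, point_rgbs):
        # point_feats: (views, points_per_ray, feat_dim), trilinearly sampled features
        # point_rgbs:  (views, points_per_ray, 3), colors drawn from context images
        h = self.point_tf(point_feats)
        w = torch.softmax(self.to_weight(h), dim=1)        # density-like weights
        ray_feat = (w * point_feats).sum(dim=1)            # (views, feat_dim)
        ray_rgb = (w * point_rgbs).sum(dim=1)              # coarse color per view
        fused = self.view_tf(ray_feat.unsqueeze(0)).mean(dim=1)   # aggregate views
        return fused.squeeze(0), ray_rgb.mean(dim=0)       # per-ray F entry, E entry

agg = RayAggregator()
f, e = agg(torch.randn(2, 64, 256), torch.rand(2, 64, 3))  # 2 context views, 64 points
```

Raycasting one such query per pixel of the 32 x 32 target grid then yields the feature map F and the coarse estimate E used below.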
To leverage the 3D information in the 3D features, we propose the spatial guidance module to convert the 3D features into guidance to rectify the spatial features [60; 72; 2] that have a role in forming fine-grained spatial information in the diffusion process (normally the feature maps after the 4-th layer). To derive this guidance from 3D features, we construct our spatial guidance module following ControlNet [72] which trains a separate copy of all the encoder blocks as well as the middle block from Stable Diffusion's U-Net with 1x1 convolution layers initialized with zeros between each block. Let \(T_{\theta}\) be the spatial guidance module, and intermediate outputs from each block \(j\) of \(T_{\theta}\) as \(T_{\theta,j}(\mathbf{F})\) with weight \(\lambda\). In order to change the spatial features in the pre-trained diffusion model, we directly add \(T_{\theta,j}(\mathbf{F})\) into the corresponding decoder block of the pre-trained diffusion model's U-Net. By optimizing \(T_{\theta}\) with gradients backpropagated from the pre-trained diffusion model's noise prediction objective. \[\mathcal{L}_{diffusion}=\mathbb{E}_{x_{0},t,\mathbf{F},\epsilon\sim\mathcal{N}(0, 1)}\left[\left\|\epsilon-\epsilon_{\phi}(x_{t+1},t,T_{\theta}(\mathbf{F}))\right\| ^{2}\right] \tag{4}\] \(T_{\theta}\) will be optimized to learn how to semantically meaningful convert 3D features from the geometry model into the guidance to rectify spatial features in the diffusion process, enabling it to generate geometry-consistent images. In Section 4.5, we visualize the spatial features after adding the spatial guidance to show the effects of the spatial guidance model. During training, we jointly optimize \(g_{\phi}\) and \(T_{\theta}\) using the overall loss. \[\min_{\phi,\theta}\mathcal{L}_{recon}(g_{\phi})+\mathcal{L}_{diffusion}(T_{ \theta}) \tag{5}\] While in training time, we use a ground-truth image as \(x_{0}\) to optimize \(\mathcal{L}_{diffusion}\), in inference time, we initialize \(x_{0}\) with an image rendered from \(g_{\phi,color}\). Noise PerturbationWhile spatial guidance module by itself is able to guide the pre-trained diffusion model to synthesize novel view images with consistent geometry. It still not always can synthesise images with the same identity as context views because of the hallucinate problem [15; 32] in the pre-trained models. To alleviate this problem, we propose adding noise perturbation to the novel view estimate \(E\) from the geometry model and denoising the result with the pre-trained diffusion model, _e.g._ Stable Diffusion [38], so that it can leverage the identity information from the estimate. As shown by [23], applying the denoising procedure can project the sample to a manifold of natural images. We use the formulations from denoising diffusion models [12] to perturb an initial image \(x_{0}=E\) with Gaussian noise to get a noisy image \(x_{t}\) as follows: \[x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon \tag{6}\] where \(\bar{\alpha}_{t}\) depends on scheduling hyperparameters and \(\epsilon\sim\mathcal{N}(0,\,1)\). During the training time, the noise is still randomly initialized, and we use the Noise Perturbation method in the inference time to improve the identity consistency. We show its ablation study in Section 4.5. Experiments In this section, we first validate the efficacy of our DreamSparse framework on zero-shot novel view synthesis by comparing it with other baselines. 
Then, we perform ablation studies on important design choices, such as noise perturbation and visualization of spatial features, to understand their effects. We also present qualitative examples of our textual control ability and include a discussion on observations. ### Dataset and Training Details Following SparseFusion [77], we perform experiments on real-world scenes from the Common Objects in 3D (CO3Dv2) [33], a dataset with real-world objects annotated with camera poses. We train and evaluate our framework on the CO3Dv2 [33] dataset's fewview_train and fewview_dev sequence sets respectively. We use Stable Diffusion v1.5 [38] as the frozen pre-trained diffusion model and DDIM [53] to synthesize novel views with 20 denoising steps. The resolutions of the feature map for the spatial guidance module and latent noise are set as 32 \(\times\) 32, and the spatial guidance weight \(\lambda=2\). The three transformers used in the geometry module are all 4 layers, and the output 3D features are set as 32 \(\times\) 32 to match the latent noise dimensions. We jointly train the geometry and the spatial models on 8 A100-40GB GPUs for 3 days with a batch size of 15. To demonstrate our model's generalization capability at object-level novel view synthesis, we trained our framework on a subset of 10 categories as specified in [33]. During each training iteration, a query view and one to four context views of an object were randomly sampled as inputs to the pipeline. To further evaluate scene-level novel view synthesis capability, we trained our framework on the hydrant category, incorporating the full background, using the same training methodology as above. ### Competing Methods We compare against previous state-of-the-art (SoTA) methods for which open-source code is available. We have included PixelNeRF [68], a feature re-projection method, in our comparison. Additionally, we compare our methods against SparseFusion [77], the most recently published SoTA method that utilizes a diffusion model for NVS. We train our framework and SparseFusion on 10 categories of training sets. The PixelNeRF training was conducted per category due to its category-specific hyperparameters. For a fair comparison, all methods perform NVS **without** per-object optimization during the inference time. Because we do not replace the textual embedding in the pre-trained diffusion model, we use the prompt 'a picture of <class_name>' as the default prompt for both training and inference. ### Main Results Analysis Given 2 context views, we evaluate novel view synthesis quality using the following metrics: FID [11], LPIPS [73], and PSNR 2. We believe that the combination of FID, LPIPS, and PSNR provides a comprehensive evaluation of novel view synthesis quality. FID and LPIPS measure the perceptual quality of the images, while PSNR measures the per-pixel accuracy. We note that PSNR has some drawbacks as a metric for evaluating generative models. Specifically, PSNR tends to favor blurry Figure 3: Novel view synthesizing results on **open-set** category objects with the same context image inputs, where SF denotes SparseFusion [77] and GT denotes Ground-Truth image. More results are given at our project webpage and appendix. images that lack detail. This is because PSNR only measures the per-pixel accuracy of an image, and does not take into account the overall perceptual quality of the image. By using all three metrics, we can get a more complete picture of the quality of the images generated by our model. 
#### 4.3.1 Object Level Novel View Synthesis In-Domain EvaluationWe evaluate the performance of unseen objects NVS in training 10 categories. The quantitative results are presented in Table 5, which clearly demonstrates that our method surpasses the baseline methods in terms of both FID and LPIPS metrics. More specifically, DreamSparse outperforms SparseFusion by a substantial margin of 53% in the FID score and 28% in LPIPS. This significant improvement can be attributed to DreamSparse's capacity to generate sharper, semantically richer images, as depicted in Figure 1. This indicates the benefits of utilizing the potent image synthesis capabilities of pre-trained diffusion models. Open-Set EvaluationWe also evaluate the performance of objects NVS in open-set 10 categories, because PixelNerf is per-category trained, we do not report its open-set generalization results. According to Table 6, it is evident that our method surpasses the baseline in both evaluation metrics in all categories, surpassing the second-best method by 28% in LPIPS and 43% in FID. Moreover, the results derived from our method are not just competitive, but can even be compared favourably to the training category evaluations of the baseline in Table 5 (122.2 vs 172.2 in FID and 0.24 vs 0.29 in LPIPS). This clearly illustrates the benefits of utilizing 2D priors from a large-scale, pre-trained 2D diffusion model for open-set generalization. We also show the qualitative results in Figure 3, and it shows that the novel view image synthesised by our method can still achieve sharp and meaningful results on objects in open-set categories. #### 4.3.2 Scene Level Novel View Synthesis We report our evaluation results about scene-level NVS in Table 7. As shown in the table, DreamSparse significantly outperforms the baselines in terms of FID and LPIPS scores, surpassing the second-best performance by approximately 70% in FID and 24% in LPIPS, respectively. This underscores the effectiveness of our method in the context of scene-level NVS tasks. Despite our method showing comparable performance to the baseline in terms of Peak Signal-to-Noise Ratio (PSNR), it's worth mentioning that PSNR often favors blurry images lacking in detail [42; 40; 3]. This becomes evident in Figure 4, where despite our sharp and consistent synthesis results, PSNR still leans towards the blurry image produced by PixelNeRF. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{1 View} & \multicolumn{4}{c}{2 Views} & \multicolumn{4}{c}{5 Views} \\ \cline{2-13} & FID\(\downarrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & FID\(\downarrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & FID\(\downarrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & FID\(\downarrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) \\ \hline PixelNeRF [68] & 343.89 & 0.75 & **13.31** & 319.96 & 0.74 & **13.94** & 286.30 & 0.71 & 14.59 \\ SparseFusion [77] & 272.72 & 0.81 & 13.05 & 255.05 & 0.78 & 13.55 & 231.73 & 0.71 & **14.91** \\ Ours & **75.63** & **0.59** & 13.02 & **73.47** & **0.56** & 13.48 & **70.62** & **0.54** & 14.15 \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative evaluation metrics for scene-level novel view synthesis on the hydrant category from CO3D, where SF denotes SparseFusion [77]. 
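The reported numbers can be reproduced with standard tooling: FID and LPIPS are typically computed with off-the-shelf implementations of the Inception-based Frechet distance and the learned perceptual metric, while PSNR reduces to the closed form below. The snippet is only a reference for the metric definition, not the evaluation code used in this paper.

```python
import numpy as np

def psnr(img_a, img_b, data_range=1.0):
    """Peak signal-to-noise ratio (in dB) for images scaled to [0, data_range]."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10((data_range ** 2) / mse)
```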
### Textual Control Style Transfer

As we do not replace or remove text conditioning in the pre-trained diffusion model, our method is additionally capable of controlling the image generation with text. We demonstrate an example use case where we conduct both novel view synthesis and style transfer via text in Figure 5. As a further ablation, we visualize the spatial features obtained after adding the spatial guidance in Figure 7; the resulting feature maps are consistent with the geometry of the ground truth image. This consistency enables the pre-trained diffusion model to generate images that accurately mirror the original geometry.

Figure 7: The spatial feature visualization with the spatial guidance model, where context denotes the input context image, 2nd block denotes the visualization of feature maps from the 2nd block of the decoder, and output denotes the synthesised novel view image.

**Spatial Guidance Weight** We investigate the effects of the spatial guidance weight on the quality and consistency of synthesised novel view images. Our study varies the spatial guidance weight \(\lambda\), and the results in Fig. 6 show that when \(\lambda=0\) (indicating no spatial guidance), the pre-trained diffusion model failed to synthesise a novel view image that was consistent in terms of geometry and identity. However, as the weight increased, the synthesised images exhibited greater consistency with the ground truth. It is important to note, though, that an excessively high weight could diminish the influence of features in the pre-trained diffusion model, potentially leading to blurry output. Given the trade-off between quality and consistency, we set \(\lambda=2\) as the default hyperparameter.

**Effect of Noise Perturbing Color Estimation** The impact of the Noise Perturbation method is showcased in Figure 8. It is evident that when the diffusion process begins from random noise, the spatial guidance model can successfully guide the pre-trained diffusion model to synthesize images with consistent geometry. However, the color or illumination density information is partially lost, leading to distortions in the synthesized novel view. In contrast, synthesizing the image from noise that is added to the color estimation from the geometry model yields better results. As depicted in '+20 Noise' in Figure 8, the pre-trained diffusion model can effectively utilize the color information in the estimates, resulting in a more consistent image synthesis. We also experimented with varying the noise level added to the estimate.
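A minimal sketch of this noise-perturbation step (Eq. (6)) is given below. The linear beta schedule, the number of diffusion steps, and applying the operation directly to an image tensor are illustrative assumptions; in the actual pipeline the coarse estimate would first be encoded into the latent space of the pre-trained diffusion model, whose own noise schedule would be used.

```python
import torch

def perturb_estimate(E, t, alphas_cumprod):
    """Forward-noise the coarse estimate E to step t (Eq. (6)):
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, with x_0 = E."""
    a_bar = alphas_cumprod[t]
    eps = torch.randn_like(E)
    return a_bar.sqrt() * E + (1.0 - a_bar).sqrt() * eps

# Illustrative linear beta schedule with 1000 steps.
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

E = torch.rand(1, 3, 256, 256) * 2 - 1          # coarse color estimate in [-1, 1]
x_t = perturb_estimate(E, t=20, alphas_cumprod=alphas_cumprod)
# Denoising (e.g., with DDIM) then starts from x_t instead of pure Gaussian noise,
# so the color and identity cues carried by E are retained.
```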
Our observations suggest that if the noise added to the blurry estimation is insufficient, the pre-trained diffusion model struggles to denoise the image because of the distribution mismatch between the blurry color estimate and the Gaussian distribution, thereby failing to produce a sharp and consistent output.

Figure 8: Effect of the noise perturbation level on novel view image synthesis. The images in the top row are visualizations of the noised images used as input to the diffusion process, and the images in the bottom row are the images synthesized by the diffusion model. '+5 noise' denotes the addition of 5 steps of noise to the color estimate from the geometry model, and 'random noise' denotes randomly sampled Gaussian noise. All images were generated using a DDIM [53] sampler with 20 inference steps.

## 5 Conclusion

In this paper, we present the _DreamSparse_ framework, which leverages the strong 2D priors of a frozen pre-trained text-to-image diffusion model for novel view synthesis from sparse views. Our method outperforms existing baselines in both training-domain and open-set object-level novel view synthesis. Further results corroborate the benefits of utilizing a pre-trained diffusion model in scene-level NVS as well as in text-controlled scene style transfer, clearly outperforming existing models and demonstrating the potential of leveraging 2D pre-trained diffusion models for novel view synthesis.

**Limitations and Negative Social Impact** Despite its capabilities, we discovered that our 3D Geometry Module struggles with generating complex scenes, especially ones with non-standard geometry or intricate details. This is due to the limited capacity of the geometry module and limited data, and we will introduce a stronger geometry backbone and train it on larger datasets. On the social impact front, our technology could potentially lead to job displacement in certain sectors. For instance, professionals in fields such as graphic design or 3D modelling might find their skills becoming less in demand as AI-based techniques become more prevalent and advanced. It's important to note that these negative implications are not exclusive to this study, and should be widely considered and addressed within the realm of AI research.
2301.09820
A Stability Analysis of Fine-Tuning a Pre-Trained Model
Fine-tuning a pre-trained model (such as BERT, ALBERT, RoBERTa, T5, GPT, etc.) has proven to be one of the most promising paradigms in recent NLP research. However, numerous recent works indicate that fine-tuning suffers from the instability problem, i.e., tuning the same model under the same setting results in significantly different performance. Many recent works have proposed different methods to solve this problem, but there is no theoretical understanding of why and how these methods work. In this paper, we propose a novel theoretical stability analysis of fine-tuning that focuses on two commonly used settings, namely, full fine-tuning and head tuning. We define the stability under each setting and prove the corresponding stability bounds. The theoretical bounds explain why and how several existing methods can stabilize the fine-tuning procedure. In addition to being able to explain most of the observed empirical discoveries, our proposed theoretical analysis framework can also help in the design of effective and provable methods. Based on our theory, we propose three novel strategies to stabilize the fine-tuning procedure, namely, Maximal Margin Regularizer (MMR), Multi-Head Loss (MHLoss), and Self Unsupervised Re-Training (SURT). We extensively evaluate our proposed approaches on 11 widely used real-world benchmark datasets, as well as hundreds of synthetic classification datasets. The experiment results show that our proposed methods significantly stabilize the fine-tuning procedure and also corroborate our theoretical analysis.
Zihao Fu, Anthony Man-Cho So, Nigel Collier
2023-01-24T05:11:17Z
http://arxiv.org/abs/2301.09820v2
# A Stability Analysis of Fine-Tuning a Pre-Trained Model ###### Abstract Fine-tuning a pre-trained model (such as BERT, ALBERT, RoBERTa, T5, GPT, etc.) has proven to be one of the most promising paradigms in recent NLP research. However, numerous recent works indicate that fine-tuning suffers from the instability problem, i.e., tuning the same model under the same setting results in significantly different performance. Many recent works have proposed different methods to solve this problem, but there is no theoretical understanding of why and how these methods work. In this paper, we propose a novel theoretical stability analysis of fine-tuning that focuses on two commonly used settings, namely, full fine-tuning and head tuning. We define the stability under each setting and prove the corresponding stability bounds. The theoretical bounds explain why and how several existing methods can stabilize the fine-tuning procedure. In addition to being able to explain most of the observed empirical discoveries, our proposed theoretical analysis framework can also help in the design of effective and provable methods. Based on our theory, we propose three novel strategies to stabilize the fine-tuning procedure, namely, Maximal Margin Regularizer (MMR), Multi-Head Loss (MHLoss), and Self Unsupervised Retaining (SURT). We extensively evaluate our proposed approaches on 11 widely used real-world benchmark datasets, as well as hundreds of synthetic classification datasets. The experiment results show that our proposed methods significantly stabilize the fine-tuning procedure and also corroborate our theoretical analysis. ## 1 Introduction Fine-tuning a pre-trained model (such as BERT (Devlin et al., 2019), ALBERT (Lan et al., 2020), and RoBERTa (Liu et al., 2019)) has proven to be one of the most promising paradigms for tackling Natural Language Processing (NLP) tasks. Many cutting edge NLP models achieve state-of-the-art results by fine-tuning pre-trained models. However, it has been observed by many researchers that existing fine-tuning procedures suffer from the instability problem (Devlin et al., 2019; Phang et al., 2018; Lee et al., 2019; Zhu et al., 2020; Dodge et al., 2020; Pruksachatkun et al., 2020; Mosbach et al., 2020; Zhang et al., 2020; Zhao et al., 2021; Han et al., 2021), i.e., fine-tuning a model with the same setting results in significantly different performance. Such instability problem substantially impairs the model performance and makes different fine-tuned models incomparable with each other. Many different approaches have been proposed to solve this problem. Mosbach et al. (2020) propose to use a smaller learning rate and more iteration steps, while Arora et al. (2018); Sanyal et al. (2019); Hua et al. (2021); Aghajanyan et al. (2020) propose to control the Lipschitz constant with different noise regularizations. However, there is no unified theoretical framework to help understand the effectiveness of these methods. In this paper, we give a unified theoretical stability analysis of two most widely used fine-tuning paradigms, namely, the full fine-tuning (Devlin et al., 2019) and the head tuning (Peters et al., 2018; Wei et al., 2021) (also called linear probing by Peters et al. (2019); Chen et al. (2021); Kumar et al. (2021)). Full fine-tuning means tuning all the parameters initialized with the pre-trained encoder while head tuning means freezing the pre-trained encoder and only tuning the specific task head layer (Kumar et al., 2021) on top of that encoder. 
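As a concrete illustration of the two settings, the sketch below builds an optimizer for either full fine-tuning or head tuning in PyTorch. The encoder/head names, the learning rate, and the commented-out model loading are placeholders, not the exact training setup analyzed in this paper.

```python
import torch.nn as nn
from torch.optim import AdamW

def build_optimizer(encoder: nn.Module, head: nn.Module, mode: str, lr: float = 2e-5):
    """Return an optimizer for the two settings: 'full' updates encoder + head;
    'head' freezes the pre-trained encoder and tunes only the task head."""
    if mode == "full":
        params = list(encoder.parameters()) + list(head.parameters())
    elif mode == "head":
        for p in encoder.parameters():
            p.requires_grad_(False)        # freeze the pre-trained encoder
        params = list(head.parameters())
    else:
        raise ValueError(mode)
    return AdamW(params, lr=lr)

# Example with a generic pre-trained encoder and a linear head on top of it:
# encoder = AutoModel.from_pretrained("bert-base-uncased")   # e.g., via transformers
# head = nn.Linear(768, 2)
# opt = build_optimizer(encoder, head, mode="head")
```

Under head tuning, the frozen encoder turns the trainable part into a linear model on fixed features, which is what makes the head-tuning analysis below tractable.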
Different from training from scratch, a fine-tuned pre-trained model naturally possesses many good properties and is amenable to theoretical analysis. Specifically, as empirically indicated by Radiya-Dixit and Wang (2020), the pre-trained parameters and the fine-tuned parameters are very close to each other. This observation motivates us to approximate the original function with its second-order Taylor expansion, which provides great convenience for theoretical analysis. To analyze the full fine-tuning, we first define the leave-one-out model stability following Bousquet and Elisseeff (2002); Schliserman and Koren (2022) and then prove a stability upper bound by analyzing the model's second-order Taylor expansion. This bound explains why increasing the training sample size or reducing the Lipschitz constant can help stabilize the fine-tuning procedure. Moreover, following Wei et al. (2021); Kumar et al. (2021), we further give a theoretical analysis of the head tuning paradigm where only a linear head is trained. Our theoretical analysis shows that increasing the iteration number, increasing the training sample size, or using a smaller learning rate stabilizes the fine-tuning procedure. This observation is also consistent with many empirical results (Mosbach et al., 2020; Hua et al., 2021). We list these widely used stabilizing methods with their corresponding theoretical basis in Table 1. We also conduct comprehensive experiments to further verify these conclusions. Our theoretical analysis can not only explain the principle behind majority of known empirical facts but also contribute to the design of novel techniques to help stabilize the fine-tuning procedure. These methods are also shown in Table 1. First, we propose a novel Maximal Margin Regularizer (MMR) that maximizes the margin between the encoded features from different classes by adding a new regularization term to the original loss. Minimizing this term increases the distance between the features. We show both theoretically and empirically that increasing this margin can help improve fine-tuning stability. Afterwards, we propose a novel Multi-Head Loss (MHLoss), where we train several linear heads simultaneously and combine them together to predict the label. We theoretically prove that such a combination accelerates the convergence rate of the training procedure and thus improves the fine-tuning stability. Finally, we propose a novel Self Unsupervised Re-Training (SURT) method to show that fine-tuning on a model with weights closer to the final weights helps stabilize the fine-tuning procedure. We re-train the model with the masked language model task on the training data without labels and then fine-tune this model on the training data. This method originates from our theoretical prediction that reducing the distance between the pre-trained parameters and the fine-tuned parameters help improve the fine-tuning stability. We also conduct extensive experiments with our methods on both the full fine-tuning setting and the head tuning setting to demonstrate that these methods are empirically applicable to both settings. Our contributions (also shown in Table 1) can be summarized as follows: (1) We give a theoretical stability analysis of two most popular fine-tuning settings, namely, the full fine-tuning and the head tuning. (2) Our theory explains the effectiveness of many existing works that stabilize the fine-tuning. (3) We design three novel methods to stabilize the fine-tuning based on our theory. 
(4) We conduct extensive experiments on 11 real-world NLP datasets, as well as a bunch of synthetic classification tasks to show the effectiveness of our theory and our newly proposed methods. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Methods** & **Theoretical Basis** & **Our Experimental Verification** & **Reference** \\ \hline Increase sample size \(n\) & Theorem 2.2, Theorem 2.4 & \(\$4.25\), \(\$4.3.2\) & Devlin et al. (2019) \\ Decrease Lipschitz constant \(L\) & Theorem 2.2, Corollary 2.5 & \(\$4.21\), \(\$4.26\), \(\$4.27\) & Hua et al. (2021) \\ Increase Iteration steps \(T\) & Theorem 2.4 & \(\$4.23\) & Mosbach et al. (2020) \\ Use smaller learning rate \(\eta\) & Theorem 2.4 & \(\$4.24\) & Mosbach et al. (2020) \\ Max Margin Regularizer (MMR) & Theorem 2.4 & \(\$4.21\), \(\$4.26\), \(\$4.27\), \(\$4.3\) & **Our New Method** (\$3.1) \\ Multi-Head Loss (MHLoss) & Corollary 3.1 & \(\$4.21\), \(\$4.22\), \(\$4.26\), \(\$4.27\), \(\$4.3\) & **Our New Method** (\$3.2) \\ Self Unsupervised Re-Training (SURT) & Theorem 2.2 & \(\$4.21\), \(\$4.26\), \(\$4.27\), \(\$4.3\) & **Our New Method** (\$3.3) \\ \hline \hline \end{tabular} \end{table} Table 1: Methods for stabilizing fine-tuning. We provide a novel theoretical analysis of existing methods. Based on our theory, we also propose three new methods to stabilize the fine-tuning procedure. We conduct extensive experiments to verify our theoretical observations while more detailed empirical verifications of existing methods can also be found in the corresponding reference. Theoretical Analysis ### Notation We introduce the notation used throughout this paper. Here, we focus on the classification tasks to simplify our analysis, as most of the NLP tasks can be described as classification tasks. Let \(S=\{(x_{1},y_{1}),(x_{2},y_{2}),\cdots,(x_{n},y_{n})\}\) be a training set, where \(x_{i}\in\mathbb{R}^{d_{x}}\) is an input feature vector, \(y\in\{-1,1\}\) is the corresponding output label, \(n\) is the sample size, and \(d_{x}\) is the dimention size for vector \(x_{i}\). We denote the feature encoder as \(E\), which is usually a pre-trained encoder. The augmented encoded representation of \(x_{i}\) is calculated as \(\tilde{x}_{i}=[E(x_{i})^{\mathsf{T}},-1]^{\mathsf{T}}\). We define \(S^{i}=\{(x_{1},y_{1}),(x_{2},y_{2}),\cdots,\)\((x_{i-1},y_{i-1}),(x_{i+1},y_{i+1}),\cdots,(x_{n},y_{n})\}\) as a pertubation (Bousquet and Elisseeff, 2002) of the training set \(S\) by removing sample \((x_{i},y_{i})\). We denote the trainable parameter as \(w=\mathcal{A}(S)\in\mathbb{R}^{d_{w}}\), which is obtained by training the dataset \(S\) with algorithm \(\mathcal{A}\). Here, \(w\) is a column vector and \(d_{w}\) represents the dimension of the parameter vector \(w\). We denote the pre-trained initialization of the trainable model parameter as \(w_{0}\), the parameter at the \(t\)th iteration step as \(w_{t}\), and the optimal solution as \(w_{*}\). We use \(M\succeq 0\) (resp., \(M\succ 0\)) to indicate that the matrix \(M\) is positive semidefinite (resp., positive definite). Let \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) be a function. 
We say that \(f\) is convex if \(f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)\) for all \(x,y\in\mathbb{R}^{d}\) and \(t\in[0,1]\); \(f\) is \(L\)-Lipschitz if \(\|f(x)-f(y)\|\leq L\|x-y\|\) for all \(x,y\in\mathbb{R}^{d}\); \(f\) is \(\beta\)-smooth if \(\|\nabla f(x)-\nabla f(y)\|\leq\beta\|x-y\|\) for all \(x,y\in\mathbb{R}^{d}\); and \(f\) is \(\mu\)-strongly convex if \(f(y)\geq f(x)+\langle\nabla f(x),(y-x)\rangle+\frac{\mu}{2}\|y-x\|_{2}^{2}\) for all \(x,y\in\mathbb{R}^{d}\), where \(\langle a,b\rangle\) represents the inner product of the vectors \(a\) and \(b\) while \(\nabla f(x)\) is the gradient of \(f\) at \(x\). We use \(M^{\mathsf{T}}\) to denote the transpose of matrix \(M\). ### Stability Analysis for Full Fine-Tuning Full fine-tuning tunes all parameters of a given pre-trained model. However, directly giving a theoretical analysis of such a complex function is quite challenging. Fortunately, based on the empirical observation by Radiya-Dixit and Wang (2020) that the final fine-tuned weights are close to the pre-trained weights, we can approximate the function by its second-order Taylor expansion. Then, we apply the stability analysis of Schliserman and Koren (2022) and show that it yields a finite bound in this setting. This bound gives us a powerful tool to analyze many existing methods. Specifically, this bound theoretically explains the effectiveness of increasing the training sample size (Devlin et al., 2019) and lowering the Lipschitz constant (Hua et al., 2021) in stabilizing the fine-tuning procedure. We will also use this theorem to help design our new SURT method. Before we delve into the stability analysis, we should first give a formal definition of model stability. Stability theory for general learning algorithms has been extensively explored by many previous works (Bousquet and Elisseeff, 2002; Shalev-Shwartz et al., 2010; Charles and Papailiopoulos, 2018). They propose to use the pointwise hypothesis stability to measure the output variation after removing one of the training samples. On the other hand, Schliserman and Koren (2022) propose to directly use the distance between model parameters as a measure of stability. This leads to the following definition. **Definition 2.1** (Leave-one-out Model Stability (Schliserman and Koren, 2022)).: We say that a learning algorithm \(\mathcal{A}\) has leave-one-out model stability \(\epsilon\) if \(\forall i\in\{1,\cdots,n\}\), \[\mathbb{E}_{S}\left[\big{\|}\mathcal{A}(S^{i})-\mathcal{A}(S)\big{\|}\right]\leq\epsilon. \tag{1}\] With a slight abuse of notation, we denote \(\epsilon\) as the infimum over all feasible \(\epsilon\)'s for which Definition 2.1 holds. To analyze the behavior of training the model on the dataset \(S\), we first assume that the overall loss function \(f\) is \(L\)-Lipschitz and \(\beta\)-smooth. These two assumptions are widely used in the analysis of the behavior of neural networks (Shalev-Shwartz and Ben-David, 2014; Nesterov et al., 2018; Schliserman and Koren, 2022). Moreover, as empirically indicated by Radiya-Dixit and Wang (2020), the pre-trained parameter \(w_{0}\) and the fine-tuned parameter \(w_{*}\) are very close to each other.
Therefore, around the optimal solution \(w_{*}\), \(f(w,x)\) can be approximated by its second-order Taylor expansion as \[f(w,x)=f(w_{*},x)+(w-w_{*})^{\mathsf{T}}\nabla f(w_{*},x)+\frac{1}{2}(w-w_{*})^{\mathsf{T}}\nabla^{2}f(w_{*},x)(w-w_{*}), \tag{2}\] where \(x\) stands for the given fixed input data while \(w\) is the function parameter. Since \(w_{*}\) is the optimal solution, the Hessian matrix \(\nabla^{2}f(w_{*},x)\) is positive semidefinite. To simplify our analysis, we focus on the scenario where the Hessian matrix is positive definite and satisfies \(\beta I\succeq\nabla^{2}f(w_{*},x)\succeq\mu I\) with \(\mu>0\). Recall that the gradient descent method iterates as \(w_{t+1}=w_{t}-\eta\frac{1}{n}\sum_{i=1}^{n}\nabla f(w_{t},x_{i})\), where \(w_{t+1}\) denotes the weight at the \((t+1)\)st iteration and \(\eta\) is the learning rate. Moreover, as indicated in Nakkiran et al. (2021); Belkin et al. (2019); Ishida et al. (2020), big pre-trained models almost achieve zero training loss. Hence, we assume that \(f(w_{*},x)=0\). Now, we are ready to present the following stability bound for the Taylor expansion of full fine-tuning. **Theorem 2.2** (Stability Bound for Full Fine-Tuning).: _Suppose that the loss function \((w,x)\mapsto f(w,x)\) is non-negative, \(L\)-Lipschitz, and \(\beta\)-smooth with respect to \(w\), \(\mu I\preceq\nabla^{2}f(w_{*},x)\) with \(\mu>0\), and \(f(w_{*},x)=0\). If \(\mathcal{A}\) is the gradient descent method with learning rate \(\eta=\frac{1}{\beta}\), then the leave-one-out model stability satisfies_ \[\mathbb{E}_{S}\left[\big{\|}\mathcal{A}(S^{i})-\mathcal{A}(S)\big{\|}\right]\leq\frac{\sqrt{2L\|w_{0}-w_{*}\|/\beta}}{n(1/\sqrt[4]{1-\frac{\mu}{\beta}}-1)}. \tag{3}\] The proof can be found in Appendix A.1. It can be observed from Theorem 2.2 that increasing the training sample size \(n\) reduces the term on the right-hand side of Equation (3). Therefore, it brings down the stability upper bound and potentially stabilizes the fine-tuning procedure. This insight is consistent with the empirical findings from numerous earlier papers (Devlin et al., 2019; Lee et al., 2019; Dodge et al., 2020). In addition, Cattan et al. (2022) observe that data augmentation can strengthen the stability, which further supports this theoretical conclusion. The intuition behind this prediction is that as the sample size \(n\) increases, the impact of the noise introduced by the training set perturbation is mitigated and thus the stability improves. We can also conclude from Theorem 2.2 that reducing the Lipschitz constant \(L\) of the function \(f\) can similarly diminish the leave-one-out model stability, hence stabilizing the training procedure. This phenomenon has also been examined by numerous recent works (Arora et al., 2018; Sanyal et al., 2019; Hua et al., 2021; Aghajanyan et al., 2020), which propose to impose a noise perturbation component on the input features and then minimize the distance between the original output and the noisy output. Our theoretical analysis explains why controlling the Lipschitz constant might enhance stability. Lastly, we note that reducing the distance \(\|w_{0}-w_{*}\|\) between the initial parameter \(w_{0}\) and the optimal parameter \(w_{*}\) can also improve stability. Intuitively, if the start point and the endpoint are close to each other, the optimization procedure is less likely to jump to some other local minima, thus making the training procedure more stable. We will propose a novel SURT (§3.3) method based on this observation.
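To make the dependence on \(n\), \(L\), and \(\|w_{0}-w_{*}\|\) concrete, the right-hand side of Equation (3) can be evaluated directly. The following is a minimal Python sketch in which the values of \(L\), \(\beta\), \(\mu\), and \(\|w_{0}-w_{*}\|\) are illustrative choices of ours rather than quantities measured on a real model:

```python
import numpy as np

def full_ft_stability_bound(n, L, beta, mu, dist_w0_wstar):
    """Right-hand side of Eq. (3) in Theorem 2.2."""
    numerator = np.sqrt(2.0 * L * dist_w0_wstar / beta)
    denominator = n * (1.0 / (1.0 - mu / beta) ** 0.25 - 1.0)
    return numerator / denominator

# Illustrative (assumed) constants for L, beta, mu and the parameter distance.
L, beta, mu, dist = 1.0, 10.0, 1.0, 0.5
for n in [100, 1_000, 10_000]:
    print(n, full_ft_stability_bound(n, L, beta, mu, dist))
# The bound decays as 1/n, and it also shrinks when L or the distance is reduced,
# matching the three observations discussed above.
```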
### Stability Analysis for Head Tuning In Theorem 2.2, we assume some properties of the loss function \(f\) and study how the sample size \(n\), Lipschitz constant \(L\), and parameter distance \(\|w_{0}-w_{*}\|\) control the stability. In this section, we further unveil more influencing factors under the head tuning setting. The intuition is also very straightforward. According to Soudry et al. (2018), when training a linear classifier with gradient descent, it tends to be a maximal margin classifier. Based on this conclusion, we use the telescope sum to bound the stability by the distance between the current iterate's weights and the final maximal margin classifier's weights together with the distance between the original and perturbed maximal margin classifiers. This bound theoretically explains why increasing training sample size (Devlin et al., 2019), increasing iterations number (Mosbach et al., 2020), or using a smaller learning rate (Mosbach et al., 2020) can help stabilize the fine-tuning procedure. We also derive a new corollary to show that limiting the Lipschitz constant (Hua et al., 2021) can help stabilize the fine-tuning procedure under this scenario. We will further propose our new MMR and MHLoss method based on the same theory. Formally, similar to SS2.2, following the observation of Radiya-Dixit and Wang (2020), the parameters of the pre-trained model and the final results are very close to each other. We can assume that the parameters of the encoder are fixed during fine-tuning. This assumption results in a very popular fine-tuning method called head tuning (Peters et al., 2018; Wei et al., 2021), where the encoder parameters are frozen and only the linear head is tunable. Moreover, following Soudry et al. (2018); Schliserman and Koren (2022), we assume that the encoded features are linearly separable. We refer readers to the detailed discussion in Ji and Telgarsky (2019) for an analysis of non-separable data. Under the above setting, given a dataset \(S\), each sample \(x_{i}\in S\) is first encoded as \(\tilde{x}_{i}=[E(x_{i})^{\mathsf{T}},-1]^{\mathsf{T}}\) and then \(\tilde{x}_{i}\) is classified by a linear head layer. Following Soudry et al. (2018), we define the head tuning loss as \(\ell(w^{\mathsf{T}}\tilde{x}_{i}y_{i})\), where \(w=[v^{\mathsf{T}},b]^{T}\) and \(\ell(u)\) is an arbitrary non-negative, differentiable, \(\beta\)-smooth, and monotonically decreasing to zero function. The linear bias term has already been merged into the weight \(w\)(Soudry et al., 2018). Therefore, the overall loss function can be written as \[\mathcal{L}(E,w)=\frac{1}{n}\sum_{i=1}^{n}\ell(w^{\mathsf{T}}\tilde{x}_{i}y_{ i}). \tag{4}\] As proved in Lemma 1 of Soudry et al. (2018), if we train a linear model on a linearly separable dataset with the loss \(\ell\), the norm of \(w\) must diverge toward infinity as \(\lim_{t\to\infty}\|w_{t}\|=\infty\). Therefore, calculating the leave-one-out model stability in Definition 2.1 is not feasible as the parameters \(\mathcal{A}(S)\) and \(\mathcal{A}(S^{i})\) both diverge toward infinity. Fortunately, under this circumstance, only the direction of the predictor, namely the normalized weight \(w/\|w\|\), is important. As a result, we define a new stability measure called normalized leave-one-out model stability focusing on the discrepancy of the normalized weight \(w/\|w\|\). 
**Definition 2.3** (Normalized Leave-one-out Model Stability).: We say that a learning algorithm \(\mathcal{A}\) has normalized leave-one-out model stability \(\epsilon\) if \(\forall i\in\{1,\cdots,n\}\), \[\mathbb{E}_{S}\left[\left\|\frac{\mathcal{A}(S^{i})}{\|\mathcal{A}(S^{i})\|}-\frac{\mathcal{A}(S)}{\|\mathcal{A}(S)\|}\right\|\right]\leq\epsilon. \tag{5}\] Different from Definition 2.1, Definition 2.3 normalizes the parameters trained on the corresponding training data and focuses on the direction gap between them. This definition is more reasonable for analyzing the tunable linear head as it also works even if \(\lim_{t\to\infty}\|w_{t}\|=\infty\). To facilitate a theoretical analysis of head tuning, we further denote \(\tilde{X}=[\tilde{x}_{1},\cdots,\tilde{x}_{n}]^{\mathsf{T}}\in\mathbb{R}^{d_{x}\times n}\) and denote \(\sigma_{\max}(\tilde{X})\) as the largest singular value of \(\tilde{X}\). The head tuning approach aims to find a separation plane \(w^{\mathsf{T}}\tilde{x}=0\) to classify the encoded features \(\tilde{x}_{i}\) into two categories. Here, \(w=[v^{\mathsf{T}},b]^{\mathsf{T}}=\mathcal{A}(S)\) is the classifier parameter. We denote \(\hat{w}_{S}=[\hat{v}_{S}^{\mathsf{T}},\hat{b}_{S}]^{\mathsf{T}}\) as the SVM solution trained on the dataset \(S\) and denote \(\gamma_{S}\) as the maximal margin between the separation plane and the encoded features, which can be calculated as \(\gamma_{S}=\frac{1}{\|\hat{v}_{S}\|}\) (Bishop and Nasrabadi, 2006). Similarly, we denote \(\hat{w}_{S^{i}}=[\hat{v}_{S^{i}}^{\mathsf{T}},\hat{b}_{S^{i}}]^{\mathsf{T}}\) as the SVM solution trained on the dataset \(S^{i}\). Here, \(\hat{v}_{S},\hat{v}_{S^{i}}\in\mathbb{R}^{d_{x}}\) are the weights while \(\hat{b}_{S},\hat{b}_{S^{i}}\) are the intercepts. Then, we present the theorem for head tuning as follows. **Theorem 2.4** (Stability Bound for Head Tuning).: _Given a linearly separable dataset \(S\), suppose that the encoded features \(E(x_{i})\) are bounded as \(\|E(x_{i})\|\leq B\), \(\forall i\in\{1,\cdots,n\}\). Let \(\gamma_{S}\) be the maximal margin between the separation plane \(\hat{w}_{S}^{\mathsf{T}}\tilde{x}=0\) and the encoded features \(E(x_{i})\). Suppose further that the model parameter \(w\) is optimized by gradient descent with \(t\) iterations and learning rate \(\eta<2\beta^{-1}\sigma_{\max}^{-1}(\tilde{X})\). Then, for some constants \(C,\lambda,\nu\), the normalized leave-one-out model stability is upper bounded as_ \[\mathbb{E}_{S}\left[\left\|\frac{\mathcal{A}(S^{i})}{\|\mathcal{A}(S^{i})\|}-\frac{\mathcal{A}(S)}{\|\mathcal{A}(S)\|}\right\|\right]\leq\frac{C\log\log t}{\log t}+\nu\max\big{\{}\sqrt{\frac{2}{\lambda n}\left(1+\frac{B}{\gamma_{S}}\right)},\frac{B+\sqrt{B^{2}+8n\lambda(1+B/\gamma_{S})}}{2n\lambda}\big{\}}. \tag{6}\] The proof can be found in Appendix A.2. This theorem is based on Soudry et al. (2018)'s theory that training a linear model on linearly separable data will converge to the direction of a max-margin classifier (SVM) if it is trained with the gradient descent method. To analyze the whole procedure, we use the telescope sum to incorporate the gap between two max-margin classifiers when trained on different datasets (\(S\) and \(S^{i}\)), as well as the gap between the parameter \(w_{t}\) and the corresponding max-margin classifier \(\hat{w}_{S}\).
Specifically, the first term (\(\frac{C\log\log t}{\log t}\)) in Equation (6) indicates how the parameter \(w_{t}\)'s direction converges to the corresponding max-margin classifier \(\hat{w}_{S}\)'s direction at the \(t\)th step. The second term \(\nu\max\left\{\sqrt{\frac{2}{\lambda n}\left(1+\frac{B}{\gamma s}\right)},\frac{B+ \sqrt{B^{2}+8n\lambda(1+B/\gamma s)}}{2n\lambda}\right\}\) measures the direction discrepancy between the two max-margin classifiers trained with the datasets \(S\) and \(S^{i}\). It can be observed from Theorem 2.4 that increasing the number of iterations \(t\) can help stabilize the training procedure. This phenomenon has already been empirically observed by Mosbach et al. (2020) in extensive experiments. Our theory further gives the intuition and theoretical understanding of why increasing the iteration number stabilizes the training procedure. The first term in Theorem 2.4 indicates that, as the training step increases, the model parameter \(w_{t}\)'s direction is closer to the corresponding max-margin classifier \(\hat{w}_{S}\)'s direction. Unfortunately, the rate \(\mathcal{O}(\frac{C\log\log t}{\log t})\) converges relatively slowly (Soudry et al., 2018). We will derive a new corollary based on Theorem 2.4 and design a novel multi-head loss to accelerate the convergence rate in SS3.2. In addition, increasing the sample size \(n\) can help stabilize the model. This observation is the same as that observed in Theorem 2.2. It shows that this method is theoretically effective under both settings. Moreover, if the encoded representation \(E(x_{i})\) has large margin \(\gamma_{S}\), the model will be more stable. This observation is also very intuitive. If the margin is large, it becomes easier to find a separation plane to separate the data, while a small perturbation of the plane can hardly interfere with the classification results. We will propose a novel max margin regularizer (SS3.1) based on this observation. Lastly, using a smaller learning rate is a necessary condition for Theorem 2.4 to be held. This is because only if the stepsize is small enough, the weight can be guaranteed to converge to the max-margin classifier weight. It should also be noted that the widely used Descent Lemma (Lemma A.5.1) implies that a smaller learning rate helps improve the stability, as it is a sufficient condition to guarantee that the model has sufficient descent for each iteration step. It prevents the parameter from jumping to other local minima, making it more likely to converge to the same local minimum. We further derive a corollary of Theorem 2.4 to show how the Lipschitz constant controls the stability in the head tuning setting. We simplify the distance between parameters trained on two datasets to incorporate the Lipschitz constant \(L\). **Corollary 2.5**.: _Given a linearly separable dataset \(S\), suppose that the encoded features \(E(x_{i})\) are bounded as \(\|E(x_{i})\|\leq B\), \(\forall i\in\{1,\cdots,n\}\). Suppose further that the model parameter \(w\) is optimized by gradient descent with \(t\) iterations and learning rate \(\eta<2\beta^{-1}\sigma_{\max}^{-1}(\tilde{X})\). For some constants \(C,\lambda,\nu\), the normalized leave-one-out model stability is upper bounded as_ \[\mathbb{E}_{S}\left[\left\|\frac{\mathcal{A}(S^{i})}{\|\mathcal{A}(S^{i})\|}- \frac{\mathcal{A}(S)}{\|\mathcal{A}(S)\|}\right\|\right]\leq\frac{C\log\log t} {\log t}+\nu\frac{L}{\lambda n}. \tag{7}\] The proof can be found in Appendix A.3. 
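Both Theorem 2.4 and Corollary 2.5 bound the normalized leave-one-out model stability of Definition 2.3. This quantity can also be estimated empirically, in the spirit of the synthetic experiments of §4.3: the sketch below, in which the data generation and hyper-parameters are our own illustrative choices rather than the setup used in this paper, trains a linear head with gradient descent on \(S\) and on \(S^{i}\) and compares the normalized weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d=2, margin=1.0):
    # Two linearly separable Gaussian blobs, separated along the first coordinate.
    y = rng.choice([-1.0, 1.0], size=n)
    x = rng.normal(size=(n, d)) + 2 * np.outer(y, np.r_[margin, np.zeros(d - 1)])
    return np.hstack([x, -np.ones((n, 1))]), y          # augmented features, cf. x_tilde

def train_head(x, y, eta=0.1, steps=5000):
    w = np.zeros(x.shape[1])
    for _ in range(steps):                               # gradient descent on the logistic loss
        margins = (x @ w) * y
        grad = -(x * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= eta * grad
    return w

x, y = make_data(n=200)
w_S = train_head(x, y)
w_Si = train_head(np.delete(x, 0, axis=0), np.delete(y, 0))  # leave sample 0 out (S^i)
delta = np.linalg.norm(w_S / np.linalg.norm(w_S) - w_Si / np.linalg.norm(w_Si))
print("normalized leave-one-out gap:", delta)            # Definition 2.3, estimated for i = 0
```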
It can be observed from Corollary 2.5 that if the function \(f\) has a smaller Lipschitz constant \(L\), training a model can be more stable. Intuitively, if the Lipschitz constant is small, the model becomes less sensitive to data perturbations. As a result, the directions of the max-margin classifiers will be quite close to each other, which increases the stability. This conclusion is consistent with Theorem 2.2 and has also been empirically examined by many previous works (Arora et al., 2018; Sanyal et al., 2019; Hua et al., 2021; Aghajanyan et al., 2020) with extensive experiments. As a result, this strategy, in conjunction with the discussion in §2.2, is theoretically justified in both the full fine-tuning and head tuning settings. ## 3 Methods In addition to being able to explain most of the observed empirical discoveries, our theory can also help in the design of effective and provable methods. Armed with our new theory, we propose three novel methods to stabilize the fine-tuning procedure, which further verify the correctness of our theory. First, based on Theorem 2.4, we propose a Max Margin Regularizer (MMR) to maximize the representation margin between samples from different classes. Then, we prove Corollary 3.1 based on Theorem 2.4, which establishes the theoretical basis for our novel Multi-Head Loss (MHLoss). It utilizes a multi-head layer to accelerate the convergence rate and thus stabilize the fine-tuning procedure. Finally, based on Theorem 2.2, we propose a Self Unsupervised Re-Training (SURT) method to initiate fine-tuning from a point closer to \(w_{*}\). We will conduct extensive experiments in §4 to verify our proposed methods. ### Max Margin Regularizer It can be observed from Theorem 2.4 that if the margin \(\gamma_{S}\) between the encoded representation and the separation plane is large, the model will have better stability. Based on this intuition, we propose a novel Max Margin Regularizer (MMR) for fine-tuning to help maximize the margin. However, calculating the margin is quite computationally costly and the margin itself is not differentiable. To tackle this problem, we propose to maximize the distance between the center points of the two classes. Intuitively, if the distance between the class centers increases, the margin between the encoded representation and the separation plane is also likely to increase. We recall that given a training set \(S\), the input \(x_{i}\in S\) is first encoded with the encoder \(E\) as \(E(x_{i})\). Each category should contain at least one sample, and MMR can then be represented as \[\mathcal{R}(S)=\frac{1}{1+\left\|\sum_{i=1}^{n}E(x_{i})y_{i}\left(\frac{1+y_{i}}{\sum_{j=1}^{n}(1+y_{j})}+\frac{1-y_{i}}{\sum_{j=1}^{n}(1-y_{j})}\right)\right\|}. \tag{8}\] Intuitively, the above calculates the center points of the different classes and then takes the distance between them. We use the reciprocal of the distance as the regularization term to ensure that minimizing the regularization term results in an increased distance. We add a constant 1 in the denominator to increase numerical stability. Therefore, the final optimization target can be written as \[\mathcal{L}_{\text{MMR}}(E,w)=\frac{1}{n}\sum_{i=1}^{n}\ell(w^{\mathsf{T}}\tilde{x}_{i}y_{i})+\alpha\mathcal{R}(S), \tag{9}\] where \(\alpha\) is the weight parameter for \(\mathcal{R}(S)\). ### Multi-Head Loss As indicated in Soudry et al. (2018), the convergence rate \(\mathcal{O}(\frac{\log\log t}{\log t})\) for the first term in Equation (6) is quite slow as \(t\) grows.
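To see how slowly this term decays, it suffices to tabulate \(\log\log t/\log t\) for a few values of \(t\); the short sketch below is a pure numerical illustration and does not depend on any model:

```python
import math

for t in [10**3, 10**6, 10**9, 10**12]:
    print(t, math.log(math.log(t)) / math.log(t))
# Even at t = 10**12 the factor is only about 0.12, so increasing t alone
# reduces the first term in Eq. (6) very slowly.
```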
As a result, raising \(t\) to lower the bound gradually loses its effect, especially when \(t\) is already very large. To further reduce this term, we propose a novel Multi-Head Loss (MHLoss). Specifically, instead of using one linear head to calculate the loss, we propose to use \(H\) linear heads with the same shape simultaneously to calculate the loss and take the average of the outputs. In the training stage, the \(h\)th head (\(h\in\{1,\cdots,H\}\)) with parameter \(w_{h}\) is trained separately by minimizing the loss \(\ell(w_{h}^{\mathsf{T}}\tilde{x}_{i}y_{i})\). The overall loss can be calculated as \[\mathcal{L}_{\text{MH}}(E,w_{1},\cdots,w_{H})=\frac{1}{nH}\sum_{h=1}^{H}\sum_{i=1}^{n}\ell(w_{h}^{\mathsf{T}}\tilde{x}_{i}y_{i}). \tag{10}\] In the testing stage, we can calculate the result for an input \(x\) by averaging all the heads as \(\frac{1}{H}\sum_{h=1}^{H}(w_{h}^{\mathsf{T}}\tilde{x}_{i})=(\frac{1}{H}\sum_{h=1}^{H}w_{h}^{\mathsf{T}})\tilde{x}_{i}=\bar{w}^{\mathsf{T}}\tilde{x}_{i}\), where \(\bar{w}\) is the average of all the \(w_{h}\)'s. It is interesting to note that the final model is a combination of several linear models, which is still a linear model without any extra structure added. We argue that this loss helps improve the stability because it accelerates the convergence speed of the first term in Equation (6). To theoretically prove this claim, we establish Corollary 3.1, which is based on Theorem 2.4. Here, since Soudry et al. (2018) indicate that all the weights \(w_{h}\) converge to the same direction as \(\hat{w}_{S}\), we mainly focus on the case where \(\bar{w}\) is not orthogonal to \(\hat{w}_{S}\) during the training procedure, namely, \(\bar{w}^{\mathsf{T}}\hat{w}_{S}\neq 0\). The corollary shows that this combination accelerates the convergence speed and improves the stability if the training step is fixed. **Corollary 3.1** (Stability Bound for Multi-Head Loss).: _Consider a multi-head loss with \(H\) heads, where \(H>2+8\ln\frac{1}{\delta}\), \(\delta\in(0,1)\), and \(\bar{w}^{\mathsf{T}}\hat{w}_{S}\neq 0\). With the same assumptions as in Theorem 2.4, for some constants \(C,\xi,\nu\), with probability \(1-\delta\), we have_ \[\mathbb{E}_{S}\left[\left\|\frac{\mathcal{A}(S^{i})}{\|\mathcal{A}(S^{i})\|}-\frac{\mathcal{A}(S)}{\|\mathcal{A}(S)\|}\right\|\right]\leq\sqrt{\frac{2+8\log\frac{1}{\delta}}{H}}\frac{C\xi\log\log t}{\log t}+\nu\max\big{\{}\sqrt{\frac{2}{\lambda n}\left(1+\frac{B}{\gamma_{S}}\right)},\frac{B+\sqrt{B^{2}+8n\lambda(1+B/\gamma_{S})}}{2n\lambda}\big{\}}. \tag{11}\] The proof can be found in Appendix A.4. It can be observed from Corollary 3.1 that the stability is bounded by a term that involves the head number \(H\). As \(H\) increases, the first term in Equation (11) decreases at the rate of \(\mathcal{O}(\frac{1}{\sqrt{H}})\), which is better than simply using one head. The intuition behind the multi-head loss is also very straightforward. Lemma A.4.1 shows that the expectation of the head parameter \(w_{h}\) is the SVM solution. As implied by the concentration property, if we take the average of these classifiers, we obtain an averaged linear classifier that is closer to the max-margin classifier. ### Self Unsupervised Re-Training As discussed in Theorem 2.2, reducing the distance \(\|w_{0}-w_{*}\|\) between the initialization weight \(w_{0}\) and the solution weight \(w_{*}\) reduces the stability upper bound.
This observation inspires us to fine-tune a pre-trained model that is very close to the final model. To get such a pre-trained model, a straightforward idea is to utilize a model pre-trained on the same domain as the backbone model, because the feature encoder may have already been well-adapted to the specific domain and will not change too much to adapt to that domain during fine-tuning. Unfortunately, it is not possible for us to get such a well pre-trained model for an arbitrary domain. To solve this problem, we propose a novel Self Unsupervised Re-Training (SURT) method to first re-train the given pre-trained model with the same training corpus as the one used in the fine-tuning task. It is re-trained with the unsupervised mask language model (Devlin et al., 2019) task, which only needs the given training corpus without any annotated label. Then, we fine-tune the model based on the re-trained model with the given labeled data. It should be noted that many previous works have proposed domain-adaptive pre-training (Gururangan et al., 2020; Aghajanyan et al., 2021; Hendrycks et al., 2019). They re-train the model with an extra domain-specific corpus that is not always guaranteed to exist. Different from these models, our proposed SURT method directly re-trains the model with the original training set without the labels. It does not require finding extra corpus and is thus applicable to more domains. Also, to the best of our knowledge, our theoretical analysis is the first to show why using a re-trained model helps stabilize fine-tuning. ## 4 Experiments ### Setup We evaluate our theoretical conclusions and newly proposed methods on both real-world GLUE/SuperGLUE datasets and synthetic classification datasets. Specifically, in SS4.2, we evaluate the methods with the widely used NLP benchmark datasets GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019). They contain several tasks including Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019), Microsoft Research paraphrase Corpus (MRPC) (Dolan and Brockett, 2005), Recognizing Textual Entailment (RTE) (Dagan et al., 2005; Giampiccolo et al., 2007; Bentivogli et al., 2009), Commitment Bank (CB) (De Marneffe et al., 2019), Winograd Schema Challenge (WSC) (Levesque et al., 2012), Quora Question Pairs (QQP) (Wang et al., 2018), Stanford Sentiment Treebank (SST-2) (Socher et al., 2013), Winograd NLI (WNLI) (Levesque et al., 2012), BoolQ (Boolean Questions) (Clark et al., 2019), Multi-Sentence Reading Comprehension (MultiRC) (Khashabi et al., 2018) and Word-in-Context (WiC) (Pilehvar and Camacho-Collados, 2019). We follow the setting of many previous works (Chang et al., 2018; Lee et al., 2019; Dodge et al., 2020; Xu et al., 2021) to use the original development set as our testing set since the original test set is only accessible via the submission link which contains a hard limit of the submission number. Moreover, we follow Fu et al. (2022) to split 90% data from the original training set to serve as the new training set and we use the remaining 10% samples as the new development set to tune model hyper-parameters like regularizer's weight, early stop epoch, etc. We use the early stop strategy to stop training if the performance on the new development set has no improvement for 20 consecutive evaluation epochs. We set \(H=50\) for MHLoss in the main experiment (SS4.2.1) and report the impact of using different \(H\) in SS4.2.2. The experiments are evaluated following the setting of Wang et al. (2018, 2019). 
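All experiments build on the RoBERTa-base backbone provided by the transformers toolkit. As a minimal sketch (our own illustration, not the exact jiant configuration used here), the backbone with a two-way classification head can be instantiated as follows:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

torch.manual_seed(0)  # one of the random seeds varied across runs
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

batch = tokenizer(["a sentence to classify"], return_tensors="pt")
logits = model(**batch).logits  # shape: (1, 2)
```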
In order to check the stability of the experiments, we run each experiment for 10 runs with different random seeds and report the mean scores and the standard deviations. Although our proposed methods are theoretically supported under different scenarios (full tuning and head tuning), we test them experimentally on both settings to show their capabilities of handling more scenarios. All the code is implemented based on the jiant framework (Chang et al., 2020) with the RoBERTa-base (Liu et al., 2019) model as the backbone model which is provided by the transformers1 toolkit (Wolf et al., 2020). All the experiments are running on a cluster with NVIDIA A100 GPU cards. On the other hand, in SS4.3, we also conduct several experiments on synthetic classification tasks to show more details of how each factor affects the results. As discussed in SS2.3, the head tuning paradigm only tunes a linear head classifier on top of the embedded features. To give a clearer picture of how the methods work, we randomly generate several synthetic classification datasets containing features with respect to different requirements and validate some of our conclusions with them. These experiments can be used to show more details of the head tuning behaviors. We compare our model with several widely used baseline models focusing on fine-tuning stability. **FineTune** model is the standard full fine-tuning model (Devlin et al., 2019) that directly fine-tunes all the parameters of the backbone model together with the task-specific head on each task. **MixOut** model proposed by Lee et al. (2019) is another widely used fine-tuning method. It mixes the trained model parameters and the original model parameters according to a specific ratio. **LNSR** model proposed by Hua et al. (2021) uses a regularizer to diminish the Lipschitz constant of the prediction function to improve the stability. #### 4.2.2 Impact of Head Number Corollary 3.1 theoretically predicts that using MHLoss helps improve the fine-tuning stability, which has already been experimentally verified in SS4.2.1. To further analyze how the head number \(H\) affects the stability, we report standard deviations on several GLUE tasks with respect to different head numbers \(H\) ranging in \(\{1,5,50,100,200\}\). The results are shown in Figure 1. It can be concluded from the results that as the head number \(H\) increases, the standard deviation decreases. It shows the effectiveness of our proposed MHLoss and also empirically verifies the correctness of our theoretical prediction. #### 4.2.3 Impact of Training Epoch Theorem 2.4 indicates that training more epochs stabilizes fine-tuning, which has also been empirically verified by Mosbach et al. (2020). We further experimentally verify these results by fine-tuning the model with different epochs on several GLUE tasks and the results are shown in Figure 2. We can conclude from the results that as the training epochs increase, the stability of nearly all tasks correspondingly improves. This observation further corroborates our theoretical conclusions. #### 4.2.4 Impact of Learning Rate Different from other factors that directly reduce the stability upper bound, using a smaller learning rate is a necessary condition for Theorem 2.4 to hold. We conduct new experiments to show how the stability changes as different learning rates are applied. The results are shown in Figure 3. It can be concluded from the experiments that as the learning rate \(\eta\) decreases, the model achieves better stability. 
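As a concrete illustration of the stability metric used throughout §4.2, the per-task mean and standard deviation over the 10 seeds can be computed as in the following sketch; the scores array is a placeholder and not an actual result:

```python
import numpy as np

# Placeholder scores for one task over 10 random seeds (not real results).
scores = np.array([61.2, 59.8, 62.5, 60.1, 58.9, 61.7, 60.4, 62.0, 59.5, 60.8])

mean, std = scores.mean(), scores.std(ddof=1)
print(f"{mean:.2f} +/- {std:.2f}")  # a smaller std indicates a more stable fine-tuning procedure
```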
This observation further justifies the correctness of our theoretical prediction and also validates the results observed in Mosbach et al. (2020). #### 4.2.5 Impact of Sample Count It has been indicated in both Theorems 2.2 and 2.4 that using more training samples helps stabilize the fine-tuning procedure. We conduct a new experiment to verify this prediction by sampling the training set with ratios ranging in \(\{40\%,50\%,60\%,70\%,80\%,90\%,100\%\}\). Then, we fine-tune the model with the sampled data and the results are shown in Figure 4. It can be concluded that as we get more and more training samples, the models become more and more stable, which also corroborates our theory. #### 4.2.6 Head Tuning Stability Analysis To show that our proposed methods are also applicable to the head tuning settings, we run the same experiments again in the head tuning manner and the results are shown in Table 3. In this setting, the pre-trained encoders are fixed and only the head layer's parameters are fine-tuned. It can be observed from the results that in the head tuning setting, our proposed methods also improve stability. This result also validates our theoretical prediction. Figure 4: Influence of sample count. Figure 5: Influence of head number \(H\). \begin{table} \begin{tabular}{l|c c c c|c c} \hline \hline & CoLA & MRPC & RTE & CB & WSC & AVG \\ \hline FineTune & **40.634**\_1.697 & **77.774**\_0.73 & 56.72e**\_1.16 & 75.52e**\_6.31 & 58.75e**\_38.36 & 61.64e**\_2.67 \\ MixOut & 39.62e & **77.090**\_4.05 & 65.28e**\_1.19 & 76.34e**\_35.58 & 58.75e**\_38.36 & 61.64e**\_2.67 \\ LNSR & 39.43e & **37.74**\_40.92 & 76.96e**\_1.05 & 58.85e**\_34.33 & 62.03e**\_2.53 \\ MHLoss & **39.014**\_0.67** & 77.42e**\_1.01 & 59.64e**\_1.09 & **78.75e**\_5.50 & **63.46e**\_11.14 & **63.19e**\_1.86 \\ MMR & 39.11e & **22.08**\_7.63e**\_56 & 56.68e**\_0.75 & **78.57e**\_1.36 & 61.63e**\_1.90 & 62.74e**\_2.28 \\ SURT & 39.25e & **11.27**\_44.45 & **55.51e**\_1.14 & **74.78**\_55.65 & 62.21e**\_2.55 & 62.55e**\_2.33 \\ \hline \hline \end{tabular} \end{table} Table 3: Head tuning stability analysis. #### 4.2.7 Data Perturbation Stability In the main experiment (SS4.2), we perturb the training data by switching the random seeds. It is still unknown whether it will remain stable if we impose some perturbation on the training data. To verify this kind of stability, we conduct a new experiment by training the model on several datasets with 10% of their training samples randomly removed. The results are shown in Table 4. It can be concluded from the results that our proposed new methods including MHLoss, MMR, and SURT can help stabilize the training procedure with perturbation on the training data. This observation further extends our methods to scenarios with perturbation on input data. Furthermore, existing methods MixOut and LNSR stabilize fine-tuning compared with the FineTune model, which also supports our theoretical prediction. #### 4.2.8 Large Pre-Trained Model Stability In the above discussion, we conduct extensive experiments on RoBERTa base model. To further explore whether our theoretical results are applicable for fine-tuning a large backbone model, we run several tasks with the RoBERTa-large model (Liu et al., 2019). The results are shown in Table 5. It can be concluded from the results that the scores for most of the experiments improve as we use a much larger backbone model. 
Besides, all stabilizing methods including MHLoss, MMR, SURT, and previously proposed LNSR reduce the variance of the FineTune model. This observation shows that our proposed new methods are also applicable to a large pre-trained model. This experiment also extends the application scenarios of our proposed methods. ### Experiment Results for Synthetic Classification To provide a more complete picture of how internal factors (such as margins and distances) affect stability, we manually create a series of synthetic binary classification tasks. These factors can be manually controlled in these datasets. The training samples for each task are randomly generated with regard to a particular factor level and are classified with a linear classifier. The synthetic datasets have many advantages. First, the value of each factor is controllable and we can easily show how the stability is influenced by different factor values. Besides, we can generate large numbers of different datasets to achieve better statistical significance. If not particularly specified, we randomly generate 500 training sets and each training set contains 2,000 data points. As discussed in SS2.3, the normalized leave-one-out stability is more suitable for analyzing linear classifiers. We write \(\|\hat{\Delta}\|=\left\|\frac{\hat{w}_{\varnothing}}{\|\hat{w}_{\hat{s}}\|}- \frac{\hat{w}_{\hat{s}^{i}}}{\|\hat{w}_{\hat{s}^{i}}\|}\right\|\), where \(\|\hat{\Delta}\|\) measures the normalized leave-one-out model stability as defined in Definition 2.3. We train linear regression models on each training set with the gradient descent method and report metrics accordingly. #### 4.3.1 Impact of Head Number We train the model with a linear regression model with the MHLoss. The head number \(H\) ranges in \(\{5,10,20,50,100\}\). The results are shown in Figure 5, where the SVM model is a max-margin classifier. It can be concluded that using more heads leads to faster convergence than using fewer heads. This observation justifies the discussion of Corollary 3.1 that MHLoss accelerates the convergence of the first term in Equation (11) \begin{table} \begin{tabular}{l|c c c c c|c} \hline & CoLA & MRPC & RTE & CB & WSC & AVG \\ \hline FineTune & 64.63\(\pm\)0.21 & 90.41\(\pm\)0.84 & 83.42\(\pm\)0.58 & 84.95\(\pm\)0.53 & 78.24\(\pm\)0.25 & 78.20\(\pm\)0.29 \\ MixOut & 65.91\(\pm\)1.87 & 90.48\(\pm\)0.88 & 88.40\(\pm\)0.42 & 33.85\(\pm\)0.44 & 66.35\(\pm\)0.24 & 78.32\(\pm\)0.24 \\ LNSR & 64.80\(\pm\)0.20 & **99.54\(\pm\)0.71** & 82.95\(\pm\)0.41 & 84.14\(\pm\)0.79 & 66.06\(\pm\)0.37 & 77.70\(\pm\)2.28 \\ MHLoss & **65.48\(\pm\)1.24** & 90.44\(\pm\)**0.54** & 84.26\(\pm\)**0.97** & 86.61\(\pm\)4.87 & 63.43\(\pm\)2.90 & 78.22\(\pm\)0.29 \\ MMR & 63.45\(\pm\)1.75 & 90.54\(\pm\)50.56 & **84.51\(\pm\)**1.49 & 85.36\(\pm\)**4.06** & 68.17\(\pm\)2.49 & **78.41\(\pm\)2.07** \\ SURT & 60.75\(\pm\)**1.04** & 90.09\(\pm\)0.40 & 84.20\(\pm\)1.57 & **87.76\(\pm\)**5.18** & **68.94\(\pm\)**4.10** & 78.24\(\pm\)2.52 \\ \hline \end{tabular} \end{table} Table 4: Results for data perturbation stability experiments. 
\begin{table} \begin{tabular}{l|c c c c|c} \hline & CoLA & MRPC & RTE & CB & WSC & AVG \\ \hline FineTune & 64.63\(\pm\)0.21 & 99.04\(\pm\)0.84 & 83.42\(\pm\)0.58 & 84.95\(\pm\)0.53 & 78.20\(\pm\)0.25 & 78.20\(\pm\)0.29 \\ MixOut & 65.91\(\pm\)1.87 & 90.48\(\pm\)0.88 & 88.40\(\pm\)0.42 & 33.85\(\pm\)0.44 & 66.35\(\pm\)**2.43** & 78.32\(\pm\)2.40 \\ LNSR & 64.80\(\pm\)0.20 & **99.54\(\pm\)0.71** & 82.95\(\pm\)0.41 & 84.14\(\pm\)0.79 & 66.06\(\pm\)3.77 & 77.07\(\pm\)2.28 \\ MHLoss & **65.48\(\pm\)1.24** & 90.44\(\pm\)**0.54** & 84.26\(\pm\)**0.97** & 86.61\(\pm\)4.87 & 63.43\(\pm\)2.90 & 78.22\(\pm\)2.09 \\ MMR & 63.45\(\pm\)1.75 & 90.54\(\pm\)50.56 & **84.51\(\pm\)**1.49 & 85.36\(\pm\)**4.06** & 68.17\(\pm\)2.49 & **78.41\(\pm\)2.07** \\ SURT & 60.75\(\pm\)**1.04** & 90.09\(\pm\)0.40 & 84.20\(\pm\)1.57 & **87.76\(\pm\)**5.18** & **68.94\(\pm\)**4.10** & 78.27\(\pm\)2.52 \\ \hline \end{tabular} \end{table} Table 5: Results for fine-tuning a RoBERTa large model. to improve stability. Besides, all the linear regression models converge to a max-margin classifier. This observation gives a quick verification of the theory in Soudry et al. (2018), which forms the basis of our theoretical analysis. #### 4.3.2 The Impact of Sample Count. To further show how increasing the sample count contributes to stabilizing the fine-tuning procedure, we train models on synthetic datasets with different sample sizes and the results are shown in Figure 6. It can be concluded that \(\|\hat{\Delta}\|\) decreases as the sample size increases, which indicates a more stable training procedure. This observation further verifies Theorem 2.2 and 2.4's prediction. #### 4.3.3 Impact of Sample Margin. Theorem 2.4 indicates that increasing the margin between features can improve stability. To give a more intuitive view of how the margin influences the stability of the linear head, we conduct a new experiment to illustrate the relationship between the margin and stability. We manually adjust the data distribution to control the distance between the center points of the two generated sample classes and thus control the margin. Then, we calculate the margin with a simple SVM model and plot the relationship between the margin and the corresponding stability metric. The results are shown in Figure 7. It can be concluded that as the margin increases, the stability improves, which also justifies our theoretical prediction. #### 4.3.4 Impact of Parameters Distance. In Theorem 2.2, we show that if the distance (\(\|w_{0}-w_{*}\|\)) between the initialization parameter \(w_{0}\) and the optimal parameter \(w_{*}\) is reduced, the stability can be improved. We conduct a new experiment to further verify this prediction. As shown in Figure 8, we start from a random initialization point \(w_{0}\) and use gradient descent to get the optimal point \(w_{*}\). We also calculate \(\|\hat{\Delta}\|\) to measure the stability. It can be observed that if the distance \(\|w_{0}-w_{*}\|\) is small, the training procedure becomes more stable. This observation further shows the effectiveness of our proposed SURT model, as well as verifies the prediction of our theory. #### 4.3.5 The Impact of Sample Bound. In Theorem 2.4, we find that the sample bound \(B\) also influences the stability. To verify this prediction, we train the classifiers on a bunch of generated datasets. We draw a plot of the stability score with respect to different sample bound \(B\). The results are shown in Figure 9. 
It can be observed that as \(B\) decreases, the model becomes more stable, which further verifies the correctness of our theory. ## 5 Related Works Many works have been proposed to stabilize the fine-tuning procedure. Cattan et al. (2021); Mosbach et al. (2020) show that fine-tuning the model with more iterations can make it more stable, while Arora et al. (2018); Sanyal et al. (2019); Hua et al. (2021); Aghajanyan et al. (2020) propose to use a noise stability regularization to stabilize fine-tuning. On the other hand, Zhang et al. (2020); Cattan et al. (2021); Mosbach et al. (2020) find that using a small dataset leads to instability, while Cattan et al. (2022) show that data augmentation helps improve the stability. Moreover, Han et al. (2021) propose to train the adapter separately to improve the stability, while Yang and Ma (2022) propose a componentwise gradient norm clipping method to improve it. Besides, He et al. (2021); Lee et al. (2019); Houlsby et al. (2019); Zaken et al. (2021); Sung et al. (2021); Liu et al. (2021) find that tuning part of the pre-trained parameters also helps stabilize the model. However, to the best of our knowledge, there are no theoretical works that analyze the effectiveness of these fine-tuning methods. Stability of training a model has been studied for decades. Bousquet and Elisseeff (2002); Shalev-Shwartz et al. (2010); Shalev-Shwartz and Ben-David (2014); Charles and Papailiopoulos (2018) propose the standard stability definition for general machine learning models making it analyzable. Hardt et al. (2016); Kuzborskij and Lampert (2018) propose to analyze the stability of stochastic gradient methods while Lei and Ying (2020); Schliserman and Koren (2022) propose the Leave-one-out Model Stability which directly checks the distance between trained parameters. We extend the stability analysis to the fine-tuning regime and design several effective methods based on our new theory. ## 6 Conclusion In this paper, we propose a novel theoretical analysis of the stability of fine-tuning a pre-trained model. We first define theoretical stability bounds in two commonly used settings, namely, the full fine-tuning and the head tuning. Then, we give a theoretical analysis that provides the basis for four existing and widely used methods proposed by previous works. In addition to being able to explain most of the observed empirical discoveries, our theory can help in the design of efficient and provable methods. Based on our theory, we propose Max Margin Regularizer (MMR), Multi-Head Loss (MHLoss), and Self Unsupervised Re-Training (SURT) methods to stabilize fine-tuning. We conduct extensive experiments on 11 widely used real-world datasets together with extensive experiments on a bunch of synthetic classification datasets. The experiment results show the effectiveness of our proposed methods and hence validate our theory as well.
2306.05065
Resolving nonclassical magnon composition of a magnetic ground state via a qubit
Recently gained insights into equilibrium squeezing and entanglement harbored by magnets point towards exciting opportunities for quantum science and technology, while concrete protocols for exploiting these are needed. Here, we theoretically demonstrate that a direct dispersive coupling between a qubit and a noneigenmode magnon enables detecting the magnonic number states' quantum superposition that forms the ground state of the actual eigenmode - squeezed-magnon - via qubit excitation spectroscopy. Furthermore, this unique coupling is found to enable control over the equilibrium magnon squeezing and a deterministic generation of squeezed even Fock states via the qubit state and its excitation. Our work demonstrates direct dispersive coupling to noneigenmodes, realizable in spin systems, as a general pathway to exploiting the equilibrium squeezing and related quantum properties thereby motivating a search for similar realizations in other platforms.
Anna-Luisa E. Römling, Alejandro Vivas-Viaña, Carlos Sánchez Muñoz, Akashdeep Kamra
2023-06-08T09:30:04Z
http://arxiv.org/abs/2306.05065v2
# Resolving nonclassical magnon composition of a magnetic ground state via a qubit ###### Abstract Recently gained insights into equilibrium squeezing and entanglement harbored by magnets point towards exciting opportunities for quantum science and technology, while concrete protocols for exploiting these are needed. Here, we theoretically demonstrate that a direct dispersive coupling between a qubit and a noneigenmode magnon enables detecting the magnonic number states' quantum superposition that forms the ground state of the actual eigenmode - squeezed-magnon - via qubit excitation spectroscopy. Furthermore, this unique coupling is found to enable control over the equilibrium magnon squeezing and a deterministic generation of squeezed even Fock states via the qubit state and its excitation. Our work demonstrates direct dispersive coupling to noneigenmodes, realizable in spin systems, as a general pathway to exploiting the equilibrium squeezing and related quantum properties thereby motivating a search for similar realizations in other platforms. _Introduction._--Quantum superposition is a central concept and ingredient underlying diverse phenomena from entanglement to the quantum speed up in computing [1; 2]. A bosonic mode, such as a photon, can be driven into a so-called nonclassical superposition of its eigenstates - number or Fock states - thereby admitting various quantum advantages [3; 4], such as enhancement in its coupling to a qubit via squeezing [5; 6; 7; 8; 9]. At the same time, engineering a dispersive effective interaction \(\sim\hat{c}^{\dagger}\hat{c}\hat{\sigma}_{z}\) between the boson (annihilation operator \(\hat{c}\)) and the qubit \(\hat{\sigma}_{z}\) leads to the latter's excitation frequency becoming multivalued and providing information on the boson's wavefunction [10; 11; 12]. This has been exploited to measure the quantum superposition of the number states that constitutes a given bosonic state [10; 12; 13; 14; 15; 16]. Since such bosons are also the interconnects in quantum computers, this interplay between their nonclassical states and qubits bears a high relevance for emerging quantum technologies [2; 17]. The bosonic spin excitations of magnets, broadly called magnons, potentially offer advantages in realizing quantum properties [18; 19; 20; 15]. Magnets have been shown to naturally harbor nonclassical squeezed states in _equilibrium_[21] arising from an interplay between energy minimization and the Heisenberg uncertainty principle [18; 19; 22]. For example, the ground state and eigenmodes of an anisotropic ferromagnet are constituted by nonclassical superpositions of states with different number of spin flips or, equivalently, magnons [18; 23]. The latter are not the eigenmodes but represent the natural or physical basis for the magnet. Hence, the question arises if and how one can measure such nonclassical superpositions of noneigenmode basis states, that constitute the system eigenmodes. An answer to this is also desirable for harnessing the concomitant _equilibrium_ entanglement harbored by these spin systems for useful quantum information tasks. In this Letter, taking inspiration from the successful detection of nonequilibrium nonclassical superpositions via a qubit [10; 13; 14; 15; 16] and building upon recent advances in probing magnets via qubits [13; 15; 24; 25; 26; 27; 28; 29], we address the question posed above. 
We theoretically demonstrate a protocol for measuring the intrinsic nonclassical superposition that forms the squeezed-magnon vacuum ground state of an anisotropic ferromagnet. We find that the conventional qubit spectroscopy employing a coherent qubit-magnon coupling [10; 11; 30] fails at this goal. However, we show that achieving a direct dispersive interaction (Fig. 1) between the qubit and the noneigenmode magnon is the key to achieving this goal. Such a coupling may result from, e.g., the exchange interaction between the magnet and a spin qubit [31; 32]. Furthermore, our proposed qubit-magnon coupling enables a deterministic protocol to generate nonequilibrium squeezed even Fock states [33; 34] by driving the qubit at specific frequencies (Fig. 2). Figure 1: Schematic depiction of the system. The bosonic uniform magnon mode in a ferromagnet (FM, green) is coupled to a spin qubit (blue) through a spin-spin (e.g., exchange) interaction. The ferromagnetic eigenmode is the squeezed-magnon \(\hat{\alpha}\), while the qubit \(\hat{\sigma}_{z}\) interacts dispersively with the spin-flip or magnon \(\hat{a}\) via \(\chi\hat{\sigma}_{z}\hat{a}^{\dagger}\hat{a}\). This direct dispersive coupling originates from the qubit energy depending on the total FM spin, which is governed by the number of spin-flips or magnons (compare upper and lower panels). _Direct dispersive coupling between magnon and qubit_.--We consider a ferromagnetic insulator with its equilibrium spin order along the z axis and a spatially uniform (wavevector \(\mathbf{k}=\mathbf{0}\)) magnonic mode, represented by the annihilation operator \(\hat{a}\). The ferromagnet is coupled to a spin qubit, represented by the operator \(\hat{\sigma}_{z}\), via a spin-spin interaction such as dipolar or exchange coupling (Fig. 1) [35; 36; 37; 38; 39]. The \(\sigma_{z}S_{z}\) contribution of the spin-spin interaction provides a direct dispersive coupling \(\sim\hat{a}^{\dagger}\hat{a}\hat{\sigma}_{z}\) (see Supplemental Material (SM) [40]). For the moment, we disregard any coherent coupling, returning to it later. Due to magnetic anisotropy in the x-y plane, magnons are not the eigenexcitations [22; 36] and the total Hamiltonian reads (\(\hbar=1\)) \[\hat{\mathcal{H}}_{\text{sys}}=A\hat{a}^{\dagger}\hat{a}+B\hat{a}^{2}+B^{*}\hat{a}^{\dagger 2}+\frac{\omega_{q}}{2}\hat{\sigma}_{z}+\chi\hat{a}^{\dagger}\hat{a}\hat{\sigma}_{z} \tag{1}\] where \(A\) and \(B\) parametrize the anisotropic ferromagnet [36] with \(B\) resulting from the x-y plane anisotropy, \(\omega_{q}\) is the excitation energy of the uncoupled qubit, and \(\chi\) (assumed positive here) is the direct dispersive coupling strength. A derivation of Eq. (1) is presented in the SM [40]. The ferromagnet-only part of the Hamiltonian in Eq. (1) can be diagonalized to \(\omega_{\alpha}\hat{\alpha}^{\dagger}\hat{\alpha}\) with \(\hat{\alpha}=\hat{a}\cosh r+\hat{a}^{\dagger}\sinh r\,e^{i\theta}\) [22; 36] and \[\omega_{\alpha}=\sqrt{A^{2}-4\mid B\mid^{2}}, \tag{2}\] \[2r=\text{arctanh}\!\left(\frac{2\mid B\mid}{A}\right). \tag{3}\] We refer to the eigenmode \(\hat{\alpha}\) as the bare squeezed-magnon, since it is related to the magnon \(\hat{a}\) via the single-mode squeeze operator [3; 22; 36]. The squeezing variables \(r\) and \(\theta\) are determined by \(A\) and \(B\) of Eq. (1) (see SM [40] for further details), noting that the squeezing and \(r\) vanish for \(B=0\).
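For concreteness, Eqs. (2) and (3) can be evaluated directly; the short sketch below uses arbitrary illustrative values of \(A\) and \(|B|\), not tied to a specific material:

```python
import numpy as np

def squeezed_magnon(A, B_abs):
    """Eigenfrequency and squeezing factor of Eqs. (2)-(3), valid for A > 2|B|."""
    return np.sqrt(A**2 - 4 * B_abs**2), 0.5 * np.arctanh(2 * B_abs / A)

A, B_abs = 1.0, 0.3            # illustrative anisotropy parameters (arbitrary frequency units)
omega_alpha, r = squeezed_magnon(A, B_abs)
print(omega_alpha, r)          # both the squeezing and r go to zero smoothly as |B| -> 0
```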
As a result, the ferromagnet ground state is the vacuum of the squeezed-magnon \(\hat{\alpha}\), which is formed by a quantum superposition of the even magnon \(\hat{a}\) number states [18; 19]. Since the \(\hat{a}\) magnons are not the eigenmodes, it is not clear how to detect this nonclassical superposition. _Magnon number dependent qubit excitation energy_.--The nonequilibrium superpositions of eigenmode number states have been investigated via measurement of multiple peaks in a qubit excitation spectroscopy [10; 12; 14]. Here, each peak comes from a different number state contribution to the superposition. Despite a similar motivation, this should be clearly contrasted with our goal and challenge of resolving the noneigenmode magnon number state composition of the equilibrium/eigenmode state - the squeezed-magnon vacuum [18; 19; 22]. We hypothesize that the desired resolution can be accomplished in our considered model (Fig. 1) when the qubit energy depends directly on the noneigenmode magnon number (\(\sim\chi\hat{a}^{\dagger}\hat{a}\hat{\sigma}_{z}\)), by spectroscopically probing the qubit excitation energies. We now evaluate the latter to examine this hypothesis. We first project the total Hamiltonian Eq. (1) onto the qubit ground state \(\ket{g}\). The reduced Hamiltonian \(\hat{\mathcal{H}}_{g}=\bra{g}\hat{\mathcal{H}}_{\text{sys}}\ket{g}\) is obtained as \[\hat{\mathcal{H}}_{g}=\left(A-\chi\right)\hat{a}^{\dagger}\hat{a}+B\hat{a}^{2}+B^{*}\hat{a}^{\dagger 2}-\frac{\omega_{q}}{2}\,. \tag{4}\] In a direct analogy with the discussion and analysis following Eq. (1), the reduced Hamiltonian Eq. (4) can be diagonalized to \(\omega_{\alpha}^{g}\hat{\alpha}_{g}^{\dagger}\hat{\alpha}_{g}\) with a different squeezed-magnon \(\hat{\alpha}_{g}\) eigenmode characterized by a frequency \(\omega_{\alpha}^{g}<\omega_{\alpha}\) and squeezing factor \(r_{g}>r\). \(\omega_{\alpha}^{g}\) and \(r_{g}\) are obtained from Eqs. (2) and (3) by substituting \(A\to A-\chi\) [41]. We will refer to \(\hat{\alpha}_{g}\) as the ground state squeezed-magnon, harboring a different magnetic vacuum as compared to the isolated ferromagnet [Fig. 2(a)]. The projection \(\hat{\mathcal{H}}_{e}=\bra{e}\hat{\mathcal{H}}_{\text{sys}}\ket{e}\) onto the qubit excited state \(\ket{e}\) can be obtained from Eq. (4) by changing the sign of \(\chi\) and \(\omega_{q}\). Analogous to the discussion above, the bosonic eigenmode of \(\hat{\mathcal{H}}_{e}\) becomes the excited state squeezed-magnon \(\hat{\alpha}_{e}\) characterized by eigenenergy \(\omega_{\alpha}^{e}>\omega_{\alpha}\) and squeezing factor \(r_{e}<r\) [Fig. 2(a)], with \(\omega_{\alpha}^{e}\) and \(r_{e}\) obtained from Eqs. (2) and (3) on replacing \(A\to A+\chi\). Altogether, we have diagonalized our Hamiltonian Eq. (1), denoting the eigenstates by \(\ket{n}_{e}\) and \(\ket{n}_{g}\), where the subscript \(g\) or \(e\) indicates the qubit state and \(n\in\mathbb{N}\) labels the different Fock states. The key point is that the magnonic eigenmodes and their respective squeezing are different in three cases: (i) isolated ferromagnet, (ii) qubit in its ground state, and (iii) qubit in its excited state [see Fig. 2(a)]. Figure 2: Qubit excitation spectroscopy of squeezed-magnon vacuum. (a) The ferromagnet (FM) hosts equilibrium-squeezed magnons and corresponding vacuums. As a result, the zero-point quantum fluctuations depicted in the spin phase space bear elliptical profiles [18], indicative of their squeezing. The degree of squeezing is different in three cases: (i) qubit not coupled to the FM (red), (ii) qubit in excited state \(\ket{e}\) (blue), and (iii) qubit in ground state \(\ket{g}\) (green). When one spectroscopically probes the qubit excitation energy (\(\ket{g}\rightarrow\ket{e}\)), the squeezed-magnon number can change from \(0\) to any number state available in the superposition, due to the differing magnon-squeezings in the qubit excited and ground states. (b) This effectively allows one to probe the squeezed-magnon vacuum as a superposition of _even_ magnon number states, with each peak (only the first two depicted here) in the qubit excitation spectroscopy measuring a term in the superposition. The typical qubit excitation spectroscopy measures the qubit energy corresponding to the transition \(\left|g\right\rangle\rightarrow\left|e\right\rangle\), while the boson number state remains the same [10; 11]. Consequently, when we have a nonequilibrium superposition of multiple number states, the result is an observation of a boson number-dependent qubit energy that manifests itself as multiple spectroscopy peaks. In sharp contrast, our system has a boson mode whose squeezing depends on the qubit state. Hence, the excitation of the qubit need not preserve the boson number. Thus, transitions \(\left|0\right\rangle_{g}\rightarrow\left|n\right\rangle_{e}\) will take place with probability \(p_{n}=|c_{n}|^{2}\equiv|{}_{e}\langle n|0\rangle_{g}|^{2}\), resulting in correspondingly high spectroscopy peaks. As demonstrated in the SM [40], the ground state \(\left|0\right\rangle_{g}\) is squeezed with respect to the excited state squeezed-magnon vacuum \(\left|0\right\rangle_{e}\) with an effective squeezing factor \(r_{\text{eff}}=r_{g}-r_{e}\) [Eq. (3)]. Thus, we may express \(\left|0\right\rangle_{g}=\sum_{n}c_{n}\left|n\right\rangle_{e}\) with [3; 4] \[c_{2n}=\frac{1}{\sqrt{\cosh r_{\text{eff}}}}\left(-e^{i\theta}\tanh r_{\text{eff}}\right)^{n}\frac{\sqrt{(2n)!}}{2^{n}n!} \tag{5}\] and \(c_{2n+1}=0\) for \(n\in\mathbb{N}\). To sum up, the qubit spectroscopy should yield a peak for each of the superposition contributions [Fig. 2(b)], as intuitively hypothesized above. However, it resolves the ground state squeezed-magnon vacuum \(\left|0\right\rangle_{g}\) in terms of the excited state squeezed-magnon number states \(\left|n\right\rangle_{e}\) [Eq. (5)]. In Fig. 3(a), we plot the squeezing factors \(r_{g}\), \(r_{e}\) and \(r_{\text{eff}}\) as a function of the dispersive coupling strength \(\chi\). Only at a certain value of \(\chi\) is \(r_{\text{eff}}\) equal to the squeezing \(r\) of the bare squeezed-magnon. In this case, the spectroscopy would probe the "true" distribution of the bare squeezed-magnon \(\hat{\alpha}\) vacuum in terms of the magnon \(\hat{a}\) Fock states. Nevertheless, employing our analysis above, a knowledge of \(\chi\) and \(\omega_{\alpha}\) allows one to translate an observed superposition into any desired basis. We now examine the positions of the spectroscopy peaks. As per energy conservation, the transition \(\left|0\right\rangle_{g}\rightarrow\left|2n\right\rangle_{e}\) occurs when the drive frequency matches the energy difference between the two states. As detailed in the SM [40], this is evaluated as \(\omega_{2n}\):
For \(\chi\ll\min\left[\left|A\right|,\left|B\left(A/2\left|B\right|-2\left|B\right|/A\right)\right|\right]\), Eq. (6) becomes \(\omega_{2n}\approx\omega_{q}+2\chi\cdot\sinh^{2}r+2n\left[\omega_{\alpha}+\chi\cosh\left(2r\right)\right]\). The different peaks are now well separated by multiples of the bare squeezed-magnon frequency \(\omega_{\alpha}\), potentially making them easier to detect [42]. In order to guide and quantify the measurability of multiple peaks resulting from the superpositions, we define the "contrast" as the ratio \(c=p_{2}/p_{0}\), evaluating it as \[2c=\tanh^{2}(r_{\text{eff}}). \tag{7}\] The contrast \(c\), plotted in Fig. 3(b), generally characterizes the reduction of subsequent peaks expected in the qubit spectroscopy. For small coupling strengths \(\left|\chi\right|\ll\min\left[\left|A\right|,\left|A\left(A/2\left|B\right|-2\left|B\right|/A\right)\right|\right]\), we obtain \(c\approx 2\left|B\right|^{2}\chi^{2}/\left(A^{2}-4\left|B\right|^{2}\right)^{2}\). For small \(\left|B\right|\ll\min\left[\left|A-\chi\right|,\left|A+\chi\right|\right]\), and thus small squeezing, the contrast can be expanded as \(c\approx 2\chi^{2}\left|B\right|^{2}/\left(A^{2}-\chi^{2}\right)^{2}\). Thus, the equilibrium superposition peaks can be observed in the qubit spectroscopy when both the direct dispersive interaction strength \(\chi\) and the squeezing \(r\) are nonzero, with the resolvability of the peaks increasing with both these parameters. _Simulation of qubit spectroscopy._--We now corroborate and complement our analytic considerations above by simulating a qubit spectroscopy setup using the QuTip package [43; 44]. While different experimental methods can be employed to probe the qubit excitation energy [10; 14], here we consider a microwave qubit drive described by \(\hat{\mathcal{H}}_{\text{d}}=\Omega_{d}\cos\left(\omega_{d}t\right)\left(\hat{\sigma}_{+}+\hat{\sigma}_{-}\right)\), where \(\Omega_{d}\) denotes the Rabi frequency quantifying the drive strength, while \(\omega_{d}\) is the drive frequency. As detailed in the SM [40], we consider Eq. (1) and \(\hat{\mathcal{H}}_{\text{d}}\) to describe our system and account for qubit dissipation [45] via one collapse operator \(\hat{C}=\sqrt{\gamma_{q}}\hat{\sigma}_{-}\) with qubit decay rate \(\gamma_{q}\).

Figure 3: (a) Squeezing factors _vs._ \(\chi\) for the magnonic eigenmodes in the qubit ground state \(r_{g}\) (solid), the qubit excited state \(r_{e}\) (dotted) and the effective squeezing \(r_{\text{eff}}=r_{g}-r_{e}\) (dashed), considering bare magnon squeezing of \(r=0.5\) (blue) and \(r=1\) (red). (b) Contrast \(c=p_{2}/p_{0}\) [Eq. (7)] as a function of \(\chi\) for several values of the squeezing factor \(r\). Its vanishing in the limit \(r\to 0\) signifies that more than 1 peak in the spectroscopy is observed only for nonzero magnon squeezing. We consider \(\omega_{\alpha}/\omega_{q}=0.5\) here.

Solving the Lindblad master equation [45; 46; 47] numerically, we investigate the steady state qubit excitation \(\langle\hat{\sigma}_{+}\hat{\sigma}_{-}\rangle\). \(\Omega_{d}\) is chosen small enough for the qubit excitation to remain small and in the linear regime [45]. With this protocol, the qubit excitation should manifest a peak whenever the drive frequency \(\omega_{d}\) is resonant with a qubit excitation transition. In Fig. 4, we show simulations (solid curves) of the qubit spectroscopy for two squeezing factors \(r=0.2\) and \(r=0.45\), comparing them with our analytic results plotted as bars at \(\omega_{d}=\omega_{2n}\) [Eq. (6)] with heights \(\propto p_{2n}=|c_{2n}|^{2}\) [Eq. (5)]. Our analytics agree well with the simulations. We therefore conclude that the first non-trivial peak indeed stems from the equilibrium squeezing [48]. Due to the large separation (\(\sim\omega_{\alpha}\)) between the peaks, experiments may further employ higher values of the drive \(\Omega_{d}\) in measuring the smaller peaks. _Consideration of coherent coupling._--Until now, we have considered a magnet coupled to a spin qubit that offers a direct dispersive coupling \(\chi\) [Eq. (1)], found to be essential for the key phenomena addressed here. We now examine the role of the coherent or Rabi interaction [49] parameterized by \(g\), such that the system Hamiltonian becomes: \[\hat{\mathcal{H}}_{\text{sys,SC}}=A\hat{a}^{\dagger}\hat{a}+B\hat{a}^{2}+B^{*}\hat{a}^{\dagger 2}+\frac{\omega_{q}}{2}\hat{\sigma}_{z}+g\left(\hat{a}^{\dagger}+\hat{a}\right)\left(\hat{\sigma}_{+}+\hat{\sigma}_{-}\right). \tag{8}\] This interaction is universally present in qubits, such as with spin [31; 32] and superconducting qubits [50; 29], while the direct dispersive coupling is not always available. When the boson and qubit are strongly detuned, i.e., \(g\ll|\omega_{q}-\omega_{\alpha}|\), the coherent coupling also results in an effective dispersive interaction \(\sim\tilde{\chi}\hat{\alpha}^{\dagger}\hat{\alpha}\hat{\sigma}_{z}\)[3; 10; 11; 40; 51], which has been exploited in observing nonequilibrium superpositions in terms of the eigenmode number states. It is not clear whether one can employ this effective dispersive coupling to resolve an equilibrium superposition. Via numerical simulations of qubit spectroscopy employing Eq. (8) (see SM [40]), we find that the effective dispersive interaction \(\sim\tilde{\chi}\hat{\alpha}^{\dagger}\hat{\alpha}\hat{\sigma}_{z}\) does not resolve the nonclassical magnon composition of the equilibrium squeezed-magnon vacuum. This can be understood a posteriori, since such an effective coupling may address only the eigenmodes \(\hat{\alpha}\), and not any internal noneigenmodes. Thus, a direct dispersive interaction \(\sim\chi\hat{a}^{\dagger}\hat{a}\hat{\sigma}_{z}\) offered by, e.g., a spin qubit is needed for resolving equilibrium superpositions. We also show that any influence of the coherent coupling \(g\) when employing a spin qubit system can be suppressed via an adequately large detuning \(|\omega_{q}-\omega_{\alpha}|\)[40; 51]. _Discussion._--In the conventional qubit spectroscopy for dispersively sensing a nonequilibrium quantum superposition of eigenmode Fock states, the peaks are separated in frequency by \(\sim\tilde{\chi}\), which is typically small [10; 11; 12]. In our demonstrated protocol for detecting the equilibrium superposition of noneigenmode Fock states, the corresponding peaks are well-separated (\(\sim\omega_{\alpha}\)), which makes it feasible to detect them [52] even when they are relatively small. The direct dispersive interaction offered by a spin qubit becomes large for small sizes of the magnet (see the SM [40]), making our proposal better suited for nanomagnets.
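The peak positions and weights underlying the bars in Fig. 4 follow directly from Eqs. (5) and (6) and can be evaluated with a few lines of code. The sketch below is only an illustration: it assumes the standard single-mode Bogoliubov relations \(\omega_{\alpha}=\sqrt{A^{2}-4|B|^{2}}\) and \(\tanh 2r=2|B|/A\) as the content of Eqs. (2) and (3), and the parameter values are arbitrary choices rather than fits to any simulation or experiment.

```python
# Sketch: predicted qubit-spectroscopy peaks for the squeezed-magnon vacuum.
# Assumes the single-mode Bogoliubov relations for Eqs. (2)-(3):
#   omega_alpha = sqrt(A**2 - 4*|B|**2),   tanh(2 r) = 2*|B|/A.
# All parameter values are illustrative only.
import math

def mode(A, B):
    """Eigenfrequency and squeezing factor of H = A a^dag a + B a^2 + B* a^dag^2."""
    omega = math.sqrt(A**2 - 4 * abs(B) ** 2)
    r = 0.5 * math.atanh(2 * abs(B) / A)
    return omega, r

omega_q, chi = 1.0, 0.2            # qubit frequency (units of omega_q) and coupling
A, B = 0.625, 0.1875               # chosen so that omega_alpha / omega_q = 0.5

omega_a, r = mode(A, B)            # bare squeezed magnon
omega_g, r_g = mode(A - chi, B)    # qubit in |g>:  A -> A - chi
omega_e, r_e = mode(A + chi, B)    # qubit in |e>:  A -> A + chi
r_eff = r_g - r_e

def weight(n, r_eff):
    """p_{2n} = |c_{2n}|^2 from Eq. (5)."""
    return (math.tanh(r_eff) ** (2 * n) * math.factorial(2 * n)
            / (4**n * math.factorial(n) ** 2)) / math.cosh(r_eff)

for n in range(3):                 # first three transitions |0>_g -> |2n>_e
    w_2n = omega_q + (omega_e - omega_g) / 2 - chi + 2 * n * omega_e   # Eq. (6)
    print(f"peak 2n={2*n}: omega_d = {w_2n:.3f} omega_q, weight = {weight(n, r_eff):.4f}")

print("contrast p2/p0 =", math.tanh(r_eff) ** 2 / 2)                   # Eq. (7)
```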
Furthermore, detection of the \(n\)th nontrivial peak in the qubit spectroscopy is accompanied by the transition \(\left|0\right\rangle_{g}\rightarrow\left|2n\right\rangle_{e}\), which provides a new deterministic approach to generating nonequilibrium squeezed Fock states (\(\left|2n\right\rangle_{e}=S^{-1}(r_{\text{eff}})\left|2n\right\rangle_{g}\)[33; 34; 22; 18]) by driving the qubit. _Conclusion._--We have theoretically demonstrated how a direct dispersive interaction between a qubit and a noneigenmode boson (here, a magnon) enables detection of the quantum superposition that makes up the actual eigenmodes (here, the squeezed-magnon and its vacuum). The same coupling is shown to allow for a control of the equilibrium magnon squeezing and a deterministic generation of squeezed even Fock states via the qubit state and its resonant excitation. Thus, this direct dispersive interaction, readily available in spin systems, opens new avenues for exploiting the equilibrium squeezing and entanglement harbored by magnets. At the same time, our work inspires a search for the realization of direct dispersive interactions in other platforms, such as optical [53] and mechanical ones, that could enable access to equilibrium superpositions.

Figure 4: Numerical simulation of qubit spectroscopy using a Rabi drive. Steady state qubit excitation \(\langle\hat{\sigma}_{+}\hat{\sigma}_{-}\rangle\) is plotted against the Rabi drive frequency \(\omega_{d}\) for two different values of bare magnon-squeezing \(r\). The first two qubit excitation frequencies \(\omega_{0}\) and \(\omega_{2}\) are observed. The shaded bars depict the analytically evaluated excitation distributions [Eqs. (5) and (6)], underlining their good agreement with the simulations. Parameters employed in the simulation are \(\omega_{\alpha}/\omega_{q}=0.5\), \(\chi/\omega_{q}=0.2\), \(\gamma_{q}/\omega_{q}=0.1\) and \(\Omega_{d}/\omega_{q}=0.014\). The numerical method is detailed in the SM [40].

_Acknowledgements._--We thank Frank Schlawin for valuable discussions. We acknowledge financial support from the Spanish Ministry for Science and Innovation - AEI Grant CEX2018-000805-M (through the "Maria de Maeztu" Programme for Units of Excellence in R&D) and grant RYC2021-031063-I funded by MCIN/AEI/10.13039/501100011033 and "European Union Next Generation EU/PRTR". A. E. R. acknowledges that the project that gave rise to these results received the support of a fellowship from "la Caixa" Foundation (ID 100010434). The fellowship code is LCF/BQ/DI22/11940029. C. S. M. acknowledges that the project that gave rise to these results received the support of a fellowship from "la Caixa" Foundation (ID 100010434) and from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 847648, with fellowship code LCF/BQ/PI20/11760026, and financial support from the Proyecto Sinergico CAM 2020 Y2020/TCS-6545 (NanoQuCo-CM).
2304.04502
Energy Efficient Resource Allocation for Demand Intensive Applications in a VLC Based Fog Architecture
In this paper, we propose an energy efficient passive optical network (PON) architecture for backhaul connectivity in indoor visible light communication (VLC) systems. The proposed network is used to support a fog computing architecture designed to allow users with processing demands to access dedicated fog nodes and idle processing resources in other user devices (UDs) within the same building. The fog resources within a building complement fog nodes at the access and metro networks and the central cloud data center. A mixed integer linear programming (MILP) model is developed to minimize the total power consumption associated with serving demands over the proposed architecture. A scenario that considers applications with intensive demands is examined to evaluate the energy efficiency of the proposed architecture. A comparison is conducted between allocating the demands in the fog nodes and serving the demands in the conventional cloud data center. Additionally, the proposed architecture is compared with an architecture based on state-of-art Spine-and-Leaf (SL) connectivity. Relative to the SL architecture and serving all the demands in the cloud, the adoption of the PON-based architecture achieves 84% and 86% reductions, respectively.
Wafaa B. M. Fadlelmula, Sanaa H. Mohamed, Taisir E. H. El-Gorashi, Jaafar M. H. Elmirghani
2023-04-10T10:39:34Z
http://arxiv.org/abs/2304.04502v1
Energy Efficient Resource Allocation for Demand Intensive Applications in a VLC Based Fog Architecture ###### Abstract In this paper, we propose an energy efficient passive optical network (PON) architecture for backhaul connectivity in indoor visible light communication (VLC) systems. The proposed network is used to support a fog computing architecture designed to allow users with processing demands to access dedicated fog nodes and idle processing resources in other user devices (UDs) within the same building. The fog resources within a building complement fog nodes at the access and metro networks and the central cloud data center. A mixed integer linear programming (MILP) model is developed to minimize the total power consumption associated with serving demands over the proposed architecture. A scenario that considers applications with intensive demands is examined to evaluate the energy efficiency of the proposed architecture. A comparison is conducted between allocating the demands in the fog nodes and serving the demands in the conventional cloud data center. Additionally, the proposed architecture is compared with an architecture based on state-of-art Spine-and-Leaf (SL) connectivity. Relative to the SL architecture and serving all the demands in the cloud, the adoption of the PON-based architecture achieves 84% and 86% reductions, respectively. **Keywords**: Energy Efficient Networks, Fog computing, Mixed Integer Linear Programming (MILP), Passive Optical Networks (PON). ## 1 Introduction Recently, we have witnessed an unprecedented number of devices being connected to the Internet. Based on the Cisco Annual Internet Report (2018-2023) [1], 29.3 billion devices will be connected to the Internet by 2023. This increase will be associated with a demand for high data rates and timeliness exceeding the capabilities of the emerging 5G networks. The 6G networks vision promises increased data rates by further exploitation of the electromagnetic spectrum. Optical wireless frequencies offer a potential bandwidth exceeding 540 THz that can complement the Radio Frequency (RF) spectrum in access networks [2]. Several studies have proposed techniques to improve the achievable data rates of optical wireless communication (OWC) systems, including adaptation of the beam power, angle, and delay [3, 4]. Visible light communication (VLC) is one of the promising OWC systems that uses light emitting diodes (LEDs) or laser diodes (LDs) for indoor lighting and communication. For indoor applications, VLC provides high data rates of 25 Gbps and beyond [5, 6, 7] and enhanced security as light does not penetrate walls. VLC can also provide low-cost communication as it remains unregulated and unlicensed. Furthermore, VLC is an energy efficient technology as existing lighting infrastructure can be used for communication. The exponential growth in traffic and processing demands is accompanied by an increase in power consumption. Network energy efficiency has been investigated extensively in the literature, including proposing energy efficient architectures for data centers and core networks [8, 9], virtualization [10, 11], integration of renewable energy sources [12, 13], and content distribution optimization [14]. In the access network, passive optical networks (PONs) have proven their efficiency in reliably supporting high data rates at low power consumption. PONs have been proposed to provide backhaul connectivity in 5G networks between the radio base stations and the network gateway [15, 16].
PONs were also proposed for data center interconnection (i.e., inter-rack communication and intra-rack communication) in [17]-[19]. Furthermore, PONs have the potential to improve the energy efficiency of fog computing [20] where 75% of the processing will be performed in fogs by 2025 [21]. In this paper, we investigate the use of PONs to provide backhaul connectivity for the VLC based fog architecture proposed in [16]. The aim of this study is to utilize the processing nodes adjacent to users, to save network power that otherwise will be consumed by serving high data rate demands in remote conventional cloud data center. We develop a mixed integer linear programming (MILP) model to minimize the power consumption of both processing and networking by optimizing the allocation of processing demand in the fog computing architecture. The rest of this paper is organized as follows. Section 2 describes the proposed PON based backhaul network architecture and introduces the MILP model to optimize the allocation of processing demands. Section 3 presents the results and discuss them, and Section 4 provides the conclusions. ## 2 PON Backhaul Network for VLC Based Fog Computing Architecture In this work, we extend the fog architecture studied in [22] by increasing the number of rooms in a building to four and introducing PON-based backhaul architecture to support the communication within the building. As shown in Figure 1, each room has eight VLC access points (APs) serving eight users. As in [22], all APs use the red wavelength and each AP provides a data rate of 2.5 Gbps for each user. Note that for the shade of white illumination (in VLC) selected during day time, the red colour dominates which offers a higher transmit power. Therefore, the red colour is considered to connect the user devices (UDs) with the APs. Each AP is attached to an optical network unit (ONU) equipped with tuneable transceivers. The fog computing resources consists of the idle processing resources in UDs, a single fog server in each room, a building fog node, a campus fog node and fog nodes at the access and metro networks. Processing resources are also available at the cloud data center. In this work, we consider a PON based architecture adopted from the PON design in [13]. A hybrid wavelength division multiplexing (WDM) - time division multiplexing (TDM) PON is deployed where several wavelengths are used to facilitate communication inside the building and each wavelength is shared through TDM among the ONUs in each room. In each room, the ONUs are connected to a splitter and a coupler for upstream and downstream communication, respectively. Two 4\(\times\)4 arrayed waveguide grating router (AWGRs) are used to provide connectivity between the rooms. The splitters and couplers of each room are connected to an AWGR input port and an AWGR output port, respectively. The AWGRs facilities connectivity between APs within a room or in different rooms. Additionally, The AWGRs are connected to an OLT port to connect the building access network to higher network levels (i.e., metro and core networks). Five wavelengths are used to provide the communication within the access network. A distinct wavelength is used for communication between the APs in the same room. Additionally, a wavelength is used to connect the APs in the rooms to the OLT. The remaining three wavelengths are used to provide connectivity between the rooms. 
We developed a MILP model to optimally allocate the processing resources to serve demands with minimum processing and networking power consumption. The developed MILP model is subject to a number of constraints. These constraints include a constraint to ensure that each demand is served by exactly one processing node. However, more than one task can be assigned to the same processing node. Moreover, a constraint ensures that the demands served by a node do not exceed its processing capacity. The model is also subject to the traffic flow conservation constraint and to a set of constraints that ensure wavelength continuity in connections between source and destination pairs.

Figure 1: The proposed PON backhaul network architecture.

## 3 Results and Discussion In this section, we evaluate the performance of the model by examining a scenario with eight users in each room, where two UDs generate demands and the rest of the UDs offer their idle processing resources to work as processing nodes. Each room has eight VLC APs, each connecting a single user. Each user generates a single task. The processing load of the demands takes values in the range of 6 - 20 GFLOPs. The traffic demand is related to the processing demand by the Data Rate Ratio (DRR) (the ratio of the traffic demand in Gbps to the processing demand in GFLOPs). In this work, we study applications that require intensive communication and processing, such as video gaming applications. To represent these applications, a DRR of 0.05 is considered (i.e., the traffic demands take values in the range 0.3 - 1 Gbps). Table 1 and Table 2 summarize the parameters of the processing nodes and networking devices, respectively. The performance of the PON backhaul network is compared with a Spine-and-Leaf (SL) backhaul based network, where a leaf switch is used to connect the APs and the room fog server in each room, as shown in Figure 2. A total of four leaf switches are connected with two spine switches. The spine switches are then connected to a gateway router to link the access network with the metro network. Furthermore, serving processing demands in the proposed fog computing architecture is compared to the case when the central cloud serves all the demands. Figure 3 shows the processing workload allocation to the UDs, the fog servers in rooms 1, 2, 3, and 4 (i.e., r1RF, r2RF, r3RF, and r4RF, respectively), the building fog (BF), the campus fog (CF), the metro fog (MF) and cloud resources (CC) in both the PON-based architecture and the SL architecture. As observed in Figure 3, for both architectures, all the demands are exclusively served within the rooms, without the need to activate further remote fog units in the access or metro networks. At 6 GFLOPs in the PON-based architecture, the demands are served only in the fourth room fog server (r4RF). Accessing the room fogs and the UDs results in a similar network power consumption due to the nature of the passive network. However, consolidating demands into a single room fog server is more efficient than activating multiple UDs as it results in using less total idle power. In other words, when a single room fog serves all demands, the amount of idle power consumed is reduced. It is worth noting that the selection of which room fog to activate is random and will result in the same total power consumption due to the use of homogeneous specifications for the room fogs.
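The structure of the allocation MILP described in Section 2 can be illustrated with a heavily simplified sketch. The snippet below (using the open-source PuLP modeller) keeps only the assignment, capacity and power-minimization ingredients; the node names, capacities, power figures and per-task network costs are placeholders chosen for illustration and are not the values of Tables 1 and 2, nor the full constraint set of our model.

```python
import pulp

# Toy instance: two tasks (GFLOPs) and three candidate processing nodes.
tasks = {"t1": 8.0, "t2": 6.0}
# node: (capacity GFLOPs, idle W, max W, crude per-task network cost W)
nodes = {
    "UD1":      (12.9,   10.8,  18.0,    5.0),
    "room_fog": (64.0,   39.0,  65.0,    5.0),
    "cloud":    (1612.8, 660.0, 1100.0, 120.0),
}

prob = pulp.LpProblem("fog_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, nodes), cat="Binary")   # task -> node
on = pulp.LpVariable.dicts("on", nodes, cat="Binary")          # node activated

# Each task is served by exactly one node (several tasks may share a node).
for t in tasks:
    prob += pulp.lpSum(x[t][n] for n in nodes) == 1

# A node hosts load only if activated, and never beyond its capacity.
for n, (cap, idle, peak, net) in nodes.items():
    prob += pulp.lpSum(tasks[t] * x[t][n] for t in tasks) <= cap * on[n]

# Objective: idle power of activated nodes + load-proportional power + network cost.
prob += pulp.lpSum(
    nodes[n][1] * on[n]
    + pulp.lpSum(((nodes[n][2] - nodes[n][1]) / nodes[n][0]) * tasks[t] * x[t][n]
                 + nodes[n][3] * x[t][n] for t in tasks)
    for n in nodes
)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    host = next(n for n in nodes if pulp.value(x[t][n]) > 0.5)
    print(f"{t} -> {host}")
print("total power (W):", pulp.value(prob.objective))
```

With these placeholder figures, consolidating both tasks on a single room fog node minimizes the idle-power term, mirroring the consolidation behaviour discussed for the PON-based architecture.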
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline **Parameter** & **Value** \\ \hline Processing capacity of the user device (ARM Cortex A53) & 12.888 GFLOPs [24] \\ \hline Processing capacity of the room fog server (Core i3-6000U) & 64 GFLOPs [25] \\ \hline Processing capacity of the building fog server (Intel Xeon Processor E3-1220 v2) & 99 GFLOPs [26] \\ \hline Processing capacity of the campus fog server (Intel Xeon Processor E5-2440 v2) & 121.6 GFLOPs [26] \\ \hline Processing capacity of the metro fog server (Intel Xeon Processor E5-4650 v3) & 403.2 GFLOPs [26] \\ \hline Processing capacity of the cloud server (Intel Xeon Platinum 8280 Processor) & 1612.8 GFLOPs \\ \hline Maximum power consumption of each user device & 18 Watts [27] \\ \hline Maximum power consumption of the room fog server & 65 Watts [28] \\ \hline Maximum power consumption of the building fog server & 305 Watts [29] \\ \hline Maximum power consumption of the campus fog server & 350 Watts [30] \\ \hline Maximum power consumption of the metro fog server & 750 Watts [31] \\ \hline Maximum power consumption of the cloud server & 1100 Watts [32] \\ \hline \end{tabular} \end{table} TABLE 1: PROCESSING DEVICES PARAMETERS \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Network device** & **Maximum power consumption (Watts)** & **Idle power consumption (Watts)** & **Capacity (Gbps)** \\ \hline Access Point & 7.2 [22] & 4.32 & 2.5 [22] \\ \hline ONU & 15 [33] & 9 & 10 [33] \\ \hline OLT Line card & 300 [34] & 180 & 160 [34] \\ \hline Ethernet switch & 435 [35] & 261 & 240 [35] \\ \hline Aggregation switch & 435 [35] & 261 & 240 [35] \\ \hline Edge router & 750 [36] & 450 & 480 [36] \\ \hline Optical switch & 63.2 [37] & 37.92 & 100 [37] \\ \hline Core router & 344 [38] & 206.4 & 3200 [38] \\ \hline Leaf switch & 508 [39] & 304.8 & 480 [39] \\ \hline Spine switch & 660 [40] & 360 & 1440 [40] \\ \hline Gateway router & 344 [38] & 206.4 & 3200 [38] \\ \hline \end{tabular} \end{table} TABLE 2: NETWORKING DEVICES PARAMETERS

Figure 2: Spine-and-Leaf architecture.

In contrast, the SL backhaul network shows a different trend in allocating processing resources at 6 GFLOPs. While room fog servers are more efficient in terms of processing, their idle power consumption is significantly higher compared to UDs. Because of the high networking power consumption required to access a single room fog, the optimal allocation decision involves activating UDs in all rooms instead of relying on a single room fog. Therefore, one UD is activated in each room to host two tasks instead of activating four room fogs. At 7 and 8 GFLOPs, the allocation in the PON-based architecture remains the same (i.e., demands allocated to one room fog server). However, for SL at 7 GFLOPs, all room fog servers (r1RF, r2RF, r3RF, r4RF) are activated. Although consolidating the demands in one room fog server can save processing power, it will lead to a significant increase in the network power consumption due to the need to pass through a second level of spine switches. Additionally, the UDs are avoided because of the need to activate more than one UD in each room. To justify the decision of the model, it is important to highlight that the processing capacity of each UD is 12.88 GFLOPs. Since each device will only be able to accommodate one task, instead of activating two UDs in each room, it is more efficient to activate the room fog server in each room.
The allocation in the SL architecture continues to follow the same trend (i.e., the demands are served from all room fog servers) until 20 GFLOPs. In the PON-based architecture at 9 GFLOPs, the room fog server capacity is exhausted, and hence, one idle UD is activated to serve the remaining demands in addition to the room fog server. Note that at 10 GFLOPs, another room fog server is utilized to serve the demands (r4RF) instead of activating another two UDs, and this trend remains the same for demands up to 15 GFLOPs. At 16 GFLOPs, a third room fog server is activated, and three room fog servers are sufficient to serve demands up to 20 GFLOPs. Figure 4 shows a comparison of the power consumption for optimized workload allocation between the PON-based architecture, the SL architecture and serving the demands from the cloud only. Figure 4.a presents a comparison of the processing power consumption. The obtained results show that the PON-based architecture is notably more efficient in terms of processing compared to the SL architecture as a result of consolidating the workloads into fewer processing nodes. The processing savings achieved are up to 38%. The processing in the cloud is extremely efficient; nevertheless, it increases the networking power consumption by 92% compared to serving the demand locally with the support of the PON architecture, as can be observed in Figure 4.b. Compared to the SL based architecture, the PON-based architecture saves 90% of the networking power consumption. Figure 4.c shows the total power consumption including both processing and networking. Total savings of up to 86% and 84% can be achieved when deploying the PON-based architecture compared to serving the demands from the cloud and using SL, respectively.

Figure 3: Processing workload allocation in PON-based architecture and SL architecture for the considered scenario.

## 4 Conclusions In this paper, we proposed an energy efficient PON backhaul network for a VLC based fog architecture where fog resources within a building complement fog nodes at the access and metro networks and the central cloud data center. We studied the allocation of processing and data rate intensive applications in the proposed architecture. We developed a MILP model to minimize the total power consumption of serving demands by optimizing the allocation of resources to demands. The resource allocation results show the ability of the PON backhaul network to consolidate demands in fewer nodes as a result of its ability to connect users and room fogs efficiently compared to an architecture based on an SL backhaul. Total savings of up to 84% can be achieved by deploying the proposed PON architecture compared to the SL architecture. Additionally, relative to allocating the demands in the cloud, the optimal allocation of the proposed architecture can save 86% of the total power consumption. ## Acknowledgements The authors would like to acknowledge funding from the Engineering and Physical Sciences Research Council (EPSRC) INTERNET (EP/H040536/1), STAR (EP/K016873/1) and TOWS (EP/S016570/1) projects. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. All data are provided in full in the results section of this paper.
2307.02423
Matroids Arising From Nested Sequences of Flats In Projective And Affine Geometries
Targets are matroids that arise from a nested sequence of flats in a projective geometry. This class of matroids was introduced by Nelson and Nomoto, who found the forbidden induced restrictions for binary targets. This paper generalizes their result to targets arising from projective geometries over $GF(q)$. We also consider targets arising from nested sequences of affine flats and determine the forbidden induced restrictions for affine targets.
Matthew Mizell, James Oxley
2023-07-05T16:45:02Z
http://arxiv.org/abs/2307.02423v1
# Matroids arising from nested sequences of flats in projective and affine geometries ###### Abstract Targets are matroids that arise from a nested sequence of flats in a projective geometry. This class of matroids was introduced by Nelson and Nomoto, who found the forbidden induced restrictions for binary targets. This paper generalizes their result to targets arising from projective geometries over \(GF(q)\). We also consider targets arising from nested sequences of affine flats and determine the forbidden induced restrictions for affine targets. 2020 Mathematics Subject Classification: 05B35 ## 1. Introduction Throughout this paper, we follow the notation and terminology of [3]. All matroids considered here are simple. This means, for example, that when we contract an element, we always simplify the result. An _induced restriction_ of a matroid \(M\) is a restriction of \(M\) to one of its flats. Let \(M\) be a rank-\(r\) projective or affine geometry represented over \(GF(q)\). We call \((F_{0},F_{1},\ldots,F_{k})\) a _nested sequence of projective flats_ or a _nested sequence of affine flats_ if \(\emptyset=F_{0}\subseteq F_{1}\subseteq\cdots\subseteq F_{k-1}\subseteq F_{k}=E(M)\) and each \(F_{i}\) is a, possibly empty, flat of \(M\). Let \((G,R)\) be a partition of \(E(M)\) into, possibly empty, subsets \(G\) and \(R\). We call the elements in \(G\) _green_; those in \(R\) are _red_. A subset \(X\) of \(E(M)\) is _monochromatic_ if \(X\subseteq G\) or \(X\subseteq R\). For a subset \(X\) of \(E(PG(r-1,q))\), we call \(PG(r-1,q)|X\) a _projective target_, or a _target_, if there is a nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of projective flats such that \(X\) is the union of all sets \(F_{i+1}-F_{i}\) for \(i\) even. It is straightforward to check that \(PG(r-1,q)|G\) is a target if and only if \(PG(r-1,q)|R\) is a target. Because \(GF(q)\)-representable matroids are not necessarily uniquely \(GF(q)\)-representable, we have defined targets in terms of \(2\)-colorings of \(PG(r-1,q)\). When \(X\subseteq E(AG(r-1,q))\), we call \(AG(r-1,q)|X\) an _affine target_ if there is a nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of affine flats such that \(X\) is the union of all sets \(F_{i+1}-F_{i}\) for \(i\) even. For affine targets in \(AG(r-1,q)\), we follow the same convention of defining targets in terms of \(2\)-colorings. Consider an analogous construction for graphs, that is, take a sequence \((K_{0},K_{1},\ldots,K_{n})\) of complete graphs where \(K_{i+1}\) has \(K_{i}\) as a subgraph for each \(i\) in \(\{1,2,\ldots,n-1\}\). Moreover, for each such \(i\), color the vertex \(v\) of \(V(K_{i+1})-V(K_{i})\) either green or red and color all the edges of \(E(K_{i+1})-E(K_{i})\) with the same color as \(v\). **Theorem 1.4**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,3)\). Then \(AG(r-1,3)|G\) is an affine target if and only if it does not contain any of \(U_{3,3},U_{3,4},\)\(U_{2,3}\oplus U_{1,1},U_{2,3}\oplus_{2}U_{2,4},P(U_{2,3},U_{2,3}),\) or \(\mathcal{W}^{3}\) as an induced restriction._ **Theorem 1.5**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,q)\), for \(q\geq 4\). Then \(AG(r-1,q)|G\) is an affine target if and only if it does not contain any of \(U_{2,2},U_{2,3},\ldots,U_{2,q-3},\) or \(U_{2,q-2}\) as an induced restriction._ ## 2. Preliminary Results Throughout the paper, we will refer to flats and hyperplanes of \(PG(r-1,q)\) as _projective flats_ and _projective hyperplanes_, respectively.
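To make the definition of a target from the Introduction concrete, the short sketch below builds one such 2-coloring of \(PG(3,2)\), representing points as the nonzero vectors of \(GF(2)^{4}\) and taking an arbitrary illustrative chain of subspaces as the nested sequence of flats; it is only an illustration of the construction, not part of any proof.

```python
# Build a binary target: 2-color the 15 points of PG(3,2) (nonzero vectors of
# GF(2)^4) from a nested chain F_1 < F_2 < F_3 = PG(3,2).  The chain below is
# an arbitrary choice made only for illustration.
from itertools import product

ZERO = (0, 0, 0, 0)
points = [v for v in product((0, 1), repeat=4) if v != ZERO]

def flat(gens):
    """The projective flat (nonzero part of the GF(2)-span) generated by gens."""
    span = {ZERO}
    for g in gens:
        span |= {tuple(a ^ b for a, b in zip(g, s)) for s in span}
    return span - {ZERO}

F1 = flat([(1, 0, 0, 0)])                      # a single point
F2 = flat([(1, 0, 0, 0), (0, 1, 0, 0)])        # a line containing it
F3 = set(points)                               # the whole geometry

# With F_0 empty, the green points form the union of F_{i+1} - F_i over even i.
green = F1 | (F3 - F2)
red = F2 - F1
print(len(green), "green points and", len(red), "red points")   # 13 and 2
```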
Let \(M\) be a restriction of \(PG(r-1,q)\). For a subset \(X\) of \(E(M)\), its _projective closure_, \(\mathrm{cl}_{P}(X)\), is the closure of \(X\) in the matroid \(PG(r-1,q)\). We first show that if \(PG(r-1,q)|G\) is a target, then the matroid \(PG(r-1,q)|G\) is uniquely determined by the sequence \((r_{0},r_{1},\ldots,r_{k})\) of ranks of the nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of projective flats. Note that we shall often write \(G\) and \(R\) for the matroids \(PG(r-1,q)|G\) and \(PG(r-1,q)|R\), respectively. This means that we will be using \(G\) and \(R\) to denote both matroids and the ground sets of those matroids. **Proposition 2.1**.: _Let \((E_{0},E_{1},\ldots,E_{k})\) and \((F_{0},F_{1},\ldots,F_{k})\) be nested sequences of flats in \(PG(r-1,q)\) such that \(r(E_{i})=r(F_{i})\) for all \(i\) in \(\{0,1,\ldots k\}\). Let \(G_{E}\) and \(G_{F}\) be the union, respectively, of all \(E_{i+1}-E_{i}\) and of all \(F_{i+1}-F_{i}\) for the even numbers \(i\) in \(\{0,1,\ldots,k\}\). Then \(PG(r-1,q)|G_{E}\cong PG(r-1,q)|G_{F}\)._ Proof.: Let \(h\) be the smallest \(i\) such that \(r(E_{i})>0\). Let \(\{b_{h,1},b_{h,2},\ldots,b_{h,m_{h}}\}\) and \(\{d_{h,1},d_{h,2},\ldots,d_{h,m_{h}}\}\) be bases \(B_{h}\) and \(D_{h}\) of \(PG(r-1,q)|E_{h}\) and \(PG(r-1,q)|F_{h}\), respectively. Let \(B_{0}=B_{1}=\cdots=B_{h-1}=\emptyset\) and \(D_{0}=D_{1}=\cdots=D_{h-1}=\emptyset\). For \(j\geq h\), assume that \(B_{0},B_{1},\ldots,B_{j}\) and \(D_{0},D_{1},\ldots,D_{j}\) have been defined. Let \(B_{j+1}\) and \(D_{j+1}\) be bases of \(E_{j+1}\) and \(F_{j+1}\), respectively, such that \(B_{j}\subseteq B_{j+1}\) and \(D_{j}\ \subseteq\ D_{j+1}\). Let \(B_{j+1}-B_{j}=\{b_{j+1,1},b_{j+1,2},\ldots,b_{j+1,m_{j+1}}\}\) and \(D_{j+1}-D_{j}=\{d_{j+1,1},d_{j+1,2},\ldots,d_{j+1,m_{j+1}}\}\). Define the automorphism \(\phi\) on \(PG(r-1,q)\) by \(\phi(b_{s,t})=d_{s,t}\) for all \(s\) and \(t\) such that \(s\geq h\). Then \(\phi(E_{i})=F_{i}\) for all \(i\), so \(\phi(E_{i+1}-E_{i})=\phi(E_{i+1})-\phi(E_{i})=F_{i+1}-F_{i}\), for all \(i\). Therefore, \(PG(r-1,q)|G_{E}\cong PG(r-1,q)|G_{F}\). The last result means that we can refer to a simple \(GF(q)\)-representable matroid \(M\) as being a target exactly when some, and hence all, of the \(GF(q)\)-representations of \(M\) are targets. Note that in a nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of flats defining a target, it is convenient to allow equality of the flats. A nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of flats is the _canonical nested sequence_ defining a projective or affine target if \(F_{0}=\emptyset\), and \(F_{1},F_{2},\ldots,F_{k-1}\), and \(F_{k}\) are distinct. Observe that allowing \(F_{1}\) to be empty accommodates the requirement that the target is the union of all sets \(F_{i+1}-F_{i}\) for \(i\) even. Lemma 2.15 of Nelson and Nomoto [2] proved that binary targets are closed under induced restriction. Using the same proof, their result can be extended to targets represented over \(GF(q)\). **Lemma 2.2**.: _The class of targets over \(GF(q)\) is closed under induced restrictions._ **Lemma 2.3**.: _Let \((G,R)\) be a \(2\)-coloring of \(PG(r-1,q)\). Assume that \(G\) is a target and \(F\) is a projective flat of \(PG(r-1,q)\). Then exactly one of \(G\cap F\) and \(R\cap F\) has rank \(r(F)\)._ Proof.: By Lemma 2.2, \(PG(r-1,q)|(G\cap F)\) is a target corresponding to a nested sequence \((F_{0}^{\prime},F_{1}^{\prime},\ldots,F_{k-1}^{\prime},F)\) of projective flats. 
By, for example, [4, Lemma 2.1], \(r(G\cap F)\) or \(r(R\cap F)\) is \(r(F)\). Either \(G\cap F\) or \(R\cap F\) is contained in some proper projective flat of \(F\). Therefore, either \(r(G\cap F)<r(F)\) or \(r(R\cap F)<r(F)\). We refer to the rank of the set of green elements in a projective flat \(F\) as the _green rank_ of \(F\). If \(F\) has green rank \(r(F)\), we say that \(F\) is a _green flat_. Furthermore, if a projective hyperplane has green rank \(r-1\), then it is a _green hyperplane_. Red rank, red flats, and red hyperplanes are defined analogously. From the last lemma, it follows that a projective flat can either be a green flat or a red flat, but not both. We now show that every contraction of a target is a target. Consider contracting a green element \(e\) in \(M\). If a parallel class in the contraction contains at least one green point, then, after the simplification, the resulting point will be green. If there are only red points in the parallel class, then, after the simplification, the resulting point is red. **Proposition 2.4**.: _The class of targets over \(GF(q)\) is closed under contractions._ Proof.: Let \((G,R)\) be a \(2\)-coloring of \(PG(r-1,q)\). Assume that \(G\) is a target. Then there is a canonical nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of projective flats such that \(G\) is the union of all sets \(F_{i+1}-F_{i}\) for \(i\) even. Let \(e\) be an element of \(F_{m}-F_{m-1}\) where \(F_{m}\) is a green flat. Then the elements of \(F_{m}-F_{m-1}\) are green. Suppose \(x\) is a red point in \(F_{m}\). Then \(x\in F_{m-1}\). If \(y\in\operatorname{cl}_{P}(\{e,x\})\), then \(y\not\in F_{m-1}\), otherwise the circuit \(\{e,x,y\}\) gives the contradiction that \(e\) is an element of \(F_{m-1}\). Since \(\{e,x\}\subseteq F_{m}\), we must have that \(y\) is in \(F_{m}\), so \(y\) is in \(F_{m}-F_{m-1}\). Hence \(y\) is green. We deduce that, in the contraction of \(e\), every element of \(F_{m}-e\) is green. Now assume \(F_{j}\) is a red flat containing \(F_{m}\). Then \(F_{j}-F_{j-1}\subseteq R\). Consider a point \(z\) in \(F_{j}-F_{j-1}\). Using a symmetric argument to that given above, we deduce that \(e\) is the only point of \(\operatorname{cl}_{P}(\{e,z\})\) not in \(F_{j}-F_{j-1}\). Therefore, the points in \((F_{j}-F_{j-1})-e\) are red. Clearly, if \(F_{k}\) is a green flat containing \(F_{m}\), then the points in \((F_{k}-F_{k-1})-e\) are green. Thus, in \(\operatorname{si}(PG(r-1,q)/e)\), we have \((\operatorname{si}(F_{m}-e),\operatorname{si}(F_{m+1}-e),\ldots,\operatorname {si}(F_{k}-e))\) as a nested sequence of projective flats. Writing this new nested sequence of projective flats in \(PG(r-2,q)\) as \((F^{\prime}_{m},F^{\prime}_{m+1},\dots,F^{\prime}_{k})\), we see that \(F^{\prime}_{m}\) is entirely green and, for each \(i\geq 1\), the set \(F^{\prime}_{m+i}-F^{\prime}_{m+i-1}\) is entirely red if \(i\) is odd and is entirely green if \(i\) is even. Hence \(\operatorname{si}(G/e)\) is a target. Combining Lemma 2.2 and Proposition 2.4, we get the following. **Corollary 2.5**.: _The class of targets over \(GF(q)\) is closed under induced minors._ **Lemma 2.6**.: _Let \((G,R)\) be a \(2\)-coloring of \(PG(r-1,q)\). If \(G\) is a target, then \(G\) and \(R\) are connected unless \(q=2\) and \(G\) or \(R\) is \(U_{2,2}\)._ Proof.: Assume that the exceptional case does not arise and that \(r(G)\geq r(R)\). If \(G=PG(r-1,q)\), then the result holds. Assume \(G\) is not the whole projective geometry. 
Then \(G\) contains \(AG(r-1,q)\), so \(G\) is connected. Similarly, \(R\) will also have an affine geometry as a restriction. Thus \(R\) is certainly connected when \(r(R)=r(G)\). Assume \(r(R)<r(G)\). Take a projective flat \(F\) that has \(R\) as a spanning restriction. Then \(r(R)\geq r(G\cap F)\) so, as above, we deduce that \(R\) is connected. If \((G,R)\) is a \(2\)-coloring of \(PG(r-1,q)\), then \(G\) is a _minimal non-target_ if \(G\) is not a target but every proper induced restriction of \(G\) is a target. Clearly, if \(G\) is a minimal non-target, then \(R\) is not a target. But if \(r(R)>r(G)\), then \(R\) is not a minimal non-target. **Lemma 2.7**.: _Let \((G,R)\) be a \(2\)-coloring of \(PG(r-1,q)\). Suppose \(PG(r-1,q)|G\) is a minimal non-target of rank \(r\). Then \(r(R)=r\)._ Proof.: Assume \(r(R)<r\). Then there is a hyperplane \(H\) containing \(R\). Since \(PG(r-1,q)|(G\cap H)\) is a target, \(R\) is a target. However, this implies that \(G\) is a target, a contradiction. Therefore \(r(R)=r\). ## 3. Forbidden Induced Restrictions of Target Matroids This section contains a common proof of Theorems 1.1 and 1.2. This proof closely follows the proof of Theorem 4.7 of Singh and Oxley [4]. Proof of Theorems 1.1 and 1.2.: Assume that \(G\) is a target. First, suppose \(q=2\). If there is a projective flat \(F\) such that \(PG(r-1,2)|(G\cap F)\cong U_{3,3}\), then \(PG(r-1,2)|(R\cap F)\cong U_{2,3}\oplus U_{1,1}\). Since \(PG(r-1,2)|(G\cap F)\) is a target, this contradicts Lemma 2.3, as \(r(G\cap F)=r(R\cap F)\). Now assume \(q\geq 3\). If there is a projective flat \(F\) such that \(PG(r-1,q)|(G\cap F)\) is any of \(U_{2,2},U_{2,3},\dots,U_{2,q-2}\), or \(U_{2,q-1}\), then, letting \(F^{\prime}=\operatorname{cl}_{P}(G\cap F)\), we have \(r(G\cap F^{\prime})=r(R\cap F^{\prime})\), a contradiction to Lemma 2.3. Let \((G,R)\) be a \(2\)-coloring of \(PG(r-1,q)\). Suppose that \(G\) is a rank-\(r\) minimal non-target. In addition, when \(q=2\), assume that \(G\) does not have \(U_{3,3}\) or \(U_{2,3}\oplus U_{1,1}\) as an induced restriction; and when \(q\geq 3\), assume instead that \(G\) does not have \(U_{2,2},U_{2,3},\dots,U_{2,q-2}\), or \(U_{2,q-1}\) as an induced restriction. Then, by Lemma 2.7, \(r(R)=r\). Clearly, \(r\geq 4\) when \(q=2\), and \(r\geq 3\) when \(q\geq 3\). **3.1.1**.: _When \(q\geq 2\), each green hyperplane \(H\) contains at most one red rank-\((r-2)\) flat._ Assume that \(H\) contains at least two red flats, \(F_{1}\) and \(F_{2}\), of rank \(r-2\). Then, all the elements of \(F_{1}-F_{2}\) are red. Adding an element \(z\) of \(F_{2}-F_{1}\) to \(F_{1}-F_{2}\) gives a subset of \(H\) whose red rank is \(r-1\). This is a contradiction as \(H\) is a green hyperplane. Thus 3.1.1 holds. Consider a rank-\((r-2)\) projective flat \(F\). Then \(F\) is contained in exactly \(q+1\) projective hyperplanes. Assume that \(r(R\cap F)=r(F)\). We make the following observations. **3.1.2**.: _When \(q=2\), at most two of \(H_{1},H_{2}\) and \(H_{3}\) are green._ Assume that all three hyperplanes are green. Then, all the elements of each of \(H_{1}-F,H_{2}-F\) and \(H_{3}-F\) are monochromatic green, so \(r(R)=r(F)=r-2\), a contradiction to Lemma 2.7. Thus 3.1.2 holds. **3.1.3**.: _When \(q\geq 3\), there are at least two red hyperplanes containing \(F\)._ As \(r(R)=r\), there is at least one red hyperplane containing \(F\). Now, assume there is exactly one red hyperplane containing \(F\). 
Then, as \(r(R)=r\), there is some red point in a green hyperplane \(H_{G}\) that contains \(F\). Therefore \(r(G\cap H_{G})=r(R\cap H_{G})\), contradicting Lemma 2.3. Thus 3.1.3 holds. **3.1.4**.: _When \(q\geq 3\), there is at most one green hyperplane containing \(F\)._ Let \(H_{G_{1}}\) and \(H_{G_{2}}\) be distinct green hyperplanes containing \(F\) and let \(H_{R_{1}}\) and \(H_{R_{2}}\) be distinct red hyperplanes containing \(F\). If there is a red point in \(H_{G_{1}}-F\), then \(r(G\cap H_{G_{1}})=r(R\cap H_{G_{1}})\), a contradiction. Hence, there are no red points in \(H_{G_{1}}-F\) or in \(H_{G_{2}}-F\). Consider red points \(x\) in \(H_{R_{1}}-F\) and \(y\) in \(H_{R_{2}}-F\). The line \(\mathrm{cl}_{P}(\{x,y\})\) intersects each of \(H_{G_{1}}\) and \(H_{G_{2}}\) once at some point not in \(F\). Therefore, this line will have at least two red and two green points, a contradiction. Thus 3.1.4 holds. Let \(G_{2}\) and \(R_{2}\) be the sets of green and red projective flats of \(PG(r-1,q)\) of rank \(r-2\), and let \(G_{1}\) and \(R_{1}\) be the sets of green and red projective hyperplanes of \(PG(r-1,q)\). We now construct a bipartite graph \(B\) with vertex sets \(G_{2}\cup R_{2}\) and \(G_{1}\cup R_{1}\). A vertex \(x\) in \(G_{2}\cup R_{2}\) is adjacent to a vertex \(y\) in \(G_{1}\cup R_{1}\) if the flat associated to \(x\) is contained in the hyperplane associated to \(y\). We count the number of cross edges, \(G_{1}R_{2}\)-edges or \(R_{1}G_{2}\)-edges, of \(B\). By 3.1.1, no flat in \(G_{1}\) contains two or more flats in \(R_{2}\), so it follows, using symmetry, that the total number of cross edges is at most \(|G_{1}|+|R_{1}|\). Consider a pair \((H_{G},H_{R})\), where \(H_{G}\in G_{1}\) and \(H_{R}\in R_{1}\). The total number of these pairs is \(|G_{1}||R_{1}|\). Say \(H_{G}\cap H_{R}\) is a red flat \(F_{R}\). Then the edge of \(B\) from \(H_{G}\) to \(F_{R}\) is a cross edge. When \(q=2\), by 3.1.2, there is at most one other red hyperplane \(H_{R}^{\prime}\) such that \(H_{R}^{\prime}\cap H_{G}=F_{R}\). Therefore, when \(q=2\), the number of cross edges is at least \(\frac{1}{2}|G_{1}||R_{1}|\). If \(q\geq 3\), then, by 3.1.4, each cross edge corresponds to exactly \(q\) such pairs, so the number of cross edges is at least \(\frac{1}{q}|G_{1}||R_{1}|\). Hence, for all \(q\), the number of cross edges is at least \(\frac{1}{q}|G_{1}||R_{1}|\). Thus, \[\frac{1}{q}|G_{1}||R_{1}|\leq|G_{1}|+|R_{1}|. \tag{1}\] We may suppose \(|G_{1}|\leq|R_{1}|\). Then \(\frac{1}{q}|G_{1}|\leq\frac{|G_{1}|}{|R_{1}|}+1\), so \[|G_{1}|\leq 2q. \tag{2}\] Assume \(q=2\). Then \(|G_{1}|\leq 4\). Now take a basis \(B_{G}\) of \(G\). As \(r(G)\geq 4\), each \((r(G)-1)\)-element subset of \(B_{G}\) spans a green hyperplane. Hence \(|G_{1}|\geq 4\), so \(|G_{1}|=4\). Then, by (1), we have \(\frac{1}{2}(4)|R_{1}|\leq 4+|R_{1}|\), so \(|R_{1}|\leq 4\). Therefore, \(|R_{1}|=4\) and \(PG(r-1,2)\) has exactly eight hyperplanes. This is a contradiction, as \(PG(r-1,2)\) has \(2^{r}-1\) hyperplanes. Now assume \(q\geq 3\). Take a green hyperplane \(H\). As \(r(G)=r(R)=r\), there is some green point \(z\) not in \(H\). Now, \(PG(r-1,q)|(G\cap H)\) is a target having, say \((F_{0},F_{1},\ldots,F_{k-1},H)\), as its corresponding canonical nested sequence of projective flats. Let \(X\) be a projective flat of rank \(r-2\) that is contained in \(H\) and contains \(F_{k-1}\). Then \(PG(r-1,q)|(H-X)\cong AG(r-2,q)\) and all the elements of \(H-X\) are green. 
In \(H-X\), there are \(\frac{q(q^{r-2}-1)}{q-1}\) green rank-\((r-2)\) affine flats. Let \(Z\) be one of these affine flats. Then \(\operatorname{cl}_{P}(Z\cup z)\) will be a green projective hyperplane. This implies that \(|G_{1}|\geq\frac{q(q^{r-2}-1)}{q-1}\), so, by (2), \[2q\geq \frac{q(q^{r-2}-1)}{q-1}.\] Thus \(2q-2\geq q^{r-2}-1\), so \[2q\geq q^{r-2}+1.\] Observe that, for \(r\geq 4\), as \(q\geq 3\), the last inequality does not hold. Thus \(r(G)\leq 3\). Assume \(r(G)=3\). Suppose there is a green line \(L\) with a red point \(z\) on it. Because there is no line having at least two green and at least two red points, \(z\) is the only red point on \(L\). As \(r(G)=r(R)=3\), there are red points \(u\) and \(v\) that are not on \(L\) such that \(r(\{u,v,z\})=3\). Then \(\operatorname{cl}_{P}(\{u,v\})\) is a red line \(L_{1}\) that meets \(L\) at green a point \(p\). Moreover, there is a green point \(x\) that is not on \(L\) or \(L_{1}\). Consider the line \(L_{2}=\operatorname{cl}_{P}(\{x,z\})\). This line intersects \(L_{1}\) at some red point \(r_{0}\), so \(L_{2}\) is a red line whose only green point is \(x\). Observe that every other line that passes through \(x\) will be a green line, as it must intersect \(L\) at a point other than \(z\). This implies that every red point in \(PG(2,q)\) lies on \(L_{1}\) or \(L_{2}\) or is a single red point on the green line \(\operatorname{cl}_{P}(\{p,x\})\). Therefore, for distinct red points \(r_{1},r_{2}\) in \(L_{1}-\{r_{0}\}\), one of \(\operatorname{cl}_{P}(\{r_{1},z\})\) or \(\operatorname{cl}_{P}(\{r_{2},z\})\) will have at least two red points and two green points, a contradiction. Therefore, there cannot be a red point on any green line. By symmetry, there cannot be a green point on any red line. Since every two lines meet, this is a contradiction. ## 4. Affine Target Matroids In this section, we look at targets arising from affine geometries. This section begins with preliminary results about affine targets and minimal affine-non-targets. It concludes with the forbidden induced restrictions for affine targets over \(GF(q)\). One fact that we use repeatedly is that if \((G,R)\) is a \(2\)-coloring of \(AG(r-1,q)\), then \(G\) is an affine target if and only if \(R\) is an affine target. Viewing \(AG(r-1,q)\) as a restriction, \(PG(r-1,q)|X\), of \(PG(r-1,q)\) obtained by deleting a projective hyperplane \(H\) from \(PG(r-1,q)\), we call \(H\) the _complementary hyperplane of_\(X\). We shall also refer to \(H\) as the _complementary hyperplane of_\(AG(r-1,q)\). **Proposition 4.1**.: _Let \((E_{0},E_{1},\ldots,E_{k})\) and \((F_{0},F_{1},\ldots,F_{k})\) be nested sequences of flats in \(AG(r-1,q)\) such that \(r(E_{i})=r(F_{i})\) for all \(i\) in \(\{0,1,\ldots k\}\). Let \(H\) and \(H^{\prime}\) be the complementary hyperplanes of \(E_{k}\) and \(F_{k}\), respectively. Let \(G_{E}\) and \(G_{F}\) be the union, respectively, of all \(E_{i+1}-E_{i}\) and of all \(F_{i+1}-F_{i}\) for the even numbers \(i\) in \(\{0,1,\ldots,k\}\). Then \(AG(r-1,q)|G_{E}\cong AG(r-1,q)|G_{F}\)._ Proof.: Observe that \(E_{k}=E(PG(r-1,q))-H\) and \(F_{k}=E(PG(r-1,q))-H^{\prime}\). Let \(h\) be the smallest \(i\) such that \(r(E_{i})>0\). Let \(\{b_{h,1},b_{h,2},\ldots,b_{h,m_{h}}\}\) and \(\{d_{h,1},d_{h,2},\ldots,d_{h,m_{h}}\}\) be bases \(B_{h}\) and \(D_{h}\) of \(PG(r-1,q)|(\operatorname{cl}_{P}(E_{h})-E_{h})\) and \(PG(r-1,q)|(\operatorname{cl}_{P}(F_{h})-F_{h})\), respectively. 
Let \(v\) and \(v^{\prime}\) be elements in \(E_{h}\) and \(F_{h}\), respectively. Then \(\{v,b_{h,1},b_{h,2},\ldots,b_{h,m_{h}}\}\) is a basis for \(PG(r-1,q)|\operatorname{cl}_{P}(E_{h})\) and \(\{v^{\prime},d_{h,1},d_{h,2},\ldots,d_{h,m_{h}}\}\) is a basis for \(PG(r-1,q)|\operatorname{cl}_{P}(F_{h})\). Let \(B_{0}=B_{1}=\cdots=B_{h-1}=\emptyset\) and \(D_{0}=D_{1}=\cdots=D_{h-1}=\emptyset\). For \(j\geq h\), assume that \(B_{0},B_{1},\ldots,B_{j}\) and \(D_{0},D_{1},\ldots,D_{j}\) have been defined. Let \(B_{j+1}\) and \(D_{j+1}\) be bases of \(PG(r-1,q)|(\operatorname{cl}_{P}(E_{j+1})-E_{j+1})\) and \(PG(r-1,q)|(\operatorname{cl}_{P}(F_{j+1})-F_{j+1})\), respectively, such that \(B_{j}\subseteq B_{j+1}\) and \(D_{j}\subseteq D_{j+1}\). Observe that adding \(v\) and \(v^{\prime}\) to \(B_{j+1}\) and \(D_{j+1}\), respectively, gives bases for \(PG(r-1,q)|\operatorname{cl}_{P}(E_{j+1})\) and \(PG(r-1,q)|\operatorname{cl}_{P}(F_{j+1})\) for all \(j\). Let \(B_{j+1}-B_{j}=\{b_{j+1,1},b_{j+1,2},\ldots,b_{j+1,m_{j+1}}\}\) and \(D_{j+1}-D_{j}=\{d_{j+1,1},d_{j+1,2},\ldots,d_{j+1,m_{j+1}}\}\). Observe that \(B_{k}\) and \(D_{k}\) are bases for \(H\) and \(H^{\prime}\), respectively. Now, \(G_{E}=\operatorname{cl}_{P}(G_{E})-H\) and \(G_{F}=\operatorname{cl}_{P}(G_{F})-H^{\prime}\). Define the automorphism \(\phi\) on \(PG(r-1,q)\) by \(\phi(v)=v^{\prime}\) and \(\phi(b_{s,t})=d_{s,t}\), for all \(s\) and \(t\) such that \(s\geq h\). Then \(\phi(H)=H^{\prime}\) and, for all \(i\), we have \(\phi(\operatorname{cl}_{P}(B_{i}))=\operatorname{cl}_{P}(D_{i})\), so \(\phi(\operatorname{cl}_{P}(B_{i+1})-\operatorname{cl}_{P}(B_{i})-H)=\phi( \operatorname{cl}_{P}(B_{i+1}))-\phi(\operatorname{cl}_{P}(B_{i}))-\phi(H)= \operatorname{cl}_{P}(D_{i+1})-\operatorname{cl}_{P}(D_{i})-H^{\prime}\). Thus, \(PG(r-1,q)|(\operatorname{cl}_{P}(G_{E})-H)\cong PG(r-1,q)|(\operatorname{cl}_ {P}(G_{F})-H^{\prime})\). Therefore, \(AG(r-1,q)|G_{E}\cong AG(r-1,q)|G_{F}\). Similar to projective targets, the previous result means that we can refer to a simple \(GF(q)\)-representable affine matroid \(M\) as being an affine target when all the \(GF(q)\)-representations of \(M\) are affine targets. **Proposition 4.2**.: _The class of affine targets is closed under induced restrictions._ Proof.: Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,q)\). Assume that \(G\) is an affine target. Then \(G\) corresponds to a nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of affine flats with \(G\) being the union of the sets \(F_{i+1}-F_{i}\) for all even \(i\). Take a proper flat \(X\) of \(AG(r-1,q)\). As the intersection of two affine flats is an affine flat, the sequence \((X\cap F_{0},X\cap F_{1},\ldots,X\cap F_{k})\) is a nested sequence of affine flats. Assume that \(n\) is odd. As \(F_{n}-F_{n-1}\subseteq G\), it follows that \((X\cap F_{n})-(X\cap F_{n-1})\subseteq G\cap F\). Hence, \(G\cap F\) is the union of the sets \((X\cap F_{i+1})-(X\cap F_{i})\) for all even \(i\). Therefore, \(AG(r-1,q)|(G\cap X)\) is an affine target. We will use the following well-known lemmas about affine geometries quite often in this section (see, for example [3, Exercise 6.2.2]). **Lemma 4.3**.: \(AG(r-1,q)\) _can be partitioned into \(q\) hyperplanes._ **Lemma 4.4**.: _Let \(X\) and \(Y\) be distinct hyperplanes of \(AG(r-1,q)\). Then either \(r(X\cap Y)=0\), or \(r(X\cap Y)=r-2\)._ The techniques used for handling affine targets are similar to those that we used for projective targets. The binary case will be treated separately. 
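Lemma 4.3 is easy to visualize computationally for small cases. The sketch below partitions the nine points of \(AG(2,3)\), taken as the vectors of \(GF(3)^{2}\), into the three parallel hyperplanes (here, lines) on which a fixed linear functional is constant; the particular functional is an arbitrary illustrative choice.

```python
# Illustration of Lemma 4.3 for AG(2,3): the 9 points of GF(3)^2 split into the
# 3 parallel affine hyperplanes (lines) on which a fixed functional a.v is
# constant; a = (1, 2) is an arbitrary nonzero choice.
from itertools import product

q, dim = 3, 2
points = list(product(range(q), repeat=dim))
a = (1, 2)

classes = {c: [] for c in range(q)}
for v in points:
    classes[sum(ai * vi for ai, vi in zip(a, v)) % q].append(v)

for c, hyperplane in classes.items():
    print(f"a.v = {c}:", hyperplane)     # three disjoint hyperplanes of 3 points
```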
**Lemma 4.5**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,2)\) with \(|G|=|R|\). Then \(r(G)=r(R)\)._ Proof.: Since \(|G|=|R|\), we have that \(|G|=2^{r-2}\). Because the hyperplanes of \(AG(r-1,2)\) have exactly \(2^{r-2}\) elements, either \(AG(r-1,2)|G\) is a hyperplane, or \(r(G)=r\). Since \(AG(r-1,2)|G\) is a hyperplane if and only if \(AG(r-1,2)|R\) is a hyperplane, the lemma follows. **Lemma 4.6**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,2)\). Assume \(G\) is an affine target and \(F\) is a flat of \(AG(r-1,2)\). Then either exactly one of \(G\cap F\) and \(R\cap F\) is of rank \(r(F)\); or \(r(G\cap F)=r(R\cap F)=r(F)-1\), and each of \(G\cap F\) and \(R\cap F\) is an affine flat. Moreover, if \(r(G\cap F)=r(F)\) and \(H_{1}\) and \(H_{2}\) are disjoint hyperplanes of \(AG(r-1,2)|F\), then \(r(G\cap H_{1})=r(F)-1\) or \(r(G\cap H_{2})=r(F)-1\)._ Proof.: Assume \(r(G\cap F)<r(F)\). Then there is a rank-\((r(F)-1)\) affine flat \(H_{G}\) that is contained in \(F\) and contains \(G\). As \(H_{G}\) is a hyperplane of \(AG(r-1,2)|F\), there is another hyperplane \(H_{R}\) of \(AG(r-1,2)|F\) that is complementary to \(H_{G}\) in \(F\). Moreover, \(H_{R}\subseteq R\cap F\), so \(r(R\cap F)\geq r(F)-1\). If there is a red point \(z\) in \(H_{G}\), then \(r(R\cap F)=r(F)\). Otherwise, \(r(R\cap F)=r(G\cap F)=r(F)-1\), and each of \(R\cap F\) and \(G\cap F\) is an affine flat. Now suppose that \(r(G\cap F)=r(F)\) and that \(H_{1}\) and \(H_{2}\) are disjoint hyperplanes of \(AG(r-1,2)|F\) with \(r(G\cap H_{1})<r(F)-1\) and \(r(G\cap H_{2})<r(F)-1\). As \(AG(r-1,2)|(G\cap F)\) is an affine target of rank \(r(F)\), there is a hyperplane \(H^{\prime}\) of \(AG(r-1,2)|F\) that is monochromatic green. Since \(H^{\prime}\) must meet both of \(H_{1}\) and \(H_{2}\), its intersection with each such set has rank \(r(F)-3\). Since \(F\) is green, it follows that \(H_{1}\) or \(H_{2}\) is green. **Lemma 4.7**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,q)\), where \(q\geq 3\). Assume that \(G\) is an affine target and \(F\) is a flat of \(AG(r-1,q)\). Then exactly one of \(G\cap F\) and \(R\cap F\) has rank \(r(F)\)._ Proof.: Assume \(r(G\cap F)<r(F)\). Then there is a rank-\((r(F)-1)\) affine flat \(H_{G}\) containing \(G\cap F\). Thus \(F-H_{G}\) does not contain any green points, so \(r(R\cap F)=r(F)\). As with \(2\)-colorings of \(E(PG(r-1,q))\), for a \(2\)-coloring \((G,R)\) of \(E(AG(r-1,q))\), a flat \(F\) is _green_ if \(r(G\cap F)=r(F)\). We call \(F\)_red_ if \(r(R\cap F)=r(F)\). Furthermore, a flat \(F\) of \(AG(r-1,2)\) is _half-green and half-red_ if \(r(G\cap F)=r(R\cap F)=r(F)-1\). In this case, \(G\cap F\) and \(R\cap F\) are complementary hyperplanes of \(AG(r-1,2)|F\). The following results show how one can get an affine target from a projective target and how to construct projective targets from affine targets. **Proposition 4.8**.: _Let \((G,R)\) be a \(2\)-coloring of \(PG(r-1,q)\). Let \(H\) be a hyperplane of \(PG(r-1,q)\). Assume that \(G\) is a projective target. Then \(PG(r-1,q)|(G-H)\) is an affine target._ Proof.: As \(G\) is a projective target, \(G\) corresponds to a nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of projective flats, where \(G\) is equal to the union of \(F_{i+1}-F_{i}\) for all even \(i\). Then \(F_{j}-H\) is an affine flat for all \(j\). Therefore, \((F_{0}-H,F_{1}-H,\ldots,F_{k}-H)\) is a nested sequence of affine flats. Let \(F^{\prime}_{j}=F_{j}-H\) for all \(j\). 
Then \(PG(r-1,q)|(G-H)\) corresponds to the nested sequence \((F^{\prime}_{0},F^{\prime}_{1},\ldots,F^{\prime}_{k})\) of affine flats and \(G-H\) is equal to the union of \(F^{\prime}_{i+1}-F^{\prime}_{i}\) for all even \(i\). The following result is immediate. **Proposition 4.9**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,q)\). Assume that \(G\) is an affine target corresponding to a nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of affine flats where \(G\) is equal to the union of \(F_{i+1}-F_{i}\) for all even \(i\). Viewing \(AG(r-1,q)\) as a restriction of \(PG(r-1,q)\), the sequence \((\operatorname{cl}_{P}(F_{0}),\operatorname{cl}_{P}(F_{1}),\ldots,\operatorname {cl}_{P}(F_{k}))\) is a nested sequence of projective flats and, if \(G_{P}\) is the projective target that is the union of \(\operatorname{cl}_{P}(F_{i+1})-\operatorname{cl}_{P}(F_{i})\) for all even \(i\), and \(H=E(PG(r-1,q))-E(AG(r-1,q))\), then \(PG(r-1,q)|(G_{P}-H)\cong AG(r-1,q)|G\)._ We call the projective target \(G_{P}\) that arises from the affine target \(G\) in Proposition 4.9 the _standard projective target arising from \(G\)_. Now consider an affine target \(M_{1}\) that arises from a green-red coloring of \(PG(r-1,q)\backslash H\) where \(H\) is a projective hyperplane. Let \(M_{2}\) be a projective target that arises as a green-red coloring of \(H\). We say that \(M_{1}\) and \(M_{2}\) are _compatible_ if the green-red coloring of \(PG(r-1,q)\) induced by the colorings of \(M_{1}\) and \(M_{2}\) is a projective target, that is, if \(PG(r-1,q)|(E(M_{1})\cup E(M_{2}))\) is a projective target. In the previous proposition, the affine target \(G\) and the projective target \(G_{P}\cap H\) are compatible as \(PG(r-1,q)|(G\cup(G_{P}\cap H))\) is the projective target \(G_{P}\). We now consider when \(PG(r-1,q)|(E(M_{1})\cup E(M_{2}))\) is not a standard projective target. As \(M_{1}\) is an affine target, it corresponds to a canonical nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of affine flats. Let \(F_{h}\) be the first non-empty flat in this sequence. Then \(\operatorname{cl}_{P}(F_{h})\) meets the projective hyperplane \(H\) in a rank-\((r(F_{h})-1)\) projective flat \(T\). In the construction of a standard projective target, \(T\) is monochromatic. The next result shows that, apart from the standard projective target, the only way for \(M_{1}\) and \(M_{2}\) to be compatible is if we modify the standard projective target by replacing \(T\) with a \(2\)-coloring of it that is a projective target. **Proposition 4.10**.: _Let \((G,R)\) be a \(2\)-coloring of \(PG(r-1,q)\). Let \(H\) be a projective hyperplane. Assume that \(PG(r-1,q)|(G-H)\) is an affine target corresponding to a canonical nested sequence \((F_{0},F_{1},\ldots,F_{k})\) of affine flats. Assume that \(PG(r-1,q)|(G\cap H)\) is a projective target corresponding to a canonical nested sequence \((S_{0},S_{1},\ldots,S_{t})\) of projective flats. Then \(PG(r-1,q)|(G-H)\) and \(PG(r-1,q)|(G\cap H)\) are compatible if and only if, when \(\beta\) is the smallest \(h\) such that \(r(F_{h})>0\),_ 1. _there is an_ \(m\) _in_ \(\{0,1,\ldots,t\}\) _such that_ \(F_{\beta}\cup S_{m}\) _is a projective flat,_ \(r(S_{m})=r(F_{\beta})-1,\) _and_ \(PG(r-1,q)|(G\cap(F_{\beta}\cup S_{m}))\) _is a projective target; and_ 2. 
_for all_ \(\alpha\) _in_ \(\{1,2,\ldots,k-\beta\}\)_, the set_ \(F_{\beta+\alpha}\cup S_{m+\alpha}\) _is a projective flat,_ \((F_{\beta+\alpha}\cup S_{m+\alpha})-(F_{\beta+\alpha-1}\cup S_{m+\alpha-1})\) _is monochromatic, and_ \(t=m+k-\beta\)_._ Proof.: Assume that \(PG(r-1,q)|(G-H)\) and \(PG(r-1,q)|(G\cap H)\) are compatible. Then \(PG(r-1,q)|G\) is a projective target corresponding to a canonical nested sequence \((X_{0},X_{1},\ldots,X_{s})\) of projective flats. Thus \((X_{0}\cap H,X_{1}\cap H,\ldots,X_{s}\cap H)\) is a nested sequence of projective flats for \(PG(r-1,q)|H\), and \((X_{0}-H,X_{1}-H,\ldots,X_{s}-H)\) is a nested sequence of affine flats for \(PG(r-1,q)\backslash H\). Now, (a) \(X_{1}=\emptyset\) and \(X_{2}\cap H=\emptyset\) but \(X_{3}\cap H\neq\emptyset\); or (b) \(X_{1}=\emptyset\) and \(X_{2}\cap H\neq\emptyset\); or (c) \(X_{1}\neq\emptyset\) but \(X_{1}\cap H=\emptyset\) and \(X_{2}\cap H\neq\emptyset\); or (d) \(X_{1}\neq\emptyset\) and \(X_{1}\cap H\neq\emptyset\). For the projective target \(PG(r-1,q)|H\), the canonical nested sequence is \((X_{2}\cap H,X_{3}\cap H,\ldots,X_{s}\cap H)\) in case (a) and is \((X_{0}\cap H,X_{1}\cap H,\ldots,X_{s}\cap H)\) in the other three cases. Let \(\gamma\) be the smallest \(h\) such that \(X_{h}-H\) is non-empty. Then \(\operatorname{cl}_{P}(X_{\gamma}-H)\) meets \(H\) in a projective flat of rank \(r(X_{\gamma})-1\). Thus \(PG(r-1,q)|(G\cap X_{\gamma}\cap H)\) is a projective target in \(X_{\gamma}\cap H\) that corresponds to the canonical nested sequence \((X_{2}\cap H,X_{3}\cap H,\ldots,X_{\gamma}\cap H)\) in case (a) and to the canonical nested sequence \((X_{0}\cap H,X_{1}\cap H,\ldots,X_{\gamma}\cap H)\) in the other three cases. Now \((X_{\gamma}-X_{\gamma-1})-H=(X_{\gamma}-H)-(X_{\gamma-1}-H)=(X_{\gamma}-H)-\emptyset\). Thus \(X_{\gamma}-H\) is monochromatic. Therefore, the canonical nested sequence corresponding to \(PG(r-1,q)|(G-H)\) is \((X_{\gamma-1}-H,X_{\gamma}-H,\ldots,X_{s}-H)\) when \(X_{\gamma}-H\) is green and is \((\emptyset,X_{\gamma-1}-H,X_{\gamma}-H,\ldots,X_{s}-H)\) when \(X_{\gamma}-H\) is red. Thus \((F_{0},F_{1},\dots,F_{k})\) is \((X_{\gamma-1}-H,X_{\gamma}-H,\dots,X_{s}-H)\) when \(X_{\gamma}-H\) is green and is \((\emptyset,X_{\gamma-1}-H,X_{\gamma}-H,\dots,X_{s}-H)\) when \(X_{\gamma}-H\) is red. We see that \(F_{\beta}=X_{\gamma}-H\), that \(F_{\beta}\cup(X_{\gamma}\cap H)\) is a projective flat, that \(r(X_{\gamma}\cap H)=r(F_{\beta})-1\), and that \(PG(r-1,q)|(G\cap(F_{\beta}\cup(X_{\gamma}\cap H)))=PG(r-1,q)|(G\cap X_{\gamma})\). Therefore, \(PG(r-1,q)|(G\cap(F_{\beta}\cup(X_{\gamma}\cap H)))\) is a projective target. Thus (i) holds. Evidently \(F_{\beta+\alpha}\cup(X_{\gamma+\alpha}\cap H)=X_{\gamma+\alpha}\), so \(F_{\beta+\alpha}\cup(X_{\gamma+\alpha}\cap H)\) is a projective flat for all \(\alpha\) in \(\{1,2,\dots,k-\beta\}\). Moreover, \(\gamma+k-\beta=s\) and (ii) holds. Now suppose that (i) and (ii) hold. We know that \(S_{m}-S_{m-1}\) and \(F_{\beta}\) are monochromatic. Then \(PG(r-1,q)|G\) is a projective target for which the corresponding nested sequence is \((S_{0},S_{1},\dots,S_{m-1},F_{\beta}\cup S_{m},F_{\beta+1}\cup S_{m+1},\dots,F_{k}\cup S_{t})\) when the colors of \(S_{m}-S_{m-1}\) and \(F_{\beta}\) match and is \((S_{0},S_{1},\dots,S_{m},F_{\beta}\cup S_{m},F_{\beta+1}\cup S_{m+1},\dots,F_{k}\cup S_{t})\) when the colors of \(S_{m}-S_{m-1}\) and \(F_{\beta}\) differ. 
We conclude that \(PG(r-1,q)|(G\cap H)\) and \(PG(r-1,q)|(G-H)\) are compatible. A _minimal affine-non-target_ is an affine matroid that is not an affine target such that every proper induced restriction of it is an affine target. The next result is an analog of Lemma 2.7. **Lemma 4.11**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,q)\). Assume \(G\) is a rank-\(r\) minimal affine-non-target. Then \(r(R)=r\)._ Proof.: Assume \(r(R)<r\). Then \(R\) is contained in an affine hyperplane \(H\). As \(G\) is a minimal affine-non-target, \(AG(r-1,q)|(G\cap H)\) is an affine target corresponding to a nested sequence \((F_{0},F_{1},\dots,F_{n-1},H)\) of affine flats. As \(R\subseteq H\), there are no red points in \(E(AG(r-1,q))-H\). Then we obtain the contradiction that \(G\) is an affine target for which a corresponding sequence of nested affine flats is \((F_{0},F_{1},\dots,F_{n-1},H,E(AG(r-1,q)))\) if \(H-F_{n-1}\subseteq R\) and \((F_{0},F_{1},\dots,F_{n-1},E(AG(r-1,q)))\) if \(H-F_{n-1}\subseteq G\). **Lemma 4.12**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,2)\). Assume \(G\) is a minimal affine-non-target of rank \(r\). Then \(AG(r-1,2)\) has a red hyperplane and a green hyperplane that are disjoint._ Proof.: Assume the lemma fails. By Lemma 4.11, \(r(R)=r\), so we have a red hyperplane \(X_{1}\) and a green hyperplane \(Y_{1}\). There are affine hyperplanes \(X_{2}\) and \(Y_{2}\) that are complementary to \(X_{1}\) and \(Y_{1}\), respectively. As the lemma fails, \(X_{2}\) is not green and \(Y_{2}\) is not red. By assumption, \(X_{1}\) and \(Y_{1}\) meet in a rank-\((r-2)\) flat \(F_{1,1}\). For \((i,j)\neq(1,1)\), let \(F_{i,j}=X_{i}\cap Y_{j}\). As \(r(F_{1,1})=r-2\), it follows that \(r(F_{i,j})=r-2\) for each \(i\) and \(j\). Then \(\{F_{1,1},F_{1,2},F_{2,1},F_{2,2}\}\) is a partition of \(AG(r-1,2)\) and there are red points in each of \(F_{1,2}\) and \(F_{1,1}\), and there are green points in each of \(F_{1,1}\) and \(F_{2,1}\). Next we show the following. **4.12.1**.: _There is a red point in \(F_{2,1}\)._ As \(AG(r-1,2)|(R\cap X_{2})\) is a target and \(r(G\cap X_{2})<r-1\), it follows, by Lemma 4.6, that either \(r(R\cap X_{2})=r-1\), or \(R\cap X_{2}\) and \(G\cap X_{2}\) are affine flats of rank \(r-2\). In the first case, there is certainly a red point in \(F_{2,1}\). Consider the second case. Assume that \(F_{2,1}\) is monochromatic green. Then \(F_{2,2}\) is monochromatic red. As \(r(R\cap F_{1,2})>0\), we see that \(r(R\cap Y_{2})=r-1\), so \(Y_{2}\) is red, a contradiction. Thus 4.12.1 holds. **4.12.2**.: \(F_{1,2}\) _is red._ Assume that \(F_{1,2}\) is not red. Then \(r(R\cap F_{1,2})<r-2\). As \(X_{1}\) is red and \(r(G\cap F_{1,1})>0\), it follows that \(r(G\cap F_{1,2})<r-2\). Thus, by Lemma 4.6, \(R\cap F_{1,2}\) and \(G\cap F_{1,2}\) are affine flats of rank \(r-3\). Observe that \(F_{1,1}\) is not red, otherwise \(r(R\cap Y_{1})=r-1\), a contradiction. Moreover, \(F_{1,1}\) is not a green flat, otherwise \(X_{1}\) is a green hyperplane. Thus \(R\cap F_{1,1}\) and \(G\cap F_{1,1}\) are affine flats of rank \(r-3\). Now, as \(r(R\cap X_{1})=r-1\) and \(|R\cap X_{1}|=|G\cap X_{1}|\), it follows by Lemma 4.5 that \(r(G\cap X_{1})=r-1\), a contradiction to Lemma 4.6. Therefore, 4.12.2 holds. As \(Y_{2}\) is not red but \(F_{1,2}\) is red, \(F_{2,2}\) is monochromatic green. Since \(r(G\cap F_{2,1})>0\), we obtain the contradiction that \(X_{2}\) is green. 
In each of the remaining results in this section, we shall consider disjoint sets \(\mathbf{X}\) and \(\mathbf{Y}\) of hyperplanes of \(AG(r-1,q)\) where the members of \(\mathbf{X}\) and \(\mathbf{Y}\) partition \(E(AG(r-1,q))\). With \(\mathbf{X}=\{X_{1},X_{2},\ldots,X_{q}\}\) and \(\mathbf{Y}=\{Y_{1},Y_{2},\ldots,Y_{q}\}\), we let \(F_{i,j}=X_{i}\cap Y_{j}\) for all \(i\) and \(j\). **Lemma 4.13**.: _Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,q)\), where \(q\geq 3\). Assume that \(G\) is a minimal affine-non-target of rank \(r\). Then \(AG(r-1,q)\) has a red hyperplane and a green hyperplane that are disjoint._ Proof.: Assume the lemma fails. By Lemma 4.7, each proper flat of \(AG(r-1,q)\) is either red or green but not both. By Lemma 4.11, \(r(G)=r(R)=r\), so \(AG(r-1,q)\) has a red hyperplane \(X_{1}\) and a green hyperplane \(Y_{1}\). Then there are partitions \(\{X_{1},X_{2},\ldots,X_{q}\}\) and \(\{Y_{1},Y_{2},\ldots,Y_{q}\}\) of \(E(AG(r-1,q))\) into sets \(\mathbf{X}\) and \(\mathbf{Y}\) of hyperplanes. By assumption, all the hyperplanes in \(\mathbf{X}\) are red and all the hyperplanes in \(\mathbf{Y}\) are green. As \(X_{1}\cap Y_{1}\neq\emptyset\), it follows, by Lemma 4.4, that \(r(F_{i,j})=r-2\) for all \(i\) and \(j\). As \(X_{1}\) is red, at most one of \(F_{1,1},F_{1,2},\ldots,F_{1,q-1},\text{ and }F_{1,q}\) is green. Thus, we may assume that \(F_{1,1},F_{1,2},\ldots,F_{1,q-2}\), and \(F_{1,q-1}\) are red. As \(F_{1,1}\) is red and \(Y_{1}\) is green, \(Y_{1}-F_{1,1}\) will be monochromatic green. Similarly, \(Y_{2}-F_{1,2}\) will be monochromatic green. This implies that \(r(G\cap X_{2})=r(R\cap X_{2})=r-1\), a contradiction. The following technical lemmas show a relationship between the lines and planes of \(AG(r-1,q)\) and the hyperplanes in \(\mathbf{X}\) and \(\mathbf{Y}.\) In these lemmas, when we take closures, we are doing so in the underlying affine geometry \(AG(r-1,q)\). **Lemma 4.14**.: _Let \(\mathbf{X}\) and \(\mathbf{Y}\) be two disjoint sets each consisting of a set of hyperplanes that partition \(AG(r-1,q)\). Let \(x\) and \(y\) be distinct elements of \(E(AG(r-1,q))\) such that \(|\{x,y\}\cap F_{i,j}|\leq 1\) for all \(i,j\) and no member of \(\textbf{X}\) or **Y** contains \(\{x,y\}\). Then \(|\operatorname{cl}(\{x,y\})\cap X_{i}|=1\) and \(|\operatorname{cl}(\{x,y\}\cap Y_{j}|=1\) for all \(i\) and \(j\)._ Proof.: Clearly \(|\operatorname{cl}(\{x,y\})\cap X_{i}|\leq 1\) for all \(i\), otherwise \(X_{i}\) contains \(\{x,y\}\). As \(|\operatorname{cl}(\{x,y\})|=q\), we deduce that \(|\operatorname{cl}(\{x,y\})\cap X_{i}|=1\) for all \(i\). The lemma follows by symmetry. **Lemma 4.15**.: _For \(q\) in \(\{2,3\}\), let **X** and **Y** be two disjoint sets each consisting of a set of hyperplanes that partition \(AG(r-1,q)\). Let \(\{x,y,z\}\) be a rank-\(3\) subset of \(E(AG(r-1,q))\) such that \(|\{x,y,z\}\cap F_{i,j}|\leq 1\) for all \(i\) and \(j\), and there is an \(X_{k}\) in **X** such that \(|X_{k}\cap\{x,y,z\}|=2\). Then \(|\operatorname{cl}(\{x,y,z\})\cap F_{i,j}|=1\) for all \(i\) and \(j\)._ Proof.: Note that, by Lemma 4.3, each \(F_{i,j}\) has rank \(r-2\). Let \(q=2\). We may assume that \(x\in F_{1,1}\), that \(y\in F_{1,2}\), and that \(z\in F_{2,1}\). As \(r(\{x,y,z\})=3\), there is exactly one point, say \(e\), in \(\operatorname{cl}(\{x,y,z\})-\{x,y,z\}\). Assume \(e\not\in F_{2,2}\). Then, by symmetry, we may assume that \(e\in X_{1}\). As \(\{e,x,y,z\}\) is a circuit, we deduce that \(z\in X_{1}\), a contradiction. 
Now assume that \(q=3\). We may assume that \(x\in F_{1,1}\) and \(y\in F_{1,2}\). Suppose \(z\in F_{2,3}\). Consider \(\operatorname{cl}(\{x,z\})\). The third point \(e\) on this line cannot be in \(X_{1}\), otherwise the circuit \(\{e,x,z\}\) gives the contradiction that \(z\) is in \(X_{1}\). Similarly, \(e\) cannot be in \(X_{2}\), \(Y_{1}\), or \(Y_{3}\). Therefore, \(e\in F_{3,2}\). By a similar argument, the third point on \(\operatorname{cl}(\{y,z\})\) is in \(F_{3,1}\). Continuing in this manner, we deduce that \(|\operatorname{cl}(\{x,y,z\})\cap X_{3}|=3\). Since \(|\operatorname{cl}(\{x,y,z\})|=9\), using the same technique, we deduce that \(|\operatorname{cl}(\{x,y,z\})\cap F_{i,j}|=1\) for all \(i\) and \(j\). By symmetry, we may now assume that \(z\in F_{2,1}\). Then the third elements on the lines \(\operatorname{cl}(\{x,z\}),\operatorname{cl}(\{x,y\})\), and \(\operatorname{cl}(\{y,z\})\) are in \(F_{3,1}\), \(F_{1,3}\), and \(F_{3,3}\), respectively. Arguing as before, we again deduce that \(|\operatorname{cl}(\{x,y,z\})\cap F_{i,j}|=1\) for all \(i\) and \(j\). **Lemma 4.16**.: _Let **X** and **Y** be two disjoint sets of each consisting of hyperplanes that partition \(AG(r-1,2)\). Let \(P_{1}=\{w,x,y,z\}\) be a rank-\(3\) flat of \(AG(r-1,2)\) such that \(w,x\in F_{1,2}\) and \(y,z\in F_{2,1}\). Let \(P_{2}=\{e,f,y,z\}\) be a rank-\(3\) flat of \(AG(r-1,2)\) such that \(e,f\in F_{2,2}\). Then \(\operatorname{cl}(P_{1}\cup P_{2})\) is a rank-\(4\) affine flat such that \(|\operatorname{cl}(P_{1}\cup P_{2})\cap F_{i,j}|=2\) for all \(i\) and \(j\)._ Proof.: As \(|P_{1}\cap P_{2}|=2\), it follows that \(r(P_{1}\cup P_{2})=4\). Thus \(\operatorname{cl}(P_{1}\cup P_{2})\) is a rank-\(4\) affine flat. Now consider \(\operatorname{cl}(\{e,w,z\})\). By Lemma 4.15, \(\operatorname{cl}(\{e,w,z\})\) intersects \(F_{1,1}\) in an affine flat. Therefore, as \(\operatorname{cl}(P_{1}\cup P_{2})\) meets each of \(X_{1},X_{2},Y_{1},Y_{2},F_{1,1},F_{1,2},F_{2,1}\), and \(F_{2,2}\) in an affine flat, each such intersection has \(1\), \(2\), or \(4\) elements. Thus the lemma follows. We now prove the main results of this section. Proof of Theorem 1.3.: Assume \(G\) is an affine target and there is a rank-\(4\) affine flat \(F\) such that \(AG(r-1,2)|(G\cap F)\cong U_{4,4}\). Then \(AG(r-1,2)|(R\cap F)\cong U_{4,4}\). This contradicts Lemma 4.6 as \(r(G\cap F)=r(R\cap F)=4\). Hence a binary affine target does not have \(U_{4,4}\) as an induced restriction. Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,2)\). Suppose that \(G\) is a rank-\(r\) minimal affine-non-target and that \(G\) does not contain \(U_{4,4}\) as an induced restriction. By Lemma 4.12, \(AG(r-1,2)\) has a red hyperplane \(X_{1}\) and a green hyperplane \(X_{2}\) such that \(X_{1}\cap X_{2}=\emptyset\). By Lemma 4.11, \(r(R)=r\), so there is a red point \(z\) in \(X_{2}\). As \(AG(r-1,2)|(R\cap X_{1})\) is an affine target, in \(X_{1}\), there is a monochromatic red rank-\((r-2)\) flat \(F_{1,1}\). Observe that \(\operatorname{cl}(F_{1,1}\cup z)\) is a red hyperplane \(Y_{1}\) that intersects \(X_{1}\) and \(X_{2}\). Then there is a hyperplane \(Y_{2}\) that is complementary to \(Y_{1}\). By Lemma 4.4, \(r(F_{i,j})=r-2\) for all \(i\) and \(j\). Observe that \(z\) is in \(F_{2,1}\). Furthermore, there is a red point \(e\text{ in }F_{1,2}\) and there are green points \(f\) and \(g\) in \(F_{2,1}\) and \(F_{2,2}\), respectively. As \(r(G)=r\), there is a green point \(h\) in \(F_{1,2}\). We make the following observations. 
**4.17.1**.: \(F_{1,2}\cup F_{2,1}\) _is an affine hyperplane._ Observe that \(F_{2,1}\) is contained in three affine hyperplanes, two of which are \(F_{1,1}\cup F_{2,1}\) and \(F_{2,1}\cup F_{2,2}\). Therefore, \(F_{1,2}\cup F_{2,1}\) is the third such hyperplane. Thus 4.17.1 holds. **4.17.2**.: \(F_{2,2}\) _is not monochromatic green._ Assume that \(F_{2,2}\) is monochromatic green. Then \(Y_{2}\) is green. By 4.17.1, \(F_{1,2}\cup F_{2,1}\) is an affine hyperplane, so both \(AG(r-1,2)|(G\cap(F_{1,2}\cup F_{2,1}))\) and \(AG(r-1,2)|(R\cap(F_{1,2}\cup F_{2,1}))\) are affine targets. Then there is a rank-\((r-2)\) affine flat \(F\) such that either \(G\cap(F_{1,2}\cup F_{2,1})\subseteq F\) or \(R\cap(F_{1,2}\cup F_{2,1})\subseteq F\). Because we currently have symmetry between the red and green subsets of \(AG(r-1,2)\), we may assume the former. Then \(f\) and \(h\) are in \(F\). Let \(x\) be a red point in \(F_{1,2}-F\). Let \(P_{1}=\operatorname{cl}(\{f,h,x\})\). The fourth point \(y\) on this plane is in \(Y_{1}\), otherwise the circuit \(\{f,h,x,y\}\) gives the contradiction that \(f\in Y_{2}\). Moreover, \(y\not\in F\), otherwise the circuit \(\{f,h,x,y\}\) gives the contradiction that \(x\in F\). Thus \(y\in F_{1,2}-F\), so \(y\) is red. Let \(P_{2}=\operatorname{cl}(\{f,g,y\})\). Then the fourth point \(g^{\prime}\) on this plane is in \(F_{2,2}\), so \(g^{\prime}\) is green. By Lemma 4.16, \(r(\operatorname{cl}(P_{1}\cup P_{2}))=4\). Let \(\{s,t\}=\operatorname{cl}(P_{1}\cup P_{2})-\{f,g,g^{\prime},h,x,y\}\). Then, by Lemma 4.16, \(s\) and \(t\) in \(F_{1,1}\), so both points are red. Therefore, \(r(G\cap\operatorname{cl}(P_{1}\cup P_{2}))=r(R\cap\operatorname{cl}(P_{1}\cup P _{2}))=4\), so \(AG(r-1,2)|\{f,g,g^{\prime},h\}\cong U_{4,4}\). We conclude that \(AG(r-1,2)|G\) has \(U_{4,4}\) as an induced restriction, a contradiction. Thus 4.17.2 holds. The affine hyperplane \(F_{1,2}\cup F_{2,1}\) is either green, red, or half-green and half-red. **4.17.3**.: \(F_{1,2}\cup F_{2,1}\) _is not red_ Assume that \(F_{1,2}\cup F_{2,1}\) is red. Then, by Lemma 4.6, at least one of \(F_{1,2}\) and \(F_{2,1}\) will be red. Assume that \(F_{2,1}\) is red. As \(X_{2}\) is green, \(F_{2,2}\) is monochromatic green, otherwise \(r(R\cap X_{2})=r-1\). By 4.17.2, we deduce that \(F_{2,1}\) is not red. Thus \(F_{1,2}\) is red. Observe that if \(F_{2,1}\) is green, then \(r(G\cap(F_{1,2}\cup F_{2,1}))=r(R\cap(F_{1,2}\cup F_{2,1}))=r-1\), a contradiction. Thus, \(F_{2,1}\) is half-green and half-red. As \(X_{2}\) is green, by Lemma 4.6, \(F_{2,2}\) is green. Therefore \(r(G\cap(F_{2,2}\cup h))=r-1\), so \(Y_{2}\) is green. As \(F_{1,2}\) is red, \(F_{2,2}\) is monochromatic green, a contradiction to 4.17.2. Therefore, 4.17.3 holds. Since \(F_{1,2}\cup F_{2,1}\) is not red, there is a monochromatic green flat \(Z\) of rank \(r-2\) that is contained in \(F_{1,2}\cup F_{2,1}\). Because neither \(F_{1,2}\) nor \(F_{2,1}\) is monochromatic green, \(Z\) meets \(F_{1,2}\) and \(F_{2,1}\) in monochromatic green flats, \(Z_{1,2}\) and \(Z_{2,1}\), of rank \(r-3\). Similarly, as \(X_{2}\) is green, there is a monochromatic green flat \(V\) of rank \(r-2\) that is contained in \(X_{2}\). Because neither \(F_{2,1}\) nor \(F_{2,2}\) is monochromatic green, \(V\) meets \(F_{2,1}\) and \(F_{2,2}\) in monochromatic green flats, \(V_{2,1}\) and \(V_{2,2}\), of rank \(r-3\). 
In the next part of the argument, we shall use the observation that if \(Y_{2}\) is green, then we have symmetry between \((X_{1},X_{2})\) and \((Y_{1},Y_{2})\). **4.17.4**.: \(F_{1,2}\cup F_{2,1}\) _is not green._ Assume that \(F_{1,2}\cup F_{2,1}\) is green. Then, by Lemma 4.6, \(F_{2,1}\) or \(F_{1,2}\) is green. But the latter case implies that \(Y_{2}\) is green, so this case can be reduced to the former by the symmetry between \((X_{1},X_{2})\) and \((Y_{1},Y_{2})\) noted above. Thus we may assume that \(F_{2,1}\) is green. Now \(Z_{2,1}\) and \(V_{2,1}\) are rank-\((r-3)\) monochromatic green flats that are both contained in \(F_{2,1}\). Suppose \(Z_{2,1}=V_{2,1}\). As \(F_{2,1}\) is green, there is a green element \(g_{1}\) in \(F_{2,1}-Z_{2,1}\). Since \(F_{2,2}\) is not monochromatic green, there is a red point \(u_{1}\) in \(F_{2,2}-V_{2,2}\). Take a green point \(g_{2}\) in \(V_{2,2}\) and let \(P_{1}=\operatorname{cl}(\{g_{1},g_{2},u_{1}\})\). Let the fourth point on this plane be \(g_{3}\). Then \(g_{3}\in F_{2,1}\), otherwise the circuit \(\{g_{1},g_{2},g_{3},u_{1}\}\) implies that \(g_{1}\in F_{2,2}\), a contradiction. Likewise, \(g_{3}\in V_{2,1}\), otherwise \(g_{2}\not\in V\), a contradiction. Because \(X_{1}\) is red, there is a red point \(u_{2}\) in \(F_{1,2}-Z_{1,2}\). Let \(P_{2}=\operatorname{cl}(\{g_{1},g_{3},u_{2}\})\). Let \(g_{4}\) be the fourth point on this plane. Then the circuit \(\{g_{1},g_{3},g_{4},u_{2}\}\) implies that \(g_{4}\in Z_{1,2}\), so \(g_{4}\) is green. By Lemma 4.16, \(\operatorname{cl}(P_{1}\cup P_{2})\) is a rank-\(4\) affine flat having two points, \(s\) and \(t\), in \(F_{1,1}\). We see that \(AG(r-1,2)|\{g_{1},g_{2},g_{3},g_{4}\}\cong U_{4,4}\). Thus \(G\) has \(U_{4,4}\) as an induced restriction, a contradiction. Thus \(Z_{2,1}\neq V_{2,1}\). Now \(Y_{2}\) contains the monochromatic green flats \(Z_{1,2}\) and \(V_{2,2}\), each of which has rank \(r-3\). Thus \(Y_{2}\) is green, or \(Y_{2}\) is half-green and half-red. Assume the latter. Then \(Z_{1,2}\cup V_{2,2}\) is a monochromatic green flat of rank \(r-2\) and \(Y_{2}-(Z_{1,2}\cup V_{2,2})\) is a monochromatic red flat of rank \(r-2\). As before, we take \(u_{1}\) to be a red point in \(F_{2,2}\). Choose \(g_{1}\) to be a point in \(V_{2,2}\). Then \(g_{1}\) is green. Let \(g_{2}\) be a point in \(Z_{2,1}-V_{2,1}\), so \(g_{2}\) is green. Let \(P_{1}=\operatorname{cl}(\{g_{1},g_{2},u_{1}\})\) and let \(g_{3}\) be the fourth point in \(P_{1}\). Then \(g_{3}\in F_{2,1}\) and \(g_{3}\in V\). Thus \(g_{3}\in V_{2,1}\), so \(g_{3}\) is green. Choose \(u_{2}\) in \(F_{1,2}-Z_{1,2}\). Then \(u_{2}\) is red. Let \(P_{2}=\operatorname{cl}(\{g_{1},u_{1},u_{2}\})\) and let \(g_{4}\) be the fourth point in \(P_{2}\). Then \(g_{4}\in F_{1,2}\) and \(g_{4}\in Z_{1,2}\cup V_{2,2}\), so \(g_{4}\in Z_{1,2}\). Thus \(g_{4}\) is green. By Lemma 4.16, \(\operatorname{cl}(P_{1}\cup P_{2})\) is a rank-\(4\) affine flat that meets \(F_{1,1}\) in two elements, both of which are red. Moreover, \(AG(r-1,2)|\{g_{1},g_{2},g_{3},g_{4}\}\cong U_{4,4}\), a contradiction. We now know that \(Y_{2}\) is green. Then there is a monochromatic green flat \(W\) of rank \(r-2\) such that \(W\subseteq Y_{2}\). As neither \(F_{1,2}\) nor \(F_{2,2}\) is monochromatic green, \(W\cap F_{1,2}\) and \(W\cap F_{2,2}\) are monochromatic green flats, \(W_{1,2}\) and \(W_{2,2}\), of rank \(r-3\). We choose \(u_{1}\) to be a red point in \(F_{2,2}-(V_{2,2}\cup W_{2,2})\). Choose \(g_{1}\) in \(V_{2,2}\cup W_{2,2}\). 
Then \(g_{1}\) is green. Choose \(g_{2}\) in \(Z_{2,1}-V_{2,1}\). Then \(g_{2}\) is green. The fourth point \(g_{3}\) of the plane \(P_{1}\) that equals \(\operatorname{cl}(\{g_{1},g_{2},u_{1}\})\) is in \(F_{2,1}\cap V\); that is, \(g_{3}\in V_{2,1}\), so \(g_{3}\) is green. Now let \(u_{2}\) be a red point in \(F_{1,2}-(Z_{1,2}\cup W_{1,2})\). The fourth point \(g_{4}\) on the plane \(P_{2}\) that equals \(\operatorname{cl}(\{g_{1},u_{1},u_{2}\})\) is in \(F_{1,2}\cap W\), so it is in \(W_{1,2}\) and hence is green. Then, by Lemma 4.16, \(\operatorname{cl}(P_{1}\cup P_{2})\) is a rank-\(4\) affine flat that contains exactly four green points \(g_{1},g_{2},g_{3}\), and \(g_{4}\). Since \(AG(r-1,2)|\{g_{1},g_{2},g_{3},g_{4}\}\cong U_{4,4}\), we have a contradiction. We conclude that 4.17.4 holds. By 4.17.3 and 4.17.4, we must have that \(F_{1,2}\cup F_{2,1}\) is half-green and half-red. As \(Z\) is a monochromatic green flat of rank \(r-2\) that is contained in \(F_{1,2}\cup F_{2,1}\), we deduce that \((F_{1,2}\cup F_{2,1})-Z\) is a monochromatic red flat of rank \(r-2\). Moreover, \(F_{1,2}-Z\) and \(F_{2,1}-Z\) are monochromatic red flats of rank \(r-3\). Thus \(V_{2,1}=Z_{2,1}\). As \(X_{2}\) is green, there is a green point \(g_{1}\) in \(F_{2,2}-V\). Take \(g_{2}\) to be a point in \(V_{2,2}\) and let \(u_{1}\) be a point in \(F_{2,1}-V_{2,1}\). Let \(P_{1}=\operatorname{cl}(\{g_{1},g_{2},u_{1}\})\). The fourth point \(g_{3}\) on this plane is in \(F_{2,2}\) and in \(V\) so it is in \(V_{2,2}\) and hence it is green. Let \(u_{2}\) be a point in \(F_{1,2}-Z_{1,2}\). Then \(u_{2}\) is red. Let \(P_{2}=\operatorname{cl}(\{g_{3},u_{1},u_{2}\})\). The fourth point \(g_{4}\) on this plane is in \(F_{1,2}\cap Z\), so it is green. By Lemma 4.16, \(\operatorname{cl}(P_{1}\cup P_{2})\) is a rank-\(4\) affine flat that contains exactly four green points, \(g_{1},g_{2},g_{3}\), and \(g_{4}\). Moreover, \(AG(r-1,2)|\{g_{1},g_{2},g_{3},g_{4}\}\cong U_{4,4}\), a contradiction. We conclude that the theorem holds. Proof of Theorem 1.4.: Assume that \(G\) is an affine target over \(GF(3)\) such that there is an affine flat \(F\) for which \(AG(r-1,3)|(G\cap F)\) is one of \(U_{3,3},U_{3,4},U_{2,3}\oplus U_{1,1},U_{2,3}\oplus_{2}U_{2,4},P(U_{2,3},U_{2,3})\), or \(\mathcal{W}^{3}\). Then \(r(G\cap F)=r(R\cap F)=3\), contradicting Lemma 4.7. Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,3)\). Suppose that \(G\) is a rank-\(r\) minimal affine-non-target. Then \(r(G)\geq 3\). If \(r(G)=3\), then, by Lemma 4.11, \(r(R)=3\). One can now check that \(AG(r-1,3)|G\) is one of \(U_{3,3},U_{3,4},U_{2,3}\oplus U_{1,1},U_{2,3}\oplus_{2}U_{2,4},P(U_{2,3},U_{2,3})\), or \(\mathcal{W}^{3}\). Thus we may assume \(r(G)\geq 4\) and that \(G\) does not contain a rank-\(3\) flat \(F\) such that \(r(G\cap F)=r(R\cap F)=3\). By Lemma 4.11, \(r(R)=r\). Now, by Lemma 4.13, there is a green hyperplane \(X_{1}\) and a red hyperplane \(X_{2}\) that are disjoint. Let \(\{X_{1},X_{2},X_{3}\}\) and \(\{Y_{1},Y_{2},Y_{3}\}\) be distinct sets, \(\mathbf{X}\) and \(\mathbf{Y}\), each consisting of three disjoint hyperplanes in \(AG(r-1,3)\). Then, by Lemma 4.4, \(r(F_{i,j})=r-2\) for all \(i\) and \(j\). We proceed by showing there are no possible colorings of the hyperplanes in \(\mathbf{Y}\). **4.18.5**.: _If \(F_{1,1}\) and \(F_{1,2}\) are green, then \(Y_{1}\) or \(Y_{2}\) is green._ Assume that \(Y_{1}\) and \(Y_{2}\) are both red. Then \(Y_{1}-F_{1,1}\) and \(Y_{2}-F_{1,2}\) are monochromatic red. 
As \(r(G)=r\), there is a green element \(e\) in \(Y_{3}-F_{1,3}\). Let \(f\) and \(g\) be green elements in \(F_{1,1}\) and \(F_{1,2}\), respectively. Consider \(\operatorname{cl}(\{e,f,g\})\). By Lemma 4.15 this plane will contain red points in \(F_{2,1},F_{2,2}\), and \(F_{3,2}\). Therefore \(r(G\cap\operatorname{cl}(\{e,f,g\}))=r(R\cap\operatorname{cl}(\{e,f,g\}))=3\), a contradiction. Thus 4.18.5 holds. **4.18.6**.: _There cannot be at least two red hyperplanes or at least two green hyperplanes in \(\mathbf{Y}\)._ Assume that \(Y_{1}\) and \(Y_{2}\) are red. As \(X_{1}\) is green, at most one of \(F_{1,1},F_{1,2}\), and \(F_{1,3}\) is red. By 4.18.5, we may assume that \(F_{1,2}\) is red. Then \(X_{1}-F_{1,2}\) is monochromatic green, so \(Y_{1}-F_{1,1}\) is monochromatic red. As \(r(G)=r\), there is a green point \(g\) that is not in \(X_{1}\). Then \(g\in F_{2,2}\cup F_{2,3}\cup F_{3,2}\cup F_{3,3}\). Let \(e\) be a green point in \(F_{1,1}\), and \(f\) be a red point in \(F_{1,2}\). Consider \(\mathrm{cl}(\{e,f,g\})\). By Lemma 4.15, this plane will contain red points in \(F_{2,1}\) and \(F_{3,1}\), and a green point in \(F_{1,3}\). Therefore, \(r(G\cap\mathrm{cl}(\{e,f,g\}))=r(R\cap\mathrm{cl}(\{e,f,g\}))=3\), a contradiction. By symmetry, there cannot be two green hyperplanes in \(\mathbf{Y}\). Thus 4.18.6 holds. We conclude that there are no possible colorings of the hyperplanes in \(\mathbf{Y}\), a contradiction. Proof of Theorem 1.5.: Assume that \(G\) is an affine target over \(GF(q)\) for \(q\geq 4\). If there is an affine flat \(F\) such that \(AG(r-1,q)|(G\cap F)\) is any of \(U_{2,2},U_{2,3},\ldots,U_{2,q-3}\), or \(U_{2,q-2}\), then \(r(G\cap F)=r(R\cap F)\), contradicting Lemma 4.7. Let \((G,R)\) be a \(2\)-coloring of \(AG(r-1,q)\). Suppose that \(G\) is a rank-\(r\) minimal affine-non-target that does not contain \(U_{2,2},U_{2,3},\ldots,U_{2,q-3}\), or \(U_{2,q-2}\) as an induced restriction. Then \(r(G)\geq 3\). By Lemma 4.7, \(r(R)=r\). Now, by Lemma 4.13, there is a red hyperplane \(X_{1}\) that is disjoint from a green hyperplane \(X_{2}\). Let \(\{X_{1},X_{2},\ldots,X_{q}\}\) and \(\{Y_{1},Y_{2},\ldots,Y_{q}\}\) be disjoint sets, \(\mathbf{X}\) and \(\mathbf{Y}\), each consisting of \(q\) disjoint hyperplanes in \(AG(r-1,q)\). By Lemma 4.4, \(r(F_{i,j})=r-2\) for all \(i\) and \(j\). We show there are no possible colorings of the hyperplanes in \(\mathbf{Y}\). **4.19.7**.: _There is at least one green hyperplane and at least one red hyperplane in **Y**._ Assume that all members of \(\mathbf{Y}\) are red. As \(X_{2}\) is green, we may assume that \(F_{2,1},F_{2,2},\ldots,F_{2,q-2}\), and \(F_{2,q-1}\) are green. Then \(Y_{k}-F_{2,k}\) is monochromatic red for all \(k\) in \(\{1,2,\ldots,q-1\}\). As \(r(G)=r\), there is a green element \(e\) in \(Y_{q}-F_{2,q}\). We may assume that \(e\in F_{3,q}\). Let \(f\) be a green element in \(F_{2,1}\). Consider \(\mathrm{cl}(\{e,f\})\). By Lemma 4.14, this line will contain red points in \(Y_{2}-(F_{2,2}\cup F_{3,2})\) and \(Y_{3}-(F_{2,3}\cup F_{3,3})\). However, this gives the contradiction that \(r(G\cap\mathrm{cl}(\{e,f\}))=r(R\cap\mathrm{cl}(\{e,f\}))=2\). By symmetry, not all members of \(\mathbf{Y}\) are green. Thus 4.19.7 holds. **4.19.8**.: _There cannot be at least two green hyperplanes and at least two red hyperplanes in **Y**._ Let \(Y_{1}\) and \(Y_{2}\) be green and let \(Y_{3}\) and \(Y_{4}\) be red. As \(X_{1}\) is red, at most one of \(F_{1,1},F_{1,2},\ldots,F_{1,q-1},\text{ and }F_{1,q}\) is green. 
This implies that \(F_{1,1}\) or \(F_{1,2}\) is red, so we may assume the latter. Then \(Y_{2}-F_{1,2}\) is monochromatic green. Similarly, as \(X_{2}\) is green, \(F_{2,3}\) or \(F_{2,4}\), say \(F_{2,3}\), is green. Then \(Y_{3}-F_{2,3}\) is monochromatic red. Assume \(F_{1,1}\) and \(F_{2,1}\) are green. Then \(X_{1}-F_{1,1}\) is monochromatic red. Let \(e\) be a red point in \(F_{1,4}\) and let \(f\) be a green point in \(F_{2,1}\). Consider \(\operatorname{cl}(\{e,f\})\). Then, by Lemma 4.14, this line will have a green point in \(Y_{2}-(F_{1,2}\cup F_{2,2})\) and a red point in \(Y_{3}-(F_{1,3}\cup F_{2,3})\). Hence \(r(G\cap\operatorname{cl}(\{e,f\}))=r(R\cap\operatorname{cl}(\{e,f\}))\), a contradiction. A symmetric argument holds when \(F_{1,4}\) and \(F_{2,4}\) are both red. Therefore, either \(F_{1,1}\) or \(F_{2,1}\) is red, and either \(F_{1,4}\) or \(F_{2,4}\) is green. This implies that \(Y_{1}-(F_{1,1}\cup F_{2,1})\) is monochromatic green and \(Y_{4}-(F_{1,4}\cup F_{2,4})\) is monochromatic red. Hence \(r(G\cap X_{3})=r(R\cap X_{3})=r-1\), a contradiction. Thus 4.19.8 holds. **4.19.9**.: _There cannot be exactly one red hyperplane or exactly one green hyperplane in **Y**._ Assume that \(Y_{1}\) is red and \(Y_{2},Y_{3},\ldots,Y_{q-1}\), and \(Y_{q}\) are green. As \(X_{1}\) is red, at most one of \(F_{1,1},F_{1,2},\ldots,F_{1,q-1},\text{and }F_{1,q}\) is green. First assume that \(F_{1,2},F_{1,3},\ldots,F_{1,q-1}\) and \(F_{1,q}\) are red. Then \(Y_{k}-F_{1,k}\) is monochromatic green for all \(k\) in \(\{2,3,\ldots,q\}\). By Lemma 4.7, \(r(R)=r\), so there is a red point \(e\) in \(Y_{1}-F_{1,1}\). We may assume \(e\) is in \(F_{2,1}\). Let \(f\) be a red point in \(F_{1,2}\) and consider \(\operatorname{cl}(\{e,f\})\). By Lemma 4.14, this line will have green points in \(X_{3}-(F_{3,1}\cup F_{3,2})\) and \(X_{4}-(F_{4,1}\cup F_{4,2})\). Therefore, \(r(G\cap\operatorname{cl}(\{e,f\}))=r(R\cap\operatorname{cl}(\{e,f\}))=2\), a contradiction. Now assume that \(F_{1,2}\) is green. Then \(X_{1}-F_{1,2}\) is monochromatic red. Hence \(Y_{k}-F_{1,k}\) is monochromatic green for all \(k\) in \(\{3,4,\ldots,q\}\). Let \(e\) be a red element in \(Y_{1}-F_{1,1}\) and \(f\) be a green element in \(Y_{2}-F_{1,2}\) such that \(|X_{i}\cap\{e,f\}|\leq 1\) for all \(i\) in \(\{2,3,\ldots,q\}\). As \(Y_{1}\) is red and \(Y_{2}\) is green, such a pair of points exists. We may assume that \(e\in F_{2,1}\) and \(f\in F_{3,2}\). Then, by Lemma 4.14, \(\operatorname{cl}(\{e,f\})\) will contain a red element in \(X_{1}-(F_{1,1}\cup F_{1,2})\) and a green element in \(X_{4}-(F_{4,1}\cup F_{4,2})\), a contradiction. By symmetry, there cannot be exactly one green hyperplane in **Y**. Thus 4.19.9 holds. We conclude that there are no possible colorings of the hyperplanes in **Y**, a contradiction.
2310.07194
Boosting Learning for LDPC Codes to Improve the Error-Floor Performance
Low-density parity-check (LDPC) codes have been successfully commercialized in communication systems due to their strong error correction capabilities and simple decoding process. However, the error-floor phenomenon of LDPC codes, in which the error rate stops decreasing rapidly at a certain level, presents challenges for achieving extremely low error rates and deploying LDPC codes in scenarios demanding ultra-high reliability. In this work, we propose training methods for neural min-sum (NMS) decoders to eliminate the error-floor effect. First, by leveraging the boosting learning technique of ensemble networks, we divide the decoding network into two neural decoders and train the post decoder to be specialized for uncorrected words that the first decoder fails to correct. Secondly, to address the vanishing gradient issue in training, we introduce a block-wise training schedule that locally trains a block of weights while retraining the preceding block. Lastly, we show that assigning different weights to unsatisfied check nodes effectively lowers the error-floor with a minimal number of weights. By applying these training methods to standard LDPC codes, we achieve the best error-floor performance compared to other decoding methods. The proposed NMS decoder, optimized solely through novel training methods without additional modules, can be integrated into existing LDPC decoders without incurring extra hardware costs. The source code is available at https://github.com/ghy1228/LDPC_Error_Floor .
Hee-Youl Kwak, Dae-Young Yun, Yongjune Kim, Sang-Hyo Kim, Jong-Seon No
2023-10-11T05:05:40Z
http://arxiv.org/abs/2310.07194v2
# Boosting Learning for LDPC Codes ###### Abstract Low-density parity-check (LDPC) codes have been successfully commercialized in communication systems due to their strong error correction capabilities and simple decoding process. However, the error-floor phenomenon of LDPC codes, in which the error rate stops decreasing rapidly at a certain level, presents challenges for achieving extremely low error rates and deploying LDPC codes in scenarios demanding ultra-high reliability. In this work, we propose training methods for neural min-sum (NMS) decoders to eliminate the error-floor effect. First, by leveraging _the boosting learning technique_ of ensemble networks, we divide the decoding network into two neural decoders and train the post decoder to be specialized for uncorrected words that the first decoder fails to correct. Secondly, to address the vanishing gradient issue in training, we introduce a _block-wise training schedule_ that locally trains a block of weights while retraining the preceding block. Lastly, we show that assigning different weights to unsatisfied check nodes effectively lowers the error-floor with a minimal number of weights. By applying these training methods to standard LDPC codes, we achieve the best error-floor performance compared to other decoding methods. The proposed NMS decoder, optimized solely through novel training methods without additional modules, can be integrated into existing LDPC decoders without incurring extra hardware costs. The source code is available at [https://github.com/ghy1228/LDPC_Error_Floor](https://github.com/ghy1228/LDPC_Error_Floor). ## 1 Introduction The field of learning-based decoding for error-correcting codes began with research on training neural networks to produce the information vector when given a distorted codeword [1; 2; 3; 4]. These works assume an arbitrary neural network with no prior knowledge of decoding algorithms, and accordingly, face the challenge of learning a decoding algorithm. In contrast, model-based neural decoders are designed by mapping a well-known graph-based iterative decoding algorithm, such as belief propagation (BP) and min-sum (MS) decoding algorithms, to a neural network and then training its weights [5]. Compared to the arbitrary network approaches or the error correction transformer [6], model-based neural decoders offer the advantages of guaranteeing the performance of existing iterative algorithms and using hardware architectures [7] that are already well optimized for iterative decoding algorithms. LDPC codes have been incorporated into WiMAX and 5G communication systems [8; 9], owing to their strong error-correcting capabilities and low decoding complexity [10; 11]. However, more advanced LDPC coding technology needs to be developed for diverse communication environments lying in the scope of future 6G systems. In particular, for environments that require extremely low frame error rate (FER) such as the next generation ultra-reliable and low-latency communications (xURLLC) [12], it is crucial to mitigate the error-floor in the decoding of LDPC codes. The error-floor phenomenon refers to an abnormal phenomenon where the FER does not decrease as rapidly as in the waterfall region [11; 13]. The error-floor phenomenon also should be addressed for systems demanding very high reliability, such as solid-state drive (SSD) storage [14], DNA storage [15], and cryptosystems [16]. 
However, enhancing other features of LDPC codes often inadvertently reinforces the error-floor phenomenon as a side effect. For instance, the error-floor tends to be intensified when optimizing LDPC codes for superior waterfall performance or decoding by low complexity decoders such as quantized MS decoders [17]. Therefore, research focused on alleviating the error-floor, especially when decoding LDPC codes optimized for performance with low decoding complexity, has become significant. Such advancements will broaden the applications of LDPC codes. ### Main contributions With this need in mind, we focus on how to train a low-complexity neural MS (NMS) decoder to prevent the error-floor in well designed LDPC codes. The main contributions of the paper are threefold as follows. _Boosting learning using uncorrected words:_ We first leverage the boosting learning technique [18; 19] that employs a sequential training approach for multiple classifiers, wherein subsequent classifiers concentrate on the data samples that preceding classifiers incorrectly classify. Inspired by this method, we divide the neural decoder into two cascaded neural decoders and train the first decoder to be focused on the waterfall performance, while training the second decoder to be specialized in handling the uncorrected words that are not corrected by the first decoder due to the error-floor phenomenon. Uncorrected words in the error-floor region mostly contain small-error patterns related to trapping sets or absorbing sets [11], which can be effectively corrected by weighting decoding messages. As a result, a significant performance improvement in the error-floor region is achieved by boosting learning. _Block-wise training schedule with retraining:_ To mitigate the error-floor, iterative decoding typically requires a large number of decoding iterations, often exceeding \(50\)[17; 20; 21; 22]. However, NMS decoders encompassing many iterations can undergo the vanishing gradient problem in training [23]. To address this problem, we propose a new training schedule inspired by block-wise training methods [24; 25]. The proposed block-wise training schedule divides the entire decoding iterations into sub-blocks and trains these sub-blocks in a sequential manner. Additionally, rather than fixing the weights trained from previous blocks, we retrain them to escape from local minima. As a result, the proposed schedule enables to train numerous weights for all \(50\) iterations successfully while outperforming both the multi-loss method [5] and the iter-by-iter schedule [26]. _Weight sharing technique with dynamic weight allocation:_ The weight sharing technique is a way to reduce the number of trainable weights by grouping specific weights to share a common value. The waterfall performance does not severely degrade even if we bundle all weights for each iteration [27; 26]. However, our observations indicate that this does not hold true in the error-floor region, implying that a higher degree of weight diversity is necessary to correct error patterns causing the error-floor. To obtain sufficient diversity with a minimal number of weights, we dynamically assign different weights to unsatisfied check nodes (UCNs) and satisfied check nodes (SCNs) in the decoding process. By utilizing only two weight values for SCNs and UCNs each iteration, we achieve the performance of the NMS decoder using different weights for every edge. This method reduces the number of weights to be trained to only 2.6% of the original number of weights. 
We employ these training methods on a range of representative LDPC codes adopted for standards such as WiMAX [8], IEEE 802.11n [28], and 5G new radio [9]. The FER point at the onset of the error-floor diminishes by over two orders of magnitude for all codes compared to conventional weighted MS (WMS) decoding [29]. Compared to existing NMS decoding approaches [27; 26], our proposed scheme exhibits a notably enhanced capability to suppress the error-floor. This scheme also achieves a similar performance as the state-of-the-art post-processing method in [22], with only a third of the iterations. ### Related works We compare the proposed training scheme with the existing schemes for NMS decoders in Table 1. First, all works except [31; 32] aim to improve the waterfall performance. Although the scope of the works in [31; 32] includes the error-floor performance, they assumed specific conditions of binary symmetric channels (BSC) and FAID, while we deal with the more general situation of additive white Gaussian noise (AWGN) channels and MS decoding. Regarding the training sample selection, the training samples can be received words randomly taken from the AWGN channel [5; 27; 31; 33], or codewords with erroneous trapping sets (or absorbing sets) [30; 32]. However, to use the method in [30; 32], trapping sets should be enumerated, which is only applicable to short LDPC codes and not feasible for medium to large length standard LDPC codes. In contrast, the proposed boosting method, which generates training samples through decoding with linear complexity, can be applied even to codes of several thousand lengths. For scheduling of training, it is common to train all weights at once (One-shot training) [5] and some works sequentially train the weights corresponding to a single iteration locally [27; 33], while we train a block of iterations with retraining. In terms of the weight sharing techniques, we confirm that the proposed sharing technique using UCN weights is superior to the spatial or temporal sharing technique used in [5; 27; 30; 31; 32]. Meanwhile, a method of assigning different weights to UCNs has been introduced in [33], but they applied the same UCN weight to all CNs belonging to a proto CN when at least one CN is unsatisfied, whereas we pinpoint specific UCNs and apply weights individually. There have been studies adding hypernetworks to the Vanilla NMS decoder [34; 35; 36] or using a transformer architecture [6] to improve the waterfall performance at the expense of increased training and decoding costs. While the proposed training techniques are broadly applicable to these augmented neural decoders, this work primarily aims to improve the error-floor performance of the Vanilla NMS decoder under practical conditions. ## 2 Preliminaries ### LDPC codes In this paper, we consider quasi-cyclic (QC) LDPC codes, which have been adopted in various applications due to their implementation advantages [13; 37]. The Tanner graph of a QC-LDPC code, consisting of \(n=Nz\) VNs and \(m=Mz\) CNs, can be obtained by lifting the protograph, composed of \(M\) proto CNs and \(N\) proto VNs, with a lifting factor \(z\)[37; 38]. Let \(E\) be the total number of edges in the protograph. As a running example, we use the WiMAX QC-LDPC code of length \(n=576\) and code-rate \(3/4\) with \(N=24,M=6,E=88\), and \(z=24\)[8]. 
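As a concrete illustration of this lifting construction, the sketch below expands an \(M\times N\) exponent matrix into an \(Mz\times Nz\) parity-check matrix by replacing each entry with a \(z\times z\) circulant permutation block (or a zero block for entries equal to \(-1\)). The exponent matrix shown is a toy example chosen for readability; it is not the WiMAX base matrix.

```python
import numpy as np

def lift_protograph(exponents, z):
    """Expand an M x N exponent matrix into an (M*z) x (N*z) QC-LDPC parity-check matrix.

    Each entry s >= 0 becomes the z x z identity matrix cyclically shifted by s
    (a circulant permutation matrix); an entry of -1 becomes the z x z zero block.
    """
    M, N = exponents.shape
    H = np.zeros((M * z, N * z), dtype=np.uint8)
    I = np.eye(z, dtype=np.uint8)
    for i in range(M):
        for j in range(N):
            s = exponents[i, j]
            if s >= 0:
                H[i * z:(i + 1) * z, j * z:(j + 1) * z] = np.roll(I, s, axis=1)
    return H

# Toy exponent matrix with M = 2 proto CNs and N = 4 proto VNs; the WiMAX code uses
# M = 6, N = 24, and z = 24.
exponents = np.array([[0, 2, -1, 1],
                      [3, -1, 0, 2]])
H = lift_protograph(exponents, z=4)
print(H.shape)        # (8, 16)
print(H.sum(axis=0))  # column degrees follow the protograph (here 2 or 1 per VN)
```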
### Neural min-sum decoding For iteration \(\ell\), let \(m_{c\to v}^{(\ell)}\) represent the message from CN \(c\) to VN \(v\) and let \(m_{v\to c}^{(\ell)}\) represent the message from VN \(v\) to CN \(c\). The neighboring nodes of \(x\) are represented by \(\mathcal{N}(x)\). The initial conditions are \(m_{v\to c}^{(0)}=m_{v}^{\text{ch}}\), \(m_{c\to v}^{(0)}=0\) for the channel LLR \(m_{v}^{\text{ch}}\) of VN \(v\). For \(\ell=1,\ldots,\overline{\ell}\), the \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Reference & Codes & Target region & Decoders & Training sample & Training schedule & Weight sharing \\ \hline \multirow{2}{*}{This work} & \multirow{2}{*}{Standard LDPC} & Waterfall, & \multirow{2}{*}{MS} & \multirow{2}{*}{Uncorrected words} & Block-wise & Spatial with \\ & & Error-floor & & & with retraining & UCN weights \\ \hline [5] & BCH & Waterfall & BP, MS & Received words & One-shot & Temporal \\ \hline [27] & Standard LDPC & Waterfall & BP, MS & Received words & Iter-by-Iter & Spatial \\ \hline [30] & Short LDPC & Waterfall & BP & Absorbing set & One-shot & Temporal \\ \hline [31] & Regular LDPC & Waterfall, & \multirow{2}{*}{FAID} & \multirow{2}{*}{Received words} & \multirow{2}{*}{One-shot} & \multirow{2}{*}{Temporal} \\ & & Error-floor & & & & \\ \hline [32] & Short LDPC & Waterfall, & \multirow{2}{*}{FAID} & \multirow{2}{*}{Trapping set} & \multirow{2}{*}{One-shot} & \multirow{2}{*}{Temporal} \\ & & Error-floor & & & & \\ \hline [33] & Standard LDPC & Waterfall & Layered & Received words & Iter-by-Iter & UCN weights \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between model-based neural decoders NMS decoding algorithm [5] updates the messages as follows \[m_{v\to c}^{(\ell)} =\overline{w}_{v}^{(\ell)}m_{v}^{\text{ch}}+\sum_{c^{\prime}\in \mathcal{N}(v)\backslash c}m_{c^{\prime}\to v}^{(\ell-1)} \tag{1}\] \[m_{c\to v}^{(\ell)} =w_{c\to v}^{(\ell)}\left(\prod_{v^{\prime}\in\mathcal{N}(c) \backslash v}\operatorname{sgn}\left(m_{v^{\prime}\to c}^{(\ell)}\right) \right)\min_{v^{\prime}\in\mathcal{N}(c)\backslash v}|m_{v^{\prime}\to c}^{( \ell)}|, \tag{2}\] where \(\overline{w}_{v}^{(\ell)}\) and \(w_{c\to v}^{(\ell)}\) are called the VN weight and CN weight, respectively. At the last iteration \(\overline{\ell}\), output LLRs \(m_{v}^{\text{o}}\) are computed as \(m_{v}^{\text{o}}=m_{v}^{\text{ch}}+\sum_{c^{\prime}\in\mathcal{N}(v)}m_{c^{ \prime}\to v}^{(\ell)}\). By quantizing \(m_{v\to c}^{(\ell)}\), \(m_{c\to v}^{(\ell)}\), and \(m_{v}^{\text{ch}}\), the quantized NMS decoding algorithm is obtained. The quantized decoders are widely used in practical applications due to its low complexity and commonly employed in existing error-floor researches [17; 20; 21; 22]. Therefore, we use it to ensure a fair comparison. Specifically, we use 5-bit uniform quantization with a maximum magnitude of \(7.5\) and a step size of \(0.5\) for the quantized NMS decoder as in [20; 21; 22]. ### Training weights for the NMS decoder If all the weights in (1) and (2) are set to \(1\), NMS decoding is equivalent to MS decoding [39], or if VN weights \(\overline{w}_{v}^{(\ell)}\) are \(1\) and CN weights \(w_{c\to v}^{(\ell)}\) have the same value, the decoder operates as the WMS decoder [40]. The NMS decoder gains performance improvement over the WMS or MS decoder by greatly increasing the diversity of weights. However, the full diversity weights increase the training complexity and require a large amount of memory to store the weights. 
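To make the updates (1)-(2) and the 5-bit quantization concrete, the following sketch carries out one quantized NMS iteration with a per-VN channel weight and per-edge CN weights, that is, the full-diversity layout before any of the sharing discussed next is applied. It is written with dense loops for readability and is only illustrative; it is not the implementation used in the experiments.

```python
import numpy as np

def quantize(x, step=0.5, max_mag=7.5):
    """5-bit uniform quantizer applied to messages (channel LLRs are assumed pre-quantized)."""
    return np.clip(np.round(x / step) * step, -max_mag, max_mag)

def nms_iteration(H, llr_ch, m_c2v, w_vn, w_cn):
    """One NMS iteration following (1)-(2).

    H      : binary parity-check matrix, shape (m, n)
    llr_ch : channel LLRs, length n
    m_c2v  : CN-to-VN messages from the previous iteration, shape (m, n), nonzero only on edges
    w_vn   : weights on the channel LLR, one per VN
    w_cn   : weights on the CN-to-VN messages, one per edge, shape (m, n)
    """
    m, n = H.shape
    m_v2c = np.zeros((m, n))
    new_c2v = np.zeros((m, n))
    # VN update (1): weighted channel LLR plus all incoming CN messages except the target CN.
    for v in range(n):
        cns = np.where(H[:, v])[0]
        total = w_vn[v] * llr_ch[v] + m_c2v[cns, v].sum()
        for c in cns:
            m_v2c[c, v] = quantize(total - m_c2v[c, v])
    # CN update (2): sign product and minimum magnitude over the other neighboring VNs.
    for c in range(m):
        vns = np.where(H[c])[0]
        for v in vns:
            others = vns[vns != v]
            sign = np.prod(np.sign(m_v2c[c, others]))
            new_c2v[c, v] = quantize(w_cn[c, v] * sign * np.abs(m_v2c[c, others]).min())
    # Output LLRs: channel LLR plus all incoming CN messages; hard decisions are their signs.
    llr_out = llr_ch + new_c2v.sum(axis=0)
    return new_c2v, llr_out
```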
Therefore, previous studies used weight sharing techniques by assigning the same value to weights with the same attributes. First, since this paper deals with QC-LDPC codes, we use the protograph weight sharing technique [26] by default, assigning the same weight value to the VNs (or CNs) belonging to a proto VN (or CN). Then, the weights to be trained are represented by \(\{\overline{w}_{v_{p}}^{(\ell)},w_{c_{p}\to v_{p}}^{(\ell)}\}\) for a proto VN \(v_{p}\) and a proto CN \(c_{p}\). The total number of weights is then \((N+E)\overline{\ell}\). If we employ spatial weight sharing in [27], only one VN weight and one CN weight remain for each iteration, and the weights are \(\{\overline{w}^{(\ell)},w^{(\ell)}\}_{\ell=1}^{\ell}\), with a total number of \(2\overline{\ell}\). On the other hand, by using temporal weight sharing [5] to eliminate differences between iterations, the weights are \(\{\overline{w}_{v_{p}},w_{c_{p}\to v_{p}}\}\), and the number is \((N+E)\). The neural network in Fig. 1(b) corresponds to NMS decoding of the Tanner graph in Fig. 1(a) with \(\overline{\ell}=2\). The input to this neural network is the channel LLR vector \((m_{1}^{\text{ch}},\dots,m_{n}^{\text{ch}})\), and the output is the output LLR vector \((m_{1}^{\text{o}},\dots,m_{n}^{\text{o}})\). For each iteration, two hidden layers are arranged, and each hidden layer has a number of nodes equal to the number of edges in the Tanner graph. In the odd hidden layers, the VN to CN message operation in (1) is performed, while in the even hidden layers, the CN to VN message operation in (2) is performed. The input layer is also connected to the odd hidden layers, which corresponds to the addition of the channel LLR in (1). The messages from the \(2\ell\)-th hidden layer to the \((2\ell+1)\)-th hidden layer are weighted by \(w_{c\to v}^{(\ell)}\), and the messages from the Figure 1: (a) The Tanner graph of an LDPC code and (b) the neural network corresponding to NMS decoding with a maximum iteration of \(\overline{\ell}=2\). input nodes to the \((2\ell+1)\)-th hidden layer are weighted by \(\overline{w}_{v}^{(\ell)}\). As our goal is to reduce the FER in the error-floor region, we use the FER loss, \(\frac{1}{2}\bigg{[}1-\mathrm{sgn}\Big{(}\mathrm{min}_{1\leq v\leq N}m_{v}^{ \mathrm{o}}\Big{)}\bigg{]}\)[32]. ## 3 Proposed training method In this section, we introduce the proposed training methods through three subsections. The organized training algorithm is shown in Fig. 2. ### Boosting learning using uncorrected words For the boosting learning approach, we first divide the entire decoding process into two stages: the base decoding stage \(\{1,\ldots,\ell_{1}\}\) and the post decoding stage \(\{\ell_{1}+1,\ldots,\ell_{1}+\ell_{2}=\overline{\ell}\}\). Training of base decoder follows the conventional training method: the received words sampled from \(\mathrm{E_{b}}/\mathrm{N_{0}}\) region \(\mathcal{R}_{1}\) are used as training samples. Specifically, we set \(\ell_{1}=20\) and \(\mathcal{R}_{1}=\{2.0,2.5,3.0,3.5,4.0\}\), which corresponds the waterfall region of WMS decoding. We use the spatial sharing technique (i.e., \(\mathbf{w}_{B}=\{\overline{w}^{(\ell)},w^{(\ell)}\}_{\ell=1}^{\ell_{1}}\)) since this achieves comparable performance to the full diversity weights in the waterfall region. In Fig. 3(a), the base NMS decoder is compared with the MS decoder and the WMS decoder with a single weight of \(0.75\) for \(\overline{\ell}=20\). 
The WMS decoding performance has a severe error-floor even though its waterfall performance is better than the MS decoding performance. Compared to the MS and WMS decoders, the NMS decoder for \(\overline{\ell}=20\) performs better over the training range \(\mathcal{R}_{1}\). On the other hand, the NMS decoder performs worse than the MS decoder in the error-floor region (e.g., \(4.5\) dB), which is outside \(\mathcal{R}_{1}\). To improve the performance in the error-floor region, a straightforward approach is extending the training range \(\mathcal{R}_{1}\) to include the error-floor region. However, the FER of the base decoder for received words from the error-floor region is very low, resulting in an almost negligible FER loss. Consequently, integrating the error-floor region into the training range does not impact the weight update process. Before training the post decoder, we first collect uncorrected words that the trained base decoder fails to correct among the received words sampled from region \(\mathcal{R}_{2}\). Then, the uncorrected words serve as training samples for the post decoder, which is distinct from the conventional training methods. The post decoder trains the weights \(\{\overline{w}_{v_{p}}^{(\ell)},w_{v_{p}\to v_{p}}^{(\ell)}\}_{\ell=\ell_{1}+1}\) with the aim of correcting the uncorrected words. After completing the training, the trained weights are used for the NMS decoding algorithm in (1)-(2). From the perspective of the NMS decoder, it performs continuous decoding up to iteration \(\overline{\ell}\) using the trained weights, but for the sake of discussion, we assume as if there are two cascaded decoders (base and post) in the perspective of training. Note that we employ the full diversity weights for the post decoder to confirm the best performance but we will introduce the shared weights \(\mathbf{w}_{P}\) (used in Fig. 2) in the next subsection. We also set \(l_{2}=10\), \(\overline{\ell}=30\) for this experiment, and subsequently extend the maximum number of iterations in the following subsection. To analyze the effectiveness of the proposed boosting learning, we compare the following three cases. Figure 2: The proposed training method represented by (a) an algorithm and (b) a block diagram. Case 1: Uncorrected words sampled at \(4.5\) dB in the error-floor region (i.e., \(\mathcal{R}_{2}=4.5\)). Case 2: Uncorrected words sampled at \(3.5\) dB in the waterfall region (i.e., \(\mathcal{R}_{2}=3.5\)). Case 3: Received words sampled at \(4.5\) dB without filtering. Regarding Case 1 and Case 2, we collect a total of \(60{,}000\) uncorrected words, allocating \(50{,}000\) for training, \(5{,}000\) for validation, and remaining \(5{,}000\) for test. Training is conducted for \(100\) epochs. Fig. 3(b) shows the distribution of the number of errors after base decoding and post decoding for the test samples used in Case 1 and Case 2. For Case 1, the uncorrected words collected in the error-floor region mainly have a small number of errors since most of decoding errors are trapped in small trapping sets, so the distribution is concentrated on small numbers (see Case 1 after base decoding). For ease of use, we refer to codewords with fewer than \(11\) remaining errors as small-error words. The post decoder, which is mainly trained on these small-error words, corrects a significant number of small-error words (see Case 1 after post decoding). 
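The collection step behind Case 1 and Case 2 amounts to running the trained base decoder on random channel outputs at the chosen \(\mathrm{E_{b}/N_{0}}\) point and keeping only the decoding failures as training samples. A minimal sketch is given below; `encode` and `decode_base` are placeholders for the code's encoder and the trained 20-iteration base decoder and are not defined here.

```python
import numpy as np

def collect_uncorrected_words(encode, decode_base, snr_db, rate, n, num_targets, rng):
    """Gather channel outputs that the trained base decoder fails to correct.

    encode and decode_base are placeholders: encode maps a message to a codeword in {0,1}^n,
    and decode_base returns the hard-decision word after base decoding.
    """
    sigma = np.sqrt(1.0 / (2.0 * rate * 10.0 ** (snr_db / 10.0)))  # AWGN noise std from Eb/N0
    dataset = []
    while len(dataset) < num_targets:
        msg = rng.integers(0, 2, size=int(n * rate))
        cw = encode(msg)
        llr = 2.0 * ((1.0 - 2.0 * cw) + sigma * rng.standard_normal(n)) / sigma ** 2
        decoded = decode_base(llr)
        if not np.array_equal(decoded, cw):  # keep only decoding failures
            dataset.append((llr, cw))
    return dataset

# Usage for Case 1: uncorrected words sampled at Eb/N0 = 4.5 dB for the rate-3/4 length-576 code.
# rng = np.random.default_rng(0)
# samples = collect_uncorrected_words(encode, decode_base, 4.5, 3 / 4, 576, 60_000, rng)
```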
Out of the total \(5{,}000\) test samples, \(68.5\%\) of samples are corrected by the post decoder, resulting that the test FER for the test samples is \(0.315\). This means that, when decoding for received words of the AWGN channel, the resulting FER at \(\mathrm{E_{b}/N_{0}}=4.5\) dB after post decoding is \(0.315\) times of the FER after base decoding as shown in Fig 3(a). In other words, the post decoder is successfully trained to correct small-error words inducing the error-floor. On the other hand, Case 2, where the samples are collected from the waterfall region, has a distribution that is widespread across all areas (see Case 2 after base decoding). In addition, the distribution remains almost the same after post decoding (see Case 2 after post decoding), which means that the post decoder fails to reduce the FER sufficiently. For the test samples, the test FER at \(\mathrm{E_{b}/N_{0}}=3.5\) dB after post decoding is \(0.77\) times of the FER after base decoding whose difference is not noticeable as shown in Fig 3(a). Comparing the results of the two cases, we conclude that composing mainly with small-error words facilitates the post decoder to learn to correct small-error words more effectively. As a result, Fig. 3(a) shows that Case 1 mitigates the error-floor more than Case 2. Meanwhile, for Case 3, where all received words are used as training samples without filtering, almost all of them are corrected during base decoding. Since the post stage training is mainly performed on codewords without errors, the loss function becomes almost \(0\). Then, the weights of the post decoder are unchanged from the initial value \(1\), and accordingly, the performance approaches the MS decoding performance, as shown in Fig. 3(a). ### Block-wise training schedule In the previous subsection, the number of iterations for the post decoder is set to \(\ell_{2}=10\). To lower the error-floor further, a large number of iterations is required, so we set \(\ell_{2}=30,\overline{\ell}=50\). However, deep neural decoders with a large iteration number are prone to suffer from the vanishing gradient problem. In order to tackle this issue, we propose a block-wise training schedule which is shown in Fig. 4(a). The proposed training schedule locally trains the weights corresponding to a block of \(\Delta_{1}\) iterations at each training stage. In the first stage, the weights belonging to the first block are trained, Figure 3: (a) Decoding performances of the MS, WMS, NMS decoders and (b) Error distributions after base and post decoding for Case 1 and Case 2. and in the next stage, the weights of the subsequent \(\Delta_{1}\) iterations are trained. At this point, the weight values of previous \(\Delta_{2}\) iterations, which are already trained in the first stage, are further trained by taking the result of the first stage training as its initial state. This retraining, which is not used in the iter-by-iter training schedule [26], assists in preventing the learning process from falling into a local minimum. Note that the method of one-shot training [5] corresponds to the case of \(\Delta_{1}=\ell_{2},\Delta_{2}=0\), and the iter-by-iter training schedule [26] is equivalent to the case of \(\Delta_{1}=1,\Delta_{2}=0\). We employ a greedy approach to determine the optimal values for \(\Delta_{1}\) and \(\Delta_{2}\), resulting that \(\Delta_{1}=5,\Delta_{2}=10\) offers superior performance in terms of the test FER. Fig. 4(b) shows the evolution of test FER as a function of the iteration number. 
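The bookkeeping of this schedule can be sketched as follows. The function only enumerates which post-decoder iterations are trainable at each stage (the new block of \(\Delta_{1}\) iterations together with the retrained \(\Delta_{2}\) preceding iterations) and which are frozen, assuming \(\ell_{2}\) is a multiple of \(\Delta_{1}\); the actual weight updates are omitted.

```python
def blockwise_schedule(l1, l2, delta1, delta2):
    """Enumerate the training stages of the block-wise schedule.

    l1     : iterations of the already trained base decoder (kept fixed)
    l2     : iterations of the post decoder (assumed to be a multiple of delta1)
    delta1 : size of the new block trained at each stage
    delta2 : number of previously trained iterations that are retrained rather than frozen
    Returns a list of (trainable_iterations, frozen_post_iterations) pairs, with iteration
    indices counted from l1 + 1.
    """
    stages = []
    for end in range(delta1, l2 + 1, delta1):
        start = max(1, end - delta1 - delta2 + 1)  # retrain the preceding delta2 iterations
        trainable = list(range(l1 + start, l1 + end + 1))
        frozen = list(range(l1 + 1, l1 + start))
        stages.append((trainable, frozen))
    return stages

for k, (train, frozen) in enumerate(blockwise_schedule(20, 30, 5, 10), 1):
    print(f"stage {k}: train iterations {train[0]}-{train[-1]}, freeze {len(frozen)} post iterations")
# One-shot training corresponds to delta1 = l2 and delta2 = 0, while the iter-by-iter
# schedule corresponds to delta1 = 1 and delta2 = 0.
```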
In the case of one-shot training [5], the vanishing gradient problem hinders training the weights of earlier iterations, so the test FER stays roughly the same until iteration \(40\) and starts to fall thereafter. The same behavior is observed for \(\overline{\ell}=40\). Thus, the test FERs of \(\overline{\ell}=40\) and \(\overline{\ell}=50\) are almost the same. Next, since the iter-by-iter training schedule [26] finds the optimal weight for each individual iteration, the test FER falls even at the first iteration of the post decoder (i.e., \(\ell=21\)) without encountering the vanishing gradient problem. However, this local optimization leads to a degraded local minimum, and consequently the test FER decreases only slowly with each iteration. Likewise, the multi-loss method [5] shows a similar result. In contrast, the block-wise training schedule with \(\Delta_{1}=5,\Delta_{2}=0\) shows a superior test FER at iteration \(25\) compared to the other training schedules because it reaches a better solution by training the weights of multiple iterations simultaneously. Moreover, the schedule with retraining (i.e., \(\Delta_{1}=5,\Delta_{2}=10\)) outperforms the schedule without retraining (i.e., \(\Delta_{1}=5,\Delta_{2}=0\)) at iteration \(30\), though it shows a worse result at iteration \(25\). This implies that, through retraining, the weights of intermediate iterations have been adjusted to preprocess error patterns, thereby leading to stronger correction capability in the final iteration. As a result, at the maximum of \(50\) iterations, the proposed training schedule with \(\Delta_{1}=5,\Delta_{2}=10\) provides the best test FER among the training schedules, as shown in Fig. 4(b): \(0.11\) for the block-wise schedule, \(0.16\) for the multi-loss method, \(0.18\) for the one-shot schedule, and \(0.37\) for the iter-by-iter schedule.

Figure 4: (a) Block-wise training schedule and (b) evolution of the test FER across iterations.

### Weight sharing technique using UCN weights

Assuming the techniques proposed thus far are used (specifically, uncorrected words at \(\mathrm{E_{b}/N_{0}}=4.5\) dB and the training schedule with \(\Delta_{1}=5,\Delta_{2}=10\)), we compare the weight sharing techniques in Fig. 5.

Figure 5: Illustration of the proposed weight sharing technique and comparison with other sharing techniques.

Compared to the full diversity weights, the spatial and temporal sharing techniques significantly reduce the number of distinct weights, but cause performance degradation. In contrast, the proposed sharing technique, which introduces a new weight type called the UCN weight, shows almost identical performance while using only about 2.6% of the weights of the full diversity case. The proposed sharing technique assigns different weights to UCNs and SCNs, as shown in Fig. 5. This is feasible because the decoder knows whether a CN satisfies its check equation or not. Using the spatial sharing technique and distinguishing between the SCN weight \(w^{(\ell)}\) and the UCN weight \(\hat{w}^{(\ell)}\), the proposed sharing technique can be represented as \(\{\overline{w}^{(\ell)},w^{(\ell)},\hat{w}^{(\ell)}\}_{\ell}\) for iteration \(\ell\), and the total number of distinct weights becomes \(3\ell_{2}\). Techniques using different weights for UCNs and SCNs have also been proposed in [33, 41]. However, the work [41] uses only one suppression factor \(\rho\) to represent the UCN weight (i.e., \(\hat{w}^{(\ell)}=(1+\rho)w^{(\ell)}\)), making the UCN weight dependent on the CN weight.
As a result, due to the limited degree of freedom for the UCN weight, it is difficult to obtain the decoding diversity needed to remove various types of error patterns effectively. Moreover, in [33], if at least one of the CNs belonging to a single proto CN is unsatisfied, all \(z\) CNs from that proto CN are weighted by the UCN weight. This approach, which applies the same weight to a large number of CNs tied together at the proto level, is not suitable for correcting words with a small number of UCNs, because it does not handle individual CNs separately like the proposed method.

## 4 Performance evaluation

In this section, we compare the proposed and other conventional decoding schemes in terms of the decoding performance. All simulations are performed on an NVIDIA GeForce RTX 3090 GPU and an AMD Ryzen 9 5950X 16-core CPU. The weights are trained with the Adam optimizer [42] using a learning rate of \(0.001\). We evaluate the decoding performance using Monte Carlo methods with at least \(500\) uncorrected words for each FER point.

Fig. 6(a) shows the FER performance of the proposed scheme for the WiMAX LDPC code.

Figure 6: FER performances of (a) WiMAX LDPC (length \(576\), rate \(3/4\)), (b) IEEE802.11n LDPC (length \(648\), rate \(5/6\)), and (c) 5G LDPC (length \(552\), rate \(1/2\)) codes.

The proposed scheme incorporates i) boosting learning with uncorrected words, ii) the block-wise training schedule with retraining, and iii) spatial weight sharing with UCN weights. The performance is compared with MS decoding, WMS decoding, and the existing NMS decoding schemes in [5, 26]. Among the neural decoder studies listed in Table 1, we exclude comparison with the studies that use FAID and layered decoding, or that require enumerating trapping sets and absorbing sets. In addition, we choose not to compare with augmented neural networks [6, 34] since our approach does not increase model complexity to deal with the low-error-rate region of long codes. A comparative analysis for short codes in the waterfall region can be found in the appendix.

For the NMS decoding schemes in [5, 26], the base decoder is used for iterations \(1\) to \(20\), as in the proposed scheme, and the training methods introduced in [5, 26] are employed for the post stage. The full diversity weights are used for the schemes in [5, 26] to observe their best performance. For the scheme in [5], received words in the waterfall region (\(\mathrm{E_{b}/N_{0}}\) of \(2\)-\(4\) dB) are used as training samples, and the weights for the post stage are trained all at once without a training schedule. For the scheme in [26], received words from the \(\mathrm{E_{b}/N_{0}}\) points where the MS decoder achieves a bit error rate of \(10^{-3}\) are used as training samples, and the iter-by-iter training schedule is employed. The remaining hyper-parameters are set in the same way as in the proposed scheme.

As shown in Fig. 6(a), the conventional NMS decoders in [5, 26] show good performance in the waterfall region (\(\mathrm{E_{b}/N_{0}}\) of \(2\)-\(4\) dB), but the error-floor occurs from \(4\) dB. This is because the training samples are composed of received words without filtering. In contrast, the proposed scheme shows excellent performance in both the waterfall and error-floor regions, and the error-floor phenomenon is barely noticeable down to an FER of \(10^{-7}\).
In particular, comparing the results for \(\overline{\ell}=20\) and \(\overline{\ell}=50\) confirms that the post decoder successfully removes the error-floor. In addition, we compare the proposed scheme with the state-of-the-art post-processing scheme in [22], directly referencing the simulation results from [22]. As shown in Fig. 6(a), the scheme in [22] shows performance similar to or worse than the proposed scheme, but it has the disadvantage of very high decoding complexity and latency, since it consumes a large number of iterations (\(\overline{\ell}=150\)).

Table 2 compares the schemes in terms of decoding complexity. The NMS decoder requires \((E+N)z\) more multiplications than the MS decoder due to the weighting operation; the number of other operations is the same as in the MS decoder. The total complexity is evaluated under the assumption that a comparison \(C\) is twice as complex as an addition \(A\) or a multiplication \(M\) [43]. The additional memory for storing the weights of the proposed scheme is \(3\overline{\ell}\), which is much lower than that of [5, 26], which exploit full weight diversity. Since the scheme in [22] does not use weighting, its complexity per iteration is lower than that of the proposed NMS scheme, but its total complexity is more than twice as high due to the larger number of iterations. Moreover, additional complexity is required for the error path detector [22]. In Figs. 6(b) and (c), similar results are observed for the IEEE802.11n LDPC and 5G LDPC codes, where the proposed scheme outperforms the other schemes and achieves an FER of \(10^{-7}\) without a severe error-floor.

## 5 Conclusions

This paper proposed training methods for the NMS decoder of LDPC codes to enhance the error-floor performance. Using uncorrected words from the base decoder, we trained the post decoder to be specialized for the error patterns causing the error-floor, promoting decoding diversity in the cascaded base and post decoders. We also proposed a training schedule to circumvent the vanishing gradient and local minimum problems, and a weight sharing technique that significantly reduces the number of distinct weights without sacrificing performance. The proposed NMS decoder using the trained weights showed excellent waterfall and error-floor performance for several standard LDPC codes. Along with the performance improvement, the proposed training scheme has the advantage of being flexibly applicable regardless of the type of channel, code, and decoding algorithm. The scheme can also be implemented on hardware architectures without additional cost and used without prior analysis of the target code and decoding algorithm.

## 6 Acknowledgments

This work was supported by Samsung Electronics Co., Ltd (IO230411-05859-01), by Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government [2021-0-00746, Development of Tbps wireless communication technology], by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00212103), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00247197).
2305.03144
Influence of various text embeddings on clustering performance in NLP
With the advent of e-commerce platforms, reviews are crucial for customers to assess the credibility of a product. The star ratings do not always match the review text written by the customer. For example, a three star rating (out of five) may be incongruous with the review text, which may be more suitable for a five star review. A clustering approach can be used to relabel reviews with more appropriate star ratings by grouping similar text reviews together. In this work, we explore the task of choosing different text embeddings to represent these reviews and also explore the impact the embedding choice has on the performance of various classes of clustering algorithms. We use contextual (BERT) and non-contextual (Word2Vec) text embeddings to represent the text and measure their impact on three classes of clustering algorithms - partitioning based (KMeans), single linkage agglomerative hierarchical, and density based (DBSCAN and HDBSCAN), each with various experimental settings. We use the silhouette score, adjusted rand index score, and cluster purity score metrics to evaluate the performance of the algorithms and discuss the impact of different embeddings on the clustering performance. Our results indicate that the type of embedding chosen drastically affects the performance of the algorithm, that the performance varies greatly across different types of clustering algorithms, that no embedding type is consistently better than the others, and that DBSCAN outperforms KMeans and single linkage agglomerative clustering but also labels more data points as outliers. We provide a thorough comparison of the performances of the different algorithms and numerous ideas to foster further research in the domain of text clustering.
Rohan Saha
2023-05-04T20:53:19Z
http://arxiv.org/abs/2305.03144v1
# Influence of various text embeddings on clustering performance in NLP

###### Abstract

With the advent of e-commerce platforms, reviews are crucial for customers to assess the credibility of a product. The star ratings do not always match the review text written by the customer. For example, a three star rating (out of five) may be incongruous with the review text, which may be more suitable for a five star review. A clustering approach can be used to relabel reviews with more appropriate star ratings by grouping similar text reviews together. In this work, we explore the task of choosing different text embeddings to represent these reviews and also explore the impact the embedding choice has on the performance of various classes of clustering algorithms. We use contextual (BERT) and non-contextual (Word2Vec) text embeddings to represent the text and measure their impact on three classes of clustering algorithms - partitioning based (KMeans), single linkage agglomerative hierarchical, and density based (DBSCAN and HDBSCAN), each with various experimental settings. We use the silhouette score, adjusted rand index score, and cluster purity score metrics to evaluate the performance of the algorithms and discuss the impact of different embeddings on the clustering performance. Our results indicate that the type of embedding chosen drastically affects the performance of the algorithm, that the performance varies greatly across different types of clustering algorithms, that no embedding type is consistently better than the others, and that DBSCAN outperforms KMeans and single linkage agglomerative clustering but also labels more data points as outliers. We provide a thorough comparison of the performances of the different algorithms and numerous ideas to foster further research in the domain of text clustering.

_Keywords_ Clustering Product Reviews Machine Learning Text Embeddings

## 1 Introduction

E-commerce websites are prevalent in the digital era, and placing orders online for commodities is seamless. E-commerce websites have a feedback system where customers can submit a review for a product, which usually includes a star rating along with a text review. These reviews are intended to represent the sentiments of the customer towards that specific product. A higher rating for a product indicates that the product serves its intended purpose, which in turn builds the customer's trust in that product. Moreover, the ratings and the corresponding reviews help businesses improve their products. However, there are instances where the assigned rating does not reflect the implied sentiment of the review. For example, a review with a rating of two may have a review text that might be better suited for a four star rating. In other words, mismatches between the rating and the review text's sentiment may mislead a potential customer and possibly be detrimental to the business. Using a clustering algorithm can help us group reviews of similar sentiment and potentially assign a rating appropriate to the underlying sentiment of a review. But how do we select a clustering algorithm for such data? And what type of data representation do we use to characterize the review text? We investigate such questions in this work. In the domain of text mining, clustering algorithms have been widely used to find underlying patterns in data ([1, 16, 7, 6] to name a few). Such works focus on specific text clustering tasks and use particular types of numerical vectors to represent the text.
There is a paucity of work exploring the impact of different text representations on the performance of different types of clustering algorithms. In other words, how do we choose a text representation for a specific type of clustering algorithm? Our work presents a crucial step in this direction. In this work, we compare the performance of various types of clustering algorithms when applied to different types of text representations (embeddings). We obtain textual data from product reviews from Amazon.com and represent them using different types of embeddings. For each type of embedding, we train four types of clustering algorithms, namely partitioning, hierarchical, and two density based, and compare the performance of each algorithm using internal and external validation techniques. Our results indicate that the choice of text representation has a drastic effect on the performance of a clustering algorithm, and that density based algorithms may perform better than other types of clustering algorithms. We also found that the number of clusters identified by the clustering algorithms is not always equal to the number of predefined labels (in our case, the labels are the ratings assigned to each review). All data and code are available at [https://github.com/simpleParadox/cmput_697_project](https://github.com/simpleParadox/cmput_697_project)

## 2 Methods

In this section, we explain the different components of our experimental paradigm. For simplicity, we show an overview of the experimental framework in Figure 1.

Figure 1: Experimental framework diagram. First, the reviews are loaded and preprocessed, where the tokenization of reviews takes place. Then, various embeddings are obtained for each review, on which different clustering algorithms are trained. To evaluate the performance of the clustering algorithms, we use internal validation (silhouette score) and external validation (adjusted rand index score and cluster purity).

### Dataset and Preprocessing

In this work, we use the Amazon product reviews dataset1 that contains 1597 samples and 27 columns of primarily consumer electronics from the Amazon brand. We remove the samples where the rating is not present for a review. The final dataset contains 1177 samples. Each sample contains the product name, product brand, number of stars for the rating, review text, review title, etc. (refer to the dataset link for the full description of each feature). Given the scope of this work, we will only use the review text, review title, and the star rating accompanying each review. The star ratings are in the range of 1-5. We concatenate the review title and the review text to represent the samples in the dataset. The concatenation ensures that the review title contributes to the overall information of the review. We also truncate each concatenated sample to a maximum of 512 tokens, as this is the maximum input length of the language models (discussed in Section 2.2). We show the preprocessing steps in Figure 2. We show the distribution of the ratings in Figure 3. We observe that the ratings in the dataset are not uniformly distributed.

Figure 3: Distribution of ratings in the dataset. The majority of the reviews in the dataset are skewed towards higher ratings.

### Embeddings

Our goal is to investigate the effect of different types of text embeddings on the performance of clustering algorithms. To this end, we use two language models: the pretrained BERT [4]
(pretrained on the BookCorpus dataset and the English Wikipedia dataset) and pretrained Word2Vec [12] (pretrained on the Google News dataset) to represent each review in the dataset. We use the Word2Vec implementation from Gensim [14] and the BERT implementation from Huggingface2.

Figure 2: Preprocessing framework depicting the steps involved in data cleaning. First, we remove the reviews without a rating. We then select the review title and review text, and concatenate the two. We truncate the concatenated sample to have a maximum length of 512 words. Finally, we tokenize the input to obtain the text representations. A dataset is then formed that contains the data.

For each sample, we concatenate the review title and the review text before feeding them into the language models. For Word2Vec, we first tokenize the sample and then obtain the hidden vector (embedding) for each token (word) from the language model. Each hidden vector has 300 dimensions. Finally, we average the hidden vectors over all tokens to obtain a single 300-dimensional vector for the review sample. Word2Vec embeddings do not capture the contextual information present in the review. Therefore, we also use BERT to account for contextual variations in the review text, as previous work has shown that BERT embeddings outperform other text embeddings [16]. To obtain BERT embeddings, we tokenize the sample using the pretrained BERT tokenizer and then feed the tokenized result into the BERT model. We use two types of BERT embeddings to represent each review. First, we use the <CLS> token representation of the review, which is a single 768-dimensional vector that takes into consideration all the tokens in the input review text. Second, we obtain the last hidden state (768-dimensional) vector for each token, which we average to obtain a single 768-dimensional vector. For each type of embedding, we use StandardScaler from scikit-learn [2] to z-score the data. To summarize, we use three types of embeddings:

* Word2Vec - Average: 300 dimensional.
* BERT - CLS: 768 dimensional.
* BERT - Average: 768 dimensional.

Footnote 2: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)

Each embedding type encodes the text differently. Moreover, as the embeddings vary in the number of dimensions, their distribution in space is also different. To visualize the distribution of the embeddings in space, we use a dimensionality reduction technique to plot the embeddings in 2 dimensions. We apply t-SNE [9] with 2 components to each embedding type. We show the plots for each embedding in Figure 4. We observe that each embedding type is distributed differently in space. This characteristic of the data may result in the varied performance of different clustering algorithms.

### Clustering Algorithms

To find patterns in the vector representations of the reviews, we apply four clustering algorithms and compare each algorithm's performance. We use KMeans [8, 10] and single linkage agglomerative hierarchical clustering [13] as the partitioning and hierarchical based methods, respectively, and DBSCAN [5] and HDBSCAN [3] as the density based methods. We use the implementations from scikit-learn [2] for KMeans, agglomerative hierarchical single linkage clustering, and DBSCAN. For HDBSCAN, we use the implementation from the hdbscan library [11].

Figure 4: t-SNE plots in 2 dimensions for the different embedding types used in this work. Figure 4(a) shows the distribution of the Word2Vec embeddings, Figure 4(b) shows the distribution of the BERT - CLS embeddings, and Figure 4(c) shows the distribution of the BERT - Average embeddings. None of the embeddings are similarly distributed in space.
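The embedding pipeline described in the Embeddings subsection can be reproduced roughly as follows. This is a sketch, not the authors' code: it assumes the `bert-base-uncased` checkpoint from Huggingface and the `word2vec-google-news-300` vectors loaded through Gensim's downloader (the paper does not state the exact loading route), and the simple whitespace tokenization for Word2Vec is an assumption of the sketch.

```python
import numpy as np
import torch
from transformers import BertTokenizer, BertModel
import gensim.downloader as api
from sklearn.preprocessing import StandardScaler

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()
w2v = api.load("word2vec-google-news-300")   # assumed gensim download route

def bert_embeddings(text, max_length=512):
    """Return (CLS vector, mean of last hidden states) for one review."""
    enc = tokenizer(text, truncation=True, max_length=max_length, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]          # (seq_len, 768)
    return hidden[0].numpy(), hidden.mean(dim=0).numpy()   # [CLS] is the first token

def word2vec_embedding(text):
    """Average the 300-d vectors of the in-vocabulary tokens of a review."""
    vecs = [w2v[t] for t in text.lower().split() if t in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(300)

reviews = ["Great battery life. Works as advertised.",
           "Stopped working after a week. Very disappointed."]
pairs = [bert_embeddings(r) for r in reviews]
X_cls = np.vstack([p[0] for p in pairs])
X_avg = np.vstack([p[1] for p in pairs])
X_w2v = np.vstack([word2vec_embedding(r) for r in reviews])
# z-score each embedding matrix before clustering, as described above.
X_cls, X_avg, X_w2v = (StandardScaler().fit_transform(X) for X in (X_cls, X_avg, X_w2v))
```

Each of the three matrices would then be handed to the clustering algorithms described above.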
### Evaluation

Numerous validation metrics have been proposed to evaluate the performance of clustering algorithms (see [18] and [19] for a review). The validation metrics are generally categorized into internal and external validation. For internal validation, we use the silhouette score3. For external validation, we use the adjusted rand index4 and cluster purity, which measures the extent to which a cluster contains a single class.

Footnote 3: [https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html)

Footnote 4: [https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html)

### Experimental Settings

Our dataset has text reviews for which the ratings range from 1 to 5. For the KMeans and the single linkage agglomerative hierarchical algorithms, we conduct two variants of the experiments. First, we set the number of clusters to 5 (one for each rating value) and apply the various clustering algorithms to the dataset. Second, we apply the clustering algorithms to the dataset as if there were 3 clusters. We transform the ratings into three class labels by applying the following transformation.

\[\text{rating}=\begin{cases}1&\text{if rating}<3\\ 2&\text{if rating}=3\\ 3&\text{if rating}>3\end{cases}\]

It is useful to note that we only specify the number of clusters as a hyperparameter for KMeans and the agglomerative hierarchical clustering. For KMeans, we initialize the algorithm with the k-means++ method, where the initial cluster centroids are selected using a sampling based approach from the empirical probability distribution of the points' contribution to the overall inertia5. We set the maximum number of iterations to 300. For DBSCAN, we tune the hyperparameter \(\epsilon\) (epsilon), which controls the maximum distance between two samples for them to be considered in the neighbourhood of each other, and fix min_samples to a value of 5, which defines the number of samples in the neighbourhood of a point for it to be considered a core point. For the HDBSCAN algorithm, we only tune the parameter min_cluster_size, which controls the minimum number of points for a cluster to form. We discuss the values for the hyperparameters in the Results section (Section 3). For all the algorithms, we use the Euclidean distance metric.

Footnote 5: [https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html)
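The experimental settings above map onto the following library calls; this is a small sketch, with the swept hyperparameters (number of clusters, \(\epsilon\), min_cluster_size) passed in as arguments since their values are only discussed in Section 3.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
import hdbscan

def to_three_classes(ratings):
    """Collapse the 1-5 star ratings into the 3 classes used in the second experiment."""
    r = np.asarray(ratings)
    return np.where(r < 3, 1, np.where(r == 3, 2, 3))

def build_models(n_clusters, eps, min_cluster_size, seed=0):
    """Instantiate the four clustering algorithms with the settings described above."""
    return {
        "kmeans": KMeans(n_clusters=n_clusters, init="k-means++",
                         max_iter=300, random_state=seed),
        "single_linkage": AgglomerativeClustering(n_clusters=n_clusters,
                                                  linkage="single"),
        "dbscan": DBSCAN(eps=eps, min_samples=5, metric="euclidean"),
        "hdbscan": hdbscan.HDBSCAN(min_cluster_size=min_cluster_size,
                                   metric="euclidean"),
    }

models = build_models(n_clusters=5, eps=0.5, min_cluster_size=10)  # example values
```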
## 3 Results and Discussion

Various types of embeddings may affect the performance of different clustering algorithms; we investigate these effects in this section.

### KMeans

We start by exploring the effect of choosing various embedding types on the KMeans algorithm. One of the crucial hyperparameters of the KMeans algorithm is the number of clusters, as KMeans requires the user to specify the number of clusters in the data. The review dataset contains reviews categorized into five categories (ratings). However, this categorization may not reflect itself in the number of clusters in the embedding space. The question then arises: what value should we specify for the number of clusters? One of the popular methods to choose the optimal number of clusters is the elbow (or knee) method [17], where the inertia (within-cluster sum of squares) is graphed on the ordinate (y-axis) and the number of clusters on the abscissa (x-axis); the point on the graph where there is a 'break' or 'knee' is chosen as the optimal number of clusters. However, when we plotted the inertia against the number of clusters, there was no discernible break in the graph. Therefore, we chose the silhouette score method to determine the optimal number of clusters. The silhouette score is a metric generally used for internal validation of clustering algorithms, but it can also be used to choose the number of clusters (n_clusters) for the KMeans algorithm [15]. We tested a range of values (3-19) and plotted the silhouette score for each value. Since the performance of the KMeans algorithm also depends on the initialization of the cluster centroids, for each value of n_clusters we used seed values between 1 and 50 to initialize the cluster centroids.

In Figure 5, we show the internal validation scores for KMeans for each embedding type, averaged across the seeds, with the standard error of the mean shown along the curves. Each curve on the graph represents an embedding type. The red curve denotes the results when the KMeans algorithm is trained on the BERT - CLS embeddings, while the remaining two curves show the results for the BERT - Average and Word2Vec - Average embeddings. We observe that the silhouette score for all three embeddings is maximum when the number of clusters is 3, and this score stabilizes as the number of clusters increases. Such a result indicates that the optimal number of clusters is 3. Our dataset, however, has five ratings (categories). This brings up the question: does the data belong to three clusters instead of five? To this end, we use external validation to evaluate the performance of the KMeans algorithm for each embedding type. We show the external validation scores in Figure 6 for both three clusters and five clusters. The adjusted rand scores for the BERT - CLS embedding outperform those of BERT - Average and Word2Vec - Average. Although such a result is consistent with the internal validation scores, the difference between the scores is not significant. From the external validation results, it is not clear whether choosing 3 or 5 clusters results in better model performance. Moreover, the adjusted rand scores for both three and five clusters are close to zero, indicating poor clustering quality and underscoring the need for careful evaluation of clustering algorithms.

We also use the cluster purity measure as an evaluation method to assess the extent to which a cluster contains a single class. We show the cluster purity results in Figure 7. The scale of the purity values is important to note. The purity value is lowest when the number of clusters is close to 3 or 5. As the number of clusters increases, the purity value increases. This is because, as the number of clusters grows, fewer points belong to each cluster. When the number of clusters is equal to 3, the silhouette scores were the highest (see Figure 5). The purity scores also indicate that the clusters identified by the KMeans algorithm are possibly poor in quality.
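Cluster purity is not provided out of the box by scikit-learn; one common formulation, which we assume matches the paper's usage, scores each predicted cluster by its majority rating. Excluding noise points (label -1 from the density based algorithms) is consistent with the later statement that purity is calculated only on the identified clusters.

```python
import numpy as np
from sklearn.metrics import silhouette_score, adjusted_rand_score

def cluster_purity(ratings, pred_labels):
    """Fraction of points whose cluster's majority rating equals their own rating.

    Ratings are assumed to be small non-negative integers (1-5 or 1-3 here);
    points labelled -1 (noise) are ignored."""
    ratings = np.asarray(ratings)
    pred_labels = np.asarray(pred_labels)
    keep = pred_labels != -1
    ratings, pred_labels = ratings[keep], pred_labels[keep]
    majority_total = sum(np.bincount(ratings[pred_labels == c]).max()
                         for c in np.unique(pred_labels))
    return majority_total / len(ratings)

# Typical evaluation of one fitted model, e.g. labels = model.fit_predict(X):
# silhouette_score(X, labels), adjusted_rand_score(ratings, labels), cluster_purity(ratings, labels)
```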
Figure 5: Internal validation scores for KMeans with a varying number of clusters. Since the initialization of the cluster centroids affects the performance of the KMeans algorithm, we run the algorithm with starting seed values in the range 1-50. The average results are shown here, and the standard error of the mean is shown with shaded areas along the curves.

Figure 6: External validation results for KMeans averaged across fifty seed values (seeds 1-50). BERT - CLS embeddings for n_clusters = 3 seem to outperform the other embeddings. However, the values are close to zero, indicating poor clustering quality.

Figure 7: Cluster purity scores for the KMeans algorithm. As the number of clusters increases, the purity score increases. However, purity scores of around 0.65 indicate sub-optimal clustering quality.

All in all, BERT - CLS can be the choice of embedding if KMeans is used to cluster text embeddings, but caution must be exercised as optimal results may not be guaranteed. In the next section, we investigate the performance of the agglomerative single linkage clustering algorithm for clustering the text embeddings.

### Single Linkage Agglomerative Hierarchical

Although our dataset contains reviews categorized into 5 categories (ratings), the KMeans analysis (see Section 3.1) suggested that the data may belong to a different number of clusters. Hierarchical clustering is generally used when the number of clusters is not known in advance, and a distance threshold (cut-off threshold) can be used on a dendrogram to obtain a fixed number of clusters. For simplicity and consistency, we tune a single hyperparameter: we specify a range of values for n_clusters, for which the algorithm automatically decides the cut-off threshold to obtain the number of clusters. Similar to the analysis in Section 3.1, we use the range 3-19 for the hyperparameter n_clusters.

We show the internal validation results in Figure 8. For BERT - CLS, the single linkage agglomerative clustering algorithm outperforms the other embeddings when the number of clusters is 3. However, this effect is transient, and the silhouette score drops quickly when the number of clusters is 5. We also observe a similar effect when the model is trained on the BERT - Average embeddings. On the other hand, when using Word2Vec - Average embeddings, the silhouette score of the model is greater than those obtained with the BERT - CLS and BERT - Average embeddings when the number of clusters is greater than 3. Such a result indicates that the Word2Vec embeddings may be distributed in the space in a manner that enables the clustering algorithm to find clusters that have good cohesion and are relatively well separated from other clusters. However, since single-linkage hierarchical clustering only considers the closest points between two clusters before merging them, it may cause the clusters to be connected in a chain-like manner. This may also result in the silhouette scores remaining relatively stable even as the number of clusters increases.

When comparing the silhouette scores for the single linkage agglomerative hierarchical clustering to those observed during the KMeans analysis, the silhouette scores for the single linkage agglomerative clustering are higher than those obtained with the KMeans algorithm. Although this may indicate that the single linkage agglomerative clustering performs better
than KMeans, the external validation results in Figure 9 indicated poor clustering quality, as the adjusted rand scores were close to a value of zero. The cluster purity scores observed in Figure 10 also suggested the poor clustering quality of the single linkage agglomerative clustering algorithm. Similar to the KMeans results, the purity scores were in the range of 0.60 to 0.67, further providing evidence that the single linkage agglomerative hierarchical clustering algorithm is not better at finding clusters in the underlying data space.

Figure 8: Internal validation scores for Single Linkage Agglomerative Hierarchical Clustering.

Figure 9: External validation scores for Single Linkage Agglomerative Hierarchical Clustering.

### DBSCAN

Besides exploring KMeans and single linkage hierarchical clustering, we also apply the density based algorithms. To evaluate the density based algorithms, we only use the silhouette score as the internal validation metric. As reviewed previously, we only tune the value of the epsilon hyperparameter and keep the value of the min_samples hyperparameter fixed to 5. For the DBSCAN algorithm, we show the results in Figure 11. We observe silhouette scores close to 1.0 for lower values of epsilon, suggesting that DBSCAN can find well defined clusters and performs better than KMeans and single linkage agglomerative clustering.

However, the number of clusters identified by DBSCAN is not equal to those identified by KMeans and single linkage agglomerative clustering. We saw in Sections 3.1 and 3.2 that a lower number of clusters resulted in higher silhouette scores; around 3 clusters. On the other hand, for DBSCAN, we observed more clusters for higher silhouette scores. For epsilon=0.5, there were 13 clusters for the BERT - Average embeddings, 16 for the BERT - CLS embeddings, and 22 for the Word2Vec - Average embeddings. These results suggest that it is not always the case that model performance is consistent across different algorithms when applied to the same data. There was also an interesting effect in terms of the change in the silhouette score when the value of epsilon was set to 15.0. The silhouette scores dropped drastically for BERT - CLS and Word2Vec - Average. However, the silhouette score for BERT - Average did not drop drastically. This may be due to the underlying distribution of the BERT - Average embeddings in the data space, which enables the algorithm to identify clusters that are more well defined than those of the BERT - CLS and Word2Vec - Average embeddings. Therefore, one may further investigate the possibility of using BERT - Average embeddings when using DBSCAN to cluster textual review data.

Such results may suggest that indeed more clusters exist in the data. But one must be cautious before drawing conclusions, as the number of points in each identified cluster may be low. It must be noted that while training the DBSCAN algorithm, we fixed the value of min_samples to 5, which may have affected the results. Moreover, we also observed that when \(\epsilon\) was set to 0.5, many points were identified as noise / outliers. We show the number of noise points for \(\epsilon=0.5\) and \(\epsilon=15.0\) in Table 1. Therefore, each cluster contains very few points.
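The \(\epsilon\) sweep and the noise-point counts in Table 1 can be reproduced along the following lines. This is a sketch: `X` stands for one of the z-scored embedding matrices, and computing the silhouette only on the non-noise points is our assumption, since the paper does not state how noise was handled in that metric.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

def dbscan_sweep(X, eps_values=(0.5, 1.0, 5.0, 10.0, 15.0), min_samples=5):
    """For each eps, report the number of clusters, noise points, and silhouette score."""
    rows = []
    for eps in eps_values:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        n_noise = int(np.sum(labels == -1))
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        mask = labels != -1
        sil = (silhouette_score(X[mask], labels[mask])
               if n_clusters >= 2 else float("nan"))
        rows.append({"eps": eps, "clusters": n_clusters,
                     "noise_points": n_noise, "silhouette": sil})
    return rows
```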
The internal validation scores and the number of outliers for each embedding type suggest that even though the silhouette scores may appear to be high, the presence of outliers does not necessarily make the DBSCAN algorithm _better_ than the KMeans or the single linkage agglomerative hierarchical algorithms. One possible explanation for such results could be related to the distribution of the data points in the embedding space, and we discuss this point in more detail in the limitations section (Section 4).

\begin{table} \begin{tabular}{c|c|c} Embedding type & \(\epsilon=0.5\) & \(\epsilon=15.0\) \\ \hline BERT - CLS & 1056 & 574 \\ BERT - Average & 1099 & 931 \\ Word2Vec - Average & 973 & 376 \\ \end{tabular} \end{table} Table 1: Number of noise points for DBSCAN for \(\epsilon\) values 0.5 and 15.0.

The cluster purity results shown in Figure 12 provide further support to the idea that for lower values of epsilon (and keeping the value of min_samples fixed to a value of 5), smaller clusters are generated and most of the data points are labelled as noise points; as cluster purity is calculated only on the identified clusters, the value is high. Furthermore, when \(\epsilon=15.0\), the number of noise points decreases, indicating that more points are assigned to a cluster, but the decrease in the silhouette score observed in Figure 11 suggests poor clustering performance.

Figure 10: Cluster purity scores for DBSCAN. For each embedding type, the cluster purity increases as the number of clusters increases.

All in all, the results indicate that the DBSCAN algorithm's performance is highly sensitive to the distribution of the embeddings, the value of \(\epsilon\), and the value of min_samples. Although the silhouette and cluster purity scores can be used to evaluate the performance of DBSCAN, one must also consider the number of noise points, as the relative proportion of noise points identified by the algorithm can affect the final interpretation of the results.

Figure 11: Internal validation scores for various epsilon values for the DBSCAN algorithm. Values of epsilon chosen were [0.5, 1.0, 5.0, 10.0, 15.0].

### HDBSCAN

One of the shortcomings of DBSCAN is its inability to identify clusters of varying densities. As reviewed in Section 2.2, the distribution of embeddings in space can vary in density. Consequently, we also analyze the performance of HDBSCAN for clustering the various embedding types. We show the internal validation results for HDBSCAN in Figure 13. For simplicity, we only tune the hyperparameter min_cluster_size of the HDBSCAN model, which controls the minimum size of the clusters below which all points are considered to be noise points, and keep all the other hyperparameters at their respective default values (see Section 2.3). We observe that when the model is trained on the Word2Vec - Average and BERT - Average embeddings, the silhouette score is maximum when min_cluster_size is 10. In addition, the silhouette score for the model trained on the BERT - Average embeddings dropped sharply when the value of min_cluster_size was above 10. However, such an effect is not observed when the model is trained on the BERT - CLS embeddings. The silhouette scores for BERT - CLS were relatively stable for all values of min_cluster_size. These results suggest that when using algorithms such as HDBSCAN, average embeddings such as BERT - Average or Word2Vec - Average might be better suited for clustering text embeddings, instead of using BERT - CLS embeddings.
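Similarly, the cluster and outlier counts for HDBSCAN discussed below can be obtained with a short sweep over min_cluster_size; this is a sketch under the assumption that all other hyperparameters stay at their defaults, as stated above.

```python
import numpy as np
import hdbscan

def hdbscan_sweep(X, sizes=(2, 5, 10, 15, 20)):
    """Count clusters and outliers for each min_cluster_size value (cf. Figures 13-14)."""
    results = {}
    for m in sizes:
        labels = hdbscan.HDBSCAN(min_cluster_size=m, metric="euclidean").fit_predict(X)
        results[m] = {"clusters": int(labels.max()) + 1,   # cluster labels are 0..k-1, noise is -1
                      "outliers": int(np.sum(labels == -1))}
    return results
```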
Do the silhouette scores reflect the clustering quality? We observe in Figure 14 that when using the BERT - Average and Word2Vec - Average embeddings, the HDBSCAN clusters have higher purity than when using the BERT - CLS embeddings. Generally, the clustering purity score is above 0.8 for all embeddings (except when the min_cluster_size value is more than 10 in the case of BERT - Average), suggesting that the clusters are of generally good quality.

Figure 12: Cluster purity scores for the DBSCAN algorithm.

However, interesting results were observed when comparing the number of clusters identified by DBSCAN and HDBSCAN. The number of clusters identified by HDBSCAN was lower than that identified by DBSCAN. The number of clusters identified by HDBSCAN for each embedding is given below (when min_cluster_size is 10).

* BERT - CLS = 9
* BERT - Average = 7
* Word2Vec - Average = 7

Even for a lower number of clusters, the silhouette scores and the cluster purity scores are relatively high, indicating that HDBSCAN may be more robust in clustering text embeddings. However, one must be cautious before drawing conclusions, as the number of noise points identified by HDBSCAN may inflate the results. In fact, we observed a high number of noise points identified by HDBSCAN when min_cluster_size = 10; the number of outliers identified by HDBSCAN was 945 for the BERT - CLS embedding, 1046 for the BERT - Average embedding, and 1048 for the Word2Vec - Average embedding. The number of outliers identified by HDBSCAN for the best performing hyperparameter is comparable to the number of outliers identified by DBSCAN. In other words, neither of the density based algorithms appears to significantly outperform the other. All things considered, when comparing all the above algorithms, DBSCAN and HDBSCAN appear to outperform KMeans and the single linkage agglomerative hierarchical clustering algorithm. However, one must calculate the number of outliers identified by the density based algorithms, as it may affect the final conclusion of the comparative analysis.

## 4 Limitations and Future Work

Our work has potential limitations. We used a fairly restricted dataset that primarily has examples of consumer electronics from one brand. Therefore, minor variations in the performance of the clustering algorithms are plausible if a larger and more diverse dataset is used. An extension of this work may include a comprehensive comparison across various types of datasets (containing other types of items, multiple languages, etc.). We did not factor in the demographics of the reviews. Reviews for the same product can exist in multiple languages, and existing language models may represent a single review differently in different languages, which may lead to variance in the performance of the clustering algorithms. In other words, progress in optimal representations of text is a bottleneck when evaluating embeddings for clustering algorithms. This may be considered as a motivation for fostering more research in aligning text representations across multiple languages.

Users assign a star rating for multiple reasons, and a product may be assigned a one star review for several reasons (bad packaging, dislike of a specific feature, etc.). In this project, we treat all reviews for a given rating equally, regardless of the underlying text or reason given in the review for that specific rating. However, customers assign a specific rating value for numerous reasons. For example, a customer may assign a product a rating of 5 because they liked the battery capacity.
Another customer may have assigned a 5 star rating because the product was compact in size. This may be one of the reasons that our external validation scores were close to zero, as the assignment of ratings on a scale of 1-5 may be insufficient or too small a range. Moreover, the number of clusters detected by the density based algorithms in Sections 3.3 and 3.4 supports the presence of more than 5 clusters in the underlying review dataset. In addition, as many points were identified as noise points by the DBSCAN algorithm, it is possible that more underlying clusters may exist in the data space. Further work is necessary to evaluate clustering algorithms on review data while allowing for the number of clusters to be larger than the number of predefined labels.

Figure 13: Internal validation scores for the HDBSCAN algorithm. Values chosen for the hyperparameter min_cluster_size were [2, 5, 10, 15, 20].

This work uses three different evaluation metrics, as interpreting clustering quality based on only one metric may not be accurate, especially when dealing with complex or high-dimensional data. However, there are advantages and limitations to all the metrics. Therefore, it is also possible that in the future more sophisticated metrics specifically designed for text embeddings, or a combination of existing metrics, can be used to evaluate clustering results. While training the clustering algorithms, a limited number of hyperparameters were tuned for simplicity. More sophisticated hyperparameter tuning will help us gain a deeper understanding of the performance of the clustering algorithms given high-dimensional text embeddings. Future work may explore the ideas in this paper by tuning multiple hyperparameters for various algorithms.

To represent the text reviews, we used embeddings that were obtained from pretrained models (Word2Vec pretrained on the Google News dataset, and the BERT model pretrained on the BookCorpus and English Wikipedia datasets), which may not best represent the data in the review dataset. In other words, fine-tuning the pretrained language models on the review dataset might have resulted in embeddings that are better suited for clustering. In the future, researchers may look at fine-tuning language models on multiple diverse datasets to obtain embeddings that can more accurately represent textual reviews than pretrained models.

Figure 14: Cluster purity scores for the HDBSCAN algorithm. Values chosen for the hyperparameter min_cluster_size were [2, 5, 10, 15, 20].

Finally, it is useful to note that the choice of algorithm is crucial for clustering text data, and this choice requires careful planning. For example, our results showed that DBSCAN outperforms the KMeans and single linkage hierarchical algorithms, but a high number of noise points were also identified by DBSCAN. As the \(\epsilon\) value increases, the number of noise points decreases, but the silhouette score and the cluster purity score also decrease (for all embeddings except BERT - Average). Such results may help motivate research into developing clustering algorithms specifically designed to cluster embeddings. Although some limitations exist, our work also presents numerous ideas that may be investigated in the future to advance research in the domain of text clustering.
## 5 Conclusion

In this work, we investigated the performance of various types of algorithms for clustering text embeddings and how choosing different types of embeddings impacts the clustering performance. We used a dataset containing product reviews listed on Amazon.com, where each review was assigned a rating from 1 to 5. We represented the reviews using Word2Vec and BERT embeddings. We used KMeans, single linkage agglomerative hierarchical clustering, DBSCAN, and HDBSCAN, and evaluated each algorithm using measures such as the silhouette score, adjusted rand index score, and cluster purity. We observed that density based algorithms generally seem to outperform the KMeans and the single linkage hierarchical algorithms. In addition, the results of the density based algorithms suggested that the number of underlying clusters in the dataset may be larger than the number of labels in the dataset (3 or 5 for our dataset). However, our results also highlight the challenges in clustering high-dimensional text data, such as the varied distribution of embeddings in space for the same rating value. One must also be careful about drawing conclusions when comparing clustering algorithms, as different hyperparameter settings may affect the results. For example, tuning the \(\epsilon\) value of the DBSCAN algorithm and min_cluster_size for HDBSCAN (Section 3.3) resulted in a different number of noise points being identified by the algorithms, which affects the clustering algorithms' performance. All in all, this work provides a proof-of-concept where we carry out a comparative analysis of different textual embeddings' impact on the performance of clustering algorithms. We hope that this work opens up new avenues for research in the domain of text clustering and developing new embeddings to adequately represent text data.
2304.14711
Non-commutative resolutions of linearly reductive quotient singularities
We prove existence of non-commutative crepant resolutions (in the sense of van den Bergh) of quotient singularities by finite and linearly reductive group schemes in positive characteristic. In dimension two, we relate these to resolutions of singularities provided by G-Hilbert schemes and F-blowups. As an application, we establish and recover results concerning resolutions for toric singularities, as well as canonical, log terminal, and F-regular singularities in dimension 2.
Christian Liedtke, Takehiko Yasuda
2023-04-28T09:28:23Z
http://arxiv.org/abs/2304.14711v2
# Non-commutative resolutions of linearly reductive quotient singularities ###### Abstract. We prove existence of non-commutative crepant resolutions (in the sense of van den Bergh) of quotient singularities by finite and linearly reductive group schemes in positive characteristic. In dimension two, we relate these to resolutions of singularities provided by \(G\)-Hilbert schemes and F-blowups. As an application, we establish and recover results concerning resolutions for toric singularities, as well as canonical, log terminal, and F-regular singularities in dimension \(2\). Key words and phrases:non-commutative crepant resolution, linearly reductive group scheme, quotient singularity, F-singularity, canonical, log terminal, and toric singularities 2 By now, there is a lot of work dedicated to NCRs and NCCRs, such as for the following classes of singularities: 1. quotient singularities by finite groups in characteristic zero [11] or by finite groups of order prime to the characteristic [12], 2. quotient singularities by not necessarily finite reductive group schemes in characteristic zero [11], 3. hypersurface singularities [15], and 4. toric singularities [16], [17], [18], [19]. ### Linearly reductive quotient singularities In this article, we study NCCRs in the following situation. **Setup 1.2**.: Let \(k\) be an algebraically closed field of characteristic \(p>0\). Let \(S:=k[[x_{1},...,x_{n}]]\) be a formal power series ring over \(k\). Let \(G\) be a finite group scheme over \(k\) that acts on \(\operatorname{Spec}S\), such that the action is free in codimension one. We let \[R\,:=\,S^{G}\,\subseteq\,S\] be the invariant subring and set \(X:=\operatorname{Spec}R\). **Definition 1.3**.: A finite group scheme \(G\) over \(k\) is called _linearly reductive_ if every \(k\)-linear and finite-dimensional representation of \(G\) is semi-simple. **Definition 1.4**.: A scheme \(X=\operatorname{Spec}R\) as in Setup 1.2 is called a _quotient singularity_ by the group scheme \(G\). If \(G\) is linearly reductive, then \(X\) is called a _linearly reductive quotient singularity_ (LRQ singularity for short). For background and details on quotient singularities by finite group schemes and especially linearly reductive ones, we refer to [1, 1, 2, 3, 4]. ### NCCRs for LRQ singularities In Section 2, we first establish the following result, which extends a classical proposition of Auslander [1]. Our extension was essentially already obtained by Faber, Ingalls, Okawa, and Satriano [10, Proposition 2.26], although they work in a slightly different setup. Moreover, a similar result was proven in [1, 1], again in somewhat different setups. **Theorem 1.5**.: _There exists a skew group scheme ring_ \[S*G\,=\,S\#H,\] _where \(H\) is the dual Hopf algebra of \(H^{0}(G,\mathcal{O}_{G})\). Moreover, there exists a natural isomorphism_ \[S*G\,\to\,\operatorname{End}_{R}(S)\] _of usually non-commutative \(R\)-algebras._ Next, we establish the following equivalences, which generalise results of Broer and the second-named author [11, 12]. **Theorem 1.6**.: _The following are equivalent:_ 1. \(R\) _is a pure subring of_ \(S\)_._ 2. \(R\) _is strongly F-regular._ 3. \(G\) _is linearly reductive._ 4. \(S*G\) _has finite global dimension._ _Moreover, if \(S*G\) has finite global dimension, then \(\operatorname{gl.dim}(S*G)=n\)._ As an application, we obtain that LRQ singularities admit NCCRs. **Theorem 1.7**.: _If \(G\) is linearly reductive, then_ 1. \(\operatorname{End}_{R}(S)\) _is an NCCR of_ \(R\)_._ 2. 
_If_ \(e\) _is sufficiently large, then_ \(\operatorname{End}_{R}(R^{1/p^{e}})\) _is Morita equivalent to_ \(\operatorname{End}_{R}(S)\) _and thus, also an NCCR of_ \(R\)_._ This result was already known in characteristic zero, as well as for finite groups of order prime to the characteristic of the ground field: in these cases, the first assertion follows, for example, from combining results of Auslander [1] and Yi [13], and the second assertion was established by Toda and the second-named author [10]. ### Auslander's results and dimension two There are some classical results of Auslander [1] that carry over to the situation of this article. To state them, we introduce the following categories: \[\begin{array}{lcl}\operatorname{Rep}_{k}(G)&:&\text{finite-dimensional $k$-linear $G$-representations}\\ \mathcal{P}&:&\text{finite and projective $S*G$-modules}\\ \mathcal{L}&:&\text{finite and reflexive $R$-modules}\\ \operatorname{add}_{R}(S)&:&\text{summands of finite sums of $S$}\end{array}\] In Section 3, we will show that these categories are related as follows. **Theorem 1.8**.: _Assume that \(G\) is linearly reductive._ 1. _The functors_ \[\begin{array}{ccccc}\operatorname{Rep}_{k}(G)&\to&\mathcal{P}&\to& \operatorname{add}_{R}(S)\\ W&\mapsto&S\otimes_{k}W&&\\ &&P&\mapsto&P^{G}\end{array}\] _induce equivalences of categories. Simple representations correspond to indecomposable modules under these equivalences._ 2. _If_ \(n=2\)_, then the inclusion_ \[\operatorname{add}_{R}(S)\,\subseteq\,\mathcal{L}\] _is an equivalence of categories._ This can be viewed as a _non-commutative McKay correspondence_. ### F-blowups and G-Hilbert schemes For a variety \(X\) over a perfect field \(k\) of positive characteristic \(p>0\) and an integer \(e\geq 1\), the second-named author introduced in [10] the _e.th F-blowup_ \[\operatorname{FB}_{e}(X)\,\to\,X,\] which is a proper and birational morphism and which is an isomorphism over the smooth locus of \(X\). In Section 4, we show the following result. **Theorem 1.9**.: _If \(G\) is linearly reductive, then there exist natural, proper, and birational morphisms_ \[\operatorname{Hilb}^{G}(\operatorname{Spec}S)\,\stackrel{{ \psi_{e}}}{{\longrightarrow}}\,\operatorname{FB}_{e}(\operatorname{Spec}R) \,\to\,\operatorname{Spec}R.\] _Moreover, \(\psi_{e}\) is an isomorphism if \(e\) is sufficiently large._ 1. _If_ \(n=2\)_, then_ \(\operatorname{Hilb}^{G}(\operatorname{Spec}S)\to\operatorname{Spec}R\) _is the minimal resolution of singularities._ 2. _If_ \(n=3\) _and_ \(R\) _is Gorenstein, then_ \(\operatorname{Hilb}^{G}(\operatorname{Spec}S)\to\operatorname{Spec}R\) _is a crepant resolution of singularities._ This extends results of Toda and the second-named author [11, 10] from the case where \(G\) is a finite group of order prime to \(p\), the characteristic of the ground field. Assertion (1) was already shown by the first-named author [11] and extends results of Ishii, Ito, and Nakamura [12, 13, 14] from the case where \(G\) is a finite group of order prime to \(p\). Assertion (2) in characteristic zero was established by Bridgeland, King, and Reid, as well as Nakamura [1, 1]. ### Examples In Section 5, we apply these results to some classes of singularities: 1. We establish the existence of NCCRs for normal and \(\mathbb{Q}\)-factorial toric singularities. This recovers some results of Faber, Mullen, and Smith [16] and extends some results of Spenko and van den Bergh [17] to positive characteristic. 2. 
We establish the existence of NCCRs for F-regular surface singularities via their description as LRQ singularities. This includes all canonical and log terminal singularities in dimension \(2\) and characteristic \(p\geq 7\). We then recover results of Hara [11] and the first-named author [11] that \(G\)-Hilbert schemes and sufficiently high F-blowups yield the minimal resolution of such singularities. Although some of these results were previously known, it is interesting that our approach gives a natural and uniform approach via the description of these singularities as quotients by finite and linearly reductive group schemes. After finishing the first draft of this paper, we noticed that Hashimoto and Kobayashi [15, Lemma 3.9] independently obtained a result similar to our Theorems 1.6 and 2.4. **Acknowledgements.** We thank Mitsuyasu Hashimoto, Gregor Kemper, Gebhard Martin, Shinnosuke Okawa, and Michael Wemyss for discussions and comments. The first-named author thanks the Mathematical Institute of the University of Oxford for kind hospitality. The second-named author was supported by JSPS KAKENHI Grant Numbers JP18H01112, JP21H04994, and JP23H01070. ## 2. Skew group scheme rings For \(G\) a finite group scheme over a field \(k\) of characteristic \(p\geq 0\) that acts on a \(k\)-algebra \(S\), we construct in this section the skew group scheme ring \(S*G\). If \(R:=S^{G}\subseteq S\) is the invariant subring, then we study a natural homomorphism of usually non-commutative \(R\)-algebras \[S*G\,\to\,\operatorname{End}_{R}(S)\] and show that it is an isomorphism if the action of \(G\) on \(S\) is free in codimension one. If \(p>0\), then we show that \(R\subseteq S\) is a pure subring if and only if \(R\) is strongly F-regular if and only if \(G\) is linearly reductive if and only if \(S*G\) has finite global dimension. ### Skew group scheme rings Let \(k\) be a field, let \(S\) be a commutative \(k\)-algebra, and let \(G\to\operatorname{Spec}k\) be a finite group scheme. Assume that we have an action \[\rho\,:\,G\,\to\,\operatorname{Aut}_{\operatorname{Spec}S/\operatorname{Spec }k}.\] Then, \(H^{0}(G,\mathcal{O}_{G})\) is a finite dimensional Hopf-algebra over \(k\) and we denote by \(H:=H^{0}(G,\mathcal{O}_{G})^{*}\) the dual Hopf algebra. The action \(\rho\) corresponds to a coaction of \(H^{0}(G,\mathcal{O}_{G})\) on \(S\) and thus, \(S\) is a right \(H^{*}\)-comodule algebra. This is equivalent to \(S\) being a left \(H\)-module algebra and we let \[S*G\,:=\,S\#H\,=\,S\#\left(H^{0}(G,\mathcal{O}_{G})^{*}\right)\] be the associated smash product algebra, see, for example, [10, Definition 4.1.3]. **Definition 2.1**.: We call \(S*G\) the _skew group scheme ring_. **Example 2.2**.: Let \(G_{\operatorname{abs}}\) be a finite group that acts on a \(k\)-algebra \(S\). Let \(G\to\operatorname{Spec}k\) be the constant group scheme associated to \(G_{\operatorname{abs}}\). Then, \(H:=H^{0}(G,\mathcal{O}_{G})^{*}\) is isomorphic to the group algebra \(k[G_{\operatorname{abs}}]\) with its usual Hopf algebra structure. From this and [10, Example 4.1.6], we conclude that \(S*G\) as just defined coincides with the classical skew group ring \(S*G_{\operatorname{abs}}\). ### Invariant rings With assumptions and notations from the previous paragraph, we let \[R\,:=\,S^{G}\,\subseteq\,S\] be the ring of invariants with respect to the \(G\)-action on \(S\). 
In the language of Hopf algebras, these are the \(H\)-invariants of \(S\) with \(H=H^{0}(G,\mathcal{O}_{G})^{*}\) as defined, for example, in [10, Definition 1.7.1]. The multiplication \(S\times S\to S\) and the \(G\)-action on \(S\) are \(R\)-linear and thus, we obtain morphisms \(S\to\operatorname{End}_{R}(S)\) and \(H\to\operatorname{End}_{R}(S)\). It is easy to see that these combine to a natural homomorphism \[S*G\,=\,S\#H\,\to\,\operatorname{End}_{R}(S)\] of usually non-commutative \(R\)-algebras. We now specialise to the case where \(S=k[[x_{1},...,x_{n}]]\). Moreover, we will also assume that the action \(\rho\) is _free in codimension one_, that is, there exists a Zariski-closed subset \(Z\subset\operatorname{Spec}S\) of codimension at least two, such that there is an induced action \(G\times V\to V\) where \(V:=\operatorname{Spec}S\backslash Z\) and where action is free in the scheme sense. More precisely, if \(\pi:\operatorname{Spec}S\to\operatorname{Spec}R\) denotes the quotient morphism by the \(G\)-action, then we set \(U:=\operatorname{Spec}R\backslash\pi(Z)\) and then, _freeness_ means that the morphism \[V\times_{\operatorname{Spec}k}G\,\to\,V\times_{U}V\] defined by \((v,g)\mapsto(v,gv)\) is an isomorphism. In the language of Hopf algebras, this means that if \(\operatorname{Spec}S^{\prime}\) is an affine open subset of \(V\) which is stable under the \(G\)-action, then the \(H\)-action on \(S\) with \(H=H^{0}(G,\mathcal{O}_{G})^{*}\) is _Galois_ in the sense of [10, Definition 8.1.1]. Moreover, for each \(\mathfrak{p}\in U\), the \(H\)-action on \(S_{\mathfrak{p}}:=S\otimes_{R}R_{\mathfrak{p}}\) is Galois. The following generalises a classical result of Auslander [10, page 118]. A similar statement in a slightly different setup was already shown [11, Proposition 2.26], but since our proof does not proceed via quotient stacks, we decided to give it nevertheless. **Proposition 2.3**.: _With notations and assumptions as in Setup 1.2, \(R\) is normal and the natural morphism_ \[S*G\,\to\,\operatorname{End}_{R}(S)\] _is an isomorphism of \(R\)-algebras._ Proof.: We let \(H:=H^{0}(G,\mathcal{O}_{G})^{*}\) be the dual Hopf algebra and then \(R\) is the ring of invariants of \(S\) with respect to the \(H\)-action. Let \(K\) be the field of fractions of \(R\). Since taking invariants is compatible with localisation [11, Lemma 1.1], we conclude \[R\,=\,R\,\cap\,(S\otimes_{R}K)^{H},\] which shows that \(R\) is normal. Therefore, \(S\) is finite and reflexive as \(R\)-module [11, Lemma 15.23.20]. It then follows from [11, Lemma 15.23.8] that \(\operatorname{End}_{R}(S)\) is a reflexive \(R\)-module. Since \(S*G\) is a finite and free as an \(S\)-module, we conclude that \(S*G\) is a reflexive \(R\)-module. Since \(S*G\) and \(\operatorname{End}_{R}(S)\) both are reflexive \(R\)-modules, it suffices to show that the natural homomorphism \(S*G\to\operatorname{End}_{R}(S)\) is an isomorphism at each prime of height one of \(R\). When localising at primes of height one, the \(H\)-action is Galois because \(\rho\) is free in codimension one and then, the statement follows from [11, Theorem 1.7], see also [10, Theorem 8.3.3]. The following generalises a result from the second-named author [14, Corollaries 3.3 and 6.18] from finite groups to finite group schemes. **Theorem 2.4**.: _We keep the notations and assumptions of Setup 1.2. Let \(\mathfrak{m}\subseteq S\) be the maximal ideal and let \(\mathfrak{j}\subseteq S\ast G\) be the Jacobson radical. Then, the following are equivalent:_ 1. 
\(R\) _is a pure subring of_ \(S\)_._ 2. \(R\) _is strongly F-regular._ 3. \(G\) _is linearly reductive._ 4. \(S\ast G\) _has finite global dimension._ 5. \(S\ast G\) _has global dimension_ \(n\)_._ 6. \(\mathfrak{j}=\mathfrak{m}(S\ast G)\)_._ 7. \(\mathfrak{j}\subseteq\mathfrak{m}(S\ast G)\)_._ _In particular, if one of the above equivalent conditions holds, then \(R\) is a Cohen-Macaulay ring._ Proof.: \((1)\Rightarrow(2):\) This is [12, Theorem 3.1]. \((2)\Rightarrow(1):\) This follows from the fact that a strongly F-regular ring is a splinter, see, for example, [11, Proposition 1.4]. \((3)\Rightarrow(4):\) If \(G\) is linearly reductive, then \(H:=H^{0}(G,\mathcal{O}_{G})^{*}\) is semisimple, see, for example, [11, Proposition A.2]. By [14, Corollary 4.2], we have that \(\mathrm{gl.dim}(S\ast G)=\mathrm{gl.dim}(S\#H)\) is finite. \((4)\Rightarrow(3):\) From the previous proposition, the natural map \(S\ast G\to\mathrm{End}_{R}(S)\) is an isomorphism. This implies that \(S\) is a faithful \(S\ast G\)-module. Let \(H:=H^{0}(G,\mathcal{O}_{G})^{*}\) be the dual Hopf algebra. Let \(0\neq t\in\int_{H}^{\ell}\) be a left integral. By [14, Corollary 2.3], there exists a \(c\in S\) with \(tc=1\). If \(\mathfrak{m}_{S}=(x_{1},...,x_{n})\) denotes the maximal ideal of \(S\), we let \(c^{\prime}:=\overline{c}\in S/\mathfrak{m}_{S}=k\). Since \(tc=1\) in \(S\) we also have \(tc^{\prime}=1\) in \(k\). Applying [14, Corollary 2.3] to \((S/\mathfrak{m}_{S})\#H=H\), we conclude \(\mathrm{gl.dim}(H)=0\). This implies that \(H\) is semi-simple (see, for example, [10, Theorem 4.2.2]) and using [11, Proposition A.2], we conclude that \(G\) is linearly reductive. \((3)\Rightarrow(1):\) By [14, proof of Corollary 1.8], we may assume that the \(G\)-action on \(S\) is linear. The statement then follows from [1, Remark 6.5.3(b)]. \((1)\Rightarrow(4):\) We have \(S\ast G\cong\mathrm{End}_{R}(S)\) by Proposition 2.3. In particular, \(\mathrm{End}_{R}(S)\) is free and hence, Cohen-Macaulay as an \(S\)-module. By [14, Proposition (1,8)], it is also Cohen-Macaulay as an \(R\)-module. By [14, Corollary 2.11] and Assumption (1), it is an NCCR and in particular, it has finite global dimension. \((4)\Rightarrow(5):\) See [11, Proposition 12.7]. \((5)\Rightarrow(4):\) Obvious. \((6)\Rightarrow(7):\) Obvious. \((3)\Rightarrow(6):\) We have \(J(S\ast G)\supseteq\mathfrak{m}(S\ast G)\) by [11, (5.9)]. On the other hand, since \(k\ast G\) is semisimple, we have \[J\left(S\ast G/(\mathfrak{m}(S\ast G))\right)\,=\,J(k\ast G)\,=\,0.\] We also have \(J(S*G)\subseteq\mathfrak{m}(S*G)\) by [1, 15.6], which proves equality. (7) \(\Rightarrow\) (4) : We consider the pullback functor \(\mathbf{F}^{*}\) from the module category of \(S*G\) to the module category of \(S^{1/p}*G\) that is induced by the inclusion \(S*G\hookrightarrow S^{1/p}*G\). Note that the same proof as the one of [13, Proposition 6.19] shows that this functor is identical to the functor \[\operatorname{Hom}_{R}(S,S^{1/p})\otimes-\quad:\quad\operatorname{End}_{R}(S) \text{-mod}\,\to\,\operatorname{End}_{R^{1/p}}(S^{1/p})\text{-mod}.\] By [13, Proposition 6.17], this functor is exact, preserves projective modules, and has zero kernel. By [13, Proposition 6.13 or Corollary 6.18], the functor is also order-raising. Thus, \(S*G\) has finite global dimension by [13, Corollary 5.6]. This completes the proof of the claimed equivalences. The last assertion of the theorem follows, for example, from [11, Theorem 2.6]. 
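As a concrete illustration of Theorem 2.4 (a standard example, recorded here only for orientation and not needed in the sequel): let \(k\) be algebraically closed of characteristic \(p>0\), let \(S=k[[x,y]]\), and for an integer \(m\geq 2\) let \(G=\boldsymbol{\mu}_{m}\) act with weights \((1,-1)\), that is, \(\zeta\cdot(x,y)=(\zeta x,\zeta^{-1}y)\). This action is free outside the closed point, hence free in codimension one, and \(\boldsymbol{\mu}_{m}\) is diagonalisable and thus linearly reductive, even if \(p\) divides \(m\). The ring of invariants is \[R\,=\,S^{G}\,=\,k[[xy,\,x^{m},\,y^{m}]]\,\cong\,k[[u,v,w]]/(vw-u^{m}),\] the rational double point of type \(A_{m-1}\). By Theorem 2.4, \(R\) is strongly F-regular and Cohen-Macaulay, and \(S*G\cong\operatorname{End}_{R}(S)\) has global dimension \(2\); in particular, \(\operatorname{End}_{R}(S)\) is an NCCR of \(R\) (see Corollary 2.7 below).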
**Remark 2.5**.: If \(R\) is strongly F-regular, then it is log terminal [11]. Thus, in the situation of the above theorem, if \(G\) is linearly reductive, then the quotient singularity \(\operatorname{Spec}R^{G}\) is log terminal. If a Noetherian local ring \(R\) of equicharacteristic zero admits an NCCR, then \(X=\operatorname{Spec}R\) is log terminal, see [10, IY23], as well as [12]. On the other hand, there are quotient singularities in positive characteristic that are not log terminal, see, for example [13]. **Remark 2.6**.: In the situation of Theorem 2.4, even if \(R\) is not strongly F-regular, then it can be F-pure [14, Section 3b] (see also [1, Section 4] and [1, page 64]) or F-rational [10]. There is also the case where \(R\) is neither log-canonical nor Cohen-Macaulay [13] - in particular, it is neither F-pure nor F-rational. **Corollary 2.7**.: _With notations and assumptions as in Theorem 2.4, assume that the equivalent statements hold. Then,_ 1. \(\operatorname{End}_{R}(S)\) _is an NCCR of_ \(R\)_._ 2. _For sufficiently large_ \(e\)_,_ \[\operatorname{End}_{R}(R^{1/p^{e}})\] _is Morita equivalent to_ \(\operatorname{End}_{R}(S)\) _and thus, it is also an NCCR of_ \(R\)_._ Proof.: Assertion (1) was already shown in the proof of the theorem. Assertion (2) follows from [13, Corollary 4.2]. ## 3. Auslander's results and dimension two We keep the notations and assumptions of Setup 1.2 and assume moreover that \(G\) is linearly reductive. In this section, we show that the category \(\mathcal{P}\) of projective \(S*G\)-modules is equivalent to the category \(\operatorname{Rep}_{k}(G)\) of \(G\)-representations, as well as to the category \(\operatorname{add}_{R}(S)\) generated by direct summands of the \(R\)-module \(S\). If moreover \(n=2\), then these are equivalent to the category \(\mathcal{L}\) of reflexive \(R\)-modules. This generalises classical results of Auslander [1]. ### Auslander's results In [1, Section 1], the following proposition is shown in the case where \(n=2\) and where \(G\) is a finite group, whose order is prime to the characteristic of \(k\). **Proposition 3.1**.: _We keep the notations and assumptions of Setup 1.2, let \(\mathfrak{m}\subseteq S\) be the maximal ideal, and assume that \(G\) is linearly reductive. We define the two categories_ \[\begin{array}{ccc}\operatorname{Rep}_{k}(G)&:&\text{ finite-dimensional $k$-linear $G$-representations}\\ \mathcal{P}&:&\text{ finite and projective $S*G$-modules},\end{array}\] _which are related as follows._ 1. _If_ \(W\in\operatorname{Rep}_{k}(G)\)_, then there is a natural_ \(S*G\)_-module structure on_ \(S\otimes_{k}W\) _that extends the_ \(S\)_-action on_ \(S\) _and the_ \(G\)_-action on_ \(W\)_. Moreover,_ \(S\otimes_{k}W\) _is a finite and projective_ \(S*G\)_-module, that is, lies in_ \(\mathcal{P}\)_._ 2. _If_ \(P\) _is a finite_ \(S*G\)_-module, then_ \(P/\mathfrak{m}P\) _is a finite-dimensional and_ \(k\)_-linear_ \(G\)_-representation, that is, lies in_ \(\operatorname{Rep}_{k}(G)\)_._ 3. _If_ \(P\in\mathcal{P}\)_, then there exists an isomorphism of_ \(S*G\)_-modules_ \[P\,\cong\,S\otimes_{k}(P/\mathfrak{m}P).\] 4. _The functor_ \[\begin{array}{ccc}\operatorname{Rep}_{k}(G)&\to&\mathcal{P}\\ W&\mapsto&S\otimes_{k}W\end{array}\] _induces an equivalence of categories. Simple_ \(G\)_-representations correspond to indecomposable_ \(S*G\)_-modules under this equivalence._ Proof.: We have that \(S*G\) is semiperfect by [1, (23.3)]. The Jacobson radical of \(S*G\) is equal to \(\mathfrak{m}*G\) by Theorem 2.4. 
The map that sends \(P\) to \(P/\mathfrak{m}P\) defines a bijection from isomorphism classes of indecomposable projective \(S*G\)-modules to isomorphism classes of simple \(k*G\)-modules by [1, Proposition 27.10]. Since \[(S\otimes_{k}(P/\mathfrak{m}P))\,/\mathfrak{m}\,(S\otimes_{k}(P/\mathfrak{m}P))\,=\,P/\mathfrak{m}P,\] the map \(W\mapsto S\otimes_{k}W\) is the inverse of the above bijection. We continue with the following result, which in similar contexts is sometimes called _Auslander's projectivisation_. **Proposition 3.2**.: _We keep the assumptions and notations of Proposition 3.1 and define the category_ \[\operatorname{add}_{R}(S)\qquad\quad:\qquad\text{ summands of finite sums of $S$}.\] _Then, the functor_ \[\begin{array}{ccc}\mathcal{P}&\to&\operatorname{add}_{R}(S)\\ P&\mapsto&P^{G}\end{array}\] _induces an equivalence of categories._ Proof.: See, for example, [12, Proposition 2.4]. Note that this proposition relies on [1, Proposition II.2.1], which is stated for Artinian rings only. However, the elementary proof there also works for Noetherian rings, which is sufficient for our purposes. Under this equivalence, the regular (resp. trivial) representation of \(G\) in \(\operatorname{Rep}_{k}(G)\) gets mapped to \(S\) (resp. \(R\)). Let \(\{\rho_{i}:G\to\operatorname{\mathbf{GL}}(V_{i})\}_{i}\) be the set of finite-dimensional, \(k\)-linear, and simple representations of \(G\) up to isomorphism. We have the well-known decomposition of the regular representation \[\rho_{\operatorname{reg}}\,\cong\,\bigoplus_{i}\,\rho_{i}{}^{\oplus\dim_{k}\rho_{i}}, \tag{1}\] see, for example, [1, Theorem 5.9, Corollary 5.11 and Remark 5.12]. Alternatively, one can also use the lifting result [1, Proposition 2.9] together with the decomposition (1) of the regular representation of the finite group \(G_{\operatorname{abs}}\). Using this decomposition and Proposition 3.1, we conclude that \[S\,=\,\bigoplus_{i}\,\big{(}(S\otimes_{k}V_{i})^{G}\big{)}{}^{\oplus\dim_{k}\rho_{i}}\] is the decomposition of \(S\) into indecomposable and reflexive \(R\)-modules. Applying Proposition 2.3 to this, we conclude the following. **Corollary 3.3**.: _Under the assumptions of Proposition 3.2, there exists an isomorphism of \(R\)-algebras_ \[S\ast G\,\cong\,\operatorname{End}_{R}\left(\bigoplus_{i}\big{(}(S\otimes_{k}V_{i})^{G}\big{)}{}^{\oplus\dim_{k}\rho_{i}}\right).\] ### Dimension 2 Now, we specialise further to the case \(n=2\). Then, \(R\) is normal and two-dimensional, and thus, a finite \(R\)-module is reflexive if and only if it is Cohen-Macaulay, see [1, Proposition 1.4.1]. In particular, \(S\) is a reflexive \(R\)-module and thus, the objects of \(\operatorname{add}_{R}(S)\) are reflexive \(R\)-modules. **Proposition 3.4**.: _Under the assumptions and notations of Proposition 3.2, assume \(n=2\) and define the category_ \[\mathcal{L}\qquad\quad:\qquad\text{ finite and reflexive $R$-modules}.\] _Then,_ 1. _up to isomorphism, the indecomposable reflexive_ \(R\)_-modules are precisely the indecomposable_ \(R\)_-summands of_ \(S\)_._ 2. _The inclusion_ \[\operatorname{add}_{R}(S)\,\subseteq\,\mathcal{L}\] _is an equivalence of categories. In particular, there exist only a finite number of nonisomorphic indecomposable reflexive_ \(R\)_-modules._ Proof.: In [1, Proposition 2.1], this is shown in the case where \(G\) is a finite group, whose order is prime to the characteristic of \(k\). The same arguments also work for \(G\) a finite and linearly reductive group scheme over \(k\). ## 4.
F-blowups In this section, we study two-dimensional LRQ singularities and prove that they can be resolved by sufficiently high F-blowups. We show this by relating F-blowups to \(G\)-Hilbert schemes. ### F-blowups We start by recalling F-blowups that were introduced by the second-named author in [10] and which are characteristic \(p\) variants of higher Nash blowups. More precisely, let \(X\) be an \(n\)-dimensional variety over a perfect field \(k\) of characteristic \(p>0\), let \(X_{\operatorname{sm}}\subseteq X\) be the smooth locus, and let \(F^{e}:X_{e}\to X\) be the \(e\).th iterated Frobenius. For a \(K\)-rational point \(x\in X_{\operatorname{sm}}(K)\), the fibre \((F^{e})^{-1}(x)\) is a zero-dimensional subscheme of length \(p^{en}\) of \(X_{e}\otimes_{k}K\) and thus, corresponds to a \(K\)-rational point of the Hilbert scheme \(\operatorname{Hilb}_{p^{en}}(X_{e})\). **Definition 4.1**.: The _\(e\).th F-blowup_, denoted \(\operatorname{FB}_{e}(X)\), is the closure of \[\left\{(F^{e})^{-1}(x)\,|\,x\in X_{\operatorname{sm}}\right\}\] inside \(\operatorname{Hilb}_{p^{en}}(X_{e})\). By [10, Corollary 2.6], there exists a natural morphism \[\pi_{e}\,:\,\operatorname{FB}_{e}(X)\,\to\,X,\] which is projective, birational, and an isomorphism over \(X_{\operatorname{sm}}\). If we set \(\mathcal{M}_{e}:=(F^{e})_{*}\mathcal{O}_{X_{e}}\), then \((\pi_{e})^{*}\mathcal{M}_{e}/(\operatorname{tors})\) is locally free. Moreover, \(\pi_{e}\) is the universal proper birational morphism having this property. In other words, the \(e\).th F-blowup of \(X\) is the blowup at the module \(\mathcal{M}_{e}\), see [1, 11]. ### G-Hilbert schemes Now, we assume that we are in the situation of Setup 1.2. If \(G\) is linearly reductive, then there exists a _Hilbert-Chow morphism_ \[\operatorname{Hilb}^{G}(\operatorname{Spec}S)\,\to\,\operatorname{Spec}R,\] as shown by the first-named author in [10, Section 4.3], which extends the classical results of Ito and Nakamura [13]. Existence of the \(G\)-Hilbert scheme for \(G\) a finite and linearly reductive group scheme is due to Blume [1]. **Lemma 4.2**.: _The \(G\)-Hilbert scheme is the blowup at the \(R\)-module \(S\)._ Proof.: Let \(Y\to\operatorname{Spec}R\) be the blowup at the \(R\)-module \(S\). We will show that the birational correspondence between \(Y\) and \(\operatorname{Hilb}^{G}(\operatorname{Spec}S)\) extends to morphisms in both directions. If \(U\) denotes the universal family over \(\operatorname{Hilb}^{G}(\operatorname{Spec}S)\), then we have inclusions \[U\,\subseteq\,\operatorname{Hilb}^{G}(\operatorname{Spec}S)\times_{ \operatorname{Spec}R}\operatorname{Spec}S\,\subseteq\,\operatorname{Hilb}^{G}( \operatorname{Spec}S)\widehat{\times}_{\operatorname{Spec}k}\operatorname{Spec }S.\] Thus, \(\operatorname{Hilb}^{G}(\operatorname{Spec}S)\to\operatorname{Spec}R\) is a flattening of the \(R\)-module \(S\). Using the universality of \(Y\), we obtain a morphism \(\operatorname{Hilb}^{G}(\operatorname{Spec}S)\to Y\). On the other hand, the flat \(Y\)-scheme \((Y\times_{\operatorname{Spec}R}\operatorname{Spec}S)_{\operatorname{red}} \subseteq Y\widehat{\times}_{\operatorname{Spec}k}\operatorname{Spec}S\) induces a morphism \(Y\to\operatorname{Hilb}^{G}(\operatorname{Spec}S)\). **Theorem 4.3**.: _We keep the notations and assumptions of Setup 1.2 and assume moreover that \(G\) is linearly reductive. 
Then, for each \(e\geq 1\), we have a natural morphism_ \[\psi_{e}\,:\,\mathrm{Hilb}^{G}(\mathrm{Spec}\,S)\,\to\,\mathrm{FB}_{e}(\mathrm{ Spec}\,R).\] _Moreover, it is an isomorphism if \(e\) is sufficiently large._ Proof.: If \(Y_{1}\) and \(Y_{2}\) are the blowups at \(R\)-modules \(M_{1}\) and \(M_{2}\) respectively, then the blowup at \(M_{1}\oplus M_{2}\) is the unique irreducible component of \(Y_{1}\times_{\mathrm{Spec}\,R}Y_{2}\) that surjects onto \(\mathrm{Spec}\,R\). This shows that the blowup at a module \(M\) depends only on the set of isomorphism classes of indecomposable modules that appear as direct summands of \(M\). It follows also that if every indecomposable summand of \(M\) also appears as a direct summand of \(M^{\prime}\), then there exists a natural morphism from the blowup at \(M^{\prime}\) to the blowup at \(M\). This gives the existence of the natural morphism \(\psi_{e}\), as desired. For \(e\gg 0\), the indecomposable \(R\)-modules that appear as summands of \(R^{1/p^{e}}\) are the same as direct summands of \(S\)[14, Proposition 4.1], which shows that \(\psi_{e}\) is an isomorphism. **Remark 4.4**.: The fact that \(\psi_{e}\) is an isomorphism for sufficiently large \(e\) can be viewed as a commutative version of the Morita equivalence between the two NCCRs \(\mathrm{End}_{R}(S)=S\ast G\) and \(\mathrm{End}_{R}(R^{1/p^{e}})\), which we established in Corollary 2.7. We refer the interested reader to [10] for a discussion in the case where \(G\) is a finite group of order prime to \(p\). If \(n=2\), then \(\mathrm{Hilb}^{G}(\mathrm{Spec}\,\,S)\to\mathrm{Spec}\,R\) is the minimal resolution of singularities, which is due to Ishii, Ito, and Nakamura [11, 12, 13] if \(G\) is a finite group of order prime to \(p\) and which is due to the first-named author [14, Theorem 4.5] if \(G\) is a finite and linearly reductive group scheme. Together with the previous results, we conclude the following. **Corollary 4.5**.: _Under the assumptions of Theorem 4.3 assume moreover \(n=2\). If \(e\) is sufficiently large, then_ \[\mathrm{Hilb}^{G}(\mathrm{Spec}\,S)\,\cong\,\mathrm{FB}_{e}(\mathrm{Spec}\,R) \,\to\,\mathrm{Spec}\,R\] _is the minimal resolution of singularities of \(\mathrm{Spec}\,R\)._ **Corollary 4.6**.: _Under the assumptions of Theorem 4.3 assume moreover that \(n=3\) and \(R\) is Gorenstein. If \(e\) is sufficiently large, then_ \[\mathrm{Hilb}^{G}(\mathrm{Spec}\,S)\,\cong\,\mathrm{FB}_{e}(\mathrm{Spec}\,R) \,\to\,\mathrm{Spec}\,R\] _is a crepant resolution of singularities of \(\mathrm{Spec}\,R\)._ Proof.: This follows from Thereom 6.3.1 of [12], in the same way as Corollary 3.6 of [10] does. We end this section by the following lemma, which gives an easy criterion for when an LRQ singularity is Gorenstein. In characteristic zero or if \(G\) is a finite group of order prime to \(p\), then this is due to Watanabe [12]. The extension to the linear reductive case should be known to the experts, but we could not find a proper reference. See also the survey in [10, Section 3.9.5]. **Proposition 4.7**.: _In Setup 1.2 assume moreover that \(G\) is linearly reductive. After a change of coordinates, we may assume that the \(G\)-action is linear, that is, that \(G\) is a subgroup scheme of \(\mathbf{GL}_{n,k}\) and that the \(G\)-action on \(S=k[[x_{1},...,x_{n}]]\) is compatible with the embedding of \(G\) into \(\mathbf{GL}_{n,k}\) and the usual \(\mathbf{GL}_{n,k}\)-action on \(S\). The following are equivalent:_ 1. 
\(G\) _is a subgroup scheme of_ \(\mathbf{SL}_{n,k}\)_._ 2. \(R=S^{G}\) _is Gorenstein._ Proof.: The assertion on linearisation follows from [11, proof of Corollary 1.8], see also the discussion in [10, Section 6.2]. \((1)\Rightarrow(2)\): This is due to Hashimoto [14, Corollary 32.5]. \((2)\Rightarrow(1)\): Since \(G\) is linearly reductive, there exists a lift of \(G\) and the linear \(G\)-action on \(S=k[[x_{1},...,x_{n}]]\) to \(W(k)[[x_{1},...,x_{n}]]\), see the discussion in [11, Section 4.4]. From this, we obtain the canonical lift \(\mathcal{X}_{\mathrm{can}}\to\mathrm{Spec}W(k)\) of the LRQ singularity \(X=\mathrm{Spec}\,S^{G}\) over the ring \(W(k)\) of Witt vectors. We let \(K\) be the field of fractions of \(W(k)\) and let \(\overline{K}\) be an algebraic closure of \(K\). Seeking a contradiction, suppose that \(G\) (which is naturally a subgroup scheme of \(\mathbf{GL}_{n,k}\)) is not a subgroup scheme of \(\mathbf{SL}_{n,k}\) and that \(X\) is Gorenstein. There exists a finite field extension \(L\supseteq K\), such that the generic fibre of the lift of \(G\) over \(L\) is a constant group scheme associated to some finite group \(G_{\mathrm{abs}}\). Since \(G\) is not a subgroup scheme of \(\mathbf{SL}_{n,k}\), we have that \(G_{\mathrm{abs}}\) is not a subgroup of \(\mathbf{SL}_{n,\overline{K}}\) and thus, a fortiori, not of \(\mathbf{SL}_{n,L}\), see also the discussion in [11, Section 4.4]. Let \(\mathcal{X}_{L}:=\mathcal{X}\otimes_{\mathrm{Spec}\,W(k)}\mathrm{Spec}\,L\) be the generic fibre of \(\mathcal{X}\) base-changed to \(L\). Since \(\mathcal{X}_{L}\) is isomorphic to \(\mathrm{Spec}\,L[[x_{1},...,x_{n}]]^{G_{\mathrm{abs}}}\), it is not Gorenstein by the characteristic zero results already mentioned above. On the other hand, \(\mathcal{X}\) is Gorenstein since \(X\) is [15, Theorem 23.4], which implies that the geometric generic fibre of \(\mathcal{X}_{L}\) over \(L\) is also Gorenstein [15, Theorem 18.2 and Theorem 23.6]. This is a contradiction. ## 5. Examples In this section, we apply the results of the previous sections to a couple of classes of singularities, such as \(\mathbb{Q}\)-factorial toric singularities, F-regular surface singularities, and canonical surface singularities. By a _singularity_\(X\), we mean in this section the spectrum \(X=\mathrm{Spec}\,R\) where \(R\) is a local and complete \(k\)-algebra with \(k\) an algebraically closed field. ### Toric singularities Let us say that a normal singularity \(X\) is _toric_ if arises as the completion of a normal toric variety at a closed point. The following result should be well-known to the experts, see [11, Theorem 11.4.8] in characteristic zero. **Proposition 5.1**.: _Let \(X\) be a normal \(n\)-dimensional singularity over an algebraically closed field \(k\). Set \(S:=k[[x_{1},...,x_{n}]]\) and let \(\mathbb{T}^{n}:=(\mathbb{G}_{m,k})^{n}\) be the \(n\)-dimensional torus together with its usual \(k\)-linear action on \(S\). Then, the following are equivalent:_ 1. \(X\) _is toric and_ \(\mathbb{Q}\)_-factorial._ 2. \(X\) _is isomorphic to_ \(\operatorname{Spec}R\) _with_ \(R\cong S^{G}\)_, where_ \(G\) _is a finite subgroup scheme of_ \(\mathbb{T}^{n}\)_, and where_ \(G\) _acts via the_ \(\mathbb{T}^{n}\)_-action on_ \(S\) _with an action that is free in codimension one._ Proof.: We follow the proof of [11, Proposition 7.3] and generalise it to our situation: \((1)\Rightarrow(2):\) If \(X\) is toric, then it is analytically isomorphic to \(\operatorname{Spec}k[M]\) for some affine semi-group \(M\). 
Clearly, we may assume that \(X\) is not smooth and then, \(\operatorname{Spec}k[M]\) has no torus factors. In this situation, the Cox construction (see, for example, [14, Section 3.1]) realises \(\operatorname{Spec}k[M]\) as a quotient \(\mathbb{A}_{k}^{n}/G\), where \(G\cong\operatorname{Cl}(k[M])^{D}\cong\operatorname{Cl}(X)^{D}\) and where the \(G\)-action is linear and diagonal (see, for example, [14, Proposition 5.8 and Corollary 5.9]). Since \(X\) is \(\mathbb{Q}\)-factorial, \(\operatorname{Cl}(X)\) is finite and thus, \(G\) is a finite group scheme. Moreover, \(G\) is a subgroup scheme of \(\mathbb{T}^{n}\). By the linearly reductive version of the Chevalley-Shephard-Todd theorem [10], the \(G\)-action is small, that is, free in codimension one. \((2)\Rightarrow(1):\) We have that \(X\) is toric by [14, Theorem 5.2]. Let \(G^{\prime}\) be the associated reduced subscheme of \(G\), which is a finite group, and let \(R^{\prime}:=S^{G^{\prime}}\). Then, as is well-known, \(R^{\prime}\) is \(\mathbb{Q}\)-factorial. For \(e\gg 0\), we have \[(R^{\prime})^{p^{e}}\,\subseteq\,R\,\subseteq\,R^{\prime}.\] Now, if \(D\) is a Weil divisor of \(\operatorname{Spec}R\) and if \(D^{\prime}\) is its pullback to \(\operatorname{Spec}R^{\prime}\), then for some \(n>0\), \(nD^{\prime}\) is Cartier and thus, defined by some element \(f\in R^{\prime}\). Thus, \(p^{e}nD\) is a Cartier divisor, which is defined by \(f^{p^{e}}\in R\). This shows that \(X\) is \(\mathbb{Q}\)-factorial. **Remarks 5.2**.: Let us make a couple of comments: 1. A toric variety \(X=X(\Delta)\) is \(\mathbb{Q}\)-factorial if and only if each cone \(\sigma\in\Delta\) is simplicial, see, for example, [15, Lemma 14-1-1]. In particular, normal toric varieties of dimension \(n\leq 2\) are \(\mathbb{Q}\)-factorial. 2. Finite subgroup schemes of \(\mathbb{G}_{m,k}\) are kernels of multiplication-by-\(N\) for some \(N\geq 0\) and thus, isomorphic to \(\boldsymbol{\mu}_{N}\). Similarly, finite subgroup schemes of \(\mathbb{T}^{n}=(\mathbb{G}_{m,k})^{n}\) are of the form \(\prod_{i=1}^{n}\boldsymbol{\mu}_{N_{i}}\) for some \(N_{i}\)'s with \(N_{i}\geq 0\). In particular, they are diagonalisable. 3. If \(G\) is a subgroup scheme of \(\mathbb{T}^{N}\) for some \(N\) and it acts on \(\operatorname{Spec}S\) freely in codimension one, then we may assume that the \(G\)-action is linear because \(G\) is linearly reductive. Since \(G\) is diagonalisable, simultaneous diagonalisation implies that \(G\) is a subgroup scheme of \(\mathbb{T}^{n}\) and that the \(G\)-action on \(S\) factors through the usual \(\mathbb{T}^{n}\)-action on \(S\). **Corollary 5.3**.: _Let \(X=\operatorname{Spec}R\) be a normal \(n\)-dimensional, toric, and \(\mathbb{Q}\)-factorial singularity over an algebraically closed field of characteristic \(p>0\)._ 1. _If_ \(R=S^{G}\) _with_ \(S=k[[x_{1},...,x_{n}]]\) _and_ \(G\) _as in Proposition_ 5.1_.(2), then_ \(\operatorname{End}_{R}(S)\) _is an NCCR of_ \(R\)_._ 2. _For_ \(e\) _sufficiently large,_ \(\operatorname{End}_{R}(R^{1/p^{e}})\) _is an NCCR of_ \(R\)_._ 3. _If_ \(n=2\) _and if_ \(e\) _is sufficiently large, then_ \(\operatorname{FB}_{e}(X)\) _is the minimal resolution of singularities of_ \(X\)_._ 4. _If_ \(n=3\)_, if_ \(e\) _is sufficiently large, and if_ \(R\) _is Gorenstein, then_ \(\operatorname{FB}_{e}(X)\) _is a crepant resolution of singularities of_ \(X\)_._ Proof.: Assertions (1) and (2) follow from Corollary 2.7. Assertions (3) and (4) follow from Corollary 4.5 and Corollary 4.6, respectively.
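To illustrate Corollary 5.3 with a standard example (included here only for orientation): let \(G=\boldsymbol{\mu}_{3}\subset\mathbb{T}^{2}\) act on \(S=k[[x,y]]\) with weights \((1,1)\), that is, \(\zeta\cdot(x,y)=(\zeta x,\zeta y)\). The action is free outside the closed point and \[R\,=\,S^{G}\,=\,k[[x^{3},\,x^{2}y,\,xy^{2},\,y^{3}]]\] is the completion of the cone over the twisted cubic, a two-dimensional, \(\mathbb{Q}\)-factorial toric singularity. It is not Gorenstein: the weights give determinant \(\zeta^{2}\), so \(G\not\subseteq\mathbf{SL}_{2,k}\) and Proposition 4.7 applies. Corollary 5.3 holds in every characteristic, including \(p=3\), where \(\boldsymbol{\mu}_{3}\) is infinitesimal: \(\operatorname{End}_{R}(S)\) and \(\operatorname{End}_{R}(R^{1/p^{e}})\) for \(e\gg 0\) are NCCRs of \(R\), and sufficiently high F-blowups of \(\operatorname{Spec}R\) give the minimal resolution.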
**Remark 5.4**.: Faber, Muller, and Smith [14] proved that if \(\operatorname{Spec}R\) is a normal toric singularity in characteristic \(p>0\) and if \(e\) is sufficiently large, then \(\operatorname{End}_{R}(R^{1/p^{e}})\) is an NCR. Recall that this means that the latter ring has finite global dimension, but that it is not necessarily Cohen-Macaulay. A similar result in characteristic zero was established by Spenko and van den Bergh in [14]. ### F-regular surface singularities Let us recall the following result from [13, Section 11]: **Theorem 5.5**.: _Let \(X\) be a normal two-dimensional singularity over an algebraically closed field of characteristic \(p>0\). Then, the following are equivalent_ 1. \(X\) _is F-regular (resp. Gorenstein and F-regular)._ 2. \(X\) _is the quotient singularity by a finite and linearly reductive subgroup scheme_ \(G\) _of_ \(\mathbf{GL}_{2,k}\) _(resp._ \(\mathbf{SL}_{2,k}\)_)._ _Moreover, if \(p\geq 7\), then this is equivalent to_ 1. \(X\) _is a log terminal (resp. canonical) singularity._ By the results of the previous sections, we thus obtain the following. **Corollary 5.6**.: _Let \(X=\operatorname{Spec}R\) be a normal two-dimensional and F-regular singularity over an algebraically closed field of characteristic \(p>0\)._ 1. _If_ \(R=S^{G}\) _with_ \(S=k[[x_{1},x_{2}]]\) _and_ \(G\) _as in Theorem_ 5.5_.(2), then_ \(\operatorname{End}_{R}(S)\) _is an NCCR of_ \(R\)_._ 2. _If_ \(e\) _is sufficiently large, then_ \(\operatorname{End}_{R}(R^{1/p^{e}})\) _is an NCCR of_ \(R\)_._ 3. _If_ \(e\) _is sufficiently large, then_ \(\operatorname{FB}_{e}(X)\) _is the minimal resolution of singularities of_ \(X\)_._ **Remark 5.7**.: Assertion (3) is a theorem of Hara [12], which we recover here in the context of LRQ singularities and \(G\)-Hilbert schemes. ### Canonical surface singularities If \(X\) is a canonical surface singularity over an algebraically closed field \(k\) of characteristic \(p\geq 0\), then it is a rational double point. If \(p>0\), then these have been classified by Artin [1] and they are all of the form \(X=\operatorname{Spec}R\) with \[R\,=\,k[[x_{1},x_{2},x_{3}]]/(f)\] for a suitable polynomial \(f=f(x_{1},x_{2},x_{3})\). If \(p\geq 7\), then the results recalled in Section 5.2 show that all canonical surface singularities are F-regular and LRQ singularities and thus, the results about NCCRs and F-blowups of the previous sections apply. However, if \(p<7\), then not all canonical surface singularities are F-regular. The following result is more or less well-known. **Proposition 5.8**.: _Every canonical surface singularity over an algebraically closed field admits an NCCR._ Proof.: Let \(\operatorname{Spec}R\) be a canonical surface singularity over an algebraically closed field. By [1], there are only finitely many indecomposable maximal Cohen-Macaulay modules over \(R\) up to isomorphism. Let \(M\) be the direct sum of all of them. By [1, Theorem 6], \(\operatorname{End}_{R}(M)\) is an NCCR. **Remarks 5.9**.: If \(0<p\leq 5\), then not all canonical surface singularities are LRQ singularities. 1. If \(p=5\), then the singularities \(E_{8}^{0}\) and \(E_{8}^{1}\) (notation as in [1]) are quotient singularities \[R\,=\,S^{G}\,\subseteq\,S\,=\,k[[x_{1},x_{2}]]\] with \(G\) isomorphic to \(\boldsymbol{\alpha}_{5}\) and \(\mathbf{C}_{5}=\mathbb{Z}/5\mathbb{Z}\), respectively. 
This is in contrast to the group that is usually assigned to \(E_{8}\)-singularities if \(p=0\) or \(p\geq 7\), namely the binary icosahedral group, which is a non-abelian group of order \(120\). We have \(S*G\cong\operatorname{End}_{R}(S)\) by Proposition 2.3, but this ring does not have finite global dimension by Theorem 2.4. Thus, although these singularities admit NCCRs, they are not given by \(\operatorname{End}_{R}(S)\). 2. If \(p=3\), then the singularity \(E_{8}^{0}\) is not a quotient singularity by [1, Theorem 1.12]. 3. If \(X\) is a canonical surface singularity, then \(\operatorname{FB}_{e}(X)\) has only rational singularities and it is dominated by the minimal resolution of \(X\) by [1, Proposition 3.2]. However, it is not necessarily true that \(\operatorname{FB}_{e}(X)\) is a resolution of singularities even if \(e\) is sufficiently large, see [1, Theorem 1.1] for a couple of examples, which include the \(E_{8}^{0}\)-singularity in characteristic \(p=5\). This has to do with the fact that for every \(e\geq 1\), there exists an indecomposable maximal Cohen-Macaulay module of \(R\) that is not a summand of \(R^{1/p^{e}}\). Thus, an NCCR \(\operatorname{End}_{R}(M)\) of such a singularity, which exists by Proposition 5.8, is not of the form \(\operatorname{End}_{R}(R^{1/p^{e}})\).
2309.00940
Content Prompting: Modeling Content Provider Dynamics to Improve User Welfare in Recommender Ecosystems
Users derive value from a recommender system (RS) only to the extent that it is able to surface content (or items) that meet their needs/preferences. While RSs often have a comprehensive view of user preferences across the entire user base, content providers, by contrast, generally have only a local view of the preferences of users that have interacted with their content. This limits a provider's ability to offer new content to best serve the broader population. In this work, we tackle this information asymmetry with content prompting policies. A content prompt is a hint or suggestion to a provider to make available novel content for which the RS predicts unmet user demand. A prompting policy is a sequence of such prompts that is responsive to the dynamics of a provider's beliefs, skills and incentives. We aim to determine a joint prompting policy that induces a set of providers to make content available that optimizes user social welfare in equilibrium, while respecting the incentives of the providers themselves. Our contributions include: (i) an abstract model of the RS ecosystem, including content provider behaviors, that supports such prompting; (ii) the design and theoretical analysis of sequential prompting policies for individual providers; (iii) a mixed integer programming formulation for optimal joint prompting using path planning in content space; and (iv) simple, proof-of-concept experiments illustrating how such policies improve ecosystem health and user welfare.
Siddharth Prasad, Martin Mladenov, Craig Boutilier
2023-09-02T13:35:11Z
http://arxiv.org/abs/2309.00940v1
# Content Prompting: Modeling Content Provider Dynamics ###### Abstract Users derive value from a recommender system (RS) only to the extent that it is able to surface content (or items) that meet their needs/preferences. While RSs often have a comprehensive view of user preferences across the entire user base, content providers, by contrast, generally have only a _local_ view of the preferences of users that have interacted with their content. This limits a provider's ability to offer _new_ content to best serve the broader population. In this work, we tackle this _information asymmetry_ with _content prompting policies_. A _content prompt_ is a hint or suggestion to a provider to make available novel content for which the RS predicts _unmet user demand_. A _prompting policy_ is a sequence of such prompts that is responsive to the dynamics of a provider's beliefs, skills and incentives. We aim to determine a _joint_ prompting policy that induces a set of providers to make content available that optimizes _user social welfare in equilibrium_, while respecting the incentives of the providers themselves. Our contributions include: (i) an abstract model of the RS ecosystem, including content provider behaviors, that supports such prompting; (ii) the design and theoretical analysis of sequential prompting policies for individual providers; (iii) a mixed integer programming formulation for optimal joint prompting using path planning in content space; and (iv) simple, proof-of-concept experiments illustrating how such policies improve ecosystem health and user welfare. \({}^{1}\)Carnegie Mellon University \({}^{2}\)Google Research [email protected], [email protected], [email protected] ## 1 Introduction Recommender systems (RSs) play a critical role in surfacing items (e.g., products, services or content) to their users, especially when the item corpus is vast. Of course, RSs not only create value for their users; they provide a significant service to _item providers_ by helping them identify a market or audience for their offerings. Given the central role RSs play in facilitating the interaction between providers and users, we view the RS as lying at the heart of a complex _cosystem_ comprising all users, all providers, and the RS itself. A healthy recommender ecosystem requires that the item corpus be updated constantly to reflect the ever-changing needs and preferences of its users. Naturally, one might expect the RS to assist providers in developing _new_ content (or products, services, etc.) to meet changing user needs. However, this is rarely done--providers are generally left to their own devices to explore, design and test new offerings. As a result, the ecosystem is often in an _economically inefficient state_, generating less-than-optimal social welfare for its users (and its providers). Generally, while RSs model user preferences for items _in the corpus_, these models often generalize _out-of-corpus_ to some extent, thus revealing user preferences for "hypothetical" items the RS has not yet seen. This can be understood as a comprehensive view of _latent consumer demand_. Unfortunately, providers lack this global view, since they generally interact with only a small subset of users. Moreover, the RS has a holistic view of the corpus, and possibly the abilities of providers to source or create new items; this can be interpreted as a deep perspective on _potential supply_, a degree of insight also not generally shared by providers. 
It is this _information asymmetry_ between the RS and the providers that induces economic friction, preventing providers from making optimal design, sourcing or creation decisions regarding new items they might offer to users through the RS. In this work, we develop a stylized model for understanding this information asymmetry and propose techniques to minimize its impact on (user) social welfare. Specifically, we propose the use of _provider prompting policies_ by the RS. These policies suggest novel items a provider might offer, over time, in a way that accounts for: (i) a provider's incentives and beliefs w.r.t. audience/market and its skills; (ii) the dynamics of such audience and skill beliefs; and (iii) the potential clash of policy prompts to different providers. Under certain conditions, we show that our prompting policies lead to a socially optimal provider equilibrium in which providers are incentivized to make items available that maximize user welfare (i.e., sum of user utilities for recommended items). The remainder of the paper is organized as follows. After a brief discussion of related work (Sec. 2), we provide a detailed problem formulation of the process dynamics that incorporates: user affinities and social welfare; content provider skills, beliefs, incentives and best responses; and RS matching and prompting policies (Sec. 3). In Sec. 4, we detail _single-provider prompting policies_ in which the RS suggests a sequence of content points to one provider, assuming others remain fixed. These policies _incentivize_ the provider to follow a content path that reaches a socially optimal equilibrium (w.r.t. user welfare) in polynomial time. We extend this model in Sec. 5 to allow for the coordinated, _joint prompting_ of all providers, and develop a mixed integer programming (MIP) formulation to solve the induced _multi-agent path planning problem_. In Sec. 6, we evaluate our procedures on random problem instances to show the significant improvement in social welfare that can be created by prompting providers. While somewhat stylized, our model provides important first steps to a more _holistic, incentive-aware design of recommender ecosystems_. ## 2 Related Work RSs typically rely on some form of prediction of a user's interests or intent, based on past usage or ratings data Konstan et al. (1997); Jacobson et al. (2016); Covington et al. (2016). We do not commit to a specific method for generating such models, but our approach can be understood in terms of latent-space user and item embeddings of the type generated by collaborative filtering (CF) methods (e.g., matrix factorization Salakhutdinov and Mnih (2007) or neural CF He et al. (2017); Beutel et al. (2018)) to estimate user affinity to specific items/content. Recently there has been a growing appreciation for the multiagent interactions between users, or user groups, and content providers in an RS Abdollahpouri and Burke (2021). In this work, we focus on the incentives and behaviors of providers and assume user interests are fixed. As such, of some relevance is work that considers "fairness of exposure" to the items offered by different providers, and modifications of the RS policy to increase such fairness Singh and Joachims (2018); Biega et al. (2018); Wu et al. (2022); Heuss et al. (2022). More directly connected is work that explores the behavior of providers in response to RS policies. 
Of special relevance is a recent line of work investigating the incentives that providers have to change the content they offer, and game-theoretic equilibria in such settings. Ben-Porat and Tennenholtz (2018) develop a game-theoretic model of RSs whose providers act strategically--by making available or withholding content--to maximize their user engagement in an RS. Ben-Porat et al. (2019) draw a direct connection between facility location (or Hotelling) games and RSs with strategic providers. Both Hron et al. (2022) and Jagadeesan et al. (2022) study multi-dimensional content models, showing under what conditions Nash equilibria exist, how providers "position" themselves in content space, and the effect this can have on (say) generalization vs. specialization of content production. While this work is similar to ours in its study of adaptive provider behavior, it differs by assuming that providers have full knowledge of supply and demand, something that is far from true in most RSs. By contrast, we focus on the _information asymmetry between providers and the RS_, and the active role the RS can play to reduce it. Our approach allows self-interested providers to make more informed content decisions that induce equilibria that better serve both users and providers. We also adopt a more nuanced model of provider skill and beliefs. We focus on how the RS can intervene to shape the desired equilibrium. Ben-Porat et al. (2019) (see above) examine Nash equilibria in the 1-D case under different RS matching policies while Ben-Porat et al. (2020) extend and generalize this analysis further. Mladenov et al. (2020) investigate (non-strategic) provider behaviors driven by the engagement they obtain, and RS policies that optimize long-term user welfare via matchings that anticipate the equilibria of the induced dynamical system. Neither model addresses information asymmetry or active provider intervention. ## 3 Problem Formulation We begin with a formulation that adopts a somewhat stylized model of a content RS, based on user and item embeddings. We also adopt a simplified model of content provider _skills_ and _beliefs_, their dynamics, their decisions, and the RS knowledge of these elements. While simpler than a real RS ecosystem, this model contains the essential elements required to reason effectively about provider-prompting policies. Our multi-stage model of the RS ecosystem uses a single-stage model based on that of Mladenov et al. (2020), though we develop a very different dynamics model. ### Providers, Users and One-stage Recommendations We assume an RS that matches _users_ to the content made available by _content providers_. Our process is organized into \(T\) stages. We use \(T\) for both finite and infinite-horizons, taking \(\lim_{T\to\infty}\) in the latter case.) At the beginning of each stage \(t\leq T\), each provider determines the content they will make available. During the stage, as users arrive, the RS matches them to providers (and their content) using some fixed policy. **Providers, Content Points and Skill**: We assume a finite set of _providers_\(\mathcal{K}=\{1,\ldots,K\}\). At stage \(t\), each provider determines the content it will make available for recommendation, from a finite set of _content points_\(\mathcal{J}=\{1,\ldots,J\}\subset\mathbb{R}^{d}\), where each point is an embedding of that content.1 Let \(\ell_{k}^{t}\in\mathcal{J}\) be the content generated by provider \(k\) at stage \(t\), which we call \(k\)'s _location_ at \(t\). 
The _location vector_ \(L^{t}=\langle\ell_{1}^{t},\ldots,\ell_{K}^{t}\rangle\) reflects the content decisions of all providers. Footnote 1: This can be generalized to a continuous set of points in some embedding space. However, our user utility and provider reward models ensure a provider only chooses from a finite set of points in equilibrium. This contrasts with approaches that use simpler provider rewards (e.g., those that simply count matched users, as in Jagadeesan, Garg, and Steinhardt (2022), which can induce a continuum of equally good equilibria). Some providers are more adept at producing certain types of content than others (e.g., due to specific talent, interests, facilities, etc.). Let \(s_{k,j}\in[0,1]\) be \(k\)'s (true) _skill_ w.r.t. point \(j\), reflecting this aptitude. Let \(S\) be the (true) _provider skill matrix_, and \(S_{k}\) (the \(k\)th row of \(S\)) the _skill vector_ for provider \(k\). We treat skill as fixed. Skill will exhibit generalization across "similar" content points, but we make no such assumption here. **Users, Affinity, Utility and Reward**: We assume a finite set \(\mathcal{Q}\subset\mathbb{R}^{d}\) of users (or queries). At stage \(t\), a set of users/queries is drawn from a (known or estimated) distribution \(P^{t}(\mathcal{Q})\). When \(|\mathcal{Q}|\) is small, we interpret each \(q\in\mathcal{Q}\) as representative of a _user type_. For ease of exposition, we take \(P^{t}\) to be uniform. Each user \(i\in\mathcal{Q}\) has an inherent _affinity_ for content \(j\in\mathcal{J}\) given by a non-negative, bounded _affinity function_ \(\sigma(i,j)\), e.g., dot product or cosine similarity if we use some _latent space_ \(X\subseteq\mathbb{R}^{d}\) to embed users and content, as in matrix factorization or neural CF. Together with affinity, the skill of the provider dictates user utility: if \(j\) is offered by provider \(k\) and recommended to \(i\), \(i\)'s _utility_ for \(j\) is given by the _utility function_ \(f(\sigma(i,j),s_{k,j})\). This might be as simple as the product \(\sigma(i,j)\cdot s_{k,j}\). We assume a provider's _reward_ for that recommendation is equal to the user's utility. Increasingly, RSs and content providers focus on user satisfaction and long-term utility beyond simple engagement metrics [1, 13]; we use the term "utility," rather than engagement, to emphasize this. We assume the RS aims to maximize total user utility. Equating user utility with provider reward also aligns RS, user, and provider incentives; but if providers optimize for different forms of engagement, that can be incorporated below. **Matching Policies and Value**: When user \(q\) arrives, the RS recommends the content of some provider \(k\). A _(stochastic) matching_ \(\mu:\mathcal{Q}\rightarrow\Delta(\mathcal{K})\) associates each \(q\in\mathcal{Q}\) with a distribution over providers.2 We write \(\mu(q,k)\) to denote the match probability. The _value_ of \(\mu\) given a location vector \(L^{t}\) is its expected user utility: Footnote 2: We adopt this model for simplicity. Generally, users issue different queries, or request recommendations in different contexts, under which the best match differs, a detail handled by all modern RSs. Likewise, providers offer multiple content items, and RSs match to specific provider content. Our formulation applies _mutatis mutandis_ if we match _queries_ to provider _content_ rather than users to providers.
\[V(\mu,L^{t})=\sum_{q\in\mathcal{Q}}\sum_{k\in\mathcal{K}}\mu(q,k)f(\sigma(q, \ell_{k}^{t}),s_{k,\ell_{k}^{t}}).\] The _natural matching_\(\mu_{L^{t}}^{*}\) which maximizes this value would be optimal if \(L^{t}\) were stationary, but given the dynamics below, we avoid the term "optimal" to describe this policy. Implementing \(\mu_{L^{t}}^{*}\) requires complete knowledge of user and content embeddings and provider skills, which we assume the RS has. ### Provider Beliefs and Decisions, and RS Prompts We now describe the process by which the \(T\) stages unfold, including how providers update their beliefs about their skills and audience, and how these beliefs influence their content decisions. **Skill and Audience Beliefs**: Providers base their decisions about which content to make available at stage \(t\) by estimating their own audience's utility. This requires estimating both their own skill and the (affinity-weighted) audience they expect to be generated for them by the RS for any content point \(j\in\mathcal{J}\) they might offer. Each provider \(k\) has a _skill belief vector_\(\widetilde{S}_{k}^{t}\), where entry \(\widetilde{s}_{k,j}^{t}\in[0,1]\) denotes \(k\)'s _estimated skill or skill belief_ w.r.t. point \(j\). Let \(\widetilde{S}^{t}\) be the matrix of such skill vectors (\(k\)'s beliefs are over its own skill, not that of other providers \(i\neq k\)). While true skill \(S\) is fixed, estimates \(\widetilde{S}^{t}\) change over time as, say, the provider gains experience with new content. We assume a skill belief update function of the form \(\widetilde{S}_{k}^{t+1}=bu_{S}(L^{t}_{k},\widetilde{S}_{k}^{t},E^{t}_{k})\), i.e., \(k\)'s skill belief at stage \(t+1\) depends on its prior belief \(\widetilde{S}_{k}^{t}\) and the utility \(E^{t}_{k}\) garnered at its location \(L^{t}_{k}=j\). For simplicity, we sometimes assume a simple deterministic (and non-generalizing) skill update \(bu_{S,D}\): if \(\ell^{t}_{k}=j\), \(k\)'s belief for point \(j\) collapses to its true skill \(s_{k,j}\) for all \(t^{\prime}>t\), with no other \(j^{\prime}\) being updated at time \(t+1\). This is easily generalized, though the optimization below requires (an estimate of) an update model. Each provider \(k\) also has an _audience (affinity) belief_\(\widetilde{a}_{k,j}^{t}\leq Q\) that measures its estimate of the total audience affinity it expects to attain if it offers content at point \(j\). Define vector \(\widetilde{A}_{k}^{t}\) and matrix \(\widetilde{A}^{t}\) in the obvious way. These estimates also vary with time. As with skills, we assume an audience update function \(\widetilde{A}_{k}^{t+1}=bu_{A}(L^{t}_{k},\widetilde{A}_{k}^{t},E^{t}_{k})\), and sometimes assume a simple deterministic model \(bu_{A,D}\): if \(\ell^{t}_{k}=j\), \(k\)'s audience belief for point \(j\) collapses to its realized audience \(A_{k,j}\) at time \(t+1\), so \(\widetilde{A}_{k,j}^{t+1}=A_{k,j}\). Unlike skill beliefs, these can change with each new experience at point \(j\). Together with deterministic skill update, this means a provider can determine the (expected) total user affinity--assuming the _number_ of users is observable while affinity is estimated--to which is was matched from the utility signal under simple, say, linear utility models (e.g., if user utility is the product of provider skill and user affinity). This too is easily generalized. One exception to this form of audience belief update is if the RS provides a _prompt_, defined next. 
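Before turning to prompts, the following minimal sketch makes the one-stage quantities above concrete. It is illustrative only, not the authors' implementation: it assumes the simple product utility \(f(\sigma,s)=\sigma\cdot s\), the deterministic non-generalizing updates \(bu_{S,D}\) and \(bu_{A,D}\), and hypothetical identifier names.

```python
import numpy as np

# Illustrative sketch (not from the paper's code): product utility f(sigma, s) = sigma * s,
# deterministic belief collapse bu_{S,D} / bu_{A,D} at the visited content point.

def matching_value(mu, L, sigma, skill):
    """V(mu, L): expected user utility of stochastic matching mu under locations L.

    mu[q, k]   : probability user q is matched to provider k
    L[k]       : content point offered by provider k
    sigma[q, j]: affinity of user q for content point j
    skill[k, j]: true skill of provider k at content point j
    """
    Q, K = mu.shape
    return sum(mu[q, k] * sigma[q, L[k]] * skill[k, L[k]]
               for q in range(Q) for k in range(K))

def natural_matching(L, sigma, skill):
    """Natural matching: send each user to the provider giving it the highest utility."""
    Q, K = sigma.shape[0], len(L)
    mu = np.zeros((Q, K))
    for q in range(Q):
        best_k = max(range(K), key=lambda k: sigma[q, L[k]] * skill[k, L[k]])
        mu[q, best_k] = 1.0
    return mu

def realized_utility(mu, L, sigma, skill, k):
    """E^t_k: provider k's realized utility under matching mu and locations L."""
    return sum(mu[q, k] * sigma[q, L[k]] * skill[k, L[k]] for q in range(len(sigma)))

def collapse_beliefs(k, j, mu, sigma, skill, skill_belief, audience_belief):
    """Deterministic updates: beliefs at the visited point j collapse to realized values."""
    skill_belief[k, j] = skill[k, j]                    # bu_{S,D}: true skill revealed
    audience_belief[k, j] = sum(mu[q, k] * sigma[q, j]  # bu_{A,D}: realized affinity mass
                                for q in range(len(sigma)))
    return skill_belief, audience_belief
```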
**Prompts**: To encourage providers to generate content that increases user utility, the RS uses _prompts_, that is, suggestions to providers to make content available at certain points. To incentivize such content, the RS can (temporarily) commit some amount of audience affinity to a provider. Formally, an RS _prompt_\(\nu_{k}^{t-1}=(j,E)\) of provider \(k\) at time \(t-1\) consists of: (i) a suggested content point/location \(j=\ell_{k}^{t}\) for \(k\) to produce at stage \(t\); and (ii) a commitment to a (minimum) level of audience utility \(E=E_{k}^{t}\) at stage \(t\) if \(k\) produces \(j\). The effect on \(k\)'s audience belief depends on their level of _trust_ in the RS. Let \(\lambda_{k}^{t}\in[0,1]\) denote \(k\)'s current trust; then \(\widetilde{A}_{j}^{t}\gets bu_{P}(\widetilde{A}_{j}^{t},E,\lambda^{t})\), where \(bu_{P}\) is an update function that determines \(k\)'s _prompted belief_ used to make its next content decision (see below). This prompted belief is tentative, as it will be updated given the _realized audience_ at stage \(t+1\) (via \(bu_{A}\)). We consider two example updates. The first is _incremental-trust belief update_, where: (i) \(bu_{P,I}(\widetilde{A}_{j}^{t},E,\lambda^{t})=(1-\lambda^{t})\cdot\widetilde{A}_ {j}^{t}+\lambda^{t}\cdot E\); and (ii) the trust parameter \(\lambda^{t+1}=\tau(\lambda^{t},E,E_{r}^{t})\) is updated given \(k\)'s realized utility using a _trust update function_\(\tau\). The second is _full-trust belief update_, where \(bu_{P,F}(\widetilde{A}_{j}^{t},E,\lambda^{t})=E\). **Process Dynamics**: RS dynamics evolve as follows. The state \(s^{t}=\langle L^{t},\widetilde{S}^{t},\widetilde{A}^{t}\rangle\) at stage \(t\) consists of: location vector \(L^{t}\), skill belief matrix \(\widetilde{S}^{t}\), and audience belief matrix \(\widetilde{A}^{t}\). The location \(\ell_{k}^{t}\) chosen by provider \(k\) is consistent with their beliefs (see below). The RS generates a matching \(\mu^{t}\) the realization of which induces value for users, and utility for providers. Based on their realized utility, each provider updates its (skill and audience) beliefs--thought of as a _half state_\(s^{t+\frac{1}{2}}=\langle L^{t},\widetilde{S}^{t+\frac{1}{2}},\widetilde{A}^{t+ \frac{1}{2}}\rangle\) (locations do not yet change). The RS may then _prompt_ providers by providing information \(\nu^{t}\) about the audience they will receive if they offer content at a specific point, which can induce a further update in their audience belief. Prompts influence only audience beliefs, not skill beliefs, so \(\widetilde{S}^{t+1}=\widetilde{S}^{t+\frac{1}{2}}\). Given updated beliefs \(\widetilde{S}^{t+1}_{k}\) and \(\widetilde{A}^{t+1}_{k}\), each \(k\) selects their next location \(\ell^{t+1}_{k}\), determining state \(s^{t+1}\). The process, schematically represented below, then repeats \[\langle L^{t},\widetilde{A}^{t},\widetilde{S}^{t}\rangle\xrightarrow{\mu} \langle L^{t},\widetilde{A}^{t+\frac{1}{2}},\widetilde{S}^{t+\frac{1}{2}} \rangle\xrightarrow{\nu}\langle L^{t+1},\widetilde{A}^{t+1},\widetilde{S}^{t +1}\rangle\] **Provider Best Responses**: To model provider choices of location at each stage, we assume providers are _myopic utility maximizers_ w.r.t. their own beliefs, but are not strategic in the "dynamic" sense (that is, they do not reason about how their actions might impact the RS policy or the behavior of other providers). 
Thus, given beliefs \(\widetilde{S}^{t}_{k},\widetilde{A}^{t}_{k}\) at stage \(t\), \(k\) chooses the location \(\ell^{t}_{k}\) corresponding to their _best response_, i.e., the content point \(\ell^{t}_{k}=BR(\widetilde{S}^{t}_{k},\widetilde{A}^{t}_{k})=\arg\max_{j\in \mathcal{J}}\widetilde{s}^{t}_{k,j}\widetilde{a}^{t}_{k,j}\) for which it predicts the greatest utility. We call state \(s^{t}=\langle L^{t},\widetilde{S}^{t},\widetilde{A}^{t}\rangle\)_rationalizable_ if \(\ell^{t}_{k}=BR(\widetilde{S}^{t}_{k},\widetilde{A}^{t}_{k})\) for all providers \(k\). We assume that provider location choice satisfies rationalizability in what follows. **Stable Matchings and States**: A matching is _(myopically) stable_ if--with no RS prompts--it gives no incentive for a location change. Let \(s^{t}=\langle L^{t},\widetilde{S}^{t},\widetilde{A}^{t}\rangle\), let \[E^{t}_{k}=\sum_{i\in\mathcal{Q}}\mu^{t}(i,k)f(\sigma(i,\ell^{t}_{k}),s_{k, \ell^{t}_{k}})\] be \(k\)'s realized utility under \(\mu^{t}\), and let \[\widetilde{S}^{t+1}_{k}=bu_{S}(L^{t}_{k},\widetilde{S}^{t}_{k},E^{t}_{k})\] be \(k\)'s updated skill belief. Without prompts, \(\widetilde{A}^{t+1}_{k}=\widetilde{A}^{t+\frac{1}{2}}_{k}\). We say that \(\mu^{t}\) is _myopically stable_ w.r.t. \(s^{t}\) if, for all \(k\in\mathcal{K}\): \[\widetilde{a}^{t+1}_{k,\ell^{t}_{k}}\widetilde{s}^{t+1}_{k,\ell^{t}_{k}} \geq\widetilde{a}^{t+1}_{k,j}\widetilde{s}^{t+1}_{k,j},\ \forall j\neq\ell^{t}_{k},\] i.e., \(k\)'s current location \(\ell^{t}_{k}\) remains a best response after it experiences the utility induced by \(\mu^{t}\). While myopic stability is a desirable equilibrium property, it does not ensure stability/equilibrium of the dynamical system. A matching/state may be myopically stable simply due to the slowness of the process by which providers update their beliefs. A true equilibrium notion requires that a stable state persists indefinitely: we say state \(s^{t}\) is _(non-myopically) stable_ if there is a sequence of matchings \(\mu^{t},\mu^{t+1},\ldots\) such that for all providers \(k\) and stages \(t^{\prime}>t\), \[\widetilde{a}^{t^{\prime}}_{k,\ell^{t}_{k}}\widetilde{s}^{t^{\prime}}_{k,\ell ^{t}_{k}}\geq\widetilde{a}^{t^{\prime}}_{k,j}\widetilde{s}^{t^{\prime}}_{k,j}, \ \forall j\neq\ell^{t}_{k}.\] A matching \(\mu\) is _(non-myopically) stable_ w.r.t. \(s^{t}\) if the matching sequence \(\mu^{t^{\prime}}=\mu,\forall t^{\prime}\geq t\) renders \(s^{t}\) stable in the sense above. Once a desirable system state is reached, ideally it will be stable w.r.t. a _single_ matching. **Overall Objective**: Our overall objective is the maximization of user social welfare: this may be \(V(\mu,L^{T})\) at stage \(T\) in the finite-horizon case, or long-term expected average reward \(\lim_{T\to\infty}\mathbb{E}[\frac{1}{T}\sum_{t=0}^{T-1}V(\mu^{t},L^{t})]\). If provider content decisions are fixed/cannot be influenced by the RS, this simply requires the application of the natural matching \(\mu^{*}_{LT}\) relative to \(L^{T}\). At the other extreme, if the RS could persuade providers to move to arbitrary locations at will, the problem is akin to a facility location or \(k\)-medians problem (Whelan, Harrell, and Wang 2015) where users are clients whose (inverse) affinities reflect travel costs and provider skill captures service costs. The reality of course is different--providers are independent decision makers whose content decisions accord with their beliefs and incentives. 
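Pulling these pieces together before turning to policy design, the sketch below (ours, purely illustrative) implements the best response \(BR(\widetilde{S}_{k},\widetilde{A}_{k})\) and the prompted audience-belief update \(bu_{P,I}\) under incremental trust. The paper leaves the trust-update function \(\tau\) abstract, so the specific rule used here (trust grows when the promise is met and shrinks otherwise) is an assumption made only for illustration.

```python
def best_response(skill_belief_k, audience_belief_k):
    """BR: content point j maximizing provider k's predicted utility s~_{k,j} * a~_{k,j}."""
    J = len(skill_belief_k)
    return max(range(J), key=lambda j: skill_belief_k[j] * audience_belief_k[j])

def prompted_audience_update(audience_belief_k, j, promised_E, trust):
    """Incremental-trust update bu_{P,I}: blend the current belief at j with the RS promise."""
    updated = list(audience_belief_k)
    updated[j] = (1.0 - trust) * updated[j] + trust * promised_E
    return updated

def update_trust(trust, promised_E, realized_E, step=0.2):
    """Assumed trust update tau (the paper does not fix a specific rule):
    trust rises when the realized value meets the promise, and falls otherwise."""
    if realized_E >= promised_E:
        return min(1.0, trust + step)
    return max(0.0, trust - step)
```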
The aim of a _prompting policy_ is to break the information asymmetry described above to allow providers to better calibrate their audience and skill beliefs so that their decisions lead to a (close to) welfare-optimizing matching. Moreover, since providers make their own content decisions, we want the final matching to be in _equilibrium_, that is, be stable given providers' incentives. The RS also uses its _matching policy_ by (perhaps temporarily) creating specific audience/utility levels that incentivize providers to make suitable moves. Taken together, an RS policy has two parts: a _matching policy_ that associates a matching \(\mu^{t}\) with each state \(s^{t}=\langle L^{t},\widetilde{S}^{t},\widetilde{A}^{t}\rangle\); and a _prompting policy_ that, for each half state \(s^{t+\frac{1}{2}}=\langle L^{t},\widetilde{S}^{t+\frac{1}{2}},\widetilde{A}^{t+ \frac{1}{2}}\rangle\), generates a prompt \(\nu^{t}\). ## 4 Single-Provider Prompting Policies A cornerstone of our approach and analysis is the _single-provider prompting policy_. Here, we assume that the content decisions of all providers except one, \(k\in\mathcal{K}\), are fixed, and construct a prompting policy that ensures \(k\) moves to the location \(j\in\mathcal{J}\) that maximizes user welfare given the fixed locations of the other providers. Specifically, assume a fixed location vector \(L_{-k}\) that specifies \(\ell_{k^{\prime}}\) for all \(k^{\prime}\neq k\), where \(\ell^{t}_{k^{\prime}}=\ell_{k^{\prime}}\) for all \(t\leq T\); and an initial state \(s^{0}=\langle L^{0},\widetilde{S}^{0},\widetilde{A}^{0}\rangle\). Our goal is to design a joint matching/prompting policy that induces \(k\) to move to a target \(j^{*}_{k}\in\mathcal{J}\), inducing location vector \(L^{\texttt{prompt}}=L_{-k}\circ j^{*}_{k}\), that is optimal w.r.t. long-term average reward: \[\lim_{T\to\infty}\mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}V(\mu^{*}_{L^{ \texttt{prompt}}},L^{\texttt{prompt}})\right].\] Furthermore, the policy should induce an _equilibrium_, i.e., a stable state where \(k\) offers \(j^{*}\) without additional prompts. Proofs of all results in this section along with additional details of our formulation are provided in Appendix A. ### Equilibrium with No Prompting We illustrate the value of prompting policies with a simple example to show the potential loss in total user utility/welfare that accrues without prompting. Assume initial state \(s^{0}\), where \(k\)'s beliefs are \(\widetilde{S}^{0}_{k}\) and \(\widetilde{A}^{0}_{k}\), a matching policy \(\mu_{L}\) that is fixed for any location vector \(L\) (e.g., the natural matching), and let \(E_{k,j}\) be \(k\)'s utility under \(\mu_{L}\) if \(\ell_{k}=j\). Let \(O\subseteq\mathcal{J}\) be \(k\)'s _undominated overestimates_--these are content points \(j\) for which \(k\) initially overestimates expected utility, but for which no other point's _true_ and _estimated_ values exceed \(j\)'s true utility: \[O=\{j:\widetilde{s}_{k,j}^{0}\widetilde{a}_{k,j}^{0}>E_{k,j}, \nexists j^{\prime}\\ \text{s.t.}(\widetilde{s}_{k,j^{\prime}}^{0}\widetilde{a}_{k,j^ {\prime}}^{0}>E_{k,j}\text{ and }E_{k,j^{\prime}}>E_{k,j})\}.\] Under reasonable (e.g., monotonically converging) belief updates, and best response behavior, \(k\) will eventually try all undominated, overestimated content points. 
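The set \(O\) is straightforward to compute when provider \(k\)'s initial estimates and the true utilities \(E_{k,j}\) under the fixed matching are available; a minimal sketch (with illustrative data) follows.

```python
def undominated_overestimates(estimate, true_util):
    """O: points j that k overestimates, such that no other point j' has BOTH an
    estimated utility and a true utility exceeding j's true utility."""
    points = range(len(estimate))
    O = set()
    for j in points:
        if estimate[j] <= true_util[j]:
            continue  # j is not overestimated
        dominated = any(
            estimate[jp] > true_util[j] and true_util[jp] > true_util[j]
            for jp in points if jp != j
        )
        if not dominated:
            O.add(j)
    return O

# Two-point example from the text (0-indexed): estimate(point 2) > E_1 > E_2 > estimate(point 1)
estimate = [0.5, 3.0]   # k's initial s*a estimates for points 1 and 2
true_util = [2.0, 1.0]  # E_{k,1}, E_{k,2}
print(undominated_overestimates(estimate, true_util))  # {1}: only point 2 is ever tried
```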
For instance, under immediately collapsing skill beliefs, the system reaches equilibrium in exactly \(|O|\) steps: for any undominated \(j\), any other \(j^{\prime}\) whose estimated utility is greater than \(j\)'s estimate is tried before \(j\), but \(k\)'s beliefs immediately collapse to give the true value of \(j^{\prime}\), which by assumption is less than that of \(j\). Thus each undominated \(j\) will be tried once. Critically, _no dominated point \(j^{\prime\prime}\)_ will ever be tried, since at least one undominated point has a _true_ utility greater than \(k\)'s estimate for \(j^{\prime\prime}\). This lack of exploration can lead to an _arbitrarily suboptimal equilibrium_. For instance, suppose there are only two points such that \(\widetilde{s}_{k,2}^{0}\widetilde{a}_{k,2}^{0}>E_{k,1}>E_{k,2}>\widetilde{s}_{k,1}^{0}\widetilde{a}_{k,1}^{0}\). Then \(k\) only offers the suboptimal point \(2\), whose true value is greater than \(k\)'s estimate for point \(1\), despite the fact that \(1\)'s true value is greater than \(2\)'s. An RS prompt can reveal to \(k\) the true utility at point \(1\), thus incentivizing \(k\) to offer \(1\) (and realize its true value). This issue is further exacerbated by the fact that \(k\) will never visit _any dominated point_, as detailed above. The provider thus faces an exploration problem: indeed, natural schemes like "optimism in the face of uncertainty" would ensure \(k\) tries points such as point 1. However, the cost of exploration (e.g., creating new content), the lack of direct control (e.g., predicting audience), and inherent risk aversion may prevent adequate _self-exploration_ by providers. Prompting reduces this risk by providing additional certainty, through promised utility, and incentivizing behavior that allows the provider to update their audience/skill beliefs where they might otherwise not. That said, we note that in some cases, provision of certain points may not be incentivizable by the RS, e.g., when \(k\) underestimates its skill \(s_{k,j}\) so drastically that no promised audience utility can incentivize \(k\) to move to \(j\), even with full trust. ### Prompting under Non-generalizing Belief Updates Let \(\mu^{t}\) be a stable matching w.r.t. the current state \(s^{t}\), with \(\ell_{k}^{t}=j\) and \(E_{j}^{k}\) being \(k\)'s expected utility/engagement. The willingness of \(k\) to move to the RS target \(j^{*}\) depends on its trust in the RS prompts. We refer to Appendix A for details (our theorems require specific assumptions on how provider trust is updated), but at a high level, our prompting policies comprise two phases: (1) the RS takes steps to increase \(k\)'s level of trust in the RS by promising \(k\) its expected audience (given \(\mu^{t}\)) at its _current_ point \(\ell_{k}^{t}\)--and continuing to match users using \(\mu^{t}\)--until \(k\)'s trust reaches a sufficient level; (2) the RS then prompts \(k\) with \((j^{*},\phi)\), where \(\phi\) is the minimum level of audience affinity given \(k\)'s skill beliefs needed to induce the move; and matches using the optimal target policy \(\mu_{L^{\text{rec}}}^{t}\). More precisely, our policy, parameterized by a trust threshold \(\Lambda\) and a belief convergence rate \(T(\varepsilon,\delta)\), is: 1. While \(\lambda_{k}<\Lambda\), repeat match-prompt pair \[(\mu,\nu)=\big{(}\mu^{t},(\ell_{k},\sum_{q\in\mathcal{Q}}\mu^{t}(q,k)\sigma(q,\ell_{k}^{t}))\big{)}.\] 2. Issue match-prompt pair \[(\mu^{t},(j^{*},\phi(j^{*};\Lambda,\widetilde{S}_{k},\widetilde{A}_{k}))).\] (a) 
If Step 2 has been done \(T(\varepsilon,\delta/2)\) times, terminate. (b) Else if \(\lambda_{k}\geq\Lambda\), go to Step 2. (c) Else if \(\lambda_{k}<\Lambda\), go to Step 1. If \(\phi(j^{*};\Lambda,\widetilde{S}_{k},\widetilde{A}_{k})\leq\sum_{q\in\mathcal{Q}}\mu^{t}(q,k)\cdot\sigma(q,j^{*})\) then \(\mu^{t}\) itself (stochastically) delivers the promised audience. If not, the provider trust might take a hit, but is rebuilt in step (c). First, assume the special case where \(\mu^{t}\) is deterministic and \(k\)'s beliefs collapse immediately. Then, we may run the above policy with \(\Lambda=1\) and ignore steps (a)-(c). Provider \(k\)'s trust updates are governed by a learning rate \(\eta\). **Theorem 1**.: _Let \(s^{t}=\langle L^{t},\widetilde{A}^{t},\widetilde{S}^{t}\rangle\) be rationalizable and \(\mu^{t}\) be a non-myopically stable matching w.r.t. \(s^{t}\). After \(\zeta=1/\eta\) iterations of the above policy, the resulting state \(s^{t+\zeta}\) remains rationalizable and \(\mu^{t}\) non-myopically stable w.r.t. \(s^{t+\zeta}\). Furthermore, it maximizes the minimum trust, \(\min_{t}\lambda^{t}\), among all policies that reach \(s^{t+\zeta}\)._ More generally, we prove the complexity of the above policy under the assumption of a sample complexity measure \(T(\varepsilon,\delta)\) that determines the number of times a content item must be visited for \(k\)'s predicted utility to be nearly perfect. **Theorem 2**.: _Let \(|\mathcal{Q}|\geq\Omega(1/(1-\Lambda)^{2})\), let \(s^{t}=\langle L^{t},\widetilde{A}^{t},\widetilde{S}^{t}\rangle\) be rationalizable, and \(\mu^{t}\) be non-myopically stable w.r.t. \(s^{t}\). If we run the above policy with \(\varepsilon<\frac{1}{2}\min_{j,j^{\prime}}|\mathbb{E}[E_{k,j}]-\mathbb{E}[E_{k,j^{\prime}}]|\), the following hold w.h.p.: (1) the resulting state \(s^{t+\zeta}\) has \(\ell_{k}=j^{*}\), and remains rationalizable, while \(\mu\) remains non-myopically stable w.r.t. \(s^{t+\zeta}\); and (2) the policy terminates in \(\zeta=O(T(\varepsilon,\delta)^{3}\Lambda^{2}/(\eta^{2}\delta)^{2})\) rounds._ As \(\Lambda\) varies, our policy trades off (1) quickly reaching equilibrium and (2) keeping trust high. See Appendix A for further details and discussion. ### Prompting when Beliefs Generalize across Content The policy above will succeed only if \(k\)'s skill belief at \(j^{*}\) is such that the promised audience will induce it to move. The fact that provider and cumulative user utility are aligned ensures that \(\mu_{L^{\text{rec}}}^{s}\) will, in fact, satisfy \(k\) if it can be convinced to move. If \(k\)'s skill beliefs for any \(j^{\prime}\) are only updated when it generates content at \(j^{\prime}\), then this limits the set of reachable points, and if \(j^{*}\) is reachable, it can be reached with a single prompt. By contrast, suppose \(k\)'s skill beliefs about \(j^{\prime}\) can be influenced by its experience at a _different \(j^{\prime\prime}\)_; e.g., if producing successful snorkelling content increases \(k\)'s belief it can produce scuba content. Then the RS faces a _path planning problem_: determining a sequence of points \(j=j_{0},j_{1},\cdots,j_{n-1},j_{n}=j^{*}\) such that each \(j_{i}\) is _incentivizable_ (i.e., can be prompted using the two-phase policy above) given \(k\)'s skill beliefs after it has generated content at all \(j_{\leq i}\). 
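To connect the two-phase policy of the previous subsection with the path-planning view introduced here, the sketch below simulates prompting a single provider toward one target point. The closed form used for the minimum promise \(\phi\) and the trust update are simplified stand-ins for the quantities analyzed in Theorems 1 and 2 (full details are in Appendix A), so this is an illustration of the mechanism rather than the exact procedure.

```python
import numpy as np

def phi_min_promise(target, skill_belief, audience_belief, trust):
    """Smallest audience promise making `target` the best response under the
    incremental-trust prompted belief (illustrative closed form, not the exact phi)."""
    current_best = float(np.max(skill_belief * audience_belief))
    needed = current_best / skill_belief[target]   # audience belief needed at the target
    if trust <= 0.0:
        return float("inf")
    return max(0.0, (needed - (1.0 - trust) * audience_belief[target]) / trust)

def prompt_provider_to(target, skill_belief, audience_belief, expected_audience,
                       trust=0.2, Lambda=0.8, eta=0.2, max_rounds=100):
    """Two-phase sketch: (1) build trust by promising the audience the current matching
    already delivers; (2) promise phi at the target until the provider moves there."""
    loc = int(np.argmax(skill_belief * audience_belief))
    for _ in range(max_rounds):
        if trust < Lambda:                                   # Step 1: prompt the current point
            point, promise = loc, expected_audience[loc]
        else:                                                # Step 2: prompt the target
            point = target
            promise = phi_min_promise(target, skill_belief, audience_belief, trust) + 1e-6
        # prompted (incremental-trust) audience-belief update at the prompted point
        audience_belief[point] = (1.0 - trust) * audience_belief[point] + trust * promise
        loc = int(np.argmax(skill_belief * audience_belief))
        realized = expected_audience[loc]                    # audience the matching actually delivers
        if loc == point:                                     # promise evaluated only where it was made
            trust = min(1.0, trust + eta) if realized >= promise else max(0.0, trust - eta)
        if loc == target and trust >= Lambda:
            return loc, trust
    return loc, trust

# Example: the provider undervalues the target point 1 but has high true audience there
loc, trust = prompt_provider_to(target=1,
                                skill_belief=np.array([0.6, 0.9]),
                                audience_belief=np.array([3.0, 1.0]),
                                expected_audience=np.array([2.0, 4.0]))
```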
We assume only the following about generalizing belief updates: (1) beliefs at unvisited points never become less accurate, and (2) belief updates for unvisited points are independent of the order in which other points are visited. Full details are provided in Appendix A. A trivial policy that eventually reaches the optimal incentivizable target \(j^{*}\) prompts \(k\) to visit any reachable point \(j^{\prime}\) the moment \(j^{\prime}\) is incentivizable. Of course, this may induce visits to unnecessary points. In Appendix A, we formulate a MIP to find the _shortest promptable content path_. While this problem is NP-hard, fortunately, there is a simple, efficient greedy algorithm: at each step, if \(j^{*}\) is incentivizable, prompt with \(j^{*}\) and terminate; otherwise prompt with the (currently) incentivizable point \(j\) that maximizes the increase in accuracy in \(k\)'s predicted reward at \(j^{*}\) if \(j\) is visited. **Theorem 3**.: _[Informal] The greedy algorithm runs in polynomial time and returns a content path at most a constant factor longer than the shortest content path._ See Appendix A for a formal statement and proof. ## 5 Joint Prompting Policies The problem of simultaneously prompting a set of providers, that is, ensuring that the _joint_ location (content) of all providers serves users effectively, presents a non-trivial escalation in complexity. However, the main insight above--viewing provider evolution as _paths through content space_--suggests treating audience availability for _multiple providers_ as the key requirement and challenge in designing _joint prompting policies_. This introduces an additional planning component to the problem, requiring a joint prompting policy to _schedule prompts_ in a way that ensures enough available audience to fulfil the promises needed to incentivize all providers to move to their optimal locations. This unfortunately rules out the greedy application of the single provider policies above, which can be suboptimal. Consider a simple counterexample with: two providers \(a,b\); two points \(x,y\); current locations \(\ell_{a}^{t}=x,\ell_{b}^{t}=y\); skills such that \(s_{a,y}>s_{a,x}\), \(s_{a,y}>s_{b,y}\), \(s_{b,x}>s_{b,y}\) and \(s_{b,x}>s_{a,x}\); and user affinities such that the optimal matching \(\mu^{*}\) has equal total affinity at points \(a\) and \(b\) whenever providers at those points have skills at least (resp.) \(s_{a,x},s_{b,y}\). In this situation, a Pareto improving move is to swap the locations of \(a\) and \(b\), since each has greater skill at the opposite location than at their current location and than their peer. However, any policy that moves one provider at a time must have both providers at the same location for at least one period, imposing significant cost on user utility. ### Mixed Integer Programming Formulation The complications above mean that joint prompting policies must plan provider paths under "sufficient audience" constraints. We develop a MIP formulation to solve this problem over a finite horizon \(T\).3 Footnote 3: We briefly discuss alternative approaches below. In our formulation, provider behavior is represented by decision variables \(\mathit{Act}^{t}_{k,j}\in\{0,1\}\) indicating whether provider \(k\) offers content \(j\) at stage \(t\), with the requirement that each \(k\) be active at exactly one point \(j\) at each \(t<T\). 
Decisions by the RS are captured by three sets of optimization variables: the matching policy \(\pi^{t}_{q,k}\in[0,1]\); the prompting policy \(\nu^{t}_{k,j}\in\{0,1\}\) indicating whether the RS prompts \(k\) to offer content \(j\) at stage \(t\); and the commitments, \(C^{t}_{k,j}\in\mathbb{R}^{+}\), or audience promised for any such prompt. With these variables, we can express relevant key quantities: * Provider utility: \[E^{t}_{k}=\sum_{q\in\mathcal{Q}}\pi^{t}_{q,k}\sum_{j\in\mathcal{J}}\mathit{ Act}^{t}_{k,j}s^{*}_{k,j}\sigma(q,j);\] * Provider audience: \[\mathit{Aud}^{t}_{k}=\sum_{q\in\mathcal{Q}}\pi^{t}_{q,k}\sum_{j\in\mathcal{J}} \mathit{Act}^{t}_{k,j}\sigma(q,j);\] * Induced skill beliefs: We assume skill belief collapses to true skill the moment \(j\) is visited, hence: \[\widehat{s}^{t}_{k,j}=\sum_{j\in\mathcal{J}}s^{0}_{k,j}(1-\mathit{Vis}^{t-1}_ {k,j})+s^{*}_{k,j}\mathit{Vis}^{t-1}_{k,j}\] where \(\mathit{Vis}^{t}_{k,j}\) indicates whether \(k\) has visited \(j\) at or before stage \(t\). We outline only key MIP constraints that govern the RS dynamics here; the full MIP is detailed in Appendix B. Indices are universally quantified. We use \(\lessapprox\) and \(\pm\) to denote two constraints (in the obvious way). \(\widehat{M}\) in the "big-\(M\)" constraints is an upper bound on audience utility. \[C^{t}_{k,j}\leq\nu^{t}_{k,j}M\] (Cons. 1(a)) \[\mathit{Aud}^{t}_{k}\geq C^{t-1}_{k,j}Act^{t}_{k,j}\] (Cons. 1(b)) \[\widehat{s}^{t}_{k,j}\widehat{a}^{t}_{k,j}\geq\widehat{s}^{t}_{k,j }\widetilde{a}^{t}_{k,j^{\prime}}-(1-\mathit{Act}^{t}_{k,j})M\] (Cons. 2) \[\widetilde{a}^{t}_{k,j}\leq Aud^{t-1}_{k}\pm(1-\mathit{Act}^{t-1}_ {k,j})M\pm\nu^{t-1}_{k,j}M\] (Cons. 3) \[\widehat{a}^{t}_{k,j}\leq\widehat{a}^{t-1}_{k,j}\pm\mathit{Act}^{t-1 }_{k,j}M\pm\nu^{t-1}_{k,j}M\] (Cons. 4) \[\widehat{a}^{t}_{k,j}\leq C^{t-1}_{k,j}\pm(1-\nu^{t-1}_{k,j})M\] (Cons. 5) Cons. 1(a) and Cons. 1(b) are prompting constraints: the first limits the promised audience if a prompt is given, and ensures the promise is zero if no prompt is given; the second ensures that if \(k\) "accepted" a prompt (i.e., moved to the prompted location), the matching delivers the promised audience. Cons. 2 ensures providers only offer content that are best responses. The final constraints control audience belief updates: Cons. 3 when \(k\) is at \(j\) but was not prompted (belief updated to realized audience); Cons. 4 points that are inactive and unprompted (beliefs persist); and Cons. 5 handles prompts (belief updated to promised audience). Our objective is to maximize time-averaged user utility \(\frac{1}{T}\sum_{t=0}^{T-1}\sum_{k\in\mathcal{K}}E^{t}_{k}\). The only quadratic terms in the MIP involve the product of a binary and a (non-negative) real-valued variable, which can be linearized in standard fashion.4 ## 6 Empirical Evaluation We present some simple proof-of-concept experiments to demonstrate the value that can be generated by prompting policies. We generate random problem instances, using relatively small numbers of users and providers to illustrate key points, and compare the user welfare (or utility) generated over time of specific policies. While large-scale experiments and empirically determined models of behavior and incentives will be needed for practical deployment, these results suggest that prompting has an important role to play in improving RS ecosystem health. 
**Problem Generation**: Due to the infeasibility of testing our approach in a fully data-driven way, we generate synthetic (fully simulated) and semi-synthetic (preferences extracted from the MovieLens dataset of Harper and Konstan (2016)) scenarios. For each scenario, we generate multiple instances of problem size \((J,K,Q)\) with \(J\) content points \(\mathcal{J}\), \(K\) providers \(\mathcal{K}\), and \(Q\) users \(\mathcal{Q}\). Sizes vary across families of instances, but are kept small. In synthetic scenarios, instances are generated from a cluster model in 2-D space that emulates (implicit) communities of users with similar preferences. User content affinities decay linearly with Euclidean distance from a content point. The semi-synthetic model adopts user and movie embeddings obtained by factorizing the MovieLens dataset. Movie embeddings are clustered with \(k\)-means to generate the possible content types. For each type \(j\), the top \(U_{j}\) users w.r.t. affinity are selected, where \(U_{j}\) is determined based on the number of users with non-negative affinity to the cluster. (Affinities are inner products.) Full data generation details are in Appendix C. See Fig. 1 for an illustration. **Evaluation of the MIP**: We evaluate the MIP formulation on random small problem instances, using various metrics, which we describe here: * Let \(E^{T}=\sum_{k\in\mathcal{K}}E^{T}_{k}\) be the utility of the _optimal prompting policy_, as determined by the MIP, at the final stage \(T\), and \(E=\frac{1}{T}\sum_{t=0}^{T-1}\sum_{k\in\mathcal{K}}E^{t}_{k}\) be its _time-averaged utility_--the latter is the MIP objective. * Let \(\overline{E}^{T}\) and \(\overline{E}\) denote the same quantities for the _optimal policy that does not prompt providers_, but that does adapt its matching as providers update their locations (content). * Let \(E_{0}\) be the utility of the _optimal stationary policy_ (no prompting, a fixed matching). * Define \(P^{T}=(E^{T}-\overline{E}^{T})/\overline{E}^{T}\) to be the _final prompt gap_, i.e., the improvement obtained (at the final stage) by prompting, and \(\widehat{P}=(E-\overline{E})/\overline{E}\) the _time-averaged prompt gap_, that is, the time-averaged improvement in utility obtained by prompting. * Let \(D=(\overline{E}-E_{0})/E_{0}\) be the improvement obtained by the adaptive, non-prompting policy over the stationary policy. * Finally, let \[U_{q}=\frac{1}{T}\sum_{t=0}^{T-1}\sum_{k\in\mathcal{K}}\pi^{t}_{q,k}\sum_{j\in\mathcal{J}}\mathit{Act}^{t}_{k,j}s^{*}_{k,j}\sigma(q,j)\] denote the _time-averaged user utility_ of user \(q\). Figure 1: 2-D problem instance illustration. Red crosses are content points (locations). Blue markers are user/query locations. Orange and green circles represent content providers: green reflects a provider's true skills, while the connected orange circles capture that provider's skill beliefs. Figure 2: Total per-period utility \(E^{t}\) (20 synthetic instances) for different policies (\(J=20,K=5,Q=50,T=10\)). Figure 3: User utility \(U_{q}\) counts (10 MovieLens instances) for different policies (\(J=20,K=10,Q=100,T=5\)). We solve the MIP with Gurobi (the fastest commercial MIP solver).5 Fig. 2 plots per-period utility \(E^{t}\) as a function of \(t\) for each of the described policies, averaged over 20 random synthetic instances with \(J=20\) content points, \(K=5\) providers, \(Q=50\) users over a time horizon of \(T=10\), and Table 1 contains key quantities for these instances in addition to a smaller set of instances with \(J=10\). 
The larger \((J=20)\) instances exhibit larger final (9.3%) and time-averaged (10.7%) prompt gaps (roughly \(2\times\)) than the smaller \((J=10)\) instances. Footnote 5: [https://www.gurobi.com/](https://www.gurobi.com/). Table 2 displays the same quantities averaged over 20 random MovieLens instances with \(J\in\{10,20\}\), \(K=10\), \(Q=100\), and \(T=5\). Here, increasing the number of content locations seems to have a negligible impact on \(P^{T}\) and \(\hat{P}\), but the improvement over the stationary policy is significant (61.8% for the \(J=20\) instances), showing the importance of dynamically matching and prompting based on the RS dynamics. The standard deviations of all quantities are fairly large, illustrating that for some instances the utility improvement due to prompting is not too large, but for other instances it is dramatic. Finally, as another means of visualization, Fig. 3 displays the user utility histogram for the larger (\(J=20\)) MovieLens instances, showing a clear improvement due to prompting in both aggregate user utility and its distribution across users. The full set of experimental results is provided in Appendix C. We note that our MIP does not scale well beyond the instance sizes above. For example, on a MovieLens instance with \(J=50\), \(K=10\), \(Q=100\), \(T=5\), Gurobi was unable to find a feasible solution after 12 hours. Experimentally, the MIP scales poorly in the number of providers \(K\) and the time horizon \(T\). Making our methods work at the scale of modern content ecosystems is a critical direction for future research. We discuss one such extension next. **Column Generation**: In Appendix C we outline the initial ideas behind a _column generation (CG)_ approach that offers both a more general and more scalable solution to this complex _multi-agent (i.e., multi-provider) planning problem_. The problem of "moving" providers through location space has strong analogies with multi-robot path planning [1], where the goal is to determine the best deployment of a set of robots to specific tasks (e.g., in-warehouse order fulfillment), together with collision-free paths that allow them to complete their assigned tasks. While there are significant differences--providers are self-determining agents with their own goals and incentives--our CG formulation draws inspiration from the model of [1]. ## 7 Concluding Remarks We have developed a model of content provider incentives and behaviors that allows an RS to prompt providers to offer novel content items that help improve user social welfare and overall ecosystem health. Our prompting policies incentivize providers to "explore" w.r.t. their own skills and audience beliefs, and _de-risk_ this exploration, nudging the RS to an equilibrium that improves user welfare and the utility of individual providers. Our prompting policies effectively break the fundamental information asymmetry that exists in many RS ecosystems. Apart from theoretical guarantees, our empirical results demonstrate that such prompting can significantly improve outcomes. A number of important theoretical and practical extensions of this model are needed. Scalability is of critical importance--CG appears quite promising, but online adaptive policies using multi-agent RL should be investigated as well. 
Various extensions and generalizations of our model should prove valuable, including relaxing some of its more restrictive assumptions, such as: the stationarity of user/query affinities and provider skills; the non-strategic decision making of providers (who best respond only myopically); and the extensive knowledge of the RS (e.g., of provider skills, user affinities). Of special interest is the case when the RS is uncertain of provider abilities (skill) and incentives (reward), which moves us into the realm of dynamic mechanism design [1]. A formal study of supply/demand information "leakage" embedded in prompts should generate useful insights as well. Finally, the _practical_ prompting of providers requires translating the abstract notion of content points into _actionable_ prompts for providers (e.g., using generative modeling techniques to describe/suggest novel items), a topic of considerable importance.
2307.06645
Multivariate Time Series characterization and forecasting of VoIP traffic in real mobile networks
Predicting the behavior of real-time traffic (e.g., VoIP) in mobility scenarios could help the operators to better plan their network infrastructures and to optimize the allocation of resources. Accordingly, in this work the authors propose a forecasting analysis of crucial QoS/QoE descriptors (some of which neglected in the technical literature) of VoIP traffic in a real mobile environment. The problem is formulated in terms of a multivariate time series analysis. Such a formalization allows to discover and model the temporal relationships among various descriptors and to forecast their behaviors for future periods. Techniques such as Vector Autoregressive models and machine learning (deep-based and tree-based) approaches are employed and compared in terms of performance and time complexity, by reframing the multivariate time series problem into a supervised learning one. Moreover, a series of auxiliary analyses (stationarity, orthogonal impulse responses, etc.) are performed to discover the analytical structure of the time series and to provide deep insights about their relationships. The whole theoretical analysis has an experimental counterpart since a set of trials across a real-world LTE-Advanced environment has been performed to collect, post-process and analyze about 600,000 voice packets, organized per flow and differentiated per codec.
Mario Di Mauro, Giovanni Galatro, Fabio Postiglione, Wei Song, Antonio Liotta
2023-07-13T09:21:39Z
http://arxiv.org/abs/2307.06645v1
# Multivariate Time Series characterization and forecasting of VoIP traffic in real mobile networks ###### Abstract Predicting the behavior of real-time traffic (e.g., VoIP) in mobility scenarios could help the operators to better plan their network infrastructures and to optimize the allocation of resources. Accordingly, in this work the authors propose a forecasting analysis of crucial QoS/QoE descriptors (some of which neglected in the technical literature) of VoIP traffic in a real mobile environment. The problem is formulated in terms of a multivariate time series analysis. Such a formalization allows to discover and model the temporal relationships among various descriptors and to forecast their behaviors for future periods. Techniques such as Vector Autoregressive models and machine learning (deep-based and tree-based) approaches are employed and compared in terms of performance and time complexity, by reframing the multivariate time series problem into a supervised learning one. Moreover, a series of auxiliary analyses (stationarity, orthogonal impulse responses, etc.) are performed to discover the analytical structure of the time series and to provide deep insights about their relationships. The whole theoretical analysis has an experimental counterpart since a set of trials across a real-world LTE-Advanced environment has been performed to collect, post-process and analyze about \(600,000\) voice packets, organized per flow and differentiated per codec. VoIP traffic characterization, multivariate time series forecasting, machine learning for time series forecasting, mobility scenarios. ## I Introduction and Motivation Performance prediction of real-time traffic (such as VoIP) is a crucial topic in the network management field. Predicting and optimizing Quality of Service (QoS) and Quality of Experience (QoE) metrics allows to better dimensioning network infrastructures, improving the battery life of devices, and optimizing the resource allocation strategies [1, 2]. This is even more critical in cellular environments, where the high unpredictability of variables such as the interference, but also the concurrency of real-time sessions, and the time-varying load of mobile network nodes pose intriguing challenges. We tackle these issues through a multivariate predictive time series analysis of VoIP traffic across an urban LTE-A environment. At the moment, LTE represents the dominant broadband technology, accounting for 57% of users worldwide [3]. Older technologies such as 2G and 3G continue to be intensively used for their robustness, with about 38% of subscriptions; whereas 5G accounts for about 5% of subscriptions due to its market immaturity. Interestingly, one of the most adopted deployment today is the Non-Standalone (NSA) 5G, where a substantial part of LTE core network is reused to implement voice-based services such as VoLTE [4]. The access to LTE technology has stimulated a series of studies devoted to analyzing the performance of QoS/QoE metrics involving, for example: various deployment strategies [5], resource allocation [6], probabilistic models [7], and coexistence with other technologies [8]. On the other hand, the main contribution offered in this work pertains to a multivariate time series characterization of the dynamic (time-varying) behavior of crucial VoIP metrics which mutually influence each other. 
Such a cross-dependency has a great impact on forecasting, since the future values of a specific metric (e.g., the bandwidth consumption) will depend not only on the temporal evolution of the same metric, but also on the evolution of other metrics (e.g., round-trip time, jitter) for a given VoIP flow. Accordingly, we formalize analytically such a cross-dependency by means of a vector autoregressive (VAR) model, along with a set of analyses (e.g., stationarity, causality) useful to capture some insights characterizing the mutual influence among the metrics at stake. Such a formalization is then compared to classic (e.g. tree-based) and novel (e.g., deep-based) machine learning approaches. As a first step, we carry out an experimental campaign to collect real-world mobile VoIP traffic deriving variables such as bandwidth consumption, mean opinion score (MOS) and signal-to-noise ratio (SNR), among others. In a second step, we perform a predictive analysis aimed at discovering temporal dependencies among the variables and forecast their behavior in future time periods. At this aim, we consider two approaches: _i_) a statistical approach relying on VAR models, useful to analytically describe the dependencies among interrelated time series; and _ii_) a machine learning approach, employed by turning a time series structure into a supervised learning problem. It is worth noting that a time series analysis would be of little use when dealing with data collected in controlled environments (e.g. testbeds). In such a case, in fact, the forecast would be biased since it is possible to manually tune quantities such as interference or noise figures. Conversely, in real settings we deal with uncontrollable variables, which impact the overall performance, such as: time-varying load of radio and network nodes; physical obstacles; weather conditions; and hand over procedures. The paper is organized as follows. Section II proposes an overview of similar works, highlighting how our work contributes to the state-of-the-art. In Sect. III, we offer a description of the experimental environment along with details about the time series construction. In Sect. IV, we formulate the problem in terms of multivariate time series characterization, and we introduce statistical and machine learning (ML) based models. In Sect. V, we present the experimental comparison among the different forecasting techniques by taking into account both performance and times. Section VI concludes the work along with some ideas for future research. ## II Related Work and offered contribution Due to the rapid evolution of telecommunication infrastructures, themes involving the network traffic characterization are becoming decisive from a network management point of view. QoS and QoE metrics, for instance, are typically used as benchmarks to evaluate the quality of a network service; thus, predicting their behavior is crucial to the aim of network optimization and protocol design. Accordingly, in this section we propose an _excursus_ of relevant works in the field of traffic characterization/forecasting, where we highlight a set of novelties emerging from our work along different directions. A first aspect concerns the network traffic forecasting through statistical models, where a common trend is to exploit autoregressive moving average (ARMA) [9, 10] or autoregressive integrated moving average (ARIMA) models [11, 12, 13]. 
Although based on a robust methodology, ARMA and ARIMA models allow to characterize the behavior of individual network variables (in terms of _univariate_ time series models), but are not able to capture the mutual influence among the variables, which is crucial, for example, to understand the interdependency between objective indicators (e.g., bandwidth) and subjective ones (e.g., MOS). A univariate time series perspective is adopted also by that part of the technical literature which employs machine learning models for network traffic forecasting, including neural networks [14]; support vector machines [15]; general supervised models [16]; deep learning models [17, 18, 19, 20, 21, 22, 23]. To fill this gap, we formulate the problem in terms of a _multivariate_ time series, where each variable is expressed as a function of values of the variable itself and values of the other variables. This approach allows to characterize the interdependency among variables by enabling joint analyses (e.g., orthogonal impulse response analysis) which would have no meaning in a univariate setting. Another limitation which emerges in the part of the technical literature focusing on traffic characterization (especially in mobile environments as in our case) is the lack of real-world data. This issue is typically faced through the usage of network simulators, where many variables or models are artificially generated (e.g., mobility models, interference, packet loss, data burst, weather conditions, and many others). Examples include: [24] and [25], where LTE environments are simulated through NS-2; and [26, 27, 28], where some LTE metrics are characterized via NS-3. Other works employ customized LTE simulators to model QoE [29, 30] and QoS indicators [32, 33], respectively. Even when network experiments are carried out within real mobile scenarios [34, 35, 36, 37], a set of limited metrics are considered, often due to the fact that standard communication protocols (e.g., RTP/RTCP) allow to natively collect only classic metrics, typically relating to bandwidth consumption or network delay. To overcome such restrictions we have set up an experimental campaign where, through the RTP Control Protocol Extended Reports (RTCP-XR), we are able to analyze QoS/QoE metrics that are usually neglected in traffic characterization, including MOS, round trip delay, playout delay buffer, and SNR. In summary, the following contributions emerge from our work. First, we formalize the multivariate time series problem of mobile VoIP traffic through the VAR model, which allows to govern analytically the forecasting process. Moreover, through specific analyses including the Dickey-Fuller test, the OLS-CUSUM test and the orthogonal impulse response, we are able to discover interesting insights and hidden relationships among the considered VoIP metrics. Then, we turn a set of machine learning techniques (random forest, recurrent networks, etc.) into forecasting methods, by reframing the multivariate time series problem into a supervised learning one through a sliding window approach. This step is needed to evaluate and compare performance and time complexity of the statistical approach against the learning-based ones. Finally, we remark that the whole time series analysis relies on an experimental campaign carried on a real-world LTE-A network. 
During this campaign we: _i_) collect and elaborate a series of VoIP flows exploiting different voice codecs; _ii_) elaborate a set of performance metrics (most of them neglected in classic literature) through the support of the RTCP-XR protocol. ## III Network Scenario and Time series construction The location chosen for mobile VoIP traffic collection and analysis is an urban area (about 2000 people/km\({}^{2}\)) near Salerno (Italy). Figure 1 shows the area map derived from cellmapper [38], a crowd-sourced cellular tower and coverage mapping service. The number of evolved nodes B (eNB) aimed at handling radio links amounts approximately to 100. All the VoIP traffic is collected between two nodes: a mobile node (a car traveling at about 60 km/h) and a fixed node with a station to collect/elaborate the VoIP flows. The distance between the two nodes ranges from 30 to 70 kilometers. Both nodes are equipped with Linphone [39], one of the few softphones supporting the RTCP-XR protocol defined in the RFC 3611 [40]. Such a protocol allows to gather a rich set of metrics not available through the classic RTCP protocol, such as MOS, SNR, round trip delay (or round trip time), and playout delay buffer. The overall collected flows amount to about 600,000 voice packets, and are divided per codecs, including the following ones: G.722 (64 kb/s of bit rate and 16 KHz of sampling rate); G.729 (8 kb/s of bit rate and 8 KHz of sampling rate); MPEG-16 (16 kb/s of bit rate and 16 KHz of sampling rate); OPUS (6 to 128 kb/s of bit rate and 48 KHz of sampling rate); GSM (8 kb/s of bit rate and 8 KHz of sampling rate); and SPX-8000 (8 kb/s of bit rate and 8 KHz of sampling rate). Such a choice is justified by the fact that each codec is able to react differently to diverse network conditions (e.g., the consumed bandwidth or the playout delay buffer) and accordingly adjust the quality of the voice flow. In this way, we obtain a more ample view of the time-based variables behavior and how these are influenced by the different codecs. In Table I we report some useful information about the collected dataset including: codec type (first column), number of RTP packets per VoIP conversation (second column), stream length, namely, the duration of conversation (third column), lost packets (fourth column). We derive such information from _RTP stream statistics_ section available in Wireshark, the open source sniffer tool aimed at network traffic inspection [41]. Upon collecting the traffic, we have performed a post-processing stage to extract and process six crucial time-based variables. Precisely, for each voice flow (namely for each codec) we built a \((6\times 1)\) time-based vector \(y_{t}=(y_{t},\ldots,y_{6t})^{T}\) whose components are the following six time series: * \(y_{1t}\): time series representing the _MOS_, which quantifies the human subjective experience of a voice call in a dimensionless range between 1 (low perceived quality) and 5 (high perceived quality); this metric has been derived from the R-factor (R), a QoE indicator obtainable via RTCP-XR. Then, we have applied the conversion formula provided by ITU-T G.107 standard [42] to derive the MOS, namely: MOS \(=1+0.035R+7\cdot 10^{-6}\cdot R(R-60)(100-R)\). 
* \(y_{2t}\): time series representing the bandwidth (often \(BW\) for brevity) which provides information about the bandwidth consumption and is measured in kb/s; * \(y_{3t}\): time series representing the round-trip time (_RTT_), a key performance indicator measuring the time interval (in ms) of a voice packet sent from a source and the ack received from the destination; * \(y_{4t}\): time series representing the _jitter_ (measured in ms), namely the variation in voice packet latency evaluated through the formula: \(J_{n}=|(t_{r(n)}-t_{t(n)})-(t_{r(n-1)}-t_{t(n-1)})|\) which quantifies the jitter of the \(n\)-th packet depending on the transmitting (\(t_{t(n)}\)) and on the receiving (\(t_{r(n)}\)) time; * \(y_{5t}\): time series representing the playout delay buffer (often _Buffer_ for brevity and measured in ms), a mechanism to compensate for the encountered jitter by buffering voice packets and playing them out in a steady stream; * \(y_{6t}\): time series representing the signal-to-noise ratio (_SNR_), defined as the ratio between the power of a signal and the power of the background noise, and measured in decibel (dB). Figure 2 shows all the six time series for a single voice flow (G.722 codec). Please note that this representation is meant to offer just a big picture of time series behaviors, since the measurements units are different for each series (e.g., the bandwidth is measured in kb/s, the RTT in ms, the SNR in dB, etc.). At this aim, MOS and jitter have been magnified into two separate insets (MOS in grey and jitter in light blue) to better appreciate their behaviors. We can preliminarily notice some interesting facts. For instance, it is immediate to see how the RTT time series (in green) has two noteworthy peaks (approximately at \(t=30\) and \(t=415\) seconds) probably due to some obstacles in the VoIP flow path. Yet, the bandwidth time series (in red) seems to be quite stable (around 80 kb/s) with a peak at about \(t=360\) seconds and a more irregular behavior after \(t=450\) seconds, probably due to a more unstable connection. Finally, the jitter time series is more or less regular and lying below 40 ms, as prescribed by telco standards for VoIP flows [43]. In order to inspect data variability for this flow, we also report a box-plot representation in Fig. 3. Such a representation reveals that metrics such a MOS and SNR seem to be quite stable having 8 and 2 outliers (onto a stream length of 566 s, see Table I). The reason is that MOS naturally varies in a bounded range of values, whereas SNR is typically regularized thanks to the underlying codec. Remaining metrics exhibit more instability basically due to the uncontrollable external factors (e.g., interferences, mobility) thus, the number of outliers is greater: BW (39), RTT (36), Jitter (52), Buffer (97). We finally note that, to add more value to our work we make available: _i_) raw datasets divided per codec (as described in Table I); _ii_) post-processed datasets useful to directly test a given forecasting technique [44]. \begin{table} \begin{tabular}{c c c c} \hline **Codec** & **RTP pkts** & **Stream length (s)** & **Lost pkts** \\ \hline G.722 & 55890 & 566 & 0.56\% \\ G.729 & 28045 & 447 & 0.4\% \\ MPEG-16 & 39181 & 654 & 0.2\% \\ OPUS & 39425 & 405 & 0.4\% \\ GSM & 38357 & 434 & 1.3\% \\ Speex-8 & 43128 & 436 & 0.2\% \\ \hline \end{tabular} \end{table} TABLE I: Some dataset statistics Fig. 
1: Experimental setting including a mobile user (top-left) and a fixed user with a control server for data collection and elaboration (bottom-right). Main parameters are summarized in the bottom-left table. Fig. 2: Time series behavior (Codec G.722 flow) of the six crucial variables: MOS, Bandwidth, RTT, Jitter, Playout Delay Buffer, SNR. Fig. 3: Box-plot representation of time series (Codec G.722 flow). ## IV Problem formulation and Forecasting Models In this section we examine in depth the problem of multivariate time series forecasting, by exploiting different techniques. Basically, through such a formalization, we try to predict future values of the time series on the basis of the current information set, namely the set consisting of current and past values of the series. Let \(y_{1t},y_{2t},\ldots,y_{Nt}\) be a set of \(N\) related variables. The forecast of the \(n\)-th variable \(y_{n,T+H}\) at the end of period \(T\) can be expressed as: \[\hat{y}_{n,T+H}=f(y_{1,T},y_{2,T},\ldots,y_{N,T},y_{1,T-1},y_{2,T-1},\ldots,y_{N,T-1},\ldots), \tag{1}\] where \(H\) is the forecast horizon (number of future samples to predict), whereas \(f(\cdot)\) is a function of the past observations and can be: \(i)\) a statistical model, or \(ii)\) derived by a machine learning algorithm. One of the most powerful classes of statistical multivariate models for time series forecasting is represented by the vector autoregressive (VAR) models. In contrast, machine learning models exploit their data-driven features to give a prediction of future values of a time series. It is useful to anticipate that, to exploit a VAR model correctly, some preliminary analyses are needed (e.g., stationarity, residual correlation, etc.). Thus, in the next section we formally introduce the VAR model along with its employment in the multivariate time series field. We should clarify that the term _variable_ used in the classic statistics field is equivalent to the term _feature_ typically encountered in the machine learning realm. To adopt a uniform notation, we will use the term variable (or time variable) to highlight the temporal dependency. ### _Vector Autoregressive Model_ The vector autoregressive model is a generalization of the univariate autoregressive (AR) model used for time series prediction. In the classic AR model, a value from a time series is expressed in terms of its preceding values. The number of preceding values chosen for the prediction is called _order_ or _lag_. We remark that, in cases where more time series are present, the AR model does not allow to capture their mutual influence. Conversely, in the VAR model a value from a time series can be expressed in terms of preceding values of the time series itself and the preceding values of other time series. It results in a _multivariate_ time series prediction problem since multiple time series can influence each other. Let \(y_{t}=(y_{1t},y_{2t},\ldots,y_{Nt})^{T}\) be an \((N\times 1)\) vector of time series. The \(p\)-lag vector autoregressive model VAR(\(p\)) has the following form: \[y_{t}=c+\Phi_{1}y_{t-1}+\Phi_{2}y_{t-2}+\cdots+\Phi_{p}y_{t-p}+\epsilon_{t}, \tag{2}\] where \(c=(c_{1},\ldots,c_{N})^{T}\) denotes an \((N\times 1)\) vector of constants, \(\Phi_{k}\) an \((N\times N)\) matrix of autoregressive coefficients \((k=1,\ldots,p)\) estimated equation by equation via Ordinary Least Squares (OLS), and \(\epsilon_{t}=(\epsilon_{1t},\ldots,\epsilon_{Nt})^{T}\) an 
(\(N\times 1\)) unobservable zero mean white noise (or residual) vector process with nonsingular covariance matrix \(\Sigma_{\epsilon}=\mathbb{E}(\epsilon_{t}\epsilon_{t}^{T})\), being \(\mathbb{E}(\cdot)\) the expectation operator. Let \(\phi_{ij}^{(k)}\) be the element of \(\mathbf{\Phi}_{k}\) at row \(i\) and column \(j\). For instance, a bivariate VAR(2) model can be expressed in the following matrix form: \[\begin{bmatrix}y_{1t}\\ y_{2t}\end{bmatrix}=\begin{bmatrix}c_{1}\\ c_{2}\end{bmatrix}+\begin{bmatrix}\phi_{11}^{(1)}&\phi_{12}^{(1)}\\ \phi_{21}^{(1)}&\phi_{22}^{(1)}\end{bmatrix}\begin{bmatrix}y_{1t-1}\\ y_{2t-1}\end{bmatrix}+\begin{bmatrix}\phi_{11}^{(2)}&\phi_{12}^{(2)}\\ \phi_{21}^{(2)}&\phi_{22}^{(2)}\end{bmatrix}\begin{bmatrix}y_{1t-2}\\ y_{2t-2}\end{bmatrix}+\begin{bmatrix}\epsilon_{1t}\\ \epsilon_{2t}\end{bmatrix}. \tag{3}\] The preliminary operation to perform when employing a VAR model is to determine the _best_ lag \(p^{*}\), which allows to build a VAR(\(p^{*}\)) model embodying most of the information of the \(N\)-dimensional time series. Choosing the optimal lag length is not trivial since many criteria exist (often contradicting each other). We start by applying the Akaike Information Criterion (AIC) [45], a selection rule aimed at minimizing the forecast mean square error (MSE). Specifically, the approach is to fit VAR(\(p\)) models having orders \(p=0,\ldots,p_{max}\) and pick the value of \(p\) which minimizes the criterion. Formally, the AIC criterion obeys the following rule: \[AIC(p)=\log|\widetilde{\Sigma}_{\epsilon}|+\frac{2pN^{2}}{L}, \tag{4}\] being \(L\) the time series length and \(|\widetilde{\Sigma}_{\epsilon}|\) the determinant of the residual covariance matrix \(\widetilde{\Sigma}_{\epsilon}=L^{-1}\sum_{t=1}^{L}\hat{\epsilon}_{t}\hat{\epsilon}_{t}^{T}\) estimated by OLS. The results of the AIC criterion applied to our VAR model made of 6 time series are shown in Fig. 4. We can see that the order \(p\) which minimizes the AIC criterion amounts to 11. Actually, when choosing the optimal lag for a VAR model, the lag resulting from a selection criterion (e.g., the AIC) represents only a preliminary choice. Often, such a value must be adjusted to accommodate other needs as well [46], such as minimizing the residual correlations, as explained in the next subsection. #### Iii-B1 Residual Analysis When fitting a model to time series data, it is likely to find autocorrelation in the residuals (differences between observed and fitted values). In such a case, the model violates the assumption of no autocorrelation in the errors, and the forecast can be inaccurate [47]. A powerful residuals-autocorrelation test is the Breusch-Godfrey test [48] (also referred to as Lagrange Multiplier, LM test in brief), which considers the VAR model of the error vector \[\epsilon_{t}=\Psi_{1}\epsilon_{t-1}+\cdots+\Psi_{h}\epsilon_{t-h}+v_{t}, \tag{5}\] where \(h\) is the maximum lag of the error model and \(v_{t}\) a white noise at time \(t\). In the LM test, the null hypothesis \(\mathcal{H}_{0}\) is that there is no residual autocorrelation, whereas the alternative hypothesis \(\mathcal{H}_{a}\) is that residual autocorrelation exists: \[\left\{\begin{array}{l}\mathcal{H}_{0}:\Psi_{1}=\cdots=\Psi_{h}=0,\\ \\ \mathcal{H}_{a}:\Psi_{\xi}\neq 0\quad\text{for at least one }\xi\in\{1,\ldots,h\}. \end{array}\right. 
\tag{6}\] A common way to compute the LM test statistic based on the residuals of the VAR(\(p\)) model is to take into account the following auxiliary regression model [49]: \[\hat{\epsilon}_{t}=c+\Phi_{1}y_{t-1}+\cdots+\Phi_{p}y_{t-p}+\Psi_{1}\hat{ \epsilon}_{t-1}+\cdots+\Psi_{h}\hat{\epsilon}_{t-h}+v_{t}^{*}, \tag{7}\] where the \(\hat{\epsilon}_{t}\) represent the residuals from the original VAR(\(p\)) model (where \(\hat{\epsilon}_{t}=0\) for \(t\leq 0\)), and \(v_{t}^{*}\) is an auxiliary error term. Accordingly, the LM statistic can be computed as \[Q_{LM}\left(h\right)=L\left(N-\text{tr}(\tilde{\Sigma}_{\epsilon}^{-1}\tilde{ \Sigma}_{\nu})\right), \tag{8}\] where \(\tilde{\Sigma}_{\nu}=\frac{1}{L}\sum_{t=1}^{L}\tilde{v}_{t}^{*}\tilde{v}_{t}^{* \tau}\) are the residuals from the estimated auxiliary model and \(\text{tr}(\cdot)\) is the trace of a matrix. Under the null hypothesis of no autocorrelation, it is possible to show [55] that \(Q_{LM}\left(h\right)\xrightarrow{d}\chi^{2}(hN^{2})\) where \(\xrightarrow{d}\) indicates the convergence in distribution (as \(L\rightarrow\infty\)). Moreover, a correction has been proposed by Edgerton and Shukur [50] that exploits the \(F\) statistic (based on the Fisher-Snedecor distribution \(F(m,l)\) with \(m\) and \(l\) degrees of freedom) in place of the \(\chi^{2}\), showing interesting results especially in unstable VAR models [51]. Accordingly, the Edgerton-Shukur (ES) statistic has the following form: \[F(hN^{2},\beta), \tag{9}\] where \(\beta=L-N(1+h)+1/2[N(h-1)-1]\). Once we have chosen the optimal lag value suggested by AIC criterion (\(p=11\)), we tested the residual correlation through the hypothesis test (6). Such a test has been implemented by exploiting both \(\chi^{2}\) statistic of the LM test and \(F\) statistic of the Edgerton-Shukur test. The results are shown in Table II - left side - in terms of p-values. Moreover, as suggested by credited literature [47, 52, 55], we choose a not so large value for \(h\), namely \(h=10\). By choosing a type \(I\) error probability \(\alpha=0.05\) to reject the null hypothesis when it is actually true, a p-value lower than \(\alpha\) allows us to reject the null hypothesis (no residual correlation), Fig. 4: AIC values for different lags applied to the considered multivariate time series model. and thus to accept the alternative hypothesis \(\mathcal{H}_{a}\) with an error probability of 95%, at most. We highlight in red the p-values in correspondence of \(\xi\) values where the alternative hypothesis (presence of residual correlation) \(\mathcal{H}_{a}\) of test (6) is accepted, given \(\alpha=0.05\). Such a condition occurs in both LM and ES tests, but the latter seems to be more "conservative" and allows to reject the null hypothesis less frequently than LM test. We explore also some higher lags1 and we find interesting results for \(p=12\), which represents the second optimal choice from the AIC criterion (see Fig. 4). Footnote 1: Too much high values of lags could lead the system to an overfitting. The corresponding results in terms of p-values are shown in Table II - right side. It is possible to notice that the null hypothesis of no residual correlation is satisfied for all values of \(\xi\) in the case of the ES test, with p-values significantly higher than 0.05. Remarkably, also in the case of the LM test we have p-values higher than 0.05. 
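This kind of lag selection and residual screening can be reproduced with standard tooling; the sketch below uses the statsmodels VAR implementation on a DataFrame holding the six series. Note that statsmodels exposes a Portmanteau whiteness test rather than the exact LM/ES statistics of (8)-(9), so its p-values are only an analogous diagnostic; the file name and column names are placeholders.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder: one row per second, one column per metric of the post-processed flow
data = pd.read_csv("voip_g722_flow.csv")[["MOS", "BW", "RTT", "Jitter", "Buffer", "SNR"]]

model = VAR(data)

# Inspect information criteria for lags 1..15 (AIC suggested p = 11/12 in our case)
order_selection = model.select_order(maxlags=15)
print(order_selection.summary())

# Fit the VAR with the retained lag length
results = model.fit(12)
print(results.summary())

# Residual-autocorrelation diagnostic (Portmanteau test; nlags must exceed the fitted lag order)
whiteness = results.test_whiteness(nlags=20, adjusted=True)
print(whiteness.summary())
```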
As mentioned before, we have also tried to further increase the order \(p\) of the VAR(\(p\)) model, but we obtained more p-values allowing to reject the \(\mathcal{H}_{0}\) hypothesis of no serial correlation (needed for accurate forecasting) than those obtained for the lag length amounting to 12, which was finally elected as the optimal choice. #### Iii-B2 Stationarity When dealing with VAR models, another important operation consists in removing possible trending behaviors of the involved variables to avoid spurious regressions. Otherwise stated, we have to guarantee the _stationarity_ of the time series, meaning that first and second moments must be time invariant. Pragmatically, the stationarity check is often performed through OLS-based unit root tests. In particular, Dickey and Fuller [53] developed a procedure (DF) for testing whether a variable has a unit root or, equivalently, that the variable follows a random walk. We use the augmented Dickey-Fuller test (which, differently form classic DF, includes higher-order autoregressive terms in the regression) where the following test model (in its more general form, see [54]) is considered: \[\Delta y_{t}=\omega_{0}+\omega_{1}t+\theta y_{t-1}+\sum_{k=1}^{p}\delta_{k} \Delta y_{t-k}+\epsilon_{t}, \tag{10}\] where: \(\Delta y_{t}=y_{t}-y_{t-1}\) is the difference operator, \(\omega_{0}\) is the intercept term (constant), \(\omega_{1}t\) is the time trend, and \(p\) the lag of the autoregressive process. Finally, the test statistic on the \(\theta\) coefficient is used to test whether the data need to be differenced to make it stationary. The DF test is the following: \[\left\{\begin{array}{l}\mathcal{H}_{0}:\theta=0\quad\quad\text{(null hypothesis)}\quad\quad\text{non-stationarity},\\ \\ \mathcal{H}_{a}:\theta<0\quad\quad\text{(alternative hypothesis)}\quad\quad \text{stationarity}.\end{array}\right. \tag{11}\] For our experiments, we have performed the augmented DF test for each variable, verifying that the variables are stationary at first differences, thus, there is no need to apply the differentiation operator. The results are reported in Table III, where the negative values of \(\theta\) (second column) for each variable and the corresponding low p-values (third column) suggest to reject the null hypothesis, and to accept the stationarity hypothesis by assuming a type \(I\) error probability of 0.05. #### Iii-B3 Stability Stability conditions are typically required to avoid explosive solutions of the stochastic difference equations characterizing a time series expressed in terms of an autoregressive part and a moving average part. At this aim, it is possible to show [55, 52] that the VAR(\(p\)) process (2) can be written in the following \(Np-\) dimensional VAR(1) form: \[\left[\begin{array}{l}y_{t}\\ y_{t-1}\\ y_{t-2}\\ \vdots\\ y_{t-p+1}\end{array}\right]=\underbrace{\begin{bmatrix}c\\ 0\\ 0\\ \vdots\\ 0\end{bmatrix}}_{\begin{array}{l}\\ 0\\ \vdots\\ 0\end{bmatrix}}_{\begin{array}{l}\\ \end{array}}\begin{bmatrix}c\\ \Phi_{1}&\Phi_{2}&\ldots&\Phi_{p-1}&\Phi_{p}\\ I_{N}&0&\ldots&0&0\\ 0&I_{N}&\ldots&0&0\\ \vdots&&\ddots&\vdots&\vdots\\ 0&0&\ldots&I_{N}&0\end{bmatrix}Y_{t-1}+\begin{bmatrix}\epsilon_{t}\\ 0\\ 0\\ \vdots\\ 0\end{bmatrix}, \tag{12}\] being \(I_{N}\) the order \(N\) identity matrix. The process \(Y_{t}\) is stable if the eigenvalues of the _companion matrix_\(\mathbf{\Phi^{*}}\) in (12) have modulus less than one. Such a property is satisfied for the considered VAR(\(p\)) model as can be observed in Fig. 5. 
Although such an analysis is formally correct to verify the stability condition, it does not allow one to capture the behavior in the time domain. Accordingly, we perform in addition an OLS-based cumulative sum (CUSUM) test [56]. Through such a test, it is possible to evaluate the cumulative sums of residuals resulting from the VAR model in order to highlight potential structural changes (a.k.a. structural breaks) in the residuals which can lead to a non-stationary behavior. The test is based on the intuition that if the VAR model coefficients (the autoregressive coefficients) change over time, the accuracy of the one-step-ahead forecast will decrease and the forecast error will increase. The panels of Fig. 6 show the results of the OLS-based CUSUM test for all the six variables. The x-axis represents the normalized time between 0 and 1, whereas the y-axis reports the cumulative sums of residuals (interpretable as random processes). It is possible to notice that all the processes are substantially stable, with oscillations around the zero value. A slight exception is represented by SNR, where it is possible to see some small drifts from the stability value, but never exceeding the 95% confidence boundaries (red lines).
\begin{table} \begin{tabular}{c c c|c c c} \hline \hline \multicolumn{3}{c|}{**lag length=11**} & \multicolumn{3}{c}{**lag length=12**} \\ \hline \hline \(\xi\) & **p-value (LM)** & **p-value (ES)** & \(\xi\) & **p-value (LM)** & **p-value (ES)** \\ \hline 1 & 0.0067 & 0.031 & 1 & 0.328 & 0.56 \\ 2 & 0.0044 & 0.036 & 2 & 0.520 & 0.81 \\ 3 & 0.0601 & 0.278 & 3 & 0.604 & 0.90 \\ 4 & 0.0518 & 0.305 & 4 & 0.642 & 0.94 \\ 5 & 0.1336 & 0.554 & 5 & 0.538 & 0.93 \\ 6 & 0.2292 & 0.733 & 6 & 0.244 & 0.78 \\ 7 & 0.1062 & 0.582 & 7 & 0.336 & 0.89 \\ 8 & 0.1414 & 0.691 & 8 & 0.399 & 0.93 \\ 9 & 0.0182 & 0.310 & 9 & 0.125 & 0.75 \\ 10 & 0.0097 & 0.255 & 10 & 0.107 & 0.75 \\ \hline \hline \end{tabular} \end{table} TABLE II: Results (in terms of p-values) for models with \(p=11\) and \(p=12\) lags, respectively. Two residual correlation tests have been considered: Lagrange Multiplier (LM) based on the \(\chi^{2}\) statistic, and Edgerton-Shukur (ES) based on the \(F\) statistic, both computed under the null hypothesis.
\begin{table} \begin{tabular}{|c|c|c|} \hline **Variable** & **test statistic (\(\theta\))** & **p-value** \\ \hline \hline MOS & -3.445 & 9.501 \(\cdot\)\(10^{-3}\) \\ \hline Bandwidth & -7.974 & 2.718 \(\cdot\)\(10^{-12}\) \\ \hline RTT & -2.934 & 4.149 \(\cdot\)\(10^{-2}\) \\ \hline Jitter & -4.157 & 7.780 \(\cdot\)\(10^{-4}\) \\ \hline Buffer & -2.638 & 8.525 \(\cdot\)\(10^{-2}\) \\ \hline SNR & -3.192 & 2.044 \(\cdot\)\(10^{-2}\) \\ \hline \end{tabular} \end{table} TABLE III: Augmented Dickey-Fuller test per variable.
#### Iv-B4 Time Series relationships One of the most interesting aspects when dealing with VAR models is to understand how the time series composing the process are mutually influenced. Precisely, it is useful to know the _response_ of one variable to an external disturbance (_unit impulse_ or _unit shock_ in the econometrics jargon) of another variable, which allows us to examine more in depth the cause/effect relation among the involved variables. In particular, if we observe a _reaction_ of one variable to an impulse in another variable, the latter will be _causal_ for the former [55]. In many real-world cases, there is a correlation among the variables in a system.
This means that an impulse of one variable should be accompanied by an impulse of some other variables correlated with the modified one. In other words, the impulse response allows us to trace the transmission of a single shock within a system of equations [57]. Often, it is interesting to isolate the effect of a single variable shock on another variable of the system to better capture the interdependencies. To this aim, we implement the orthogonal impulse response functions (OIRF) method [58], which allows us to rewrite the process (2) as: \[y_{t}=c+\sum_{i=0}^{\infty}\Phi_{i}PP^{-1}\epsilon_{t-i}=c+\sum_{i=0}^{\infty}\Theta_{i}w_{t-i}, \tag{13}\] where \(\Sigma_{\epsilon}=PP^{T}\), with \(P\) a lower triangular nonsingular matrix with positive diagonal elements (also known as the _Choleski_ decomposition, see Appendix A.9.3 in [55]), \(\Theta_{i}=\Phi_{i}P\), and \(w_{t}=P^{-1}\epsilon_{t}\) is a white noise with covariance matrix \(\Sigma_{w}=\mathbb{E}(w_{t}w_{t}^{T})=I_{N}\). Since the white noise errors \(w_{t}\) have uncorrelated components \(w_{1t},\ldots,w_{Nt}\) with unit variance, they are often known as _orthogonal_ residuals or innovations. Thus, it is reasonable to assume that a change in one component of \(w_{t}\) has no effect on the other components due to orthogonality. In particular, the \(jk\)-th element of \(\Theta_{i}\) is interpreted as the effect on variable \(j\) of one unit innovation (namely, one unit standard deviation) in the \(k\)-th variable that has occurred \(i\) periods before. In our setting, we have 6 variables resulting in 36 orthogonal impulse responses2 as shown in the panels of Fig. 7. The causal variables are grouped by column. The x-axis reports the observation period, thus it is possible to evaluate the disturbance effects for various observation periods (25 in our case). The blue continuous curves represent the oscillating values of the affected variables around their stability point (horizontal black line at 0), namely the value they would take if the impulse were not applied at all. The black dashed curves surrounding the blue ones represent the asymptotic confidence intervals at the 95% confidence level. Such an analysis has the merit of highlighting some relationships among variables which are often hidden at first sight. For example, the sub-figure in the first row and second column allows us to visualize the effect of a bandwidth shock on the MOS variable (BW \(\rightarrow\) MOS). In particular, it is possible to see that a bandwidth impulse causes a slight increase of the MOS by approximately 0.006 units of innovation after about 15 observation periods. Then, it decreases. Likewise, a BW impulse causes a decrease of a couple of units of innovation in Jitter after 10 observation periods before exhausting its effect. Such behaviors are in line with the fact that having more bandwidth is typically beneficial for other metrics. It is useful to notice that, after a shock, some variables can have a decrease before rising back to their stability point. This is the case of Jitter and Buffer after a MOS impulse, which experience a decrease of 2.5 and \(-\)5 units of innovation, respectively (MOS \(\rightarrow\) Jitter and MOS \(\rightarrow\) Buffer sub-figures). Also in this case, we can reasonably admit that a better voice quality can be associated with lower values of jitter which, in turn, is associated with smaller values of the playout delay buffer. Interestingly, the mutual influence between two variables can be quite different when the "causing" and "caused" roles are inverted. For example, in the RTT\(\rightarrow\)BW case, an RTT shock causes a slight oscillation of BW (with a peak of about 2 units of innovation around an observation period amounting to 5) before decaying rapidly to the stability point. In contrast, a BW shock causes a substantial decrease in RTT (BW\(\rightarrow\)RTT) with two peaks (around \(-50\) and \(45\) units of innovation) and a slower re-stabilization. Such apparently unusual behavior can be explained by the fact that BW (red curve in Fig. 2) exhibits a certain robustness, thus it is not dramatically impaired by unit shocks, whereas RTT (green curve in Fig. 2) appears to be more sensitive due to its oscillating behavior, and is then more susceptible to exogenous interventions.
Fig. 5: Companion matrix eigenvalues.
Fig. 6: OLS-CUSUM tests applied to the six time series.
Fig. 7: Orthogonal Impulse Response.
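To illustrate how the orthogonal responses \(\Theta_{i}\) in (13) can be obtained in practice, the following Python sketch (a minimal illustration assuming numpy; the function name and arguments are ours, and we write \(\Psi_{i}\) for the moving-average matrices denoted \(\Phi_{i}\) in (13)) computes them from the estimated VAR coefficient matrices and the residual covariance via the Choleski factor \(P\).

```python
import numpy as np

def orthogonal_irf(A, Sigma_eps, horizon=25):
    """Orthogonal impulse responses Theta_i = Psi_i @ P, as in (13).

    A         : list of p estimated VAR coefficient matrices Phi_1..Phi_p, each (N, N),
    Sigma_eps : (N, N) residual covariance matrix,
    horizon   : number of observation periods to trace.
    Returns an array of shape (horizon + 1, N, N); entry [i, j, k] is the response of
    variable j, i periods after a one-standard-deviation innovation in variable k.
    """
    p, N = len(A), A[0].shape[0]
    P = np.linalg.cholesky(Sigma_eps)        # lower triangular, Sigma_eps = P P^T
    Psi = [np.eye(N)]                        # MA(infinity) matrices: Psi_0 = I
    for i in range(1, horizon + 1):
        Psi.append(sum(Psi[i - j] @ A[j - 1] for j in range(1, min(i, p) + 1)))
    return np.array([Psi_i @ P for Psi_i in Psi])
```

Plotting the \((j,k)\) entries of the returned array over the horizon reproduces the kind of panels shown in Fig. 7.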
### _Learning models for time series forecasting_ The application of machine learning techniques to time series forecasting is a relatively recent topic with interesting applications to econometrics [61]. When dealing with time series, in fact, the temporal information is crucial, whereas a machine learning dataset is typically a collection of samples treated as equivalent from a time perspective. This notwithstanding, it is possible to manipulate these models (especially supervised ones) to train on historical time series data and provide future predictions. Moreover, some deep learning methods have been explicitly designed to take into account temporal information through memory-based cells, as detailed below. **Recurrent Neural Networks (RNNs)**: such a technique relies on a network architecture able to handle variable-length sequences naturally. In such a way, through the RNNs it is possible to track the state of a system (by retaining past information) and update its state as time elapses. The memory state is recursively updated with new observations, thus the hidden state \(z\) at time \(t\) can be represented as a function of the input at time \(t\) and the previous hidden state at time \(t-1\), namely: \[z(t)=f(z(t-1),y(t)), \tag{14}\] that, in turn, is used to evaluate the output (namely, the prediction): \[\hat{y}_{t+1}=g(z(t)). \tag{15}\] A weak point of RNNs is the management of long-range dependencies, connected with transferring information from earlier to later time steps across very long sequences (known as the vanishing gradient problem [60]). Such an issue can be solved through the techniques explained below. The following hyper-parameters have been used for RNN: 30 RNN units; dropout rate amounting to 0.25; Adam optimization algorithm (learning rate = 0.1); tanh activation function; 30 epochs. **Long Short-Term Memory (LSTM)**: represents an evolved RNN network [62] with some internal state cells acting as long-term or short-term memory cells. The output of the LSTM network is modulated by the state of these cells and by three _gates_ which tune the information flow: the _input_ gate, responsible for updating the cell state; the _forget_ gate, in charge of keeping or discarding information on the basis of the input data \(y(t)\) and the previous hidden state \(z(t-1)\); the _output_ gate, which decides which information to pass to the next LSTM unit. The hidden state at time \(t\) is: \[z(t)=o(t)\cdot\tanh(c(t)), \tag{16}\] where \(o(t)\) is the output gate and \(c(t)\) is the cell state at time \(t\).
The following hyper-parameters have been used for LSTM: 30 LSTM units; dropout rate amounting to 0.25; Adam optimization algorithm (learning rate = 0.1); tanh activation function; 30 epochs. **Gated Recurrent Unit (GRU)**: a lighter version of LSTM [63] with two gates. The _update_ gate embodies the functionalities offered by the LSTM forget and input gates. The _reset_ gate is in charge of deciding how much past information to forget. The GRU hidden state can be expressed as: \[z(t)=(1-u(t))z(t-1)+u(t)\widetilde{z}(t), \tag{17}\] where \(u(t)\) is the update gate, which determines how much of the candidate activation \(\widetilde{z}(t)\) enters the update. The following hyper-parameters have been used for GRU: 30 GRU units; dropout rate amounting to 0.25; Adam optimization algorithm (learning rate = 0.1); tanh activation function; 30 epochs. **Convolutional Neural Networks (CNN)**: they are typically used when dealing with classification problems involving spatial information, where an image matrix (2D array) is provided to the CNN structure. On the other hand, when dealing with time-series problems, CNNs can be fed with a 1D array, since only the temporal dimension must be taken into account. Also when applied to temporal data, the CNN uses: \(i\)) the convolutional layer, aimed at applying filtering to derive the most representative features; \(ii\)) the pooling layer, to reduce the size of the series while preserving the important features extracted by the convolutional layers; \(iii\)) the fully connected layer, to map the features extracted by the network into specific classes or values. The following hyper-parameters have been used for CNN: 30 CNN filters (each of which with size 6); dropout rate amounting to 0.25; Adam optimization algorithm (learning rate = 0.1); tanh activation function; 30 epochs. **Multi Layer Perceptron (MLP)**: it is the most common form of neural networks and one of the first to be exploited in time series forecasting problems. The lagged observations (say \(x_{i}\)) are used as inputs of an MLP structure to evaluate the forecast \(\hat{y}_{t+1}\): \[\hat{y}_{t+1}=\phi\left(\sum_{i=1}^{n}w_{i}x_{i}+b\right), \tag{18}\] where \(\phi(\cdot)\) is an activation function (e.g., sigmoid, linear, etc.) to produce the output, and \(w_{i}\) and \(b\) are the weights and bias, respectively. The input data activate the hidden layers (intermediate layers) by following the forward activation direction, and, in turn, hidden layer neurons feed forward into output neurons. The MLP process is regulated by _backpropagation_, a mechanism that updates the neuron weights to progressively minimize the error. The following hyper-parameters have been used for MLP: 30 dense units; dropout rate amounting to 0.25; Adam optimization algorithm (learning rate = 0.1); tanh activation function; 30 epochs. **Random Forest (RF)**: a technique based on the _bootstrap aggregation_ over decision trees. In practice, during the training stage, each _tree_ within a random forest learns from random samples drawn with replacement (_bootstrapping_) so as to reach a lower variance. For each bootstrap sample \(b\), \(b=1,\ldots,B\), a tree \(\hat{f}_{b}\) is fitted, and the desired forecast is the average of the individual tree forecasts applied to the input data \(x\), namely \[\hat{y}_{t+1}=\frac{1}{B}\sum_{b=1}^{B}\hat{f}_{b}(x). \tag{19}\] The following hyper-parameters have been used for RF: 30 estimators (or trees); 10 as the maximum depth of the tree.
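As a concrete illustration of this last configuration, a minimal Python sketch (assuming scikit-learn; the array names and the synthetic stand-in data are ours, used only to make the snippet self-contained) shows how the stated random forest hyper-parameters translate into a one-step-ahead forecaster.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in lagged design matrix: each row is a flattened window of past observations,
# each target row is the next (one-step-ahead) multivariate observation.
X_train, y_train = np.random.rand(400, 12 * 6), np.random.rand(400, 6)
X_test = np.random.rand(100, 12 * 6)

# Random forest with the hyper-parameters stated above: 30 trees, maximum depth 10.
rf = RandomForestRegressor(n_estimators=30, max_depth=10)
rf.fit(X_train, y_train)           # multi-output regression is supported natively
y_pred = rf.predict(X_test)        # one-step-ahead forecasts, one row per test window
```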
**Extreme Gradient Boosting (XGB)**: an improved version of gradient boosting, an iterative technique in which a decision tree is fitted on the residuals obtained from a base learner, so as to improve the prediction model by adding new decision trees. The output forecast can be written as: \[\hat{y}_{t+1}=\sum_{k=1}^{K}\hat{f}_{k}(x), \tag{20}\] where \(K\) is the number of trees and \(\hat{f}_{k}\) is the \(k\)-th base learner. An objective function is used to train the model by measuring how well it fits the training data. The following main hyper-parameter has been used for XGB in our experiment: 30 gradient boosted trees. ## V Experimental forecasting results In this section we present a comparative analysis of the methods described in the previous section based on the experimental measurements. Before delving into the details of the numerical results, we need to provide some clarifications about the processing we have performed on the gathered data. A preliminary operation is to re-frame the time series forecasting into a supervised learning problem. We first split the multivariate time series into training and testing sets, by adopting the classic \(70/30\) split (70% of data is used for training, 30% for testing) as shown in Fig. 8. It is useful to highlight that the classic \(k\)-fold cross validation method cannot be applied in this setting, since it assumes that there is no relationship among the observations. In contrast, when dealing with time series problems, the temporal _continuum_ has to be preserved. Accordingly, we adopt the sliding window mechanism, where a part of the input sequence (the window of lagged values represented by the past observations within the shaded blue area in Fig. 8) is used to forecast new samples (the future observations within the shaded red area in Fig. 8). The sliding window approach has been profitably employed also in other fields involving time series forecasting, such as smart manufacturing [64] or radar [65]. Moreover, as in the aforementioned works, we perform the so-called one-step forecasting, where the prediction is made one step at a time to avoid the accumulation of forecast uncertainty [47]. As regards the tuning of the various learning-based techniques, we have empirically chosen their structures so that the resulting accuracy would be in the same range as the VAR model (which does not require any fine tuning other than the choice of the optimal lag \(p^{*}\)). Such an approach is in line with what is suggested by credited literature [66]. We start by visually analyzing the behavior of the various presented techniques for two specific VoIP flows identified by their specific codecs, namely G.722 and G.729 (for space constraints we omit the visualization for the remaining codecs, but a summary of performance results for each codec is reported in Table IV). The two aforementioned codecs represent two extreme trade-off choices between conversation quality and bandwidth utilization. Indeed, among the codecs used in our experiments, G.722 provides the best audio quality (for instance, in terms of MOS) but the bandwidth consumption is not very efficient (bit rate of 64 kb/s). In contrast, G.729 offers a slightly lower audio quality but allows a greater bandwidth saving with just 8 kb/s of bit rate. The panels of Figs. 9 and 10 show the temporal behavior of each variable for codecs G.722 and G.729, respectively.
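As a concrete illustration of this setup, the following Python sketch (assuming numpy and TensorFlow/Keras; the function `make_windows`, the array names, and the synthetic stand-in series are ours) builds the sliding-window samples for one-step-ahead forecasting and assembles one of the recurrent models (an LSTM) with the hyper-parameters listed in the previous subsection.

```python
import numpy as np
from tensorflow import keras

def make_windows(series, window):
    """Slice a (T, N) multivariate series into one-step-ahead supervised pairs:
    X[i] holds `window` consecutive past observations, y[i] the observation that follows."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

series = np.random.rand(600, 6)             # stand-in for the measured (T, 6) series
window, n_features = 12, 6                  # e.g. the optimal lag and the six metrics
split = int(0.7 * len(series))              # classic 70/30 train/test split
X_tr, y_tr = make_windows(series[:split], window)
X_te, y_te = make_windows(series[split - window:], window)

model = keras.Sequential([
    keras.layers.LSTM(30, activation="tanh", dropout=0.25,
                      input_shape=(window, n_features)),
    keras.layers.Dense(n_features),         # one-step-ahead prediction of all variables
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.1), loss="mse")
model.fit(X_tr, y_tr, epochs=30, verbose=0)
forecast = model.predict(X_te)              # compared against y_te on the test portion
```

The same skeleton applies to the RNN, GRU, CNN, and MLP variants by swapping the first layer.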
Superimposed onto the actual values of the variables (black dashed lines) we report, with different colors as specified in the figure legends, the behavior of each forecasting technique described in the previous section.
TABLE IV: Forecasting accuracy (RMSE, MAE, and MAPE) of each technique, per variable and per voice flow (codec); the optimal VAR lag \(p^{*}\) is reported for each flow, and the best MAPE values are highlighted in red.
In each panel of Figs. 9 and 10, the forecasting zone (the gray area on the right) defines the area where each technique tries to predict future values. In order to highlight the behavior of VAR compared to the learning techniques, we also report, in shaded pale red, the 95% forecast intervals (for the VAR), which represent an estimate of the intervals where we expect a future value will fall. Since the interval width amounts to \(1.96\cdot\sigma_{\epsilon}\) (with \(\sigma_{\epsilon}\) the standard deviation of residuals for each time series, see [55]), the shape and the width of each interval strongly depend on the behavior of the residuals. For instance, as regards the MOS - G.722 codec case (see the first panel in Fig. 9), the low residual standard deviation directly results in a narrow forecast interval. Conversely, the unexpected peak of the RTT - G.722 codec case (see the third panel in Fig. 9) at about 415 seconds has a negative impact on the prediction accuracy (for all examined techniques) and, in turn, implies a growth of the residual standard deviation. This directly translates into a broad forecast interval. The first aspect to highlight is that the actual behavior of variables basically depends on two factors which impact on the performance forecast: the codec type and the network conditions. For instance, the overall bandwidth consumption amounts, on average, to 90 kb/s in the G.722 case (top-middle panel of Fig. 9), whereas it is around 30 kb/s in the G.729 case (top-middle panel of Fig. 10). In principle, this implies that more fluctuations are possible in the G.722 case due to a wider span of values. Unfortunately, even if a codec is able to guarantee a kind of temporal "stability", the high unpredictability of network conditions is the main cause of fluctuations which are very challenging to predict due to their extremely time-variant behavior. This notwithstanding, in both Figs. 9 and 10 we can observe that each technique is able to produce a satisfying forecast of the original variables by remaining within the area delimited by the forecast intervals. By visual inspection, we observe that the VAR (red curve) shows good performance when the time series do not exhibit excessive fluctuations. This notwithstanding, when important fluctuations are present, the VAR model is able to follow the mean value of the oscillating time series (see, for instance, the MOS - G.729 codec case).
The reason is that VAR is governed by a set of linear equations (see (2)), thus it can suffer when representing some non-linear behaviors. Occasionally, also the MLP technique shows slight difficulty in fitting the original values, once again due to the underlying linear model (see (18)). Such a behavior emerges in particular in Fig. 9, where the MLP prediction moves a bit away from the corresponding forecast intervals. Conversely, the remaining techniques show reasonably good adaptation to fluctuations, and, in particular, the deep-based techniques, whose internal structure allows them to keep the state at time \(t-1\) to improve the prediction at time \(t\). To better quantify the behavior of each technique, we have evaluated the performance for each voice flow (namely for each codec), for each technique, and for each time-based variable as shown in Table IV. Each sub-table contains the performance per voice flow in terms of test Root Mean Square Error (RMSE), test Mean Absolute value of Errors (MAE), Mean Absolute Percentage Error, defined, respectively, as \[RMSE_{j}=\sqrt{\frac{\sum_{t=1}^{L}(y_{jt}-\hat{y}_{jt})^{2}}{L}}, \tag{21}\] \[MAE_{j}=\frac{\sum_{t=1}^{L}|y_{jt}-\hat{y}_{jt}|}{L}, \tag{22}\] \[MAPE_{j}=\frac{100}{L}\sum_{t=1}^{L}\left|\frac{y_{jt}-\hat{y}_{jt}}{y_{jt}}\right|. \tag{23}\] Such metrics are computed for each time series \(j=1,...,N\), with \(N=6\) and \(L\) the time series length. These three indicators are often used jointly when evaluating the forecasting accuracy. The RMSE is a quadratic score rule which gives a relatively high weight to large errors, since these errors are squared before they are averaged. The MAE is a linear score rule designed to equally weight the individual differences. The MAPE includes a normalization to actual values and is expressed as a percentage. Since this indicator is often used as a summarizing metric, to easily pinpoint the best forecasting technique in Table IV the corresponding MAPE value is indicated in red. For each voice flow we have repeated a lag analysis (just as seen in Sect. IV-A), and we have reported the optimal lag value \(p^{*}\) next to the VAR model in each sub-table of Table IV. Let us start to notice some general facts valid for all the experiments. For each voice flow, we can notice that the three performance indicators (RMSE, MAE, and MAPE) exhibit very different ranges for each variable. For instance, in the case of MOS, RMSE and MAE never reach the value 1. This is due to the fact that all the chosen codecs guarantee a good perceived quality, with a MOS varying within a limited range of values (MOS values never lie below 4). This directly reflects in low values of all the indicators. Conversely, RTT varies within a great range of values with some unusual peaks due to the temporary network conditions (see, e.g., the peaks of RTT in green at about \(t=30\) s and \(t=415\) s in Fig. 2). In such cases, all the forecasting techniques are (obviously) not able to predict this behavior, thus the prediction error is quite large.
Fig. 8: Time series reframed into supervised learning through the sliding window method.
Fig. 9: G.722 codec: Multivariate time series forecasting for: MOS, Bandwidth, RTT, Jitter, Buffer, SNR, along with the 95% forecasting intervals in shaded pale red. In the gray area on the right, the results of forecasting for each technique.
Fig. 10: G.729 codec: Multivariate time series forecasting for: MOS, Bandwidth, RTT, Jitter, Buffer, SNR, along with the 95% forecasting intervals in shaded pale red. In the gray area on the right, the results of forecasting for each technique.
Such a condition directly reflects onto the performance indicators and, in particular, onto the RMSE, whose values are high and differ greatly from one another, since the RMSE tends to magnify large errors.
Yet, the SNR exhibits a quite standard smooth behavior with weak oscillations and no unusual peaks. Thus, the performance indicators are not dramatically high, indicating a satisfying forecasting accuracy. Indeed, if we compare the performance accuracy of each technique, we can observe that: the VAR technique, as also mentioned before, can return satisfactory results when the time series does not exhibit too many non-linearities; on average, deep techniques such as LSTM, GRU, and CNN produce satisfactory results (see MAPE values in red). For LSTM and GRU the results are justified thanks to the presence of a memory-based internal structure able to keep track of past values. For CNN, results are justified by the presence of a convolutional structure able to derive the most significant temporal features. The remaining deep-based techniques (RNN and MLP) exhibit less accurate results due to their naive internal structure, which does not exploit any particular characteristic of temporal data. Among standard machine learning techniques, XGBoost exhibits the best performance since it relies on a combination of ensemble models useful to improve the quality of prediction. The performance analysis should be complemented by a time evaluation to better compare the considered techniques. Figure 11 shows the results of such a comparison obtained by exploiting a platform equipped with 2 virtual CPUs (Intel Xeon @2.30 GHz) and 13 GB of RAM. In this time analysis we take into account all the experiments (namely, all the voice flows) in order to highlight possible variability through a boxplot representation. First, such an analysis reveals that the VAR technique is able to perform the forecasting in a few milliseconds as shown within the top-right inset. This is basically due to the fact that VAR is built through a linear combination of lagged values to forecast the next sample. In contrast, deep methods require more time (in particular during the training stage) to perform the forecasting due to their internal structure, which can be more or less complicated (e.g., memory-based cells, convolution operations). In the middle we find XGB and RF which, relying on an optimized tree-based structure, are quite fast. The boxplot representation also highlights that, when using a technique with a complex internal structure (typically, the deep-based techniques), the time variability directly increases. This is obviously connected to the fact that a more complex structure may produce higher delays. This behavior is captured through the inter-quartile range (IQR), defined as the difference between the third and first quartiles. Large IQR values imply more dispersed values. In case of deep-based techniques we observe the following IQR values: RNN (0.88), LSTM (1.59), GRU (1.58), CNN (0.42), MLP (0.79). In case of standard learning techniques we have: RF (0.15), XGB (0.14). Finally, VAR is the most stable, having the smallest IQR value, amounting to 0.006. ### _Main Findings_ Through the proposed assessment we are able to infer some general considerations about the evaluated forecasting techniques.
First of all, we can reasonably say that there is not a definitive winner, since the forecasting complexity does not allow us to select an outperforming technique in an absolute sense. An insightful comparison can be made between the statistical approach (represented by VAR) and the learning techniques. First, we highlight that the VAR method allows complete control over the analytical structure of each time series. In particular, its ancillary analyses (e.g., residuals, impulse response) provide deep insights about the time series composition and their mutual relationships. In contrast, the data-driven approach adopted by learning techniques does not allow one to capture many analytical details. For instance, guaranteeing the stationarity condition (not required by learning approaches) allows one to obtain useful descriptors (mean, variance, correlation) of the future behavior of a time series. Conversely, in case the stationarity condition is violated (namely if the series is consistently increasing over time), the sample mean and variance will grow with the sample size, and will tend to underestimate the mean and variance in future periods. Second, VAR is a good choice in case the time series exhibits a good stability over time, or when the observation time is wider than tens of minutes (e.g., per-month or per-year). In this latter case, in fact, the temporal irregularities tend to be smoother, and the linear combination of past lagged values offers better performance. Conversely, being intrinsically adaptive, learning techniques are more responsive in the presence of network parameter fluctuations. Furthermore, VAR offers competitive performance in terms of computation times due to the simplicity of the model. On the other hand, deep methods (RNN, LSTM, GRU, CNN, MLP) exhibit slower computation times along with high temporal uncertainty (high IQR values), mainly due to their complex internal structure. Among standard ML techniques, XGBoost offers an interesting trade-off between accuracy and time. We finally notice that, differently from all the learning techniques, VAR does not need any hyper-parameter tuning (other than the optimal lag) which, if not accurate, could lead to poor performance.
Fig. 11: Box-plots of the elapsed times for each forecasting technique applied to all the available voice flows.
## VI Conclusion In this work we tackle the problem of forecasting mobile VoIP traffic in a real cellular environment. The main purpose is to provide valuable information to network operators, allowing them to optimize the network planning in mobile environments. In particular, we characterize the temporal evolution of the most important QoS/QoE descriptors of VoIP traffic through a multivariate time series assessment. Forecasting techniques such as Vector Autoregression and machine learning approaches have been compared to highlight _pros_ and _cons_ both in terms of performance and times. The work presents a series of novelties. First, we propose a multivariate time series characterization of network descriptors, an approach currently used by econometricians to model and predict the evolution of stock markets. Through such an approach it is possible to analytically capture the interdependencies among the stochastic processes which govern the behavior of the network variables. Then, the time series problem has been turned into a supervised learning framework through the sliding window technique.
Such reframing of the problem is useful to: _i_) reinterpret the classic concepts of training/test sets in terms of temporal values of a time series aimed at forecasting future values of network descriptors; _ii_) compare in a critical manner statistical techniques (here represented by the VAR model) and machine learning methods. Results show that VAR is the optimal choice when complete analytical control over the variables is needed, when the network fluctuations are not so persistent, or when strict processing time constraints are present. In contrast, learning-based techniques provide excellent accuracy in case of network instability due to their data-driven approach. Finally, the whole assessment is supported by an experimental campaign in a real mobility LTE-A environment, where, through the evolved RTCP-XR protocol, we are able to derive network metrics typically neglected in the literature (e.g., MOS, SNR, playout delay buffer). This work remains open to future investigations along several directions: _i_) the main techniques adopted for this analysis could be extended to technologies such as 5G as they become more pervasive and with the possibility of acquiring data from real settings; _ii_) many derived models could be used as a benchmark to design more realistic network simulators; _iii_) new parameters such as the car's speed could be gathered and related to the behavior of the VoIP metrics; _iv_) it could be possible to repeat the whole analysis in a transformed domain (e.g., wavelet domain in place of time domain).
2304.08338
Weighted extremal metrics on blowups
We show that if a compact K\"ahler manifold admits a weighted extremal metric for the action of a torus, so too does its blowup at a relatively stable point that is fixed by both the torus action and the extremal field. This generalises previous results on extremal metrics by Arezzo--Pacard--Singer and Sz\'ekelyhidi to many other canonical metrics, including extremal Sasaki metrics, deformations of K\"ahler--Ricci solitons and $\mu$-cscK metrics. In a sequel to this paper, we use this result to study the weighted K-stability of weighted extremal manifolds.
Michael Hallam
2023-04-17T15:00:40Z
http://arxiv.org/abs/2304.08338v2
# Weighted extremal metrics on blowups ###### Abstract. We show that if a compact Kahler manifold admits a weighted extremal metric for the action of a torus, so too does its blowup at a relatively stable point that is fixed by both the torus action and the extremal field. This generalises previous results on extremal metrics by Arezzo-Pacard-Singer and Szekelyhidi to many other canonical metrics, including extremal Sasaki metrics, deformations of Kahler-Ricci solitons and \(\mu\)-cscK metrics. In a sequel to this paper, we use this result to study the weighted K-stability of weighted extremal manifolds. ###### Contents * 1 Introduction * 2 Background * 2.1 Kahler geometry * 2.2 Weighted extremal metrics * 3 Setting up the problem * 3.1 Burns-Simanca metric * 3.2 The approximate solution * 3.3 The deformation problem * 3.4 Weighted norms * 4 Moment map estimates * 5 Estimates for the weighted linearisation * 6 A right-inverse of the linearised operator * 7 Completing the proof * 8 Examples ## 1. Introduction Ever since the seminal work of Calabi [1], the problem of whether a compact Kahler manifold admits a canonical Kahler metric has been a driving force in the field of Kahler geometry. The most studied among canonical metrics are the Kahler-Einstein metrics, but much research in the last thirty years has been motivated by the constant scalar curvature Kahler (cscK) or, more generally, extremal problem [1]. Alongside these, there are many other notions of canonical metric available, for example Kahler-Ricci solitons which arise as (possibly singular) limits of the Kahler-Ricci flow [13, 14]. Other important examples include extremal Sasaki metrics [1], which provide a notion of canonical metric on manifolds of odd real dimension, and conformally Kahler-Einstein-Maxwell (cKEM) metrics [10]. It was shown recently that all of these examples can be treated under the same framework through the notion of a _weighted extremal metric_, due to Lahdiil [14]. Around the same time, Inoue independently introduced a closely related generalisation of cscK and Kahler-Ricci solitons called \(\mu\)_-cscK_ metrics, which form a special subclass of Lahdiil's metrics [15]. To define the weighted extremal equation, one considers a compact real torus \(T\) acting on a compact Kahler manifold \((M,\omega)\) by hamiltonian isometries. This has a moment map \(\mu:M\to\mathfrak{t}^{*}\) with image \(P\subset\mathfrak{t}^{*}\) a convex polytope. The "weights" of the weighted cscK equation are then smooth positive functions \(v,w:P\to\mathbb{R}_{>0}\). Using these functions, one can deform the scalar curvature \(S(\omega)\) to the \((v,w)\)_-weighted scalar curvature_: \[S_{v,w}(\omega):=\frac{1}{w(\mu)}\left(v(\mu)S(\omega)-2\Delta(v(\mu))+\frac{ 1}{2}\mathrm{Tr}(g\circ\mathrm{Hess}(v)(\mu))\right).\] A \((v,w)\)_-weighted extremal metric_ is then a metric \(\omega\) such that the gradient of \(S_{v,w}(\omega)\) is a real holomorphic vector field, called the _extremal field_. For the special case in which \(S_{v,w}(\omega)\) is constant, we call \(\omega\) a \((v,w)\)_-weighted cscK metric_. Through various choices of the weight functions \(v\) and \(w\), one can recover all of the examples of canonical metrics described above; we refer to Section 2 for more details and precise definitions. In the setting of cscK metrics, an important general construction was given by Arezzo-Pacard, who showed that if a manifold with discrete automorphism group admits a cscK metric then so too does its blowup at a point [1]. 
This was later generalised by Arezzo-Pacard-Singer to the extremal setting [1], where one instead blows up a certain collection of fixed points of the extremal field. Szekelyhidi then strengthened the results of Arezzo-Pacard-Singer on extremal metrics [21, 22], including to the case of blowing up a single fixed point satisfying a stability condition. Recently, Dervan-Sektnan gave a complete result, classifying when the blowup at an arbitrary point (not necessarily fixed) admits an extremal metric [16]. Not only do these results furnish many examples of extremal metrics, but they are also historically important for their use in partially proving various formulations of the Yau-Tian-Donaldson conjecture. For instance, after Donaldson proved that a manifold admitting a cscK metric is K-semistable [13], Stoppa used Donaldson's result along with the theorem of Arezzo-Pacard [1] to strengthen this to K-stability in the case of discrete automorphisms [15, 16]. Later, Stoppa-Szekelyhidi used the blowup result of Arezzo-Pacard-Singer for extremal metrics [1] to prove relative K-polystability of extremal manifolds [17]. In this paper, we generalise the results [1, 21] of Arezzo-Pacard-Singer and Szekelyhidi to the weighted extremal setting. Let \((M,\omega)\) be a compact Kahler manifold, and let \(T\) be a real torus acting effectively on \(M\) by hamiltonian isometries, with moment map \(\mu:M\to\mathfrak{t}^{*}\). Given a \(T\)-invariant Kahler potential \(\varphi\) with respect to \(\omega\), one can write down an explicit moment map \(\mu_{\varphi}\) for \(\omega_{\varphi}:=\omega+i\partial\overline{\partial}\varphi\) such that \(\mu_{\varphi}(M)=P=\mu(M)\). Thus, modifying the moment map in this way, one can define the weighted scalar curvature \(S_{v,w}(\omega_{\varphi})\) for any such \(\varphi\), and seek a \(T\)-invariant solution of the weighted extremal equation in the class \([\omega]\). Let \(p\in M\) be a fixed point of the \(T\)-action, and denote by \(\pi:\mathrm{Bl}_{p}M\to M\) the blowup of \(M\) at \(p\), with exceptional divisor \(E\subset\mathrm{Bl}_{p}M\). The \(T\)-action on \(M\) lifts to \(\mathrm{Bl}_{p}M\), and for \(\epsilon>0\) sufficiently small the cohomology class \([\pi^{*}\omega-\epsilon E]\) contains a \(T\)-invariant Kahler metric \(\omega_{\epsilon}\). In Lemma 3.2 below, we show that there exists a moment map \(\mu_{\epsilon}:\mathrm{Bl}_{p}M\to\mathfrak{t}^{*}\) for \(\omega_{\epsilon}\) whose moment polytope \(P_{\epsilon}:=\mu_{\epsilon}(\mathrm{Bl}_{p}M)\) is contained in the moment polytope \(P:=\mu(M)\). Thus, given a choice of weight functions \(v\) and \(w\) on \(P\), we can restrict these to \(P_{\epsilon}\), and search for a \((v,w)\)-weighted extremal metric in the class \([\pi^{*}\omega-\epsilon E]\). In order for the blowup \(\mathrm{Bl}_{p}M\) to admit a weighted extremal metric, we will need the point \(p\) to satisfy a stability condition. Roughly, we first construct a certain subgroup \(H\) of the group of \(T\)-commuting hamiltonian isometries \(G\) of \((M,\omega)\). Choosing an invariant inner product on the Lie algebra \(\mathfrak{h}\) of \(H\), we may identify \(\mathfrak{h}\) with its dual \(\mathfrak{h}^{*}\), and thus consider the moment map \(\mu_{H}:M\to\mathfrak{h}^{*}\) for the \(H\)-action as a map \(\mu_{H}^{\#}:M\to\mathfrak{h}\). The point \(p\) is _relatively stable_ if \(\mu_{H}^{\#}(p)\in\mathfrak{h}_{p}\), i.e. the vector field generated by \(\mu_{H}^{\#}(p)\) fixes \(p\). 
We refer to Section 3.3 for the full definition of \(H\), but for now we remark that if the torus \(T\) is maximal then \(H=T\), and any fixed point of \(T\) will automatically be relatively stable. **Theorem 1.1**.: _Let \((M,\omega)\) be a \((v,w)\)-weighted extremal manifold, and let \(p\in M\) be a relatively stable point that is fixed by both the \(T\)-action and the extremal field. Denote by \(\pi:\mathrm{Bl}_{p}M\to M\) the blowup of \(M\) at \(p\) with exceptional divisor \(E\subset\mathrm{Bl}_{p}M\). Then for all \(\epsilon>0\) sufficiently small, the class \([\pi^{*}\omega-\epsilon^{2}E]\) contains a \((v,w)\)-weighted extremal metric._ In fact, although we will not explicitly prove this here, the proof generalises easily to blowing up a finite collection of points satisfying a suitable joint stability condition. **Theorem 1.2**.: _Let \((M,\omega)\) be a \((v,w)\)-weighted extremal manifold of dimension \(n\), and let \(p_{1},\dots,p_{N}\) be a collection of \(T\)-fixed points in \(M\) that are also fixed by the extremal field. Let \(a_{1},\dots,a_{N}\in\mathbb{R}_{>0}\) be such that the vector field generated by_ \[\sum_{j=1}^{N}a_{j}^{n-1}\mu_{H}^{\#}(p_{j})\in\mathfrak{h}\] _vanishes at each of the \(p_{j}\). Denote by \(\pi:\mathrm{Bl}_{p_{1},\dots,p_{N}}M\to M\) the blowup of \(M\) at the points \(p_{1},\dots,p_{N}\) with exceptional divisors \(E_{j}:=\pi^{-1}(p_{j})\). Then for all \(\epsilon>0\) sufficiently small, the class \([\pi^{*}\omega-\epsilon^{2}(a_{1}E_{1}+\dots+a_{N}E_{N})]\) contains a \((v,w)\)-weighted extremal metric._ The structure of the proof follows closely the work [10]. Namely, we first construct an approximate solution \(\omega_{\epsilon}\) of the weighted extremal equation by gluing \(\omega\) to a rescaling of a model metric near the exceptional divisor \(E\), called the _Burns-Simanca metric_. The approximate solution \(\omega_{\epsilon}\) is then deformed to a genuine solution via the contraction mapping theorem. Despite the structural similarities, there are many new technical obstacles that arise in the course of the proof. First among these is the construction of the moment map \(\mu_{\epsilon}\), whose image \(P_{\epsilon}\) is contained in \(P\). It is not enough to know that \(\mu_{\epsilon}\) exists, but we also require a careful understanding of \(\mu_{\epsilon}\) near the exceptional divisor. Furthermore, there are many new terms in the weighted extremal equation that must be estimated in order to apply the contraction mapping theorem. The bulk of the groundwork towards these estimates is carried out in Section 4, where we use the explicit description of \(\mu_{\epsilon}\) from Lemma 3.2. There is one conceptual point of interest that arises in the proof, namely why it is possible to glue in the Burns-Simanca metric--a scalar flat metric--rather than some \((v,w)\)-weighted analogue which has vanishing \((v,w)\)-weighted scalar curvature. To give a rough justification, near the exceptional divisor \(E\), the image of \(\mu_{\epsilon}\) is a small region of \(P_{\epsilon}\), on which the weight functions \(v\) and \(w\) appear approximately constant. The Burns-Simanca metric can be considered as a weighted cscK metric with _constant weight functions_. Thus, it is plausible that on this region we can deform the Burns-Simanca metric to a weighted cscK metric with the required weight functions. This is partly justified by deformation results on weighted extremal metrics found in [10, Chapter 6.1]. 
Rather than taking this path of deforming the Burns-Simanca metric to some other metric which we glue in, we simply deform the approximate metric \(\omega_{\epsilon}\) built from the Burns-Simanca metric directly to a weighted extremal metric in one clean stroke. Taking different choices of weight functions \(v\) and \(w\), we obtain analogues of [1, 2] for other kinds of canonical Kahler metrics. For example, suppose that \([\omega]\) is the first Chern class \(c_{1}(L)\) of an ample line bundle \(L\). For certain choices of \(v,w\) depending on an element \(\xi\in\mathfrak{t}\), \((v,w)\)-weighted extremal metrics on \(M\) correspond to extremal Sasaki metrics on the unit circle bundle \(S\) of \(L^{*}\) with Sasaki-Reeb vector field \(\xi\)[1]. **Corollary 1.3**.: _Let \((M,L)\) be a \(T\)-equivariant polarised manifold, and suppose that \(\omega\in c_{1}(L)\) induces an extremal Sasaki metric on \(S\subset L^{*}\) with Sasaki-Reeb field \(\xi\in\mathfrak{t}\). Let \(p\in M\) be a relatively stable point fixed by \(T\) and the extremal field. Then for all rational \(\epsilon>0\) sufficiently small there exists a Kahler metric \(\omega_{\epsilon}\) on the blowup \(\operatorname{Bl}_{p}M\) in the class \(c_{1}(\pi^{*}L-\epsilon E)\) satisfying the following: for all \(k>0\) such that \(k\epsilon\in\mathbb{Z}\), \(k\omega_{\epsilon}\) induces an extremal Sasaki metric on the unit circle bundle of \(k(-\pi^{*}L+\epsilon E)\) with Sasaki-Reeb field \(\xi\)._ We have stated the application to extremal Sasaki metrics because this is the main instance in which the interpretation of the theorem is clear cut. For example, we cannot conclude that the blowup of a manifold with a Kahler-Ricci soliton also admits a Kahler-Ricci soliton. For starters, the blowup may not be Fano, and even if it is, the class \([\pi^{*}\omega-\epsilon E]\) is not the canonical class anyway. What's more, the Kahler-Ricci soliton equation corresponds to a weighted extremal metric with weight functions \(v=w=e^{(\xi,-)}\)_and_ extremal field \(\xi\in\mathfrak{t}\). Our blowup result leaves the weight functions \(v\) and \(w\) unchanged, however we do not have control over the extremal field--in general it will only be a small deformation of the original extremal field \(\xi\), so will not match the weight functions. Thus, the metric we produce is only a small deformation of a Kahler-Ricci soliton, rather than a genuine Kahler-Ricci soliton. Nonetheless, it is still useful to know of the existence of a weighted extremal metric on the blowup. For example, this carries interesting ramifications about the automorphism group of the blowup, via the Matsushima-Lichnerowicz theorem for weighted extremal manifolds [15, Theorem B.1]. Furthermore, the blowup result is still useful for proving weighted stability of manifolds admitting weighted extremal metrics. Indeed, in a sequel to this paper currently in preparation, we will use the blowup result for weighted extremal metrics to refine the weighted K-semistability of weighted cscK manifolds proven by Lahdili [15] and Inoue [14] to weighted K-polystability relative to a maximal torus; this also extends the weighted K-polystability proven in [1] with respect to _smooth_ test configurations to arbitrary (possibly singular) test configurations. Acknowledgements. I thank Eveline Legendre for a helpful remark on moment polytopes, and Lars Sektnan for valuable advice on weighted Holder spaces and comments on the manuscript. 
I also thank Zakarias Sjostrom Dyrefelt and Ruadhai Dervan for their interest and comments. ## 2. Background ### Kahler geometry We briefly lay out our notation and terminology for Kahler metrics. Let \((M,\omega)\) be a compact \(n\)-dimensional Kahler manifold. The Ricci curvature of \(\omega\) is \[\operatorname{Ric}(\omega):=-\frac{i}{2\pi}\partial\overline{\partial}\log \omega^{n},\] and the scalar curvature is \[S(\omega):=\Lambda_{\omega}\operatorname{Ric}(\omega)=\frac{n\operatorname{ Ric}(\omega)\wedge\omega^{n-1}}{\omega^{n}}.\] For a smooth function \(f\in C^{\infty}(M,\mathbb{C})\), we write \(\nabla^{1,0}f\) for the projection of the gradient \(\nabla f\) to \(T^{1,0}M\). In local coordinates, \(\nabla^{1,0}f=g^{j\bar{k}}\partial_{\bar{k}}f\). The operator \(\mathcal{D}:C^{\infty}(M,\mathbb{C})\to\Gamma(M,T^{1,0}\otimes\Omega^{0,1})\) is defined by \[\mathcal{D}f:=\overline{\partial}\nabla^{1,0}f,\] where \(\overline{\partial}\) is the del-bar operator of the holomorphic vector bundle \(T^{1,0}M\). The _Lichnerowicz operator_ is \[f\mapsto\mathcal{D}^{*}\mathcal{D}f,\] where the adjoint of \(\mathcal{D}\) is taken with respect to the \(L^{2}\)-metric on tensors determined by \(\omega\). It is straightforward but tedious to derive the following product rule for the adjoint of \(\mathcal{D}\): \[\mathcal{D}^{*}(fA)=f\mathcal{D}^{*}A-(\overline{\partial}^{*}A,\nabla^{1,0} f)-(PA,\overline{\partial}f)+(\mathcal{D}f,A),\] for \(f\in C^{\infty}(M,\mathbb{C})\) and \(A\in\Gamma(M,T^{1,0}\otimes\Omega^{0,1})\). Here \((\,,)\) is the pointwise hermitian inner product on tensors, and \(P:\Gamma(M,T^{1,0}\otimes\Omega^{0,1})\to\Gamma(M,\Omega^{0,1})\) is the first order linear differential operator \[PA:=-g^{\bar{k}m}\partial_{m}(g_{\bar{k}\ell}A^{\ell}_{\bar{j}})d\overline{z}^{j}.\] Note the adjoint operator \(\overline{\partial}^{*}\) is given by the formula \[\overline{\partial}^{*}A=-g^{\bar{k}m}\partial_{m}(g_{\bar{j}\ell}A^{\ell}_{ \bar{k}})g^{\bar{j}i}\frac{\partial}{\partial z^{i}}. \tag{1}\] This appears similar to the operator \(P\), but it is not the quite same since the indices \(\bar{j}\) and \(\bar{k}\) are swapped. However, if \(A\) has the symmetry \[g_{\bar{j}\ell}A^{\ell}_{\bar{k}}=g_{\bar{k}\ell}A^{\ell}_{\bar{j}}\] then we will indeed have \(\overline{\partial}^{*}A=(PA)^{\#}\), where \(\#\) is conversion from a \((0,1)\)-form to a \((1,0)\)-vector field via the metric. This relation is satisfied in the important situation \(A=\mathcal{D}h\), in which case \(g_{\bar{j}\ell}A^{\ell}_{\bar{k}}=\partial_{\bar{j}}\partial_{\bar{k}}h\) at the centre of a normal coordinate system. For later use, we therefore record the following: **Lemma 2.1**.: _For \(f,g\in C^{\infty}(M,\mathbb{C})\),_ \[\mathcal{D}^{*}(f\mathcal{D}g)=f\mathcal{D}^{*}\mathcal{D}g-2(\overline{ \partial}^{*}\mathcal{D}g,\nabla^{1,0}f)+(\mathcal{D}f,\mathcal{D}g).\] If \(\nabla^{1,0}f\) is a holomorphic vector field, we call \(f\) a _holomorphy potential_. The Lichnerowicz operator is a self-adjoint elliptic operator whose kernel consists of the holomorphy potentials. Although we will not need this, we remark that a holomorphic vector field has a holomorphy potential precisely if the vector field vanishes somewhere [10], and so the set of holomorphic vector fields arising from holomorphy potentials is independent of the choice of Kahler metric. Let \(\mathcal{H}\) be the space of smooth Kahler potentials with respect to \(\omega\). 
For \(\varphi\in\mathcal{H}\), we write \(\omega_{\varphi}:=\omega+i\partial\overline{\partial}\varphi\) for the corresponding Kahler metric. The scalar curvature determines an operator \(S:\mathcal{H}\to C^{\infty}(M,\mathbb{R})\), \(\varphi\mapsto S(\omega_{\varphi})\). The linearisation \(L_{\varphi}\) of the scalar curvature operator at \(\varphi\in\mathcal{H}\) is given by \[L_{\varphi}\psi=\mathcal{D}^{*}_{\varphi}\mathcal{D}_{\varphi}\psi+\frac{1}{2}\nabla_{\varphi}S(\omega_{\varphi})\cdot\nabla_{\varphi}\psi,\] for \(\psi\in C^{\infty}(M,\mathbb{R})=T_{\varphi}\mathcal{H}\), where \(\mathcal{D}_{\varphi}\) and \(\nabla_{\varphi}\) denote the operators defined by the Kahler metric \(\omega_{\varphi}\). ### Weighted extremal metrics In this section, we review the weighted cscK metrics introduced by Lahdili [11]. Take: 1. \((M,\omega)\) a compact Kahler manifold, 2. \(T\) a real torus acting effectively on \((M,\omega)\) by hamiltonian isometries, 3. \(\mu:M\to\mathfrak{t}^{*}\) a moment map for the \(T\)-action, 4. \(P:=\mu(M)\subset\mathfrak{t}^{*}\) the moment polytope, 5. \(v,w:P\to\mathbb{R}_{>0}\) positive smooth functions. Note \(\mu(M)\) is indeed a convex polytope by a theorem of Atiyah [1] and Guillemin-Sternberg [12]. Our convention for moment maps is the following: given an element \(\xi\in\mathfrak{t}\), \[\langle d\mu,\xi\rangle=-\omega(\xi,-),\] where we abuse notation by conflating the element \(\xi\) of \(\mathfrak{t}\) with the real holomorphic vector field it generates on \(M\); here \(\langle-,-\rangle\) denotes the natural pairing between \(\mathfrak{t}^{*}\) and \(\mathfrak{t}\). **Definition 2.2**.: _Given the above data, we define the \(v\)-weighted scalar curvature of \(\omega\) to be_ \[S_{v}(\omega):=v(\mu)S(\omega)-2\Delta(v(\mu))+\frac{1}{2}\mathrm{Tr}(g\circ\mathrm{Hess}(v)(\mu)).\] _Here \(S(\omega)=\Lambda_{\omega}\mathrm{Ric}(\omega)\) is the scalar curvature, \(\Delta=-\partial^{*}\partial\) is the Kahler Laplacian of \(\omega\), and \(g\) is the Riemannian metric determined by \(\omega\)._ Concretely, the term \(\mathrm{Tr}(g\circ\mathrm{Hess}(v)(\mu))\) may be written \[\sum_{a,b}v_{,ab}(\mu)g(\xi_{a},\xi_{b}),\] where \(\xi_{1},\dots,\xi_{r}\) is a basis of \(\mathfrak{t}\), and \(v_{,ab}\) denotes the \(ab\)-partial derivative of \(v\) with respect to the dual basis of \(\mathfrak{t}^{*}\). **Remark 2.3**.: While the definition of \(S_{v}(\omega)\) may seem arbitrary at first, the formula arises naturally as an infinite-dimensional moment map on the space \(\mathcal{J}^{T}\) of \(T\)-invariant almost complex structures compatible with \(\omega\), when one perturbs the metric on this space using the weight function \(v\) [1, Section 4]. What makes this curvature worth studying is that it can further recover many well-known and important examples of canonical metrics in Kahler geometry--see Example 2.7 below. **Remark 2.4**.: In [1], the \(v\)-weighted scalar curvature is instead written \[S_{v}(\omega)=v(\mu)S(\omega)+2\Delta(v(\mu))+\mathrm{Tr}(g\circ\mathrm{Hess}(v)(\mu)).\] Our weighted scalar curvature is equal to half of this, and the differences in signs and constants are due to the differences between the Riemannian and Kahler curvatures and Laplacians. **Definition 2.5** ([1]).: _The metric \(\omega\) is:_ 1. \(a\) \((v,w)\)-weighted cscK metric_, if_ \[S_{v}(\omega)=c_{v,w}w(\mu),\] _where_ \(c_{v,w}\) _is a constant;_ 2.
\(a\) \((v,w)\)-weighted extremal metric _if the function_ \[S_{v,w}(\omega):=S_{v}(\omega)/w(\mu)\] _is a holomorphy potential with respect to_ \(\omega\) Sometimes we shorten the full name to just a \((v,w)\)-cscK metric, or a \((v,w)\)-extremal metric. If the weight functions \(v\) and \(w\) are understood or irrelevant, we may also refer to such a metric simply as a weighted cscK metric, or a weighted extremal metric. **Remark 2.6**.: Our definition of weighted extremal metric is slightly different to that in [10]. Namely, in [10] it is required that the function \(S_{v}(\omega)/w(\mu)\) is of the form \(w_{\mathrm{ext}}(\mu)\), where \(w_{\mathrm{ext}}:P\to\mathbb{R}\) is an affine linear function. Here we do not require that \(S_{v}(\omega)/w(\mu)\) can be described as such, but note that since this function is a \(T\)-invariant holomorphy potential, we can enlarge the torus to \(T^{\prime}\supset T\) by taking the torus generated by a basis \(\xi_{1},\ldots,\xi_{r}\) for \(\mathfrak{t}\) together with the holomorphic vector field \(\xi\) determined by \(S_{v}(\omega)/w(\mu)\). If \(P^{\prime}\) is the moment polytope of \(T^{\prime}\), \(S_{v}(\omega)/w(\mu)\) is then the composition of the affine linear function \(\langle\xi,-\rangle:P^{\prime}\to\mathbb{R}\) with \(\mu_{\mathfrak{t}^{\prime}}:M\to\mathfrak{t}^{\prime}\). When the torus \(T\) is maximal, our two definitions therefore coincide. **Example 2.7**.: Fix an element \(\xi\in\mathfrak{t}\), and denote by \(\ell_{\xi}:\mathfrak{t}^{*}\to\mathbb{R}\) the corresponding element of \((\mathfrak{t}^{*})^{*}\). Let \(a\) be a constant such that \(a+\ell_{\xi}>0\) on \(P\). Many standard canonical metrics can be obtained from certain choices of the functions \(v,w\)[10, Section 3]: 1. **CscK:** Taking \(v\) and \(w\) constant, the weighted cscK equation reduces to \[S(\omega)=c,\] which is the usual cscK equation. 2. **Extremal:** Taking \(v\) and \(w\) constant again, a weighted extremal metric is precisely an extremal metric in the usual sense, meaning \(\nabla^{1,0}S(\omega)\) is a holomorphic vector field. 3. **Kahler-Ricci soliton:** For \(M\) Fano and \(\omega\in c_{1}(X)\), the Kahler-Ricci soliton equation is \[\mathrm{Ric}(\omega)-\omega=\mathcal{L}_{\xi}\omega,\] where \(\mathcal{L}_{\xi}\) is the Lie derivative with respect to the real holomorphic vector field generated by \(\xi\). A weighted extremal metric in \(c_{1}(X)\) with weights \(v=w=e^{\ell_{\xi}}\) is a Kahler-Ricci soliton provided the extremal field is \(\xi\). That is, the Kahler-Ricci soliton equation may be written \[\nabla^{1,0}S_{v,w}(\omega)=\xi\] for the choice of weights \(v=w=e^{\ell_{\xi}}\). The Kahler-Ricci soliton equation has an extensive literature. One important context in which they arise is as the Gromov-Hausdorff limits of solutions to the Kahler-Ricci flow, which is a powerful theorem of Chen-Wang [14] and Chen-Sun-Wang [14]. 4. **Extremal Sasaki:** Suppose that \([\omega]\) is the first Chern class \(c_{1}(L)\) of an ample line bundle \(L\to M\). A choice of Kahler metric \(\omega_{\varphi}\in[\omega]\) then corresponds to a Sasaki metric on the unit circle bundle \(S\) of \(L^{*}\). Letting \[v:=(a+\ell_{\xi})^{-n-1},\quad w:=(a+\ell_{\xi})^{-n-3},\] a \((v,w)\)-extremal metric on \(M\) then corresponds to an extremal Sasaki metric on \(S\) with Sasaki-Reeb field \(\xi\)[1, 1]. 5. 
**Conformally Kahler-Einstein Maxwell:** Letting \[v=(a+\ell_{\xi})^{-2n+1},\quad w=(a+\ell_{\xi})^{-2n-1},\] a \((v,w)\)-cscK metric on \(M\) then corresponds to a conformally Kahler Einstein-Maxwell metric [1]. 6. \(v\)**-soliton:** Suppose \(M\) is Fano and \([\omega]=c_{1}(X)\). Taking \(v\) arbitrary and defining \[w(p)=2v(p)(n+\langle d\log v(p),p\rangle),\] the \((v,w)\)-cscK equation then becomes \[\operatorname{Ric}(\omega)-\omega=i\partial\overline{\partial}\log v(\mu),\] which is the \(v\)-soliton equation [1]. 7. \(\mu\)**-cscK:** In [11, 12], Inoue has introduced and studied a class of \(\mu\)_-cscK metrics_. These are a special class of weighted extremal metrics, given by the same weight functions \(v=w=e^{\ell_{\xi}}\) and extremal field \(\xi\) as for Kahler-Ricci solitons, only one drops the condition of \(M\) being Fano [11, Section 2.1.6]. For an element \(\xi\in\mathfrak{t}\), we will write \(\mu^{\xi}:=\langle\mu,\xi\rangle=\ell_{\xi}\circ\mu\), where the pairing \(\langle-,-\rangle\) is the natural one on \(\mathfrak{t}^{*}\otimes\mathfrak{t}\). When we have chosen a basis \(\{\xi_{a}\}\) for \(\mathfrak{t}\), we will also write \(\mu^{a}\) in place of \(\mu^{\xi_{a}}\). The function \(\mu^{\xi}\) is then a hamiltonian for the infinitesimal action of \(\xi\) on \(M\). We are interested in finding a weighted extremal metric in the class \([\omega]\). Given a \(T\)-invariant Kahler potential \(\varphi\in\mathcal{H}^{T}\), let \[\mu_{\varphi}:=\mu+d^{c}\varphi.\] That is, for any \(\xi\in\mathfrak{t}\), \[\mu^{\xi}_{\varphi}:=\mu^{\xi}+d^{c}\varphi(\xi),\] where we abuse notation by writing \(\xi\) for the vector field it generates on \(M\). Our convention is that \(d^{c}:=\frac{i}{2}(\overline{\partial}-\partial)\), so that \(dd^{c}=i\partial\overline{\partial}\). **Lemma 2.8** ([1, Lemma 1]).: _With the above definition, \(\mu_{\varphi}\) is a moment map for the \(T\)-action with respect to \(\omega_{\varphi}\), and \(\mu_{\varphi}(M)=P\), where \(P\) is the moment polytope for \(\mu=\mu_{0}\)._ With this lemma in mind, it then makes sense to search for a \((v,w)\)-extremal metric in the class \([\omega]\), which is \(T\)-invariant by fiat. **Lemma 2.9** ([1, Lemma 2]).: _With the above choice of moment map \(\mu_{\varphi}\), the following quantities are independent of the choice of \(\varphi\in\mathcal{H}^{T}\):_ 1. \(\int_{M}v(\mu_{\varphi})\,\omega_{\varphi}^{n}\)_,_ 2. \(\int_{M}v(\mu_{\varphi})\operatorname{Ric}(\omega_{\varphi})\wedge\omega_{ \varphi}^{n-1}+\int_{M}\langle dv(\mu_{\varphi}),-\Delta_{\varphi}\mu_{\varphi} \rangle\,\omega_{\varphi}^{n}\)_,_ 3. \(\int_{M}S_{v}(\omega_{\varphi})\,\omega_{\varphi}^{n}\)_._ _It follows that the constant \(c_{v,w}\) of Definition 2.5 is fixed, given by_ \[c_{v,w}=\frac{\int_{M}S_{v}(\omega)\,\omega^{n}}{\int_{M}w(\mu)\,\omega^{n}}.\] **Remark 2.10**.: The significance of \(-\Delta_{\varphi}\mu_{\varphi}\) in _(2)_ of Lemma 2.9 is that it is a moment map for the Ricci curvature \(\operatorname{Ric}(\omega_{\varphi})\), see [14, Lemma 5]. That is, for any \(\xi\in\mathfrak{t}\) and \(x\in M\), \[\langle d(-\Delta_{\varphi}\mu_{\varphi})(x),\xi\rangle=\operatorname{Ric}( \omega_{\varphi})(x)(-,\xi_{x}).\] We will also need to understand well the linearisation of the weighted scalar curvature operator. 
Recall for a Kahler metric \(\omega\), the linearisation of the usual scalar curvature operator \(S:\mathcal{H}\to C^{\infty}(M,\mathbb{R})\) at \(\varphi\in\mathcal{H}\) is \[L_{\varphi}\psi=\mathcal{D}_{\varphi}^{*}\mathcal{D}_{\varphi}\psi+\frac{1}{2 }\nabla_{\varphi}S(\omega_{\varphi})\cdot\nabla_{\varphi}\psi. \tag{2}\] In the weighted setting, a very similar formula holds: **Proposition 2.11** ([14, Lemma B.1]).: _The linearisation of the weighted scalar curvature operator \(S_{v,w}:\mathcal{H}^{T}\to C^{\infty}(M,\mathbb{R})^{T}\) at \(\varphi\in\mathcal{H}^{T}\) is given by_ \[\check{L}_{\varphi}(\psi)=\frac{v(\mu_{\varphi})}{w(\mu_{\varphi})}\mathcal{ D}_{v,\varphi}^{*}\mathcal{D}_{\varphi}\psi+\frac{1}{2}\nabla_{\varphi}S_{v,w} (\omega_{\varphi})\cdot\nabla_{\varphi}\psi\] _for \(\psi\in C^{\infty}(M,\mathbb{R})^{T}=T_{\varphi}\mathcal{H}^{T}\). Here_ \[\mathcal{D}_{v,\varphi}^{*}A:=\frac{1}{v(\mu_{\varphi})}\mathcal{D}_{\varphi} (v(\mu_{\varphi})A)\] _for \(A\in\Gamma(T^{1,0}\otimes\Omega^{0,1})^{T}\)._ We now show how to rewrite \(\check{L}_{\varphi}\) in terms of \(L_{\varphi}\); for simplicity of notation we will drop the omnipresent subscript \(\varphi\). First, for any metric \(\omega\) we will write \[S_{v,w}(\omega)=\frac{v(\mu)}{w(\mu)}S(\omega)+\Phi_{v,w}(\omega),\] where \[\Phi_{v,w}(\omega):=-\frac{2}{w(\mu)}\Delta(v(\mu))+\frac{1}{2w(\mu)} \mathrm{Tr}(g\circ\mathrm{Hess}(v)(\mu)). \tag{3}\] It follows that \[\nabla S_{v,w}(\omega)\cdot\nabla\psi=\frac{v(\mu)}{w(\mu)}\nabla S(\omega) \cdot\nabla\psi+S(\omega)\nabla\left(\frac{v(\mu)}{w(\mu)}\right)\cdot\nabla \psi+\nabla\Phi_{v,w}(\omega)\cdot\nabla\psi.\] Applying Lemma 2.1, \[v(\mu)\mathcal{D}_{v}^{*}\mathcal{D}\psi =\mathcal{D}^{*}(v(\mu)\mathcal{D}\psi)\] \[=v(\mu)\mathcal{D}^{*}\mathcal{D}\psi-2(\overline{\partial}^{*} \mathcal{D}\psi,\nabla^{1,0}(v(\mu)))+(\mathcal{D}\psi,\mathcal{D}(v(\mu))).\] Putting this all together: **Lemma 2.12**.: _The linearisation \(\check{L}\) of the weighted scalar curvature operator \(S_{v,w}\) can be written_ \[\check{L}(\psi) =\frac{v(\mu)}{w(\mu)}L(\psi)-\frac{2}{w(\mu)}(\overline{ \partial}^{*}\mathcal{D}\psi,\nabla^{1,0}(v(\mu)))+\frac{1}{w(\mu)}(\mathcal{ D}\psi,\mathcal{D}(v(\mu)))\] \[+\frac{1}{2}S(\omega)\nabla\left(\frac{v(\mu)}{w(\mu)}\right) \cdot\nabla\psi+\frac{1}{2}\nabla\Phi_{v,w}(\omega)\cdot\nabla\psi,\] _where \(L\) is the linearisation (2) of the usual scalar curvature operator \(S\)._ It will also be important to understand the extra term \(\Phi_{v,w}(\omega)\). For this, let us pick a normal Riemannian coordinate system \(x_{1},\ldots,x_{2n}\) and compute at the centre: \[2\Delta(v(\mu))\] \[= \sum_{k}\frac{\partial^{2}}{\partial x_{k}^{2}}(v(\mu))\] \[= \sum_{k}\frac{\partial}{\partial x_{k}}\left(\sum_{a}v_{,a}(\mu) \frac{\partial\mu^{a}}{\partial x_{k}}\right)\] \[= \sum_{k}\sum_{a}v_{,a}(\mu)\frac{\partial^{2}\mu^{a}}{\partial x_ {k}^{2}}+\sum_{k}\sum_{a,b}v_{,ab}(\mu)\frac{\partial\mu^{a}}{\partial x_{k}} \frac{\partial\mu^{b}}{\partial x_{k}}\] \[= 2\sum_{a}v_{,a}(\mu)\Delta\mu^{a}+\sum_{a,b}v_{,ab}(\mu)g( \nabla\mu^{a},\nabla\mu^{b})\] \[= 2\sum_{a}v_{,a}(\mu)\Delta\mu^{a}+\sum_{a,b}v_{,ab}(\mu)g(\xi_{a },\xi_{b}).\] Of course, the last term here is just \(\operatorname{Tr}(g\circ\operatorname{Hess}(v)(\mu))\). 
From this calculation and (3) we conclude: **Lemma 2.13**.: _The term \(\Phi_{v,w}(\omega)\) is a linear combination of functions of the form \(u_{a}(\mu)\Delta\mu^{a}\) and \(u_{ab}(\mu)g(\xi_{a},\xi_{b})\), where the \(u_{a}\) and \(u_{ab}\) are among finitely many fixed smooth functions on the moment polytope \(P\) depending only on \(v\), \(w\) and the basis \(\{\xi_{a}\}\)._ ## 3. Setting up the problem Now that we have reviewed the relevant background material, we can proceed with setting up the proof of Theorem 1.1. Structurally this will largely follow [10], although the technicalities differ as there are many new terms that arise in the weighted setting. Let \((M,\omega)\) be a weighted extremal manifold, and let \(p\in M\) be a fixed point of the \(T\)-action and the extremal field. We wish to show that, under a certain stability condition on \(p\), the blowup \(\operatorname{Bl}_{p}M\) admits a weighted extremal metric in the class \([\pi^{*}\omega-\epsilon^{2}E]\) for all \(\epsilon>0\) sufficiently small, where \(\pi:\operatorname{Bl}_{p}M\to M\) is the blowup map and \(E\) is the exceptional divisor of the blowup. We begin by defining an approximate solution \(\omega_{\epsilon}\) to the weighted extremal equation on \(\operatorname{Bl}_{p}M\). This approximate solution is constructed by gluing the given weighted extremal metric \(\omega\) on \(M\) to a suitable rescaling of a model metric \(\eta\) on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) over a small neighbourhood of \(E\). This model metric \(\eta\) is called the _Burns-Simanca metric_, and we cover its properties in Section 3.1. The gluing construction is then described in Section 3.2. Given the approximate solution \(\omega_{\epsilon}\), we then seek to deform it to a metric \(\widetilde{\omega}_{\epsilon}\) that solves the weighted extremal equation up to a finite dimensional obstruction. A result of Szekelyhidi shows that if \(p\) is relatively stable then this obstruction can be overcome, and hence a weighted extremal metric on \(\operatorname{Bl}_{p}M\) exists. We set up the deformation problem in Section 3.3. The main technical tool used in deforming \(\omega_{\epsilon}\) to \(\widetilde{\omega}_{\epsilon}\) is a family of weighted Holder norms on \(\operatorname{Bl}_{p}M\), depending on \(\epsilon\). These are introduced in Section 3.4, where we cover some of their basic properties. ### Burns-Simanca metric In this section we describe the Burns-Simanca metric \(\eta\) on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\), which is a scalar-flat and asymptotically Euclidean Kahler metric. On \(\operatorname{Bl}_{0}\mathbb{C}^{2}\), it can be written explicitly: \[\eta:=i\partial\overline{\partial}(|\zeta|^{2}+\log|\zeta|), \tag{4}\] where \(\zeta=(\zeta_{1},\zeta_{2})\) are the standard coordinates on \(\mathbb{C}^{2}\backslash\{0\}\cong\operatorname{Bl}_{0}\mathbb{C}^{2}\backslash\mathbb{P}^{1}\). This metric was first shown to be scalar-flat by Burns (see [1, p. 594] and [15, Remark 1]). On \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) for \(n>2\), the metric \(\eta\) was constructed by Simanca [15]. In this case there is no explicit formula available; however, there is an asymptotic expansion of the metric. To describe this, first on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\backslash E\) write \(\zeta=(\zeta_{1},\dots,\zeta_{n})\) for the standard complex coordinates pulled back from \(\mathbb{C}^{n}\backslash\{0\}\). 
The metric \(\eta\) satisfies \[\eta=i\partial\overline{\partial}(|\zeta|^{2}+g(\zeta)) \tag{5}\] on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\backslash E\), where \[g(\zeta)=-|\zeta|^{4-2n}+\operatorname{O}(|\zeta|^{3-2n})\] as \(|\zeta|\to\infty\). Here a smooth real-valued function \(h\) is declared to be \(\operatorname{O}(|\zeta|^{\ell})\) if it lies in the weighted Holder space \(C^{k,\alpha}_{\ell}(\operatorname{Bl}_{0}\mathbb{C}^{n})\) for all \(k\) and \(\alpha\in(0,1)\). We will go over the weighted Holder norms in Section 3.4, but for now an equivalent definition is that for any multi-index \(I=(i_{1},\dots,i_{n},j_{1},\dots,j_{n})\), there exists a constant \(C_{I}>0\) such that \[\partial_{I}h:=\frac{\partial^{|I|}h}{\partial x_{1}^{i_{1}}\cdots\partial x_ {n}^{i_{n}}\partial y_{1}^{j_{1}}\cdots\partial y_{n}^{j_{n}}}\] satisfies \(|\partial_{I}h|\leq C_{I}|\zeta|^{\ell-|I|}\) for all \(|\zeta|\gg 0\), where \(\zeta_{k}=x_{k}+iy_{k}\) and \(|I|:=i_{1}+\dots+i_{n}+j_{1}+\dots+j_{n}\). In our situation, we will have a compact torus \(T\) acting on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\), lifting a linear \(T\)-action on \(\mathbb{C}^{n}\). Changing the basis of \(\mathbb{C}^{n}\), this action can be assumed to be diagonal, in which case it is straightforward from (4) and the construction in [15] that \(\eta\) is \(T\)-invariant. We claim that there exists a moment map \(\mu_{\eta}:\operatorname{Bl}_{0}\mathbb{C}^{n}\to\mathfrak{t}^{*}\) for the \(T\)-action. To see this, note from [15, Proposition 1] that on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\backslash E\), \(\eta\) can be written \(i\partial\overline{\partial}(s(u)+\log(u))\), where \(u=|\zeta|^{2}\), and \(s:[0,\infty)\to\mathbb{R}\) is a smooth function with \(s^{\prime}(0)>0\). The component \(i\partial\overline{\partial}\log u\) simply is the pullback of the Fubini-Study metric on \(\mathbb{P}^{n-1}\) to \(\mathcal{O}_{\mathbb{P}^{n-1}}(-1)\cong\operatorname{Bl}_{0}\mathbb{C}^{n}\), for which we already have a moment map. So it suffices to construct a moment map for the remaining term \(i\partial\overline{\partial}s(u)\), but such a moment map is given by \(d^{c}(s(u))\), since \(s(u)\) is \(T\)-invariant. By (5) and Lemma 2.8, on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\backslash E\) a moment map for the \(T\)-action is \[\mu_{\eta}^{\xi}=\sum_{j=1}^{n}A_{j}^{\xi}|\zeta_{j}|^{2}+d^{c}g(\xi),\] where \(\xi\in\mathfrak{t}\) and \(\operatorname{diag}(A_{1}^{\xi},\dots,A_{n}^{\xi})\in GL(\mathbb{C}^{n})\) is the infinitesimal generator of the action of \(\xi\) on \(\mathbb{C}^{n}\).1 In the case \(n=2\) we take \(g:=\log|\zeta|\) and the same formula holds. By uniqueness of moment maps up to addition of constants, this formula extends over the exceptional divisor \(E\) to a well-defined moment map on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\). We note the first term of \(\mu_{\eta}^{\xi}\) is \(\operatorname{O}(|\zeta|^{2})\), and claim the remaining term is \(\operatorname{O}(|\zeta|^{4-2n})\) when \(n>2\). To see this, note that \(g\) is \(\operatorname{O}(|\zeta|^{4-2n})\), and so \(d^{c}g\) is \(\operatorname{O}(|\zeta|^{3-2n})\). But \(\xi\) is \(\operatorname{O}(|\zeta|)\), so \(d^{c}g(\xi)\) is \(\operatorname{O}(|\zeta|^{4-2n})\) by the Leibniz rule. In the case \(n=2\), \(d^{c}g\) is \(\operatorname{O}(|\zeta|^{-1})\) and \(d^{c}g(\xi)\) is \(\operatorname{O}(1)\). 
We record this for future use: Footnote 1: Recall we use the shorthand \(\mu^{\xi}:=\langle\mu,\xi\rangle\) for moment maps, and conflate \(\xi\in\mathfrak{t}\) with the vector field it generates. **Lemma 3.1**.: _There exists a moment map \(\mu_{\eta}\) for the \(T\)-action on \((\operatorname{Bl}_{0}\mathbb{C}^{n},\eta)\), which on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\backslash E\) can be written_ \[\mu_{\eta}^{\xi}=\sum_{j=1}^{n}A_{j}^{\xi}|\zeta_{j}|^{2}+d^{c}g(\xi),\] _where \(g\) is defined by_ \[\eta=i\partial\overline{\partial}(|\zeta|^{2}+g(\zeta))\] _and satisfies \(g=\operatorname{O}(|\zeta|^{4-2n})\) when \(n>2\) and \(g=\log|\zeta|\) when \(n=2\). Furthermore, \(d^{c}g(\xi)\) is \(\operatorname{O}(|\zeta|^{4-2n})\) for \(n\geq 2\)._ ### The approximate solution Let \((M,\omega)\) be a weighted extremal manifold, and let \(p\in M\) be a common fixed point for both the \(T\)-action and the extremal field. Let \((z_{1},\dots,z_{n})\) be a system of normal coordinates centred at \(p\), with respect to which the action of \(T\) is linear and diagonal. Such coordinates exist by the Bochner linearisation theorem, which says we may choose holomorphic coordinates about \(p\) with respect to which the \(T\)-action is linear. The \(T\)-action is unitary on \(T_{p}M\), so we may further find a linear change of coordinates such that the \(T\)-action is diagonal in these coordinates, and such that the induced linear coordinates on \(T_{p}M\) are orthonormal. Lastly, taking a Taylor expansion of the metric \(\omega\) about \(p\), one sees that the linear terms in \(\omega\) can only be non-zero in the directions where the torus acts trivially. Performing a quadratic change of coordinates as in [10, Proposition 1.14], we produce normal coordinates for which the \(T\)-action is diagonal. Without loss of generality, we can assume the coordinates \(z_{1},\dots,z_{n}\) are well-defined for \(|z|<2\). For \(r<2\), let \[B_{r}:=\{z\in M:|z|<r\}\] be the ball of radius \(r\) centred at \(p\), and define \[\widetilde{B}_{r}:=\pi^{-1}(B_{r})\subset\operatorname{Bl}_{p}M.\] Similarly, for the standard complex coordinates \(\zeta\) on \(\mathbb{C}^{n}\) and \(R>0\), we will write \[D_{R}:=\{\zeta\in\mathbb{C}^{n}:|\zeta|<R\},\] and define \[\widetilde{D}_{R}:=\pi^{-1}(D_{R})\subset\operatorname{Bl}_{0}\mathbb{C}^{n}.\] For \(\epsilon>0\) sufficiently small, let \[r_{\epsilon}:=\epsilon^{\frac{2n-1}{2n+1}},\quad R_{\epsilon}:=\epsilon^{-1}r_{ \epsilon}. \tag{6}\] We then have \(r_{\epsilon}\to 0\) as \(\epsilon\to 0\), and \(R_{\epsilon}\to\infty\) as \(\epsilon\to 0\). We will identify \(\widetilde{B}_{r_{\epsilon}}\subset\mathrm{Bl}_{p}M\) with the subset \(\widetilde{D}_{R_{\epsilon}}\subset\mathrm{Bl}_{0}\mathbb{C}^{n}\) via \(\iota_{\epsilon}:\widetilde{B}_{r_{\epsilon}}\to\widetilde{D}_{R_{\epsilon}}\), which is the unique lift of the map \(\iota_{\epsilon}:B_{r_{\epsilon}}\to D_{R_{\epsilon}}\), \(\iota_{\epsilon}(z):=\epsilon^{-1}z\). Let \(\rho:\mathbb{R}_{\geq 0}\to[0,1]\) be a smooth function such that \(\rho(x)=0\) for \(x<1\) and \(\rho(x)=1\) for \(x>2\). Given \(\epsilon>0\) sufficiently small, on \(\widetilde{B}_{1}\backslash\widetilde{B}_{\epsilon}\) we define \[\gamma_{1,\epsilon}(z):=\rho(|z|/r_{\epsilon}),\quad\quad\quad\gamma_{2, \epsilon}:=1-\gamma_{1,\epsilon}, \tag{7}\] where \(r_{\epsilon}\) is defined in (6) and the coordinates \(z\) are pulled back from \(M\). 
We extend the \(\gamma_{j,\epsilon}\) to smooth functions on all of \(\mathrm{Bl}_{p}M\) by taking \(\gamma_{1,\epsilon}|_{\widetilde{B}_{\epsilon}}:=0\), \(\gamma_{1,\epsilon}|_{\mathrm{Bl}_{p}M\backslash\widetilde{B}_{1}}:=1\), and \(\gamma_{2,\epsilon}:=1-\gamma_{1,\epsilon}\). The metric \(\omega\) has an expansion \[\omega=i\partial\overline{\partial}(|z|^{2}+f(z)) \tag{8}\] about \(p\), where \(f(z)\) is \(\mathrm{O}(|z|^{4})\), by definition of normal coordinates. Since \(\pi:\widetilde{B}_{1}\backslash E\to B_{1}\backslash\{p\}\) is a biholomorphism, the coordinates \(z_{1},\dots,z_{n}\) on \(B_{1}\backslash\{p\}\) lift to coordinates on \(\widetilde{B}_{1}\backslash E\), which we denote by the same symbols. We define the approximate solution \(\omega_{\epsilon}\) on three separate regions as follows: 1. On \(\mathrm{Bl}_{p}M\backslash\widetilde{B}_{1}\), \[\omega_{\epsilon}:=\pi^{*}\omega.\] 2. On \(\widetilde{B}_{1}\backslash\widetilde{B}_{\epsilon}\), \[\omega_{\epsilon}:=i\partial\overline{\partial}(|z|^{2}+\gamma_{1,\epsilon}(z) f(z)+\gamma_{2,\epsilon}(z)\epsilon^{2}g(\epsilon^{-1}z)),\] where \(f\) is defined in (8) and \(g\) is defined in (5). 3. On \(\widetilde{B}_{\epsilon}\), \[\omega_{\epsilon}:=\iota_{\epsilon}^{*}(\epsilon^{2}\eta),\] where \(\iota_{\epsilon}:\widetilde{B}_{\epsilon}\to\widetilde{D}_{1}\) is the biholomorphism lifting the map \(B_{\epsilon}\to D_{1}\), \(z\mapsto\epsilon^{-1}z\). It is easy to see these constructions give a well-defined real closed \((1,1)\)-form on \(\mathrm{Bl}_{p}M\). Furthermore, for \(\epsilon>0\) sufficiently small, the growth conditions on \(f\) and \(g\) imply \(\omega_{\epsilon}\) is a Kahler metric. Lastly, \(\omega_{\epsilon}\) is equal to \(\omega\) outside \(\widetilde{B}_{2r_{\epsilon}}\), and equal to \(\iota_{\epsilon}^{*}(\epsilon^{2}\eta)\) on \(\widetilde{B}_{r_{\epsilon}}\). We now focus on describing an explicit moment map \(\mu_{\epsilon}:\mathrm{Bl}_{p}M\to\mathfrak{t}^{*}\) for \(\omega_{\epsilon}\). On \(\mathrm{Bl}_{p}M\backslash\widetilde{B}_{1}\), we take \(\mu_{\epsilon}:=\pi^{*}\mu\), where \(\mu\) is the moment map for \(\omega\). This fixes a normalisation for the moment map, and the resulting moment polytope \(P_{\epsilon}\) will be a subset of the polytope \(P\) by the Atiyah-Guillemin-Sternberg theorem, which states the moment polytope of a hamiltonian torus action is the convex hull of the images of the fixed points under the moment map [1, 1]. On \(\widetilde{B}_{1}\backslash\widetilde{B}_{\epsilon}\), \[\omega_{\epsilon}-\omega =i\partial\overline{\partial}(\gamma_{1,\epsilon}(z)f(z)+\gamma_{ 2,\epsilon}(z)\epsilon^{2}g(\epsilon^{-1}z)-f(z))\] \[=i\partial\overline{\partial}(\gamma_{2,\epsilon}(z)(\epsilon^{2}g( \epsilon^{-1}z)-f(z))).\] Hence by Lemma 2.8, the moment map on \(\widetilde{B}_{1}\backslash\widetilde{B}_{\epsilon}\) is \[\mu_{\epsilon}:=\mu+d^{c}(\gamma_{2,\epsilon}(z)(\epsilon^{2}g(\epsilon^{-1}z)- f(z))).\] On the region \(\widetilde{B}_{r_{\epsilon}}\backslash\widetilde{B}_{\epsilon}\), we have \(\gamma_{2,\epsilon}=1\) and so \[\mu_{\epsilon}=(\mu-d^{c}f)+d^{c}(\epsilon^{2}g(\epsilon^{-1}z)).\] Note by (8) and Lemma 2.8, the first term \(\mu-d^{c}f\) is the Euclidean moment map on \(B_{1}\) with normalisation \(\mu(p)-d^{c}f(p)=\mu(p)\). 
It follows that the moment map on \(\widetilde{B}_{\epsilon}\backslash E\) is \[\mu_{\epsilon}=\mu(p)+\epsilon^{2}\sum_{j=1}^{n}A_{j}|\epsilon^{-1}z_{j}|^{2}+ d^{c}(\epsilon^{2}g(\epsilon^{-1}z)),\] where \(A_{j}\in\mathfrak{t}^{*}\) is the diagonal matrix representation of the \(\mathfrak{t}\)-action on \(T_{p}M\cong\mathbb{C}^{n}\). We know from Lemma 3.1 that this formula extends smoothly to a moment map on \(\widetilde{B}_{\epsilon}\), hence we have a moment map \(\mu_{\epsilon}\) for \(\omega_{\epsilon}\). We record this here: **Lemma 3.2**.: _There exists a moment map \(\mu_{\epsilon}:\mathrm{Bl}_{p}M\to\mathfrak{t}^{*}\) for \(\omega_{\epsilon}\), satisfying:_ 1. _On_ \(\mathrm{Bl}_{p}M\backslash\widetilde{B}_{2r_{\epsilon}}\)_,_ \[\mu_{\epsilon}=\pi^{*}\mu,\] 2. _On_ \(\widetilde{B}_{1}\backslash\widetilde{B}_{\epsilon}\)_,_ \[\mu_{\epsilon}=\mu+d^{c}(\gamma_{2,\epsilon}(z)(\epsilon^{2}g(\epsilon^{-1}z)- f(z))),\] 3. _On_ \(\widetilde{B}_{r_{\epsilon}}\backslash E\)_,_ \[\mu_{\epsilon}=\mu(p)+\epsilon^{2}\sum_{j=1}^{n}A_{j}|\epsilon^{-1}z_{j}|^{2} +d^{c}(\epsilon^{2}g(\epsilon^{-1}z)),\] _where \(\mu:M\to\mathfrak{t}^{*}\) is the moment map for \(\omega\), the functions \(f\) and \(g\) are defined in (8) and (5) respectively, and \(A_{j}\in\mathfrak{t}^{*}\) is the diagonal matrix representation of the \(\mathfrak{t}\)-action on \(T_{p}M\cong\mathbb{C}^{n}\). The image of \(\mu_{\epsilon}\) is a convex polytope \(P_{\epsilon}\) contained in the moment polytope \(P:=\mu(M)\)._ **Remark 3.3**.: The inclusion \(P_{\epsilon}\subset P\) allows us to restrict \(v\) and \(w\) to \(P_{\epsilon}\), so the weighted scalar curvature \(S_{v,w}(\omega_{\epsilon})\) is well-defined and it makes sense to search for a \((v,w)\)-extremal metric on \(\mathrm{Bl}_{p}M\). We remark that the inclusion \(P_{\epsilon}\subset P\) may not be strict, in particular if \(\mu(p)\) lies in the interior of \(P\) then we will have \(P_{\epsilon}=P\) for all \(\epsilon\) (I thank Eveline Legendre for pointing out this possibility). ### The deformation problem Let \((M,\omega)\) be a weighted extremal manifold. Then \(X:=\nabla S_{v,w}(\omega)\) is a \(T\)-invariant real holomorphic vector field and \(JX\) preserves \(\omega\), where \(J\) is the integrable almost complex structure of \(M\). Let \(p\) be a fixed point of both \(T\) and the extremal field \(X\). We make the following definitions: 1. \(G\) is the group of \(T\)-commuting hamiltonian isometries of \((M,\omega)\), 2. \(G_{p}\) is the subgroup of \(G\) fixing \(p\), 3. \(T^{\prime}\subset G_{p}\) is a maximal torus, and 4. \(H\subset G\) is the subgroup of automorphisms commuting with \(T^{\prime}\). We write the Lie algebras of \(T^{\prime}\) and \(H\) as \(\mathfrak{t}^{\prime}\) and \(\mathfrak{h}\) respectively, and note the inclusions \(\mathfrak{t}\subset\mathfrak{t}^{\prime}\subset\mathfrak{h}\). If \(T\) was maximal in the hamiltonian isometry group to begin with, these inclusions are all equalities. **Remark 3.4**.: In the previous section, we constructed a \(T\)-invariant metric \(\omega_{\epsilon}\) using \(T\)-invariant coordinates \(z\) near the fixed point \(p\). Since \(T^{\prime}\) also acts by hamiltonian isometries and fixes the point \(p\), we can assume that the \(z\)-coordinates are in fact \(T^{\prime}\)-invariant, and thus all the constructions from the previous section, including \(\omega_{\epsilon}\), are \(T^{\prime}\)-invariant as well. 
In addition, we claim that if \(\varphi\) is a \(T^{\prime}\)-invariant Kahler potential, then the weighted scalar curvature \(S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\varphi)\) is also \(T^{\prime}\)-invariant. To see this, by the chain rule it suffices to show the moment map \(\mu_{\epsilon}\) for \(T\) is \(T^{\prime}\)-invariant. Let \(\mu_{\epsilon}^{a}\) be a component function of \(\mu_{\epsilon}\) generating \(\xi_{a}\in\mathfrak{t}\), and let \(\xi\in\mathfrak{t}^{\prime}\). We write \(\widetilde{\xi}_{a}\) and \(\widetilde{\xi}\) for the corresponding vector fields on \(\operatorname{Bl}_{p}M\). Then \(\widetilde{\xi}(\mu_{\epsilon}^{a})\) is the hamiltonian generator of \([\widetilde{\xi},\widetilde{\xi}_{a}]=0\), hence this function is constant. It vanishes at a maximum of \(\mu_{\epsilon}^{a}\), and is therefore identically zero. We may assume that the moment maps \(\mu\) for \(T\) and \(\mu_{H}\) for \(H\) satisfy \[\overline{\mu}:=\int_{M}\mu\,w(\mu)\omega^{n}=0\in\mathfrak{t}^{*},\quad \overline{\mu}_{H}:=\int_{M}\mu_{H}\,w(\mu)\omega^{n}=0\in\mathfrak{h}^{*}.\] This is achieved by replacing \(\mu\) with \(\mu-\overline{\mu}\) and \(\mu_{H}\) with \(\mu_{H}-\overline{\mu}_{H}\); this clearly preserves the moment map equation, and to see equivariance is maintained for \(\mu_{H}\), note \[\operatorname{ad}(\xi)^{*}(\overline{\mu}_{H}) =\int_{M}\operatorname{ad}(\xi)^{*}(\mu_{H})w(\mu)\omega^{n}\] \[=\int_{M}\mathcal{L}_{\xi}(\mu_{H})w(\mu)\omega^{n}\] \[=\int_{M}\mathcal{L}_{\xi}(\mu_{H}w(\mu)\omega^{n})\] \[=0\] \[=\mathcal{L}_{\xi}(\overline{\mu}_{H}).\] Here \(\operatorname{ad}(\xi)^{*}\) denotes the coadjoint action of an element \(\xi\in\mathfrak{h}\) on \(\mathfrak{h}^{*}\); in the second line we used equivariance of \(\mu_{H}\), and in the third we used \(\mathcal{L}_{\xi}\omega=0\) as well as \(\xi(\mu)=0\), which follows since \(\mathfrak{t}\) is central in \(\mathfrak{h}\). Note this adjustment of moment maps shifts the moment polytope \(P\), however we can also translate the weight functions \(v,w\) by \(\overline{\mu}\) so that they are well defined on this shift, and this preserves the weighted extremal condition. On the compact Lie algebra \(\mathfrak{h}\), we now fix the \(H\)-invariant inner product \[\langle\xi,\xi^{\prime}\rangle_{\mathfrak{h}}:=\int_{M}\langle\mu_{H},\xi \rangle\langle\mu_{H},\xi^{\prime}\rangle w(\mu)\omega^{n}, \tag{9}\] where the pairing \(\langle-,-\rangle\) inside the integral is the natural dual pairing between \(\mathfrak{h}^{*}\) and \(\mathfrak{h}\). Via this inner product, we identify the Lie algebra and its dual \(\mathfrak{h}\cong\mathfrak{h}^{*}\). Under this identification, the moment map \(\mu_{H}\) for the \(H\)-action can then be considered to take values in \(\mathfrak{h}\), rather than \(\mathfrak{h}^{*}\); we will write \(\mu_{H}^{\#}\) when we do this. **Definition 3.5**.: _We say the point \(p\) is relatively stable if \(\mu_{H}^{\#}(p)\in\mathfrak{h}_{p}\), that is, the vector field generated by \(\mu_{H}^{\#}(p)\in\mathfrak{h}\) fixes the point \(p\)._ We remark this notion does not depend on the particular choice of invariant inner product; any other invariant product will differ from the chosen one by an equivariant isomorphism \(\mathfrak{h}\to\mathfrak{h}\), and any such isomorphism preserves \(\mathfrak{h}_{p}\). 
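To illustrate Definition 3.5 in a simple special case (an observation included only for orientation; it is not used in the sequel), suppose that \(H=T^{\prime}\), i.e. the only hamiltonian isometries commuting with \(T^{\prime}\) are those in \(T^{\prime}\) itself. Since \(T^{\prime}\subset G_{p}\) fixes \(p\), every element of \(\mathfrak{h}=\mathfrak{t}^{\prime}\) generates a vector field vanishing at \(p\), so that \[\mathfrak{h}_{p}=\mathfrak{h},\] and the condition \(\mu_{H}^{\#}(p)\in\mathfrak{h}_{p}\) holds automatically; in this situation the point \(p\) is always relatively stable.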
For a Lie subalgebra \(\mathfrak{s}\) of \(\mathfrak{h}\), we will write \[\overline{\mathfrak{s}}:=\{h\in C^{\infty}(M,\mathbb{R}):dh=\omega(-,Y)\text{ for some }Y\in\mathfrak{s}\},\] so that \(J\nabla:\overline{\mathfrak{s}}\to\mathfrak{s}\) is a surjection with kernel the constant functions. For each \(\epsilon>0\) sufficiently small, we will define a lifting function \(\ell_{\epsilon}:\overline{\mathfrak{h}}\to C^{\infty}(\operatorname{Bl}_{p}M, \mathbb{R})^{T^{\prime}}\) in terms of the metric \(\omega_{\epsilon}\) constructed in 3.2. Write \(\mathfrak{h}^{\prime}\) for the orthogonal complement of \(\mathfrak{t}^{\prime}\) in \(\mathfrak{h}\) with respect to the fixed invariant metric, so that \(\mathfrak{h}=\mathfrak{t}^{\prime}\oplus\mathfrak{h}^{\prime}\). This yields a decomposition \(\overline{\mathfrak{h}}=\overline{\mathfrak{t}^{\prime}}\oplus\mathfrak{h}^{\prime}\), where we have identified elements of \(\mathfrak{h}^{\prime}\) with their generators in \(\overline{\mathfrak{h}}\) that are normalised to vanish at \(p\). Any element \(h\in\overline{\mathfrak{t}^{\prime}}\) generates a real holomorphic vector field \(Y\) on \(M\) that vanishes at \(p\) via the hamiltonian equation \(dh=\omega(-,Y)\), and \(Y\) lifts to a real holomorphic vector field \(\widetilde{Y}\) on \(\operatorname{Bl}_{p}M\). We define \(\ell_{\epsilon}(h)\) to be the hamiltonian potential for \(\widetilde{Y}\) with respect to \(\omega_{\epsilon}\), normalised so that \(\ell_{\epsilon}(h)=\pi^{*}h\) outside of \(\widetilde{B}_{2r_{\epsilon}}\) (recall that \(\omega_{\epsilon}=\omega\) outside \(\widetilde{B}_{2r_{\epsilon}}\)). For \(h\in\mathfrak{h}^{\prime}\), we define \(\ell_{\epsilon}(h)=\pi^{*}(\gamma_{1,\epsilon}h)\), where \(\gamma_{1,\epsilon}\) was defined in Section 3.2. This defines \(\ell_{\epsilon}\) uniquely on \(\overline{\mathfrak{h}}=\overline{\mathfrak{t}^{\prime}}\oplus\mathfrak{h}^{\prime}\). On \(\operatorname{Bl}_{p}M\), the weighted extremal problem can be written \[S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\varphi)-\frac{1}{2} \nabla_{\epsilon}h\cdot\nabla_{\epsilon}\varphi=h \tag{10}\] for \(\varphi\in\mathcal{H}_{\operatorname{Bl}_{p}M}^{T}\), where \(\nabla_{\epsilon}\) is the gradient operator of \(\omega_{\epsilon}\) and \(h\) is a \(T\)-invariant holomorphy potential with respect to \(\omega_{\epsilon}\)[10, Lemma 4.10]. We will not attempt to deform \(\omega_{\epsilon}\) to a solution of (10) directly, but instead prove the following direct analogue of [10, Proposition 14]. **Proposition 3.6**.: _Let \((M,\omega)\) be a \((v,w)\)-weighted extremal manifold, and let \(p\in M\) be fixed by both \(T\) and the extremal field \(X\). 
For all sufficiently small \(\epsilon>0\), there exist \(\varphi_{\epsilon}\in C^{\infty}(\operatorname{Bl}_{p}M)^{T^{\prime}}\) and \(h_{p,\epsilon}\in\overline{\mathfrak{h}}\) such that \(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon}>0\) and_ \[S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon})- \frac{1}{2}\nabla_{\epsilon}\ell_{\epsilon}(h_{p,\epsilon})\cdot\nabla_{ \epsilon}\varphi_{\epsilon}=\ell_{\epsilon}(h_{p,\epsilon}).\] _Moreover, for \(\epsilon>0\) sufficiently small the following expansion holds in \(\overline{\mathfrak{h}}\):_ \[h_{p,\epsilon}=s+\epsilon^{2n-2}c_{n}\overline{\mu_{H}^{\#}(p)}+h^{\prime}_{p,\epsilon},\] _where \(s:=S_{v,w}(\omega)\in\overline{\mathfrak{h}}\) is the weighted scalar curvature generating the extremal field on \(M\), \(c_{n}\) is a constant depending only on \(n\), \(\overline{\mu_{H}^{\#}(p)}\) is a fixed lift of \(\mu_{H}^{\#}(p)\) to \(\overline{\mathfrak{h}}\), and the \(h^{\prime}_{p,\epsilon}\) satisfy \(|h^{\prime}_{p,\epsilon}|\leq C\epsilon^{\kappa}\) for some \(\kappa>2n-2\) and \(C>0\) independent of \(\epsilon\)._ Note that if \(h_{p,\epsilon}\in\overline{\mathfrak{t}^{\prime}}\) then \(\ell_{\epsilon}(h_{p,\epsilon})\) is a \(T\)-invariant holomorphy potential on \(\operatorname{Bl}_{p}M\), so equation (10) is satisfied and \(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon}\) is weighted extremal. Denote by \(H^{c}\) the complexification of the group \(H\), which acts on \(M\). Suppose there exists a point \(q\) in the \(H^{c}\)-orbit of \(p\) for which the condition \(h_{q,\epsilon}\in\overline{\mathfrak{t}^{\prime}}\) is satisfied. Then \(\operatorname{Bl}_{q}M\) admits a \((v,w)\)-weighted extremal metric, but since the manifolds \(\operatorname{Bl}_{p}M\) and \(\operatorname{Bl}_{q}M\) are \(T\)-equivariantly biholomorphic, this implies \(\operatorname{Bl}_{p}M\) admits a \((v,w)\)-weighted extremal metric. The exact same argument as [10, p. 11 - Proof of Theorem 1] shows that if \(p\) is relatively stable, then such a point \(q\) exists: **Proposition 3.7** ([10]).: _If Proposition 3.6 holds, and the point \(p\) is relatively stable in the sense of Definition 3.5, then there exists a point \(q\) in the \(H^{c}\)-orbit of \(p\) such that \(\operatorname{Bl}_{q}M\) admits a weighted extremal metric. Since \(\operatorname{Bl}_{q}M\) and \(\operatorname{Bl}_{p}M\) are \(T\)-equivariantly biholomorphic, there exists a weighted extremal metric on \(\operatorname{Bl}_{p}M\)._ **Remark 3.8**.: We have taken care to make our constructions invariant under the maximal torus \(T^{\prime}\) in \(H\). It may seem that we never use this condition; however, it is an essential ingredient of [10, p. 11 - Proof of Theorem 1], so must be included in the present work. Thus, our only goal now is to prove Proposition 3.6, as Proposition 3.7 will then imply Theorem 1.1. Before launching into the proof, we must introduce the weighted Holder norms. ### Weighted norms We will define the weighted Holder norms on three manifolds: \(M_{p}:=M\backslash\{p\}\), \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) and \(\operatorname{Bl}_{p}M\). These are modifications of the \(C^{k,\alpha}\) norms that depend on an extra parameter \(\delta\in\mathbb{R}\), and in the case of \(\operatorname{Bl}_{p}M\) a further parameter \(\epsilon>0\). On the non-compact manifolds \(M_{p}\) and \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) they allow or enforce certain growth or decay conditions at the ends, depending on the sign of \(\delta\). 
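As a rough guide to the definitions that follow (an illustrative remark only; none of the later arguments depend on it), consider a model function \(f\) which equals \(|z|^{k}\) near \(p\) and is extended smoothly to the rest of \(M_{p}\). For all sufficiently small \(r>0\) one computes \[r^{-\delta}\|f(rz)\|_{C^{0}(B_{2}\backslash B_{1})}=r^{k-\delta}\sup_{1\leq|z|\leq 2}|z|^{k},\] which stays bounded as \(r\to 0\) precisely when \(\delta\leq k\). Thus a finite weighted norm with parameter \(\delta\) should be read as saying that the function grows at worst like \(|z|^{\delta}\) as \(z\to p\): negative \(\delta\) permits blow-up at \(p\), while positive \(\delta\) enforces decay there.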
For \(f:M_{p}\to\mathbb{R}\), define \[\|f\|_{C^{k,\alpha}_{\delta}(M_{p})}:=\|f\|_{C^{k,\alpha}(M\backslash B_{1})}+ \sup_{0<r<1/2}r^{-\delta}\|f(rz)\|_{C^{k,\alpha}(B_{2}\backslash B_{1})}.\] On \(M\backslash B_{1}\) we calculate the norm with respect to the metric \(\omega\), and on \(B_{2}\backslash B_{1}\) with respect to the fixed Euclidean metric defined by the coordinates \((z_{1},\dots,z_{n})\) from Section 3.2. The space \(C^{k,\alpha}_{\delta}(M_{p})\) is the set of locally \(C^{k,\alpha}\)-functions on \(M_{p}\) with finite \(\|\cdot\|_{C^{k,\alpha}_{\delta}(M_{p})}\)-norm. For \(f:\operatorname{Bl}_{0}\mathbb{C}^{n}\to\mathbb{R}\), let \[\|f\|_{C^{k,\alpha}_{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{n})}:=\|f\|_{C^ {k,\alpha}(\widetilde{D}_{1})}+\sup_{r>1}r^{-\delta}\|f(r\zeta)\|_{C^{k,\alpha }(\widetilde{D}_{2}\backslash\widetilde{D}_{1})}.\] On \(\widetilde{D}_{1}\) we compute the norm with respect to the fixed metric \(\eta\), and on \(\widetilde{D}_{2}\backslash\widetilde{D}_{1}\) we use the Euclidean metric defined by the \(\zeta\)-coordinates. The space \(C^{k,\alpha}_{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{n})\) is the set of locally \(C^{k,\alpha}\)-functions on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) with finite \(\|\cdot\|_{C^{k,\alpha}_{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{n})}\)-norm. Let \(f\in C^{k,\alpha}(\mathrm{Bl}_{p}M,\mathbb{R})\). The \(C^{k,\alpha}_{\delta,\epsilon}\)-weighted norm of \(f\) is defined as \[\|f\|_{C^{k,\alpha}_{\delta,\epsilon}}:=\|f\|_{C^{k,\alpha}(\mathrm{Bl}_{p}M \setminus\widetilde{B}_{1})}+\sup_{\epsilon\leq r\leq 1/2}r^{-\delta}\|f(rz)\|_{C^{k, \alpha}(\widetilde{B}_{2}\setminus\widetilde{B}_{1})}+\epsilon^{-\delta}\|( \iota_{\epsilon})_{*}(f)\|_{C^{k,\alpha}(\widetilde{D}_{1})}.\] Here the norms are computed with respect to fixed background metrics, and we recall the map \(\iota_{\epsilon}:\widetilde{B}_{\epsilon}\to\widetilde{D}_{1}\) is the biholomorphism lifting the map \(B_{\epsilon}\to D_{1}\), \(z\mapsto\epsilon^{-1}z\). Note we do not include \(\mathrm{Bl}_{p}M\) in the notation for this norm; whenever the manifold is not specified, we take it that the norm is on \(\mathrm{Bl}_{p}M\). We must also define the weighted Holder norms of a tensor \(T\) on \(\mathrm{Bl}_{p}M\). On \(\mathrm{Bl}_{p}M\backslash\widetilde{B}_{1}\) this is done as normal, with respect to the fixed metric \(\omega\): \[\|T\|_{C^{k,\alpha}(\mathrm{Bl}_{p}M\setminus\widetilde{B}_{1})}.\] Suppose that \(T\) is a section of \((T^{*}\widetilde{M})^{m}\otimes(T\widetilde{M})^{\ell}\), where \(\widetilde{M}:=\mathrm{Bl}_{p}M\). We define \(\sigma(T):=\ell-m\). On \(\widetilde{B}_{2}\backslash E\) we have the Euclidean coordinates \(z_{1},\ldots,z_{n}\); we define \(\iota_{r}:\widetilde{B}_{2r}\backslash\widetilde{B}_{r}\to\widetilde{B}_{2} \backslash\widetilde{B}_{1}\) by \(\iota_{r}(z):=r^{-1}z\) for \(\epsilon\leq r\leq 1/2\). 
On \(\widetilde{B}_{1}\backslash\widetilde{B}_{\epsilon}\), the weighted norm is then \[\sup_{\epsilon\leq r\leq 1/2}r^{-\delta}\|r^{\sigma(T)}(\iota_{r})_{*}T\|_{C^ {k,\alpha}(\widetilde{B}_{2}\setminus\widetilde{B}_{1})}.\] Finally, on the region \(\widetilde{B}_{\epsilon}\) we identify \(\widetilde{B}_{\epsilon}\) with \(\widetilde{D}_{1}\) via \(\iota_{\epsilon}:\widetilde{B}_{\epsilon}\to\widetilde{D}_{1}\) and take: \[\epsilon^{-\delta}\|\epsilon^{\sigma(T)}(\iota_{\epsilon})_{*}T\|_{C^{k, \alpha}(\widetilde{D}_{1})}.\] Thus, overall: \[\|T\|_{C^{k,\alpha}_{\delta,\epsilon}}:=\|T\|_{C^{k,\alpha}(\mathrm{Bl}_{p}M \setminus\widetilde{B}_{1})}+\sup_{\epsilon\leq r\leq 1/2}r^{-\delta}\|r^{ \sigma(T)}(\iota_{r})_{*}T\|_{C^{k,\alpha}(\widetilde{B}_{2}\setminus \widetilde{B}_{1})}+\epsilon^{-\delta}\|\epsilon^{\sigma(T)}(\iota_{\epsilon})_{ *}T\|_{C^{k,\alpha}(\widetilde{D}_{1})}.\] This agrees with our definition of the \(C^{k,\alpha}_{\delta,\epsilon}\)-norm in the case \(T\) is a function. Note the central term in the norm on \(\widetilde{B}_{2}\backslash\widetilde{B}_{1}\) is equivalent to pulling back the components of the tensor \(r^{-\delta}T\) in the \(z\)-coordinates by \(\iota_{r}^{-1}\) and summing the \(C^{k,\alpha}\)-norms of these. However, the final term on \(\widetilde{D}_{1}\) does not have a similar description as the rescaling \(\iota_{\epsilon}\) is only in one direction, namely the fibre coordinate of \(\mathcal{O}_{\mathbb{P}^{n-1}}(-1)\). Equivalently, we could consider pulling back \(T\) to \(\widetilde{D}_{1}\), and then measuring its \(C^{k,\alpha}\)-norm with respect a fixed metric on \(\widetilde{D}_{1}\), but using the metric \((\epsilon^{2}\eta)^{-m}\otimes(\epsilon^{2}\eta)^{\ell}\) on the vector bundle \((T^{*}\widetilde{D}_{1})^{m}\otimes(T\widetilde{D}_{1})^{\ell}\). We similarly define the weighted \(C^{k}\)-norms \(\|\cdot\|_{C^{k}_{\delta,\epsilon}}\), without the Holder coefficient \(\alpha\). The following properties will be useful: **Lemma 3.9**.: _Let \(\epsilon>0\) and \(\delta,\delta^{\prime}\in\mathbb{R}\). Then:_ 1. _If_ \(\delta\leq\delta^{\prime}\) _then_ \(\|T\|_{C^{k,\alpha}_{\delta,\epsilon}}\leq\|T\|_{C^{k,\alpha}_{\delta^{\prime}, \epsilon}}\) _for all tensors_ \(T\)_._ 2. _If_ \(\delta>\delta^{\prime}\) _then_ \(\|T\|_{C^{k,\alpha}_{\delta,\epsilon}}\leq\epsilon^{\delta^{\prime}-\delta}\|T\| _{C^{k,\alpha}_{\delta^{\prime},\epsilon}}\) _for all tensors_ \(T\)_._ 3. _There is a constant_ \(C>0\)_, independent of_ \(\delta\)_,_ \(\delta^{\prime}\) _and_ \(\epsilon\)_, such that_ \(\|ST\|_{C^{k,\alpha}_{\delta+\delta^{\prime},\epsilon}}\leq C\|S\|_{C^{k, \alpha}_{\delta,\epsilon}}\|T\|_{C^{k,\alpha}_{\delta^{\prime},\epsilon}}\) _for all tensors_ \(S,T\)_. Here_ \(ST\) _can mean either the tensor product_ \(S\otimes T\)_, or a contraction of any number of dual pairs in_ \(S\otimes T\) _._ 4. _There is a constant_ \(C>0\) _independent of_ \(\epsilon>0\) _such that_ \(\|T\|_{C^{0}}\leq C\|T\|_{C^{0}_{0,\epsilon}}\) _for all tensors_ \(T\in\Gamma(\widetilde{M},(T^{*}\widetilde{M})^{k})\) _with_ \(k\geq 0\)_, where the_ \(C^{0}\)_-norm is fixed independent of_ \(\epsilon\)_. In the case_ \(k=0\)_, i.e._ \(T=f\) _is a function, this is an equivalence of norms._ 5. 
_There is a uniform equivalence of norms on functions_ \[\|f\|_{C^{k,\alpha}_{\delta,\epsilon}}\sim\|\gamma_{1,\epsilon}f\|_{C^{k, \alpha}_{\delta,\epsilon}(M_{p})}+\epsilon^{-\delta}\|\gamma_{2,\epsilon}( \iota_{\epsilon}^{-1}(\zeta))f(\iota_{\epsilon}^{-1}(\zeta))\|_{C^{k,\alpha} _{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{n})}\] _independent of_ \(\epsilon\)_._ 6. _There is a constant_ \(C>0\) _independent of_ \(\epsilon\) _such that_ \(\|f\|_{C^{0,\alpha}_{0,\epsilon}}\leq C\|f\|_{C^{1}}\) _for all_ \(f\in C^{\infty}(\operatorname{Bl}_{p}M;\mathbb{R})\)_, where we take a fixed_ \(C^{1}\)_-norm on_ \(\operatorname{Bl}_{p}M\) _independent of_ \(\epsilon\)_._ Most of these are already known and straightforward; the most difficult perhaps is (6), so we shall prove this as an example. Proof of (6).: Outside of \(\widetilde{B}_{1}\) this follows from the usual inequality \[\|f\|_{C^{0,\alpha}(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{1})}\leq C \|f\|_{C^{1}(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{1})}.\] For the component of the norm on \(\widetilde{B}_{1}\backslash\widetilde{B}_{\epsilon}\), for \(\epsilon\leq r\leq 1/2\) we must estimate \[\|f(rz)\|_{C^{0,\alpha}(\widetilde{B}_{2}\backslash\widetilde{B}_{1})},\] which is bounded above by the \(C^{1}\)-norm of \(f(rz)\) over \(\widetilde{B}_{2}\backslash\widetilde{B}_{1}\). The \(C^{0}\)-norm of \(f(rz)\) is bounded by \(\|f\|_{C^{0}(\operatorname{Bl}_{p}M)}\). For the derivatives, consider \[\left|\frac{\partial}{\partial z_{j}}(f(rz))\right|=r\left|\frac{\partial f}{ \partial z_{j}}(rz)\right|.\] In this form, we cannot immediately bound the right hand side by the \(C^{1}\)-norm, since the coordinates \(z_{j}\) do not extend to the blowup and we require a uniform bound as \(\epsilon\to 0\). Instead choose coordinates \((w_{1},\ldots,w_{n})=(z_{1},\frac{z_{2}}{z_{1}},\ldots,\frac{z_{n}}{z_{1}})\) on \(\widetilde{B}_{2}\) such that \(|z_{1}|>a|(z_{2},\ldots,z_{n})|\) for a fixed small \(a>0\). Note this implies \(|z_{j}|/|z_{1}|\leq 1/a\) for all \(j\). For \(z\) in the intersection of this coordinate domain and \(\widetilde{B}_{2}\backslash\widetilde{B}_{1}\), we also have \(1/|z_{1}|\leq C\) for some \(C>0\). Hence \[r\left|\frac{\partial f}{\partial z_{1}}(rz)\right| =\left|r\frac{\partial f}{\partial w_{1}}(rz)-\frac{z_{2}}{z_{1} ^{2}}\frac{\partial f}{\partial w_{2}}(rz)-\cdots-\frac{z_{n}}{z_{1}^{2}} \frac{\partial f}{\partial w_{n}}(rz)\right|\] \[\leq C\sum_{j=1}^{n}\left|\frac{\partial f}{\partial w_{j}}(rz) \right|,\] where \(C>0\) depends only on \(a\). Similarly \[r\left|\frac{\partial f}{\partial z_{j}}(rz)\right|=\left|\frac{1}{z_{1}} \frac{\partial f}{\partial w_{j}}(rz)\right|\leq C\left|\frac{\partial f}{ \partial w_{j}}(rz)\right|\] for \(2\leq j\leq n\). Since the coordinates \(w_{j}\) are well-defined on the blowup, this gives a uniform bound on this coordinate domain by \(C\|f\|_{C^{1}}\). Covering \(\widetilde{B}_{2}\backslash\widetilde{B}_{1}\) by the analogous coordinate domains where \(z_{2}\neq 0,\ldots,z_{n}\neq 0\), we get a uniform bound by \(C\|f\|_{C^{1}}\) on this region. It remains to prove a bound on \(\widetilde{B}_{\epsilon}\). We need to estimate \[\|(\iota_{\epsilon})_{*}f\|_{C^{0,\alpha}(\widetilde{D}_{1})}.\] Once again we can reduce to estimating the \(C^{1}\)-norm of \((\iota_{\epsilon})_{*}f\) on \(\widetilde{D}_{1}\). 
Similarly to above, we choose coordinates \((\nu_{1},\dots,\nu_{n})=(\zeta_{1},\frac{\zeta_{2}}{\zeta_{1}},\dots,\frac{\zeta_{n}}{\zeta_{1}})\) on \(\widetilde{D}_{1}\) for the region \(|\zeta_{1}|>a|(\zeta_{2},\dots,\zeta_{n})|\). The \(C^{0}\)-norm is bounded by \(\|f\|_{C^{1}}\), and the \(\nu\)-derivatives satisfy \[\left|\frac{\partial}{\partial\nu_{j}}((\iota_{\epsilon})_{*}f)\right|=\left| \frac{\partial}{\partial\nu_{j}}(f(\epsilon\nu_{1},\nu_{2},\dots,\nu_{n})) \right|\leq\left|\frac{\partial f}{\partial\nu_{j}}(\epsilon\nu_{1},\nu_{2}, \dots,\nu_{n})\right|\leq\|f\|_{C^{1}}.\] Covering \(\widetilde{D}_{1}\) by similarly defined coordinate charts, we produce a uniform bound by \(\|f\|_{C^{1}}\) on \(\widetilde{D}_{1}\), which finishes the proof. **Remark 3.10**.: Using the coordinates \(w_{1},\dots,w_{n}\) from the above proof, we can give an interpretation of how the weighted norms on \(\widetilde{B}_{\epsilon}\) can be computed in coordinates. For example, if \(\xi\) is a section of the \(T^{1,0}\)-bundle of \(\operatorname{Bl}_{p}M\), writing it as \[\xi=\xi^{1}\frac{\partial}{\partial w^{1}}+\xi^{2}\frac{\partial}{\partial w^ {2}}+\dots+\xi^{n}\frac{\partial}{\partial w^{n}}\] on \(\widetilde{B}_{\epsilon}\), its pushforward to \(\widetilde{D}_{1}\) is \[\xi^{1}(\iota_{\epsilon}^{-1}(\nu))\epsilon^{-1}\frac{\partial}{\partial\nu_ {1}}+\xi^{2}(\iota_{\epsilon}^{-1}(\nu))\frac{\partial}{\partial\nu_{2}}+ \dots+\xi^{n}(\iota_{\epsilon}^{-1}(\nu))\frac{\partial}{\partial\nu_{n}}.\] Multiplying this all by \(\epsilon=\epsilon^{\sigma(\xi)}\), we are computing the Holder norms of \[\xi^{1}(\iota_{\epsilon}^{-1}(\nu)),\quad\epsilon\xi^{2}(\iota_{\epsilon}^{- 1}(\nu)),\quad\dots\quad\epsilon\xi^{n}(\iota_{\epsilon}^{-1}(\nu))\] on the appropriate coordinate domain on \(\widetilde{D}_{1}\). In particular, from this it is easy to see there is a uniform bound on \(\|\xi\|_{C^{k,\alpha}_{0,\epsilon}}\) independent of \(\epsilon\), for any \(k\) and \(\alpha\). Furthermore, if \(\xi\) is the lift of a vector field on \(M\) that vanishes at \(p\), note the coefficient \(\xi^{1}\) vanishes along the exceptional locus. In this case we can even produce a uniform bound on \(\|\xi\|_{C^{k,\alpha}_{1,\epsilon}}\) independent of \(\epsilon\). We finish by collecting some useful estimates. Let \(g_{\epsilon}\) be the Riemannian metric on \(\operatorname{Bl}_{p}M\) corresponding to \(\omega_{\epsilon}\) defined in Section 3.2, and recall the functions \(\gamma_{1,\epsilon}\) and \(\gamma_{2,\epsilon}\) defined in (7). Given a Kahler potential \(\varphi\), we will write \(g_{\epsilon,\varphi}\) for the Riemannian metric corresponding to \(\omega_{\epsilon,\varphi}:=\omega_{\epsilon}+i\partial\overline{\partial}\varphi\), where \(\omega_{\epsilon}\) is the metric from Section 3.2. For a Riemannian metric \(g\), we write \(\operatorname{Rm}(g)\) for the full Riemann curvature tensor of \(g\). **Lemma 3.11** ([11, pp. 166-167]).: _The norms_ \[\|g_{\epsilon}\|_{C^{2,\alpha}_{0,\epsilon}},\,\|g_{\epsilon}^{-1}\|_{C^{2, \alpha}_{0,\epsilon}},\,\|\gamma_{1,\epsilon}\|_{C^{4,\alpha}_{0,\epsilon}},\, \|\gamma_{2,\epsilon}\|_{C^{4,\alpha}_{0,\epsilon}}\] _are uniformly bounded independent of \(\epsilon\). 
Furthermore, given \(c_{0}>0\), there exists \(C>0\) independent of \(\epsilon\) such that for all Kahler potentials \(\varphi\in C^{4,\alpha}(\operatorname{Bl}_{p}M)\) satisfying \(\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}\leq c_{0}\), the following hold:_ \[\|g_{\epsilon,\varphi}\|_{C^{2,\alpha}_{0,\epsilon}},\,\|g_{\epsilon,\varphi}^ {-1}\|_{C^{2,\alpha}_{0,\epsilon}},\,\|\operatorname{Rm}(g_{\epsilon,\varphi}) \|_{C^{0,\alpha}_{-2,\epsilon}}\leq C.\] _If \(\varphi\) instead satisfies \(\|\varphi\|_{C^{4,\alpha}_{\delta,\epsilon}}<c_{0}\) for some \(\delta\in\mathbb{R}\), then_ \[\|g_{\epsilon,\varphi}-g_{\epsilon}\|_{C^{2,\alpha}_{\delta-2,\epsilon}},\,\|g _{\epsilon,\varphi}^{-1}-g_{\epsilon}^{-1}\|_{C^{2,\alpha}_{\delta-2,\epsilon }},\,\|\operatorname{Rm}(g_{\epsilon,\varphi})-\operatorname{Rm}(g_{\epsilon} )\|_{C^{0,\alpha}_{\delta-4,\epsilon}}\leq C\|\varphi\|_{C^{4,\alpha}_{\delta, \epsilon}}.\] Finally, we will estimate the lifting function \(\ell_{\epsilon}:\overline{\mathfrak{h}}\to C^{\infty}(\operatorname{Bl}_{p} M)^{T^{\prime}}\) from Section 3.3. **Lemma 3.12**.: _There exists a constant \(C>0\) independent of \(\epsilon\) such that_ \[\|\ell_{\epsilon}(h)\|_{C^{1,\alpha}_{0,\epsilon}}\leq C|h|\] _for all \(h\in\overline{\mathfrak{h}}\), where the norm on the right hand side is any choice of fixed norm on \(\overline{\mathfrak{h}}\) independent of \(\epsilon\)._ Proof.: Recalling \(\overline{\mathfrak{h}}=\overline{\mathfrak{t}^{\prime}}\oplus\mathfrak{h}^ {\prime}\), we treat the cases \(h\in\overline{\mathfrak{t}^{\prime}}\) and \(h\in\mathfrak{h}^{\prime}\) separately. For \(h\in\overline{\mathfrak{t}^{\prime}}\), the real holomorphic vector field \(\xi_{h}\) generated by \(h\) vanishes at \(p\), so lifts to \(\widetilde{\xi}_{h}\) on \(\operatorname{Bl}_{p}M\). The function \(\ell_{\epsilon}(h)\) is then the holomorphy potential for \(\widetilde{\xi}_{h}\) with respect to \(\omega_{\epsilon}\) that is equal to \(\pi^{*}h\) on \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{1}\). The norm can be computed as \[\|\ell_{\epsilon}(h)\|_{C^{1,\alpha}_{0,\epsilon}}=\|\ell_{\epsilon}(h)\|_{C^ {0}}+\|d\ell_{\epsilon}(h)\|_{C^{0,\alpha}_{-1,\epsilon}}.\] By the estimates on \(g_{\epsilon}\) from Lemma 3.11, note \[\|d\ell_{\epsilon}(h)\|_{C^{0,\alpha}_{-1,\epsilon}}=\|g_{\epsilon}g_{ \epsilon}^{-1}d\ell_{\epsilon}(h)\|_{C^{0,\alpha}_{-1,\epsilon}}\leq C\|g_{ \epsilon}\|_{C^{1,\alpha}_{0,\epsilon}}\|\nabla_{\epsilon}\ell_{\epsilon}(h)\| _{C^{0,\alpha}_{-1,\epsilon}}\leq C\|\widetilde{\xi}_{h}\|_{C^{0,\alpha}_{-1, \epsilon}}\,.\] Following Remark 3.10, we can produce a bound \(\|\widetilde{\xi}_{h}\|_{C^{0,\alpha}_{-1,\epsilon}}\leq C|h|\) by considering the supremum of \(\|\widetilde{\xi}_{h}\|_{C^{0,\alpha}_{-1,\epsilon}}\) over the compact unit ball in \(\overline{\mathfrak{t}^{\prime}}\). Since the \(C^{0,\alpha}_{-1,\epsilon}\)-norm of vector fields decreases in \(\epsilon\), the bound is independent of \(\epsilon\). So it remains to estimate \(\|\ell_{\epsilon}(h)\|_{C^{0}}\). This is straightforward however; note that the \(T^{\prime}\)-action on \(M\) has a moment map \(\mu_{T^{\prime}}\) with moment polytope \(P^{\prime}\subset(\mathfrak{t}^{\prime})^{*}\), and the lift of this action to \(\operatorname{Bl}_{p}M\) has a moment map \(\mu_{T^{\prime},\epsilon}\) whose image is contained in \(P^{\prime}\). 
Writing \(\operatorname{pr}(h)\) for the image of \(h\) in \(\mathfrak{t}^{\prime}\), we have \[\ell_{\epsilon}(h)=\langle\mu_{T^{\prime},\epsilon},\operatorname{pr}(h) \rangle+h(q)-\langle\mu_{T^{\prime}}(q),\operatorname{pr}(h)\rangle\] for any fixed \(q\in M\backslash B_{1}\). Since the image of \(\mu_{T^{\prime},\epsilon}\) is contained in \(P^{\prime}\), this expression gives a uniform bound \(\|\ell_{\epsilon}(h)\|_{C^{0}}\leq C|h|\). Next we take \(h\in\mathfrak{h}^{\prime}\). Recall that \(h\) vanishes at \(p\), and the lift of \(h\) is defined as \(\ell_{\epsilon}(h):=\gamma_{1,\epsilon}\pi^{*}h\). In particular, since this function is supported on \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{\epsilon}\), \[\|\ell_{\epsilon}(h)\|_{C^{1,\alpha}_{0,\epsilon}}=\|\gamma_{1,\epsilon}h\|_{C ^{1,\alpha}_{0,\epsilon}}\leq\|\gamma_{1,\epsilon}\|_{C^{1,\alpha}_{0,\epsilon} }\|h\|_{C^{1,\alpha}_{0,\epsilon}(\operatorname{Bl}_{p}M\backslash\widetilde{B }_{\epsilon})}\leq C\|h\|_{C^{1,\alpha}_{0}(M_{p})},\] where we applied the estimate on \(\gamma_{1,\epsilon}\) from Lemma 3.11. The norm \(\|h\|_{C^{1,\alpha}_{0}(M_{p})}\) is well-defined, and we get a uniform bound \(\|h\|_{C^{1,\alpha}_{0}(M_{p})}\leq C|h|\) by finite-dimensionality of \(\mathfrak{h}^{\prime}\). ## 4. Moment map estimates In the weighted scalar curvature \(S_{v,w}\) and its derivatives, the moment map appears in several terms. The most important are functions of the form \(u(\mu_{\epsilon})\) for a fixed smooth function \(u:P\to\mathbb{R}\), and the Laplacian \(\Delta_{\epsilon}\mu_{\epsilon}^{a}\) appearing in \(\Phi_{v,w}(\omega_{\epsilon})\). In this section we collect some fundamental estimates on the moment map that are used in many of the later proofs. Throughout this section we will use \(C\) to denote a positive constant that is independent of \(\epsilon\) and may vary from line to line. **Lemma 4.1**.: _Given \(c_{0}>0\), there exists \(C>0\) independent of \(\epsilon\) such that_ \[\|\mu_{\epsilon,\varphi}\|_{C^{0,\alpha}_{0,\epsilon}}\leq C\] _and_ \[\|\mu_{\epsilon,\varphi}-\mu_{\epsilon}\|_{C^{0,\alpha}_{1,\epsilon}}\leq C\| \varphi\|_{C^{4,\alpha}_{2,\epsilon}}\] _for all Kahler potentials \(\varphi\in C^{4,\alpha}(\operatorname{Bl}_{p}M)^{T}\) satisfying \(\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}<c_{0}\)._ Proof.: We begin by proving \(\|\mu_{\epsilon,\varphi}\|_{C^{0,\alpha}_{0,\epsilon}}\leq C\). By _(6)_ of Lemma 3.9 it suffices to estimate the \(C^{1}\)-norm of \(\mu_{\epsilon,\varphi}\). First, \(\mu_{\epsilon,\varphi}\) clearly has a uniform \(C^{0}\) bound, since its image is contained in the moment polytope \(P\). To estimate the derivative \(d\mu_{\epsilon,\varphi}\), note we have \[\|d\mu_{\epsilon,\varphi}\|_{C^{0}} \leq C\|g_{\epsilon,\varphi}\|_{C^{0}}\|g_{\epsilon,\varphi}^{-1} d\mu_{\epsilon,\varphi}\|_{C^{0}}\] \[\leq C\|\nabla_{\epsilon,\varphi}\mu_{\epsilon,\varphi}\|_{C^{0}}.\] Here we have used properties _(3)_ and _(4)_ of Lemma 3.9 together with the estimate \(\|g_{\epsilon,\varphi}\|_{C^{0,\alpha}_{0,\epsilon}}\leq C\) of Lemma 3.11. Now, note the component functions \(\mu_{\epsilon,\varphi}^{a}\) of \(\mu_{\epsilon,\varphi}\) with respect to a basis \(\{\xi_{a}\}\) of \(\mathfrak{t}\) satisfy \(\nabla_{\epsilon,\varphi}\mu_{\epsilon,\varphi}^{a}=J\widetilde{\xi}_{a}\) by definition of the moment map. Hence \(\|\nabla_{\epsilon,\varphi}\mu_{\epsilon,\varphi}\|_{C^{0}}\) is uniformly bounded, and we have proven the estimate for \(\mu_{\epsilon,\varphi}\). 
For \(\|\mu_{\epsilon,\varphi}-\mu_{\epsilon}\|_{C^{0,\alpha}_{1,\epsilon}}\) note that \[\mu_{\epsilon,\varphi}^{a}-\mu_{\epsilon}^{a}=d^{c}\varphi(\widetilde{\xi}_{a }).\] We have \(\|d^{c}\varphi\|_{C^{0,\alpha}_{1,\epsilon}}\leq C\|\varphi\|_{C^{1,\alpha}_{2, \epsilon}}\leq C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}\), and \(\|\widetilde{\xi}_{a}\|_{C^{0,\alpha}_{0,\epsilon}}\leq C\) by Remark 3.10. **Lemma 4.2**.: _There exists \(C>0\) independent of \(\epsilon\) such that_ \[\|\Delta_{\epsilon}\mu_{\epsilon}\|_{C^{0,\alpha}_{0,\epsilon}}\leq C.\] Proof.: We will use the explicit formula for the moment map in Lemma 3.2. On \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{2r_{\epsilon}}\), since \(\Delta_{\epsilon}\mu_{\epsilon}=\Delta\mu\) is independent of \(\epsilon\) there is a uniform bound on this region, by \(\|\Delta\mu\|_{C^{1}}\) for example. On \(\widetilde{B}_{r_{\epsilon}}\) the metric and moment map are pulled back from \(\widetilde{D}_{R_{\epsilon}}\subset\operatorname{Bl}_{0}\mathbb{C}^{n}\) via \(\iota_{\epsilon}\), where \(R_{\epsilon}:=\epsilon^{-1}r_{\epsilon}\). Since the vector fields \(\widetilde{\xi}_{a}\) are invariant under pushforward by the dilation map \(\iota_{\epsilon}\), the component functions \(\mu_{\epsilon}^{a}\) of the moment map are also pulled back from \(\widetilde{D}_{R_{\epsilon}}\). Hence, on \(\widetilde{B}_{r_{\epsilon}}\), \[\Delta_{\epsilon}\mu_{\epsilon}^{a} =\Delta_{\iota_{\epsilon}^{*}(\epsilon^{2}\eta)}\,\iota_{\epsilon}^{*}\left(\epsilon^{2}\sum_{j}A_{j}^{a}|\zeta_{j}|^{2}+\epsilon^{2}d^{c}g(\widetilde{\xi}_{a})\right)\] \[=\iota_{\epsilon}^{*}\left(\Delta_{\eta}\left(\sum_{j}A_{j}^{a}| \zeta_{j}|^{2}+d^{c}g(\widetilde{\xi}_{a})\right)\right)\] \[=\iota_{\epsilon}^{*}(\Delta_{\eta}\mu_{\eta}^{a}).\] On \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) we have the expansion \[\eta=i\partial\overline{\partial}\left(|\zeta|^{2}+g(\zeta)\right)\] as \(|\zeta|\to\infty\), where \(g(\zeta)=\operatorname{O}(|\zeta|^{4-2n})\) if \(n>2\) and \(g=\log|\zeta|\) if \(n=2\). It follows \(\Delta_{\eta}\) has the expansion \(\Delta_{\eta}=\Delta_{\operatorname{Euc}}+h^{j\bar{k}}\partial_{j}\partial_{ \bar{k}}\), where \(\Delta_{\operatorname{Euc}}\) is the Euclidean Laplacian in the \(\zeta\)-coordinates and \(h^{j\bar{k}}\) is \(\operatorname{O}(|\zeta|^{2-2n})\). Since \(d^{c}g(\widetilde{\xi}_{a})\) is \(\operatorname{O}(|\zeta|^{4-2n})\) this implies \[\Delta_{\eta}\mu_{\eta}^{a}=\sum_{j}A_{j}^{a}+\operatorname{O}(|\zeta|^{2-2n}),\] so \(\Delta_{\eta}\mu_{\eta}^{a}\) is \(\operatorname{O}(1)\) on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\). It follows that \[\|\Delta_{\epsilon}\mu_{\epsilon}\|_{C^{0,\alpha}_{0,\epsilon}\left(\widetilde {B}_{r_{\epsilon}}\right)}=\|\Delta_{\eta}\mu_{\eta}\|_{C^{0,\alpha}_{0}\left( \widetilde{D}_{R_{\epsilon}}\right)}\leq\|\Delta_{\eta}\mu_{\eta}\|_{C^{0, \alpha}_{0}\left(\operatorname{Bl}_{0}\mathbb{C}^{n}\right)}.\] Hence \(\|\Delta_{\epsilon}\mu_{\epsilon}\|_{C^{0,\alpha}_{0,\epsilon}\left(\widetilde {B}_{r_{\epsilon}}\right)}\) is uniformly bounded independent of \(\epsilon\). 
The only remaining region to estimate is the annulus \(\widetilde{B}_{2r_{\epsilon}}\backslash\widetilde{B}_{r_{\epsilon}}\), on which \(\mu_{\epsilon}\) takes the form \[\mu_{\epsilon}=\mu_{\operatorname{Euc}}+d^{c}(\gamma_{1,\epsilon}(z)f(z)+ \epsilon^{2}\gamma_{2,\epsilon}(z)g(\epsilon^{-1}z)).\] We claim that the function \(\gamma_{1,\epsilon}(z)f(z)+\epsilon^{2}\gamma_{2,\epsilon}(z)g(\epsilon^{-1}z)\) is \(\operatorname{O}(|z|^{2+\tau})\) on the annulus \(\widetilde{B}_{2r_{\epsilon}}\backslash\widetilde{B}_{r_{\epsilon}}\), for all \(\tau>0\) sufficiently small. That is to say, for fixed \(\tau>0\) small enough, there is a uniform bound on the \(C^{k,\alpha}_{2+\tau,\epsilon}\)-norm of this function over the annulus, independent of \(\epsilon\). To see this, first note that \(\gamma_{1,\epsilon}f\) is \(\operatorname{O}(|z|^{4})\) so it suffices to estimate \(\epsilon^{2}\gamma_{2,\epsilon}(z)g(\epsilon^{-1}z)\), or equivalently \(\epsilon^{2}g(\epsilon^{-1}z)\) since \(\gamma_{2,\epsilon}\) is uniformly \(\operatorname{O}(1)\). In the case \(n>2\), \(g\) is \(\operatorname{O}(|\zeta|^{4-2n})\), and \[r_{\epsilon}^{-3}(\epsilon^{2}g(\epsilon^{-1}r_{\epsilon}z))=\epsilon^{-1}R_{ \epsilon}^{1-2n}(R_{\epsilon}^{2n-4}g(R_{\epsilon}z)),\] where \(1\leq|z|\leq 2\). The \(C^{k,\alpha}\)-norm of \(R_{\epsilon}^{2n-4}g(R_{\epsilon}z)\) is uniformly bounded on the annulus \(1\leq|z|\leq 2\), hence the \(C^{k,\alpha}\)-norm of this expression is bounded by \[\epsilon^{-1}R_{\epsilon}^{1-2n}=\epsilon^{\frac{2n-3}{2n+1}}\to 0\] as \(\epsilon\to 0\), where we used the definition \(r_{\epsilon}:=\epsilon^{\frac{2n-1}{2n+1}}\). Hence \(\epsilon^{2}g(\epsilon^{-1})\) is \(\operatorname{O}(|z|^{3})\) on \(\widetilde{B}_{2r_{\epsilon}}\backslash\widetilde{B}_{r_{\epsilon}}\). In the case \(n=2\) we have \(g=\log|\zeta|\), and \[r_{\epsilon}^{-2-\tau}(\epsilon^{2}g(\epsilon^{-1}r_{\epsilon}z))=\epsilon^{2- \tau}r_{\epsilon}^{-2}(R_{\epsilon}^{-\tau}\log|R_{\epsilon}z|).\] The \(C^{k,\alpha}\)-norm of \(R_{\epsilon}^{-\tau}\log|R_{\epsilon}z|\) is uniformly bounded on \(1\leq|z|\leq 2\), so the \(C^{k,\alpha}\)-norm of this expression is bounded by \[\epsilon^{2-\tau}r_{\epsilon}^{-2}=\epsilon^{2-\tau-2\frac{2n-1}{2n+1}}\to 0\] as \(\epsilon\to 0\) provided \(\tau<4/5\). It follows that \(\epsilon^{2}g(\epsilon^{-1}z)\) is \(\mathrm{O}(|z|^{2+\tau})\) on \(\widetilde{B}_{2r_{\epsilon}}\backslash\widetilde{B}_{r_{\epsilon}}\) for \(n=2\) and \(0<\tau<4/5\). We see from this that the Laplacian \(\Delta_{\epsilon}\) satisfies \(\Delta_{\epsilon}=\Delta_{\mathrm{Euc}}+h^{j\bar{k}}\partial_{j}\partial_{\bar {k}}\) on this region, where \(h^{j\bar{k}}\) is uniformly \(\mathrm{O}(|z|^{\tau})\). Hence \[\Delta_{\epsilon}\mu_{\epsilon}^{a}=\sum_{j}A_{j}^{a}+\mathrm{O}(|z|^{\tau})\] on the annulus. This gives \[\|\Delta_{\epsilon}\mu_{\epsilon}^{a}\|_{C^{0,\alpha}_{0,\epsilon}(\, \widetilde{B}_{2r_{\epsilon}}\backslash\widetilde{B}_{r_{\epsilon}})}\leq C (1+r_{\epsilon}^{\tau})\] and the right hand side is uniformly bounded. This completes the estimate on the annulus, so we have a uniform bound on \(\|\Delta_{\epsilon}\mu_{\epsilon}\|_{C^{0,\alpha}_{0,\epsilon}}\) on all of \(\mathrm{Bl}_{p}M\). 
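For the reader's convenience we record the elementary exponent arithmetic used twice above. With \(r_{\epsilon}=\epsilon^{\frac{2n-1}{2n+1}}\) and \(R_{\epsilon}=\epsilon^{-1}r_{\epsilon}=\epsilon^{-\frac{2}{2n+1}}\), for \(n>2\) we have \[\epsilon^{-1}R_{\epsilon}^{1-2n}=\epsilon^{-1}\epsilon^{\frac{2(2n-1)}{2n+1}}=\epsilon^{\frac{2n-3}{2n+1}},\] whose exponent is positive, while for \(n=2\) we have \[\epsilon^{2-\tau}r_{\epsilon}^{-2}=\epsilon^{2-\tau-\frac{2(2n-1)}{2n+1}}=\epsilon^{\frac{4}{5}-\tau},\] which tends to \(0\) precisely when \(\tau<4/5\).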
**Lemma 4.3**.: _Given \(c_{0}>0\) there exists \(C>0\) independent of \(\epsilon\) such that_ \[\|\Delta_{\epsilon,\varphi}\mu_{\epsilon,\varphi}\|_{C^{0,\alpha}_{0, \epsilon}}\leq C\] _and_ \[\|\Delta_{\epsilon,\varphi}\mu_{\epsilon,\varphi}-\Delta_{\epsilon}\mu_{ \epsilon}\|_{C^{0,\alpha}_{0,\epsilon}}\leq C\|\varphi\|_{C^{4,\alpha}_{2, \epsilon}}\] _for all \(\varphi\) such that \(\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}\leq c_{0}\)._ Proof.: Note it suffices to prove the second estimate, since by the previous lemma, \[\|\Delta_{\epsilon,\varphi}\mu_{\epsilon,\varphi}\|_{C^{0,\alpha }_{0,\epsilon}} \leq\|\Delta_{\epsilon,\varphi}\mu_{\epsilon,\varphi}-\Delta_{ \epsilon}\mu_{\epsilon}\|_{C^{0,\alpha}_{0,\epsilon}}+\|\Delta_{\epsilon} \mu_{\epsilon}\|_{C^{0,\alpha}_{0,\epsilon}}\] \[\leq C+\|\Delta_{\epsilon,\varphi}\mu_{\epsilon,\varphi}-\Delta _{\epsilon}\mu_{\epsilon}\|_{C^{0,\alpha}_{0,\epsilon}}.\] Dropping the \(\epsilon\) subscripts, we compute this as \[\Delta_{\varphi}\mu_{\varphi}^{a}-\Delta\mu^{a} =g_{\varphi}^{-1}i\partial\overline{\partial}\mu_{\varphi}^{a}-g ^{-1}i\partial\overline{\partial}\mu^{a}\] \[=(g_{\varphi}^{-1}-g^{-1})i\partial\overline{\partial}(\mu^{a}+ d^{c}\varphi(\widetilde{\xi}_{a}))+g^{-1}i\partial\overline{\partial}(\mu_{ \varphi}^{a}-\mu^{a})\] \[=(g_{\varphi}^{-1}-g^{-1})i\partial\overline{\partial}(\mu^{a}+ d^{c}\varphi(\widetilde{\xi}_{a}))+g^{-1}i\partial\overline{\partial}(d^{c} \varphi(\widetilde{\xi}_{a}))\] \[=(g_{\varphi}^{-1}-g^{-1})i\partial\overline{\partial}\mu^{a}+ g_{\varphi}^{-1}i\partial\overline{\partial}(d^{c}\varphi(\widetilde{\xi}_{a})).\] From here, \[\|(g_{\varphi}^{-1}-g^{-1})i\partial\overline{\partial}\mu^{a} \|_{C^{0,\alpha}_{0,\epsilon}} \leq C\|g_{\varphi}^{-1}-g^{-1}\|_{C^{0,\alpha}_{0,\epsilon}}\| \partial(gg^{-1}\overline{\partial}\mu^{a})\|_{C^{0,\alpha}_{0,\epsilon}}\] \[\leq C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}\|g\|_{C^{1,\alpha}_{0,\epsilon}}\|\widetilde{\xi}_{a}\|_{C^{1,\alpha}_{1,\epsilon}}\] \[\leq C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}},\] where we used the uniform bound on \(\|\widetilde{\xi}_{a}\|_{C^{k,\alpha}_{1,\epsilon}}\) from Remark 3.10. Finally, \[\|g_{\varphi}^{-1}i\partial\overline{\partial}(d^{c}\varphi( \widetilde{\xi}_{a}))\|_{C^{0,\alpha}_{0,\epsilon}} \leq C\|g_{\varphi}^{-1}\|_{C^{0,\alpha}_{0,\epsilon}}\|d^{c} \varphi(\widetilde{\xi}_{a})\|_{C^{2,\alpha}_{2,\epsilon}}\] \[\leq C\|d^{c}\varphi\|_{C^{2,\alpha}_{1,\epsilon}}\|\widetilde{ \xi}_{a}\|_{C^{2,\alpha}_{1,\epsilon}}\] \[\leq C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}.\qed\] **Lemma 4.4**.: _Let \(u:P\to\mathbb{R}\) be a fixed smooth function. Given \(c_{0}>0\), there is a constant \(C>0\) independent of \(\epsilon\) such that \(\|u(\mu_{\epsilon,\varphi})\|_{C^{0,\alpha}_{0,\epsilon}}<C\) whenever \(\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}<c_{0}\)._ Proof.: Again it suffices to bound the \(C^{1}\)-norm. Note the \(C^{0}\)-norm is bounded, since the image of \(\mu_{\epsilon,\varphi}\) is contained in \(P\) and the image of \(u:P\to\mathbb{R}\) is compact. For the derivative, by the same argument as in the proof of Lemma 4.1 it is sufficient to estimate \[\nabla_{\epsilon,\varphi}(u(\mu_{\epsilon,\varphi}))=\sum_{a}u_{,a}(\mu_{ \epsilon,\varphi})\widetilde{\xi}_{a}.\] By the same reasoning as for \(u(\mu_{\epsilon,\varphi})\) there is a \(C^{0}\)-bound on \(u_{,a}(\mu_{\epsilon,\varphi})\), and \(\widetilde{\xi}_{a}\) is a fixed vector field, so also has a \(C^{0}\) bound. **Lemma 4.5**.: _Let \(u:P\to\mathbb{R}\) be a fixed smooth function. 
Given \(c_{0}>0\), there is a constant \(C>0\) independent of \(\epsilon\) such that \(\|u(\mu_{\epsilon,\varphi})-u(\mu_{\epsilon})\|_{C^{0,\alpha}_{0,\epsilon}} \leq C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}\) for all \(\varphi\in C^{4,\alpha}_{2,\epsilon}\) satisfying \(\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}\leq c_{0}\)._ Proof.: We again bound the \(C^{1}\)-norm. The \(C^{0}\) bound follows from the mean value theorem, since for \(x\in\mathrm{Bl}_{p}M\), \[u(\mu_{\epsilon,\varphi}(x))-u(\mu_{\epsilon}(x))=\sum_{a}u_{,a}(p_{x})d^{c} \varphi(\widetilde{\xi}_{a})(x)\] for some \(p_{x}\in P\) on the line joining \(\mu_{\epsilon,\varphi}(x)\) to \(\mu_{\epsilon}(x)\), and we can bound the terms on the right hand side as follows: \[\|u_{,a}(p_{x})d^{c}\varphi(\widetilde{\xi}_{a})(x)\|_{C^{0}} \leq C\|u_{,a}\|_{C^{0}}\|d^{c}\varphi\|_{C^{0}}\|\widetilde{\xi} _{a}\|_{C^{0}}\] \[\leq C\|d\varphi\|_{C^{0}_{0,\epsilon}}\] \[\leq C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}.\] To estimate the derivative, \[d(u(\mu_{\epsilon,\varphi})-u(\mu_{\epsilon}))=\sum_{a}u_{,a}(\mu_{\epsilon, \varphi})d\mu_{\epsilon,\varphi}^{a}-u_{,a}(\mu_{\epsilon})d\mu_{\epsilon}^{a},\] and \[u_{,a}(\mu_{\epsilon,\varphi})d\mu_{\epsilon,\varphi}^{a}-u_{,a} (\mu_{\epsilon})d\mu_{\epsilon}^{a}\] \[= (u_{,a}(\mu_{\epsilon,\varphi})-u_{,a}(\mu_{\epsilon}))d\mu_{ \epsilon,\varphi}^{a}+u_{,a}(\mu_{\epsilon})d(\mu_{\epsilon,\varphi}^{a}-\mu_ {\epsilon}^{a})\] \[= (u_{,a}(\mu_{\epsilon,\varphi})-u_{,a}(\mu_{\epsilon}))d\mu_{ \epsilon,\varphi}^{a}+u_{,a}(\mu_{\epsilon})d(d^{c}\varphi(\widetilde{\xi}_{a })).\] The same method as for \(u\) proves a \(C^{0}\) bound by \(C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}\) on \(u_{,a}(\mu_{\epsilon,\varphi})-u_{,a}(\mu_{\epsilon})\). For the \(C^{0}\)-bound on \(d\mu_{\epsilon,\varphi}^{a}\), as in previous proofs it is enough to note \(\widetilde{\xi}_{a}\) is bounded. Clearly \(u_{,a}(\mu_{\epsilon})\) also has a \(C^{0}\)-bound. Lastly \[\|d(d^{c}\varphi(\widetilde{\xi}_{a}))\|_{C^{0}} \leq\|d(d^{c}\varphi(\widetilde{\xi}_{a}))\|_{C^{0,\alpha}_{0, \epsilon}}\] \[\leq\|d^{c}\varphi(\widetilde{\xi}_{a})\|_{C^{1,\alpha}_{1, \epsilon}}\] \[\leq C\|\widetilde{\xi}_{a}\|_{C^{1,\alpha}_{0,\epsilon}}\| \varphi\|_{C^{2,\alpha}_{2,\epsilon}}\] \[\leq C\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}.\qed\] ## 5. Estimates for the weighted linearisation Throughout the rest of the paper, we will use the notation and set up of Section 3.3, and our ultimate goal is to prove Proposition 3.6. Let \(\check{L}_{\epsilon,\varphi}\) denote the linearisation of the weighted scalar curvature operator \[\psi\mapsto S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\psi)\] at \(\varphi\in\mathcal{H}^{T}_{\omega_{\epsilon}}\). Our aim in this section is to prove: **Proposition 5.1**.: _For any \(\delta<0\) there exist \(c_{0},C>0\) independent of \(\epsilon\) such that for all \(\varphi\in C^{4,\alpha}(\mathrm{Bl}_{p}M)^{T}\) with \(\|\varphi\|_{C^{4,\alpha}_{2,\epsilon}}<c_{0}\) and all \(\psi\in C^{4,\alpha}(\mathrm{Bl}_{p}M)^{T}\),_ \[\|\check{L}_{\epsilon,\varphi}(\psi)-\check{L}_{\epsilon}(\psi)\|_{C^{0, \alpha}_{\delta-4,\epsilon}}\leq C\|\varphi\|_{C^{4,\alpha}_{2}}\|\psi\|_{C^{4,\alpha}_{\delta,\epsilon}}.\] Let us define \(u:=v/w\). For simplicity, we will now drop \(\epsilon\) from the notation, although the reader should keep in mind the base metric is \(g_{\epsilon}\) with moment map \(\mu_{\epsilon}\), even though we write these as \(g\) and \(\mu\). 
We will also write the weighted norms as \(\|\cdot\|_{C^{k,\alpha}_{\delta}}\), although on \(\mathrm{Bl}_{p}M\) these always depend on \(\epsilon\). As in the previous section, \(C\) denotes a positive constant that is independent of \(\epsilon\) and may vary from line to line. Sketch proof.: From Lemma 2.12 we can write \[\check{L}_{\varphi}(\psi)-\check{L}(\psi)\] \[= u(\mu_{\varphi})L_{\varphi}(\psi)-u(\mu)L(\psi) \tag{11}\] \[- \frac{2}{w(\mu_{\varphi})}(\overline{\partial}^{*}_{\varphi}\mathcal{D}_{\varphi}\psi,\nabla^{1,0}_{\varphi}(v(\mu_{\varphi})))_{\varphi}+\frac{2}{w(\mu)}(\overline{\partial}^{*}\mathcal{D}\psi,\nabla^{1,0}(v(\mu))) \tag{12}\] \[+ \frac{1}{w(\mu_{\varphi})}(\mathcal{D}_{\varphi}\psi,\mathcal{D}_{\varphi}(v(\mu_{\varphi})))_{\varphi}-\frac{1}{w(\mu)}(\mathcal{D}\psi,\mathcal{D}(v(\mu))) \tag{13}\] \[+ \frac{1}{2}S(\omega_{\varphi})\nabla_{\varphi}\left(u(\mu_{\varphi})\right)\cdot\nabla_{\varphi}\psi-\frac{1}{2}S(\omega)\nabla\left(u(\mu)\right)\cdot\nabla\psi \tag{14}\] \[+ \frac{1}{2}\nabla_{\varphi}\Phi_{v,w}(\omega_{\varphi})\cdot\nabla_{\varphi}\psi-\frac{1}{2}\nabla\Phi_{v,w}(\omega)\cdot\nabla\psi, \tag{15}\] where \(L\) is the linearisation of the unweighted scalar curvature operator. To prove Proposition 5.1, each of these lines can be estimated separately. The techniques are fairly similar for each line, so we will only estimate some terms to demonstrate how this may be done. The line (11) is equal to \[(u(\mu_{\varphi})-u(\mu))L_{\varphi}(\psi)+u(\mu)(L_{\varphi}(\psi)-L(\psi)). \tag{16}\] From the estimates in Section 4, we have \[\|u(\mu_{\varphi})-u(\mu)\|_{C^{0,\alpha}_{0}}\leq C\|\varphi\|_{C^{4,\alpha}_{2}}\] and \[\|u(\mu)\|_{C^{0,\alpha}_{0}}\leq C.\] It therefore suffices to show \[\|L_{\varphi}(\psi)\|_{C^{0,\alpha}_{\delta-4}}\leq C\|\psi\|_{C^{4,\alpha}_{\delta}}\] and \[\|L_{\varphi}(\psi)-L(\psi)\|_{C^{0,\alpha}_{\delta-4}}\leq C\|\varphi\|_{C^{4,\alpha}_{2}}\|\psi\|_{C^{4,\alpha}_{\delta}}.\] The latter inequality is already proven [22, Proposition 18]. To see the norms of \(L_{\varphi}:C^{4,\alpha}_{\delta}\to C^{0,\alpha}_{\delta-4}\) are uniformly bounded, we use the formula \[L_{\varphi}\psi=\Delta_{\varphi}^{2}\psi+\operatorname{Ric}(\omega_{\varphi})^{j\bar{k}}\partial_{j}\partial_{\bar{k}}\psi.\] The estimates in Lemma 3.11 then give the uniform bound: \[\|\Delta_{\varphi}^{2}\psi\|_{C^{0,\alpha}_{\delta-4}}=\|g_{\varphi}^{-1}i\partial\overline{\partial}(g_{\varphi}^{-1}i\partial\overline{\partial}\psi)\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq C\|g_{\varphi}^{-1}\|_{C^{0,\alpha}_{0}}\|\partial\overline{\partial}(g_{\varphi}^{-1}i\partial\overline{\partial}\psi)\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq C\|g_{\varphi}^{-1}i\partial\overline{\partial}\psi\|_{C^{2,\alpha}_{\delta-2}}\] \[\leq C\|\psi\|_{C^{4,\alpha}_{\delta}}.\] Similarly \[\|\operatorname{Ric}(\omega_{\varphi})^{j\bar{k}}\partial_{j}\partial_{\bar{k}}\psi\|_{C^{0,\alpha}_{\delta-4}}\leq C\|g_{\varphi}\|_{C^{0,\alpha}_{0}}\|g_{\varphi}^{-1}\|_{C^{0,\alpha}_{0}}\|\operatorname{Rm}(\omega_{\varphi})\|_{C^{0,\alpha}_{-2}}\|i\partial\overline{\partial}\psi\|_{C^{0,\alpha}_{\delta-2}}\] \[\leq C\|\psi\|_{C^{4,\alpha}_{\delta}}.\] To estimate the other lines of (11)-(15) we apply a similar principle: namely, wherever we see an expression of the form \(A_{\varphi}B_{\varphi}-AB\), we write \[A_{\varphi}B_{\varphi}-AB=(A_{\varphi}-A)B_{\varphi}+A(B_{\varphi}-B).\] Applying this trick recursively, we reduce to estimating terms of the form \(A_{\varphi}-A\) and \(A_{\varphi}\).
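For instance, for a product of three factors the same trick gives \[A_{\varphi}B_{\varphi}C_{\varphi}-ABC=(A_{\varphi}-A)B_{\varphi}C_{\varphi}+A(B_{\varphi}-B)C_{\varphi}+AB(C_{\varphi}-C),\] so that after finitely many applications only terms of the form \(A_{\varphi}-A\) and \(A_{\varphi}\), together with the fixed background quantities, remain to be estimated.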
In particular, these require estimates \[\|A_{\varphi}\|_{C^{0,\alpha}_{-k}}\leq C,\qquad\|A_{\varphi}-A_{0}\|_{C^{0, \alpha}_{-k}}\leq C\|\varphi\|_{C^{4,\alpha}_{2}} \tag{17}\] for some \(0\leq k\leq 4\) depending on \(A\). For example, when \(A_{\varphi}=u(\mu_{\varphi})\), we have such estimates on the \(C^{0,\alpha}_{0}\)-norms of \(A_{\varphi}\) and \(A_{\varphi}-A\) from Section 4, and when \(A_{\varphi}=g_{\varphi}\) or \(g_{\varphi}^{-1}\), the \(C^{0,\alpha}_{0}\)-estimates are from Lemma 3.11. In some lines we are required to estimate derivatives of \(v(\mu_{\varphi})\), and we only have estimates on \(v(\mu_{\varphi})\) itself so far. To estimate these derivatives, we use the chain rule together with the definition of the moment map to reduce these estimates to the \(C_{0}^{0,\alpha}\)-estimates already proven in Section 4. For example,2 Footnote 2: Before now we have written \(\widetilde{\xi}\) for the real holomorphic vector field generated by \(\xi\), whereas here we instead denote the associated \((1,0)\)-vector field by the same symbol. It will always be clear from the context to which vector field we are referring, so we allow this small abuse of notation. \[\nabla^{1,0}(v(\mu))=\sum_{a}v_{,a}(\mu)\nabla^{1,0}\mu^{a}=\sum_{a}v_{,a}(\mu )\widetilde{\xi}_{a},\] and \[\mathcal{D}(v(\mu)) =\overline{\partial}\nabla^{1,0}(v(\mu))\] \[=\overline{\partial}\sum_{a}v_{,a}(\mu)\widetilde{\xi}_{a}\] \[=\sum_{a,b}v_{,ab}(\mu)\overline{\partial}\mu^{b}\otimes \widetilde{\xi}_{a}\] \[=\sum_{a,b}v_{,ab}(\mu)(\widetilde{\xi}_{b})^{\flat}\otimes \widetilde{\xi}_{a},\] where \(\flat\) is conversion from a \((1,0)\)-vector field to a \((0,1)\)-form via the metric, and in the third line we have used that \(\overline{\partial}\widetilde{\xi}_{a}=0\). From these expressions, the factors \(\nabla^{1,0}_{\varphi}v(\mu_{\varphi})\) and \(\mathcal{D}_{\varphi}(v(\mu_{\varphi}))\) satisfy estimates of the form (17) with \(k=0\). With this in mind, it is straightforward to estimate lines (12) and (13); we note the formula (1) for \(\overline{\partial}^{\ast}\) from Section 2 can be used to estimate (12). Line (14) is similarly straightforward; we only note the inequalities \[\|S(\omega_{\varphi})-S(\omega)\|_{C_{-2}^{0,\alpha}}\leq C\|\varphi\|_{C_{2} ^{4,\alpha}}\] and \[\|S(\omega_{\varphi})\|_{C_{-2}^{0,\alpha}}\leq C\] which follow from Lemma 3.11. Finally for line (15) we recall Lemma 2.13 which states that \(\Phi_{v,w}(\omega)\) can be written as a linear combination of functions of the form \(u_{a}(\mu)\Delta\mu^{a}\) and \(u_{ab}(\mu)g(\widetilde{\xi}_{a},\widetilde{\xi}_{b})\), for finitely many fixed smooth functions \(u_{a}\) and \(u_{ab}\) on the moment polytope. Taking the gradients of these gives \[\nabla(u_{a}(\mu)\Delta\mu^{a})=u_{a}(\mu)\text{Ric}(\widetilde{\xi}_{a},-)^ {\#}+\sum_{b}u_{a,b}(\mu)(\Delta\mu^{a})\widetilde{\xi}_{b}\] and \[\nabla(u_{ab}(\mu)g(\widetilde{\xi}_{a},\widetilde{\xi}_{b}))=\sum_{c}u_{ab,c }(\mu)g(\widetilde{\xi}_{a},\widetilde{\xi}_{b})\widetilde{\xi}_{c}+u_{ab}( \mu)(g(\nabla\widetilde{\xi}_{a},\widetilde{\xi}_{b})+g(\widetilde{\xi}_{a}, \nabla\widetilde{\xi}_{b})),\] where \(\#\) is conversion from a \(1\)-form to a vector field via the metric; here we have also used Remark 2.10 that \(\Delta\mu\) is a moment map for the Ricci curvature. Given the estimates in Lemma 4.3 on \(\Delta_{\varphi}\mu_{\varphi}\), it is straightforward to bound line (15), which completes the proof of Proposition 5.1. 
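As an illustration of the estimates (17), consider the factor \(\nabla^{1,0}_{\varphi}(v(\mu_{\varphi}))=\sum_{a}v_{,a}(\mu_{\varphi})\widetilde{\xi}_{a}\): applying Lemma 4.5 to the fixed smooth functions \(v_{,a}\) gives \[\|\nabla^{1,0}_{\varphi}(v(\mu_{\varphi}))-\nabla^{1,0}(v(\mu))\|_{C^{0,\alpha}_{0}}\leq C\sum_{a}\|v_{,a}(\mu_{\varphi})-v_{,a}(\mu)\|_{C^{0,\alpha}_{0}}\|\widetilde{\xi}_{a}\|_{C^{0,\alpha}_{0}}\leq C\|\varphi\|_{C^{4,\alpha}_{2}},\] while the uniform bound \(\|\nabla^{1,0}_{\varphi}(v(\mu_{\varphi}))\|_{C^{0,\alpha}_{0}}\leq C\) follows from Lemma 4.4 and Remark 3.10; these are the estimates of the form (17) with \(k=0\) referred to above.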
As a corollary, we obtain: **Lemma 5.2**.: _Let \(\check{Q}_{\epsilon}\) be the non-linear part of the weighted scalar curvature operator \(\check{S}:=S_{v,w}\) with respect to \(\omega_{\epsilon}\), so that_ \[\check{S}(\omega_{\epsilon}+i\partial\overline{\partial}\psi)=\check{S}(\omega _{\epsilon})+\check{L}_{\epsilon}(\psi)+\check{Q}_{\epsilon}(\psi).\] _Given \(\delta<0\), there exist \(C,c_{0}>0\) such that if_ \[\|\varphi\|_{C^{4,\alpha}_{2}},\,\|\psi\|_{C^{4,\alpha}_{2}}\leq c_{0},\] _then_ \[\|\check{Q}_{\epsilon}(\varphi)-\check{Q}_{\epsilon}(\psi)\|_{C^{4,\alpha}_{ \delta-4}}\leq C\left(\|\varphi\|_{C^{4,\alpha}_{2}}+\|\psi\|_{C^{4,\alpha}_{2} }\right)\|\varphi-\psi\|_{C^{4,\alpha}_{\delta}}\] Proof.: The proof is exactly the same as [22, Lemma 19]. Namely, by the mean value theorem there exists \(\chi\) on the line joining \(\varphi\) and \(\psi\) such that \[\check{Q}_{\epsilon}(\varphi)-\check{Q}_{\epsilon}(\psi)=D\check{Q}_{\epsilon }|_{\chi}(\varphi-\psi)=(\check{L}_{\epsilon,\chi}-\check{L}_{\epsilon})( \varphi-\psi).\] Applying Proposition 5.1, \[\|\check{Q}_{\epsilon}(\varphi)-\check{Q}_{\epsilon}(\psi)\|_{C^{ 4,\alpha}_{\delta-4}} \leq C\|\chi\|_{C^{4,\alpha}_{2}}\|\varphi-\psi\|_{C^{4,\alpha}_{ \delta}}\] \[\leq C\left(\|\varphi\|_{C^{4,\alpha}_{2}}+\|\psi\|_{C^{4,\alpha}_{ 2}}\right)\|\varphi-\psi\|_{C^{4,\alpha}_{\delta}}.\qed\] ## 6. A right-inverse of the linearised operator Recall from Section 3.3 the lifting operator \(\ell_{\epsilon}:\overline{\mathfrak{h}}\to C^{\infty}(\operatorname{Bl}_{p} M)^{T^{\prime}}\). We also write \(X:=\nabla_{\omega}S_{v,w}(\omega)\) for the extremal vector field on \(M\), and write \(\widetilde{X}\) for its lift to \(\operatorname{Bl}_{p}M\). The aim of this section is to prove: **Proposition 6.1**.: _For \(n>2\), let \(\delta\in(4-2n,0)\). For sufficiently small \(\epsilon>0\), the operator \(G_{\epsilon}:C^{4,\alpha}_{\delta,\epsilon}(\operatorname{Bl}_{p}M)^{T^{ \prime}}\times\overline{\mathfrak{h}}\to C^{0,\alpha}_{\delta-4,\epsilon}( \operatorname{Bl}_{p}M)^{T^{\prime}}\),_ \[G_{\epsilon}(\psi,f):=\check{L}_{\epsilon}\psi-\frac{1}{2}\widetilde{X}(\psi) -\ell_{\epsilon}(f)\] _has a right inverse \(P_{\epsilon}\), satisfying \(\|P_{\epsilon}\|\leq C\) for a constant \(C>0\) independent of \(\epsilon\)._ _In the case \(n=2\), given \(\delta<0\) sufficiently close to \(0\), there is a right inverse \(P_{\epsilon}\) for \(G_{\epsilon}\) satisfying \(\|P_{\epsilon}\|\leq C\epsilon^{\delta}\)._ Following [22, Proposition 20], the rough approach is to glue together right-inverses for linearised operators on \(M_{p}:=M\setminus\{p\}\) and \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) to construct an approximate right-inverse, and then to deform this to a genuine right-inverse. However, instead of gluing an inverse for the weighted linearization on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\), we can glue in an inverse for the usual unweighted linearization. We will see that near the exceptional divisor, the extra terms from the weighted setting are sufficiently small that we can still deform this to a genuine weighted right-inverse regardless. Before giving the next proof, we recall the notion of _indicial roots_ in the weighted Fredholm theory. On \(\mathbb{R}^{m}\setminus\{0\}\), the indicial roots of the Laplacian are the growth rates of radially symmetric harmonic functions on \(\mathbb{R}^{m}\setminus\{0\}\). 
That is, \(\delta\in\mathbb{R}\) is an _indicial root_ of \(\Delta_{\mathbb{R}^{n}}\) if there exists a non-zero harmonic function \(f(r)\) on \(\mathbb{R}^{m}\setminus\{0\}\) such that \(f\in C^{k,\alpha}_{\delta}\) for all \(k\) and \(\alpha\), where \(r\) is the radial coordinate. **Lemma 6.2** ([14, Theorem 1.7]).: _For \(m\geq 4\), the indicial roots of the Laplacian on \(\mathbb{R}^{m}\backslash\{0\}\) are \(\mathbb{Z}\backslash\{-1,-2,\ldots,3-m\}\)._ On the manifolds \(M_{p}\) and \(\operatorname{Bl}_{0}\mathbb{C}^{n}\), the Kahler Laplacians agree asymptotically with the Euclidean Laplacian. Using this, one can prove: **Lemma 6.3** ([14, Theorem 8.6]).: _Let \(n\geq 2\). If \(\delta\in\mathbb{R}\) is not an indicial root of the Euclidean Laplacian on \(\mathbb{R}^{2n}\backslash\{0\}\), then the operators_ \[\Delta_{\omega}:C^{k,\alpha}_{\delta}(M_{p})\to C^{k-2,\alpha}_{\delta-2}(M_{ p})\] _and_ \[\Delta_{\eta}:C^{k,\alpha}_{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{n})\to C ^{k-2,\alpha}_{\delta-2}(\operatorname{Bl}_{0}\mathbb{C}^{n})\] _are Fredholm._ _Hence, for \(n>2\) let \(\delta\in(4-2n,0)\), and for \(n=2\) let \(\delta\in(-1,0)\). Then \(\Delta^{2}_{\omega}\) and \(\Delta^{2}_{\eta}\) are Fredholm as maps \(C^{k,\alpha}_{\delta}\to C^{k-4,\alpha}_{\delta-4}\) on the manifolds \(M_{p}\) and \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) respectively._ Using this information, we can study the properties of the weighted Lichnerowicz operator on weighted Holder spaces: **Lemma 6.4**.: _For \(n>2\) let \(\delta\in(4-2n,0)\), and for \(n=2\) let \(\delta\in(-1,0)\). The operator \(C^{4,\alpha}_{\delta}(M_{p})^{T^{\prime}}\times\overline{\mathfrak{h}}\to C^{0,\alpha}_{\delta-4}(M_{p})^{T}\),_ \[(\varphi,f)\mapsto\frac{v(\mu)}{w(\mu)}\mathcal{D}^{*}_{v}\mathcal{D}\varphi- f|_{M_{p}}\] _has a bounded right-inverse._ Proof.: We first note that the leading order term of \(\check{\mathcal{L}}\) is the linear isomorphism \(v(\mu)/w(\mu)\) composed with \(\Delta^{2}_{\omega}\), and \(\Delta^{2}_{\omega}\) is Fredholm on \(C^{4,\alpha}_{\delta}(M_{p})\) by Lemma 6.3. The remaining terms in \(\check{\mathcal{L}}\) define bounded linear maps \(C^{4,\alpha}_{\delta}(M_{p})\to C^{1,\alpha}_{\delta-3}(M_{p})\), and the inclusion \(C^{1,\alpha}_{\delta-3}(M_{p})\subset C^{0,\alpha}_{\delta-4}(M_{p})\) is compact. Hence \(\check{\mathcal{L}}\) is Fredholm on \(C^{4,\alpha}_{\delta}(M_{p})\). Furthermore, \(\check{\mathcal{L}}\) is formally self-adjoint with respect to the \(L^{2}\)-inner product \(\langle f,g\rangle_{w}:=\int fg\,w(\mu)\omega^{n}\). It follows that the image of \(\check{\mathcal{L}}:C^{4,\alpha}_{\delta}\to C^{0,\alpha}_{\delta-4}\) is the \(L^{2}\)-orthogonal complement of the kernel of \(\check{\mathcal{L}}:C^{4,\alpha}_{4-2n-\delta}\to C^{0,\alpha}_{-2n-\delta}\).3 Footnote 3: Strictly speaking, the \(L^{2}\)-inner product does not give a well-defined pairing between the spaces \(C^{0,\alpha}_{\delta-4}\) and \(C^{4,\alpha}_{4-2n-\delta}\), for example since \(|z|^{\delta-4}|z|^{4-2n-\delta}=|z|^{-2n}\) is not integrable. However, since \(4-2n-\delta\in(4-2n,0)\), the weight \(4-2n-\delta\) is not an indicial root of \(\check{\mathcal{L}}\). Hence elements in the kernel of \(\check{\mathcal{L}}:C^{4,\alpha}_{4-2n-\delta}\to C^{0,\alpha}_{-2n-\delta}\) are in fact contained in weighted Hölder spaces with strictly higher weights [14, Lemma 12.1.2], so have a well-defined \(L^{2}\)-pairing with elements of \(C^{0,\alpha}_{\delta-4}\). 
I thank Lars Sektnan for explaining this point to me. We now claim that the kernel of \(\check{\mathcal{L}}:C^{4,\alpha}_{4-2n-\delta}\to C^{0,\alpha}_{-2n-\delta}\) is precisely \(\overline{\mathfrak{h}}\). Since \(\check{\mathcal{L}}\) has no indicial roots in \((4-2n,0)\) when \(n>2\), any \(f\in\ker(\check{\mathcal{L}})\cap C^{4,\alpha}_{4-2n-\delta}\) lies in \(C^{k,\alpha}_{\delta^{\prime}}\) for all \(\delta^{\prime}<0\) [14, Proposition 12.2.1]. Now, \(0\) is an indicial root of the Laplacian, so there exists \(g\in\ker(\check{\mathcal{L}})\cap C^{4,\alpha}_{0}\) such that \(f-g\in C^{4,\alpha}_{\delta^{\prime}}\) for \(\delta^{\prime}>0\) sufficiently small [14, Proposition 12.4.1]. Note that elements of \(C^{4,\alpha}_{0}\) are integrable on \(M_{p}\), and hence define distributions on \(M\). By elliptic regularity, the kernel of \(\check{\mathcal{L}}\) on \(C^{4,\alpha}_{0}\) is therefore \(\overline{\mathfrak{h}}\). Hence both \(g\) and \(f-g\) lie in \(\overline{\mathfrak{h}}\), and therefore so does \(f\). In the case of \(n=2\), note \(C^{4,\alpha}_{4-2n-\delta}(M_{p})=C^{4,\alpha}_{-\delta}(M_{p})\) and \(-\delta>0\), so \(f\) automatically lies in \(\overline{\mathfrak{h}}\) in this case. We also have the following result [22, Proposition 16]. **Lemma 6.5**.: _For \(n>2\), let \(\delta>4-2n\). Then the operator \(C^{4,\alpha}_{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{n})^{T^{\prime}}\to C^{0,\alpha}_{\delta-4}(\operatorname{Bl}_{0}\mathbb{C}^{n})^{T^{\prime}}\),_ \[\varphi\mapsto\mathcal{D}^{*}_{\eta}\mathcal{D}_{\eta}\varphi\] _has a bounded inverse._ _When \(n=2\), let \(\delta\in(-1,0)\) and choose a compactly supported smooth \(T^{\prime}\)-invariant function \(\chi\) on \(\operatorname{Bl}_{0}\mathbb{C}^{2}\) with non-zero integral. Then the operator \(C^{4,\alpha}_{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{2})^{T^{\prime}}\times\mathbb{R}\to C^{0,\alpha}_{\delta-4}(\operatorname{Bl}_{0}\mathbb{C}^{2})^{T^{\prime}}\),_ \[(\varphi,t)\mapsto\mathcal{D}^{*}_{\eta}\mathcal{D}_{\eta}\varphi+t\chi\] _has a bounded inverse._ Recall the functions \(\gamma_{j,\epsilon}(z)\) defined in (7), for \(j=1,2\). Before beginning the proof of Proposition 6.1, following [22, p. 16] for each \(j\) we will need to define a function \(\beta_{j,\epsilon}(z)\) that is equal to \(1\) on \(\operatorname{supp}\gamma_{j,\epsilon}(z)\), and has slightly larger support than \(\gamma_{j,\epsilon}(z)\). Write \(a:=\frac{2n-1}{2n+1}\), and recall \(r_{\epsilon}:=\epsilon^{a}\). Let us choose \(\overline{a}\) such that \(a<\overline{a}<1\), and let \(\chi_{1}:\mathbb{R}\to\mathbb{R}\) be a smooth function such that \(\chi_{1}(x)=1\) for \(x\leq a\) and \(\chi_{1}(x)=0\) for \(x\geq\overline{a}\). With this choice, let \(\beta_{1,\epsilon}\) be the function on \(\operatorname{Bl}_{p}M\) defined by \[\beta_{1,\epsilon}(z):=\chi_{1}\left(\frac{\log|z|}{\log\epsilon}\right),\] for \(z\in\widetilde{B}_{1}\backslash E\), extended to the constant function \(1\) on \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{1}\) and \(0\) on \(E\). Then \(\beta_{1,\epsilon}=0\) on \(\widetilde{B}_{\epsilon^{\overline{a}}}\), \(\beta_{1,\epsilon}=1\) on \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{r_{\epsilon}}\), and there are uniform bounds \[\|\beta_{1,\epsilon}\|_{C^{4,\alpha}_{0,\epsilon}}\leq C,\qquad\|\nabla_{\epsilon}\beta_{1,\epsilon}\|_{C^{3,\alpha}_{-1,\epsilon}}\leq\frac{C}{|\log\epsilon|}\] where \(C>0\) is independent of \(\epsilon\).
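The gain of \(|\log\epsilon|^{-1}\) in the second bound comes from the chain rule: \[\nabla_{\epsilon}\beta_{1,\epsilon}=\frac{1}{\log\epsilon}\,\chi_{1}^{\prime}\left(\frac{\log|z|}{\log\epsilon}\right)\nabla_{\epsilon}\log|z|,\] so every derivative of \(\beta_{1,\epsilon}\) carries at least one overall factor of \((\log\epsilon)^{-1}\), the remaining factors being bounded uniformly in \(\epsilon\) in the relevant weighted norms; this is the source of the stated \(C^{3,\alpha}_{-1,\epsilon}\) estimate.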
In particular \(\beta_{1,\epsilon}\) is equal to \(1\) on \(\operatorname{supp}(\gamma_{1,\epsilon})\). Similarly, we choose \(\underline{a}\in\mathbb{R}\) such that \(0<\underline{a}<a\), and let \(\chi_{2}:\mathbb{R}\to\mathbb{R}\) be a smooth function such that \(\chi_{2}(x)=0\) for \(x<\underline{a}/a\) and \(\chi_{2}(x)=1\) for \(x>1\). We define the function \[\beta_{2,\epsilon}(z):=\chi_{2}\left(\frac{\log(|z|/2)}{\log r_{\epsilon}}\right)\] for \(z\in\widetilde{B}_{1}\backslash E\), extended similarly to \(\operatorname{Bl}_{p}M\). Then \(\beta_{2,\epsilon}\) satisfies \(\beta_{2,\epsilon}=1\) on \(\widetilde{B}_{2r_{\epsilon}}\), \(\beta_{2,\epsilon}=0\) on \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{2\epsilon\underline{a}}\), and there exists \(C>0\) independent of \(\epsilon\) such that \[\|\beta_{2,\epsilon}\|_{C^{4,\alpha}_{0,\epsilon}}\leq C,\qquad\|\nabla_{ \epsilon}\beta_{2,\epsilon}\|_{C^{3,\alpha}_{-1,\epsilon}}\leq\frac{C}{|\log \epsilon|}. \tag{18}\] Proof of Proposition 6.1.: We again drop \(\epsilon\) from the notation, writing \(G\) in place of \(G_{\epsilon}\), \(\mu\) for \(\mu_{\epsilon}\), and so on. We will prove the case \(n>2\); the case \(n=2\) requires only slight alterations, and we refer to [22, Proposition 20] for the details. Given \(\varphi\in C^{0,\alpha}_{\delta-4}(\operatorname{Bl}_{p}M)^{T^{\prime}}\), let us consider \(\gamma_{1}\varphi\) as a function on \(M_{p}\) and write \[\widetilde{P}_{1}(\gamma_{1}\varphi):=(P_{1}(\gamma_{1}\varphi),f_{\varphi}) \tag{19}\] for the right-inverse operator \(\widetilde{P}_{1}\) of Lemma 6.4 applied to \(\gamma_{1}\varphi\). The function \(\beta_{1}P_{1}(\gamma_{1}\varphi)\) can then be considered as a function on \(\operatorname{Bl}_{p}M\) instead of \(M_{p}\). Similarly, we let \(c_{0}:=v(p_{0})/w(p_{0})\), identify \(\gamma_{2}\varphi\) with the function \(\zeta\mapsto\gamma_{2}(\iota_{\epsilon}^{-1}(\zeta))\varphi(\iota_{\epsilon}^ {-1}(\zeta))\) on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\), and write \(P_{2}(\gamma_{2}\varphi)\) for the inverse operator \(P_{2}\) of \(c_{0}L_{\epsilon^{2}\eta}\) from Lemma 6.5 applied to \(\gamma_{2}\varphi\). We then write \(\beta_{2}P_{2}(\gamma_{2}\varphi)\) for the function \(\beta_{2}(z)P_{2}(\gamma_{2}\varphi)(\iota_{\epsilon}(z))\) on \(\operatorname{Bl}_{p}M\). Note that \(L_{\epsilon^{2}\eta}=\epsilon^{-4}L_{\eta}\), so \(P_{2}\) is \(\epsilon^{4}\) times a fixed operator. Using all this information, we now define the operators \(P:C^{0,\alpha}_{\delta-4}(\operatorname{Bl}_{p}M)^{T^{\prime}}\to C^{4,\alpha} _{\delta}(\operatorname{Bl}_{p}M)^{T^{\prime}}\) and \(\widetilde{P}:C^{0,\alpha}_{\delta-4}(\operatorname{Bl}_{p}M)^{T^{\prime}} \to C^{4,\alpha}_{\delta}(\operatorname{Bl}_{p}M)^{T^{\prime}}\times \overline{\mathfrak{h}}\) by \[P(\varphi):=\beta_{1}P_{1}(\gamma_{1}\varphi)+\beta_{2}P_{2}(\gamma_{2}\varphi),\] and \[\widetilde{P}(\varphi):=(P\varphi,f_{\varphi}),\] where \(f_{\varphi}\) is defined in (19). Note these operators depend on the parameter \(\epsilon\), and act on the corresponding weighted spaces defined in terms of \(\epsilon\). Our aim is now to prove that for \(\epsilon>0\) sufficiently small, \[\|(G\circ\widetilde{P})(\varphi)-\varphi\|_{C^{0,\alpha}_{\delta-4}}\leq\frac{ 1}{2}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}. \tag{20}\] From this it follows that \(\|G\circ\widetilde{P}-\operatorname{Id}\|\leq\frac{1}{2}\), so the operator \(G\circ\widetilde{P}\) is invertible. 
Writing \(Q\) for the inverse, we have \(G\circ(\widetilde{P}\circ Q)=\operatorname{Id}\), and so \(\widetilde{P}\circ Q\) is a right inverse for \(G\). Since \(Q\) is constructed via a geometric series, \(\|Q\|\leq 2\). For the norm of \(\widetilde{P}\circ Q\) to be uniformly bounded, it then suffices to show the norm of \(\widetilde{P}\) is uniformly bounded independent of \(\epsilon\): \[\|\widetilde{P}\varphi\|_{C^{4,\alpha}_{\delta}(\operatorname{Bl}_{p}M)\times\overline{\mathfrak{h}}}\] \[=\|(P\varphi,f_{\varphi})\|_{C^{4,\alpha}_{\delta}(\operatorname{Bl}_{p}M)\times\overline{\mathfrak{h}}}\] \[=\|P\varphi\|_{C^{4,\alpha}_{\delta}}+|f_{\varphi}|\] \[\leq\|\beta_{1}P_{1}(\gamma_{1}\varphi)\|_{C^{4,\alpha}_{\delta}}+\|\beta_{2}P_{2}(\gamma_{2}\varphi)\|_{C^{4,\alpha}_{\delta}}+\|\widetilde{P}_{1}(\gamma_{1}\varphi)\|_{C^{4,\alpha}_{\delta}(M_{p})}\] \[\leq\|\beta_{1}\|_{C^{4,\alpha}_{0}}\|P_{1}(\gamma_{1}\varphi)\|_{C^{4,\alpha}_{\delta}(M_{p})}+\|\beta_{2}\|_{C^{4,\alpha}_{0}}\epsilon^{-\delta}\|P_{2}(\gamma_{2}\varphi)\|_{C^{4,\alpha}_{\delta}(\operatorname{Bl}_{0}\mathbb{C}^{n})}+C\|\gamma_{1}\varphi\|_{C^{0,\alpha}_{\delta-4}(M_{p})}\] \[\leq C\left(\|\gamma_{1}\varphi\|_{C^{0,\alpha}_{\delta-4}(M_{p})}+\epsilon^{-(\delta-4)}\|\gamma_{2}\varphi\|_{C^{0,\alpha}_{\delta-4}(\operatorname{Bl}_{0}\mathbb{C}^{n})}\right)\] \[\leq C\|\varphi\|_{C^{0,\alpha}_{\delta-4}}.\] In the fourth and sixth lines we have used the equivalence of norms _(5)_ from Lemma 3.9, as well as the bounds on the norms of the \(\gamma_{j}\) from Lemma 3.11. We also used that \(\|P_{2}\|=\operatorname{O}(\epsilon^{4})\), as we remarked above. Note in particular, \[\|\beta_{2}P_{2}(\gamma_{2}\varphi)\|_{C^{4,\alpha}_{\delta}}\leq C\|\varphi\|_{C^{0,\alpha}_{\delta-4}}. \tag{21}\] It remains to prove (20). We can write \((G\circ\widetilde{P})(\varphi)-\varphi\) as \[\check{L}(\beta_{1}P_{1}(\gamma_{1}\varphi))-\frac{1}{2}\widetilde{X}(\beta_{1}P_{1}(\gamma_{1}\varphi))-\gamma_{1}\ell(f_{\varphi})-\gamma_{1}\varphi \tag{22}\] \[+\check{L}(\beta_{2}P_{2}(\gamma_{2}\varphi))-\frac{1}{2}\widetilde{X}(\beta_{2}P_{2}(\gamma_{2}\varphi))-\gamma_{2}\ell(f_{\varphi})-\gamma_{2}\varphi, \tag{23}\] and so to prove (20), it suffices to estimate each of these expressions separately. The line (22) is supported in \(\mathrm{Bl}_{p}M\backslash\widetilde{B}_{\epsilon^{\underline{a}}}\), and (23) is supported in \(\widetilde{B}_{2\epsilon^{\underline{a}}}\). The estimate for (22) is essentially unchanged from [22, Proposition 20], only we use Proposition 5.1 instead of the unweighted version [22, Proposition 18], so we omit this. However, (23) introduces new challenges, since we are seeking an inverse of the weighted linearisation but have glued in an inverse for the unweighted linearisation on \(\mathrm{Bl}_{0}\mathbb{C}^{n}\). Our task is therefore to estimate (23) on the region \(\widetilde{B}_{2\epsilon^{\underline{a}}}\). Using Lemma 2.12 and defining \(\psi:=\beta_{2}P_{2}(\gamma_{2}\varphi)\), (23) can be written \[\frac{v(\mu)}{w(\mu)}L\psi-\frac{2}{w(\mu)}(\overline{\partial}^{*}\mathcal{D}\psi,\nabla^{1,0}(v(\mu)))\] \[+\frac{1}{w(\mu)}(\mathcal{D}\psi,\mathcal{D}(v(\mu)))+\frac{1}{2}S(\omega)\nabla\left(\frac{v(\mu)}{w(\mu)}\right)\cdot\nabla\psi\] \[+\frac{1}{2}\nabla\Phi_{v,w}(\omega)\cdot\nabla\psi-\frac{1}{2}\widetilde{X}(\psi)-\gamma_{2}\ell(f_{\varphi})-\gamma_{2}\varphi.
\tag{24}\] Setting \(c_{0}:=v(p_{0})/w(p_{0})\), we have \[\frac{v(\mu)}{w(\mu)}L\psi=\left(\frac{v(\mu)}{w(\mu)}-c_{0}\right)L\psi+c_{0 }L\psi.\] We first estimate \[\left\|\left(\frac{v(\mu)}{w(\mu)}-c_{0}\right)L(\beta_{2}P_{2}(\gamma_{2} \varphi))\right\|_{C^{0,\alpha}_{\delta-4}},\] for which we will show the uniform bound \[\left\|\frac{v(\mu)}{w(\mu)}-c_{0}\right\|_{C^{0,\alpha}_{0}\left(\widetilde{ B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{2\underline{a}}. \tag{25}\] By the mean value theorem, it suffices to bound \[\left\|\mu-p_{0}\right\|_{C^{0,\alpha}_{0}\left(\widetilde{B}_{4\epsilon^{ \underline{a}}}\right)}.\] On the region \(\widetilde{B}_{r_{\epsilon}}\), we have \(\mu-p_{0}=\epsilon^{2}\iota_{\epsilon}^{*}\mu_{\eta}\), where \(\iota_{\epsilon}:\widetilde{B}_{r_{\epsilon}}\to\widetilde{D}_{R_{\epsilon}} \subset\mathrm{Bl}_{0}\mathbb{C}^{n}\) is the scaling isomorphism. From this, \[\left\|\mu-p_{0}\right\|_{C^{0,\alpha}_{0}\left(\widetilde{B}_{ r_{\epsilon}}\right)} =\epsilon^{2}\|\mu_{\eta}\|_{C^{0,\alpha}_{0}\left(\widetilde{D}_{ R_{\epsilon}}\right)}\] \[\leq\epsilon^{2}R_{\epsilon}^{2}\|\mu_{\eta}\|_{C^{0,\alpha}_{2} \left(\widetilde{D}_{R_{\epsilon}}\right)}\] \[\leq C\epsilon^{2\underline{a}},\] where we use that \(\mu_{\eta}^{a}=\mu_{\text{Euc}}^{a}+d^{c}g(\widetilde{\xi}_{a})\) is \(\text{O}(|\zeta|^{2})\) to get a uniform bound \(\|\mu_{\eta}\|_{C^{0,\alpha}_{2}(\widetilde{B}_{R_{\epsilon}})}\leq C\). Then on \(\widetilde{B}_{4\epsilon\underline{a}}\backslash\widetilde{B}_{r_{\epsilon}}\), \[\mu-p_{0}=\mu_{\text{Euc}}+d^{c}(\gamma_{1}(z)f(z)+\epsilon^{2}\gamma_{2}(z)g( \epsilon^{-1}z))\] is uniformly \(\text{O}(|z|^{2})\), so there is a uniform bound \[\|\mu-p_{0}\|_{C^{0,\alpha}_{0}(\widetilde{B}_{4\epsilon\underline{a}} \backslash\widetilde{B}_{r_{\epsilon}})}\leq C\epsilon^{2\underline{a}}\] and the bound (25) is achieved. Using this, \[\left\|\left(\frac{v(\mu)}{w(\mu)}-c_{0}\right)L(\beta_{2}P_{2}( \gamma_{2}\varphi))\right\|_{C^{0,\alpha}_{\delta-4}} \leq C\epsilon^{2\underline{a}}\|L(\beta_{2}P_{2}(\gamma_{2} \varphi))\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq C\epsilon^{2\underline{a}}\|\varphi\|_{C^{0,\alpha}_{\delta-4 }},\] where we used (21). Next, consider the term \[\frac{1}{w(\mu)}(\mathcal{D}(\beta_{2}P_{2}(\gamma_{2}\varphi)),\mathcal{D}(v (\mu)))\] from (24). Note that \[\mathcal{D}(v(\mu))=\sum_{a,b}v_{,ab}(\mu)(\widetilde{\xi}_{b})^{\flat}\otimes \widetilde{\xi}_{a}\] and the right hand side has a uniform \(C^{0,\alpha}_{0}\)-bound on \(\text{Bl}_{p}M\) independent of \(\epsilon\). 
Hence \[\left\|\frac{1}{w(\mu)}(\mathcal{D}(\beta_{2}P_{2}(\gamma_{2}\varphi)),\mathcal{D}(v(\mu)))\right\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq C\|\mathcal{D}(\beta_{2}P_{2}(\gamma_{2}\varphi))\|_{C^{0,\alpha}_{\delta-2}}\|\mathcal{D}(v(\mu))\|_{C^{0,\alpha}_{-2}(\widetilde{B}_{4\epsilon^{\underline{a}}})}\] \[\leq C\|\beta_{2}P_{2}(\gamma_{2}\varphi)\|_{C^{4,\alpha}_{\delta}}\left\|\sum_{a,b}v_{,ab}(\mu)(\widetilde{\xi}_{b})^{\flat}\otimes\widetilde{\xi}_{a}\right\|_{C^{0,\alpha}_{-2}(\widetilde{B}_{4\epsilon^{\underline{a}}})}\] \[\leq C\epsilon^{2\underline{a}}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}.\] Similarly for the second term in (24), \[\left\|\frac{2}{w(\mu)}(\overline{\partial}^{*}\mathcal{D}\psi,\nabla^{1,0}(v(\mu)))\right\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq C\|\overline{\partial}^{*}\mathcal{D}\psi\|_{C^{0,\alpha}_{\delta-3}}\|\nabla^{1,0}(v(\mu))\|_{C^{0,\alpha}_{-1}(\widetilde{B}_{4\epsilon^{\underline{a}}})}\] \[\leq C\|\beta_{2}P_{2}(\gamma_{2}\varphi)\|_{C^{4,\alpha}_{\delta}}\left\|\sum_{a}v_{,a}(\mu)\widetilde{\xi}_{a}\right\|_{C^{0,\alpha}_{-1}(\widetilde{B}_{4\epsilon^{\underline{a}}})}\] \[\leq C\epsilon^{2\underline{a}}\|\varphi\|_{C^{0,\alpha}_{\delta-4}},\] where \(\overline{\partial}^{*}:C^{1,\alpha}_{\delta-2}\to C^{0,\alpha}_{\delta-3}\) is seen to have uniformly bounded norm from (1). For the term \(-\frac{1}{2}\widetilde{X}(\psi)\) from (24), \[\|\widetilde{X}(\beta_{2}P_{2}(\gamma_{2}\varphi))\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq \|\widetilde{X}\|_{C^{0,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\left(\|\nabla\beta_{2}\|_{C^{0,\alpha}_{-1}}\|P_{2}(\gamma_{2}(\varphi))\|_{C^{4,\alpha}_{\delta}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}+\|\beta_{2}\|_{C^{0,\alpha}_{0}}\|\nabla(P_{2}(\gamma_{2}\varphi))\|_{C^{0,\alpha}_{\delta-1}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\right)\] \[\leq C\epsilon^{3\underline{a}}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}.\] Next we bound the term \(\frac{1}{2}S(\omega)\nabla\left(\frac{v(\mu)}{w(\mu)}\right)\cdot\nabla\psi\) in (24). Since \(\|S(\omega)\|_{C^{4,\alpha}_{-2}}\leq C\) we have \(\|S(\omega)\|_{C^{4,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{\underline{a}}\). Writing \(u=v/w\), note \(\nabla(u(\mu))=\sum_{a}u_{,a}(\mu)\widetilde{\xi}_{a}\) is uniformly bounded in \(C^{0,\alpha}_{0}\). Hence \[\left\|S(\omega)\nabla\left(\frac{v(\mu)}{w(\mu)}\right)\cdot\nabla(\beta_{2}P_{2}(\gamma_{2}\varphi))\right\|_{C^{0,\alpha}_{\delta-4}}\leq C\epsilon^{\underline{a}}\|\nabla(\beta_{2}P_{2}(\gamma_{2}\varphi))\|_{C^{0,\alpha}_{\delta-1}}\] \[\leq C\epsilon^{\underline{a}}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}.\] For the term \(\nabla\Phi_{v,w}(\omega)\cdot\nabla(\beta_{2}P_{2}(\gamma_{2}\varphi))\) in (24), it is enough to show that \[\|\nabla\Phi_{v,w}(\omega)\|_{C^{0,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{\underline{a}}.\] Lemma 2.13 states that \(\Phi_{v,w}(\omega)\) is a linear combination of terms of two kinds: \(u_{a}(\mu)\Delta\mu^{a}\), and \(u_{ab}(\mu)g(\widetilde{\xi}_{a},\widetilde{\xi}_{b})\), where \(u_{a}\) and \(u_{ab}\) are among finitely many fixed smooth functions on the moment polytope.
We first have \[\nabla(u_{a}(\mu)\Delta\mu^{a})=\sum_{b}u_{a,b}(\mu)\widetilde{\xi}_{b}\Delta\mu^{a}+u_{a}(\mu)\text{Ric}(\widetilde{\xi}_{a},-)^{\#}.\] By Lemma 4.3, \[\|\Delta\mu^{a}\|_{C^{0,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{3\underline{a}}.\] Also, \[\|\text{Ric}(\widetilde{\xi}_{a},-)^{\#}\|_{C^{0,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{\underline{a}}\] by Lemma 3.11. The remaining factors \(u_{a,b}(\mu)\), \(u_{a}(\mu)\) and \(\widetilde{\xi}_{b}\) are uniformly bounded in \(C^{0,\alpha}_{0}\), hence \[\|\nabla(u_{a}(\mu)\Delta\mu^{a})\|_{C^{0,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{\underline{a}}.\] Next, we must estimate \[\nabla(u_{ab}(\mu)g(\widetilde{\xi}_{a},\widetilde{\xi}_{b}))=\sum_{c}u_{ab,c}(\mu)\widetilde{\xi}_{c}g(\widetilde{\xi}_{a},\widetilde{\xi}_{b})+u_{ab}(\mu)(g(\nabla\widetilde{\xi}_{a},\widetilde{\xi}_{b})+g(\widetilde{\xi}_{a},\nabla\widetilde{\xi}_{b})).\] The factors \(u_{ab,c}(\mu)\), \(\widetilde{\xi}_{c}\) and \(g_{\epsilon}(\widetilde{\xi}_{a},\widetilde{\xi}_{b})\) all have uniform \(C^{0,\alpha}_{0,\epsilon}\) bounds, so \[\|u_{ab,c}(\mu)\widetilde{\xi}_{c}g(\widetilde{\xi}_{a},\widetilde{\xi}_{b})\|_{C^{0,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{3\underline{a}}.\] For the term \(u_{ab}(\mu)(g(\nabla\widetilde{\xi}_{a},\widetilde{\xi}_{b})+g(\widetilde{\xi}_{a},\nabla\widetilde{\xi}_{b}))\), the factors \(u_{ab}(\mu)\), \(g\), \(\widetilde{\xi}_{a}\) and \(\widetilde{\xi}_{b}\) have uniform \(C^{0,\alpha}_{0}\)-bounds. The covariant derivatives \(\nabla\widetilde{\xi}_{a}\) are uniformly bounded in \(C^{0,\alpha}_{-1}\) by Remark 3.10. Hence \[\|u_{ab}(\mu)(g(\nabla\widetilde{\xi}_{a},\widetilde{\xi}_{b})+g(\widetilde{\xi}_{a},\nabla\widetilde{\xi}_{b}))\|_{C^{0,\alpha}_{-3}\left(\widetilde{B}_{4\epsilon^{\underline{a}}}\right)}\leq C\epsilon^{2\underline{a}}.\] Next consider the term \(\gamma_{2}\ell(f_{\varphi})\) from (24). By Lemma 3.12, \(\|\ell(f_{\varphi})\|_{C^{0,\alpha}_{0}}\leq C|f_{\varphi}|\) where \(|\cdot|\) is a fixed norm on \(\overline{\mathfrak{h}}\), and by (19) we have \(|f_{\varphi}|\leq C\|\varphi\|_{C^{0,\alpha}_{\delta-4}}\). It follows that \[\|\gamma_{2}\ell(f_{\varphi})\|_{C^{0,\alpha}_{\delta-4}}\leq C\|\gamma_{2}\|_{C^{0,\alpha}_{\delta-4}}\|\ell(f_{\varphi})\|_{C^{0,\alpha}_{0}}\leq Cr_{\epsilon}^{4-\delta}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}.\] In summary, we have reduced estimating (24) to estimating \[c_{0}L(\beta_{2}P_{2}(\gamma_{2}\varphi))-\gamma_{2}\varphi.\] We apply [22, Proposition 18], which we recall is the unweighted analogue of Proposition 5.1, to show the norm of \(L_{\omega}-L_{\epsilon^{2}\eta}\) tends to \(0\) over \(\widetilde{B}_{4\epsilon^{\underline{a}}}\) as \(\epsilon\to 0\), for any \(k\geq 0\) and \(\alpha\in(0,1)\). Here we write \(\epsilon^{2}\eta\) for what is strictly speaking \(\iota^{*}_{\epsilon}(\epsilon^{2}\eta)\). First note this operator vanishes identically on \(\widetilde{B}_{r_{\epsilon}}\), since \(\omega=\epsilon^{2}\eta\) on this region. So it suffices to estimate the norm over the annulus \(\widetilde{B}_{4\epsilon^{\underline{a}}}\backslash\widetilde{B}_{r_{\epsilon}}\).
On the annulus we have \(\epsilon^{2}\eta-\omega=i\partial\overline{\partial}\rho\), where \[\rho:=\gamma_{1}(z)(\epsilon^{2}g(\epsilon^{-1}z)-f(z)).\] Using the fact that \(f\) is \(\operatorname{O}(|z|^{4})\) on \(M\) near \(p\), and \(g\) is \(\operatorname{O}(|\zeta|^{4-2n})\) on \(\operatorname{Bl}_{0}\mathbb{C}^{n}\) near \(\infty\), we can proceed as in the proof of Lemma 4.2 to show that \(\epsilon^{2}g(\epsilon^{-1}z)-f(z)\) is \(\operatorname{O}(|z|^{3})\) on \(\widetilde{B}_{4\epsilon\underline{a}}\backslash\widetilde{B}_{r_{\epsilon}}\) for \(\tau>0\) small, so that \[\|\rho\|_{C^{k,\alpha}_{\delta}(\widetilde{B}_{4\epsilon\underline{a}} \backslash\widetilde{B}_{r_{\epsilon}})}\leq C\epsilon^{\underline{a}}\to 0\] as \(\epsilon\to 0\). It follows from [22, Proposition 18] that the operator norm of \[L_{\omega}-L_{\epsilon^{2}\eta}:C^{4,\alpha}_{\delta}(\widetilde{B}_{4 \epsilon\underline{a}})\to C^{0,\alpha}_{\delta-4}(\widetilde{B}_{4 \epsilon\underline{a}})\] tends to \(0\) as \(\epsilon\to 0\). Note the bound on \(\rho\) also implies the estimates in Lemma 3.11 and (18) hold with \(\epsilon^{2}\eta\) in place of \(g_{\epsilon}\). We have further reduced to estimating \[c_{0}L_{\epsilon^{2}\eta}(\beta_{2}P_{2}(\gamma_{2}\varphi))-\gamma_{2}\varphi.\] Now, \[c_{0}L_{\epsilon^{2}\eta}(\beta_{2}P_{2}(\gamma_{2}\varphi))- \gamma_{2}\varphi =(\nabla^{1,0})^{*}\overline{\partial}^{*}\overline{\partial}(P_ {2}(\gamma_{2}\varphi)\nabla^{1,0}\beta_{2})\] \[+(\nabla^{1,0})^{*}\overline{\partial}^{*}(\overline{\partial} \beta_{2}\otimes\nabla^{1,0}(P_{2}(\gamma_{2}\varphi)))\] \[-2(\overline{\partial}^{*}\mathcal{D}(P_{2}(\gamma_{2}\varphi)),\nabla^{1,0}\beta_{2})+(\mathcal{D}\beta_{2},\mathcal{D}(P_{2}(\gamma_{2} \varphi))),\] where all gradients and adjoints are with respect to \(\epsilon^{2}\eta\). Note by the estimate (18), the right hand side satisfies \(\|\text{RHS}\|_{C^{0,\alpha}_{\delta-4}(\widetilde{B}_{4\epsilon\underline{a} })}\leq C\frac{1}{|\log\epsilon|}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}\). For example, \[\|(\mathcal{D}\beta_{2},\mathcal{D}(P_{2}(\gamma_{2}\varphi)))\|_ {C^{0,\alpha}_{\delta-4}(\widetilde{B}_{4\epsilon\underline{a}})} \leq C\|\overline{\partial}\nabla^{1,0}\beta_{2}\|_{C^{0, \alpha}_{-2}}\|\mathcal{D}(P_{2}(\gamma_{2}\varphi))\|_{C^{0,\alpha}_{\delta- 2}(\widetilde{B}_{4\epsilon\underline{a}})}\] \[\leq C\|\nabla^{1,0}\beta_{2}\|_{C^{0,\alpha}_{-1}}\|P_{2}(\gamma_ {2}\varphi)\|_{C^{4,\alpha}_{\delta}(\widetilde{B}_{4\epsilon\underline{a}})}\] \[\leq C\frac{1}{|\log\epsilon|}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}.\] To summarise, we have shown that the \(C^{0,\alpha}_{\delta-4}\)-norm of (24) is bounded by \(c_{\epsilon}\|\varphi\|_{C^{0,\alpha}_{\delta-4}}\), where \(c_{\epsilon}\to 0\) as \(\epsilon\to 0\). Since (24) was equal to \((G\circ\widetilde{P})(\varphi)-\varphi\), we have shown the inequality (20) holds, which was our aim. ## 7. Completing the proof In this section we will finish the proof of Proposition 3.6, which we recall implies Theorem 1.1, our main result. The proof is by a contraction mapping argument, which will deform our approximate solution \(\omega_{\epsilon}\) to \(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon}\) solving the equation \[S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon})- \frac{1}{2}\nabla_{\epsilon}\ell_{\epsilon}(f_{\epsilon})\cdot\nabla_{ \epsilon}\varphi_{\epsilon}=\ell_{\epsilon}(f_{\epsilon}),\] for some \(f_{\epsilon}\in\overline{\mathfrak{h}}\). 
We replace \(f_{\epsilon}\) with \(f_{\epsilon}+s\) where \(s:=S_{v,w}(\omega)\in\overline{\mathfrak{h}}\) generates the extremal field on \(M\). So we are trying to solve \[S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon})- \frac{1}{2}\nabla_{\epsilon}\ell_{\epsilon}(f_{\epsilon}+s)\cdot\nabla_{ \epsilon}\varphi_{\epsilon}=\ell_{\epsilon}(f_{\epsilon}+s).\] We also require that \(f_{\epsilon}\) satisfies the expansion \[f_{\epsilon}=c_{n}\epsilon^{2n-2}\overline{\mu_{H}^{\#}(p)}+f_{\epsilon}^{\prime}\] from Proposition 3.6 where \(c_{n}\) is a fixed constant and \(|f_{\epsilon}^{\prime}|\leq C\epsilon^{\kappa}\) for some \(\kappa>2n-2\) and \(C>0\) independent of \(\epsilon\). We recall \(\mu_{H}^{\#}\) is the composition of the moment map \(\mu_{H}\) with the linear isomorphism \(\mathfrak{h}^{*}\to\mathfrak{h}\) determined by the inner product (9), and \(\overline{\mu_{H}^{\#}(p)}\) denotes a fixed lift of \(\mu_{H}^{\#}(p)\) to \(\overline{\mathfrak{h}}\). As in [1] and [2], we will first modify \(\omega\) so that it matches with the Burns-Simanca metric to higher order near \(p\). To do this, we need: **Lemma 7.1**.: _There exists a \(T^{\prime}\)-invariant smooth function \(\Gamma:M_{p}\to\mathbb{R}\) and an element \(h\in\overline{\mathfrak{h}}\) satisfying the equation_ \[\frac{v(\mu)}{w(\mu)}\mathcal{D}_{v}^{*}\mathcal{D}\Gamma=h|_{M_{p}}.\] _Moreover, \(h=c_{n}\overline{\mu_{H}^{\#}(p)}\in\overline{\mathfrak{h}}\) for some \(c_{n}\in\mathbb{R}\), and \(\Gamma\) has the asymptotic behaviour_ \[\Gamma(z)=-|z|^{4-2n}+\mathrm{O}(|z|^{5-2n})\] _near \(p\) when \(n>2\), and_ \[\Gamma(z)=\log|z|+\mathrm{O}(|z|^{\tau})\] _for all \(0<\tau<4/5\) when \(n=2\)._ Proof.: We first treat the case \(n>2\). Let \(G:M_{p}\to\mathbb{R}\) be a smooth \(T^{\prime}\)-invariant function equal to \(|z|^{4-2n}\) on \(B_{1/2}\backslash\{p\}\), and \(0\) on \(M\backslash B_{1}\). The highest order term of \(\frac{v}{w}\mathcal{D}_{v}^{*}\mathcal{D}\) is \(\frac{v}{w}\Delta^{2}\), and \(\Delta\) is asymptotic to \(\Delta_{\mathrm{Euc}}\) near \(p\) in the sense that \(\Delta-\Delta_{\mathrm{Euc}}=H^{j\bar{k}}\partial_{j}\partial_{\bar{k}}\) where \(H^{j\bar{k}}\) is \(\mathrm{O}(|z|)\); see the proof of Lemma 4.2. From this, \(\Delta^{2}-\Delta_{\mathrm{Euc}}^{2}\) defines a bounded map \(C^{k,\alpha}_{\delta}(M_{p})\to C^{k-4,\alpha}_{\delta-3}(M_{p})\) for any weight \(\delta\). Since \(|z|^{4-2n}\) is the fundamental solution for \(\Delta^{2}_{\text{Euc}}\) in Euclidean space, it follows that \[\frac{v}{w}\mathcal{D}^{*}_{v}\mathcal{D}G\in C^{k-4,\alpha}_{1-2n}(M_{p}).\] By Lemma 6.4, there exist \(\varphi\in C^{4,\alpha}_{5-2n}(M_{p})^{T^{\prime}}\) and \(h\in\overline{\mathfrak{h}}\) such that \[\frac{v}{w}\mathcal{D}^{*}_{v}\mathcal{D}(\varphi-G)=h|_{M_{p}}.\] By elliptic regularity \(\varphi\) is smooth, and we have \[\Gamma:=\varphi-G=-|z|^{4-2n}+\operatorname{O}(|z|^{5-2n})\] near \(p\). From this expansion, \(\Gamma\) is a distributional solution to the equation \[\frac{v(\mu)}{w(\mu)}\mathcal{D}^{*}_{v}\mathcal{D}\Gamma=h-c_{n}\delta_{p}\] on \(M\), where \(c_{n}\) is a constant depending only on \(n\) and the weights \(v\), \(w\). We next treat the \(n=2\) case; let \(G:M_{p}\to\mathbb{R}\) be a smooth \(T^{\prime}\)-invariant function equal to \(\log|z|\) on \(B_{1/2}\backslash\{p\}\), and \(0\) on \(M\backslash B_{1}\). 
In this case, the difference \(\Delta-\Delta_{\text{Euc}}=H^{j\bar{k}}\partial_{j}\partial_{\bar{k}}\), where \(H^{j\bar{k}}\) is now \(\operatorname{O}(|z|^{\tau})\) for all \(0<\tau<4/5\); see again the proof of Lemma 4.2. The function \(-\log|z|\) is the fundamental solution of \(\Delta^{2}_{\text{Euc}}\) on \(\mathbb{C}^{2}\), so \[\frac{v}{w}\mathcal{D}^{*}_{v}\mathcal{D}G\in C^{k-4,\alpha}_{-4+\tau}(M_{p}).\] Now, the same proof as for the \(n>2\) case in Lemma 6.4 in fact shows that \[(\varphi,f)\mapsto\frac{v(\mu)}{w(\mu)}\mathcal{D}^{*}_{v}\mathcal{D}\varphi-f\] on \(C^{4,\alpha}_{\delta^{\prime}}(M_{p})^{T^{\prime}}\times\overline{\mathfrak{h}}\) has a right inverse for any \(\delta^{\prime}\in(0,1)\) (as opposed to \(\delta^{\prime}\in(-1,0)\)). Hence there exist \(\varphi\in C^{4,\alpha}_{\tau}(M_{p})^{T^{\prime}}\) and \(h\in\overline{\mathfrak{h}}\) such that \[\frac{v}{w}\mathcal{D}^{*}_{v}\mathcal{D}(\varphi-G)=h|_{M_{p}}.\] Defining \(\Gamma:=\varphi-G\) once again, we have by the expansion \(\Gamma=\log|z|+\operatorname{O}(|z|^{\tau})\) that \(\Gamma\) is a distributional solution to the equation \[\frac{v(\mu)}{w(\mu)}\mathcal{D}^{*}_{v}\mathcal{D}\Gamma=h-c_{2}\delta_{p}\] on \(M\), where \(c_{2}\) is a constant depending only on the weights \(v\), \(w\). For any \(n\geq 2\), denote by \(\operatorname{pr}:\overline{\mathfrak{h}}\to\mathfrak{h}\) the natural projection; we wish to show that \(\operatorname{pr}(h)=c_{n}\mu_{H}^{\#}(p)\). To see this, first note \(\langle\mu_{H},\operatorname{pr}(h)\rangle=h+\operatorname{const}\). Recalling \(\mu_{H}\) is normalised to have integral \(0\) with respect to \(w(\mu)\omega^{n}\), for any \(\xi\in\mathfrak{h}\), \[\langle\xi,\operatorname{pr}(h)\rangle_{\mathfrak{h}} =\int_{M}\langle\mu_{H},\xi\rangle\langle\mu_{H},\operatorname{pr}( h)\rangle w(\mu)\omega^{n}\] \[=\int_{M}\langle\mu_{H},\xi\rangle h\,w(\mu)\omega^{n}\] \[=\int_{M}\langle\mu_{H},\xi\rangle\left(\frac{v(\mu)}{w(\mu)} \mathcal{D}_{v}^{*}\mathcal{D}\Gamma+c_{n}\delta_{p}\right)w(\mu)\omega^{n}\] \[=c_{n}\langle\mu_{H}(p),\xi\rangle\] \[=c_{n}\langle\xi,\mu_{H}^{\#}(p)\rangle_{\mathfrak{h}},\] where in the fourth line we used that \(\langle\mu_{H},\xi\rangle\in\ker\mathcal{D}\). On \(M_{p}\) define the metric \(\widetilde{\omega}\) by \[\widetilde{\omega}=\omega+\epsilon^{2n-2}i\partial\overline{\partial}\Gamma.\] For \(n>2\), \(\widetilde{\omega}\) takes the form \[\widetilde{\omega} =i\partial\overline{\partial}\left(|z|^{2}+\epsilon^{2n-2} \Gamma(z)+f(z)\right)\] \[=i\partial\overline{\partial}\left(|z|^{2}-\epsilon^{2}|\epsilon^ {-1}z|^{4-2n}+\epsilon^{2n-2}\widetilde{\Gamma}(z)+f(z)\right),\] near \(p\), where \(\Gamma(z)=-|z|^{4-2n}+\widetilde{\Gamma}(z)\). Recall the Burns-Simanca metric satisfies \(\eta=i\partial\overline{\partial}(|\zeta|^{2}+g(\zeta))\), where \(g(\zeta)=-|\zeta|^{4-2n}+\operatorname{O}(|\zeta|^{3-2n})\) as \(|\zeta|\to\infty\). Let us write explicitly \[g=-|\zeta|^{4-2n}+\widetilde{g},\] where \(\widetilde{g}\) is \(\operatorname{O}(|\zeta|^{3-2n})\). Gluing \(\widetilde{\omega}\) to the pullback of \(\epsilon^{2}\eta\) from \(\widetilde{D}_{R_{\epsilon}}\) to \(\widetilde{B}_{r_{\epsilon}}\), we produce the metric \[\widetilde{\omega}_{\epsilon} :=i\partial\overline{\partial}\left(|z|^{2}-\epsilon^{2}| \epsilon^{-1}z|^{4-2n}+\gamma_{1}(z)(\epsilon^{2n-2}\widetilde{\Gamma}(z)+f( z))\right.\] \[\left.+\gamma_{2}(z)\epsilon^{2}\widetilde{g}(\epsilon^{-1}z) \right). 
\tag{26}\] Outside of \(\widetilde{B}_{2r_{\epsilon}}\) we have \(\widetilde{\omega}_{\epsilon}=\widetilde{\omega}\), and on \(\widetilde{B}_{r_{\epsilon}}\) we have \(\widetilde{\omega}_{\epsilon}=\iota_{\epsilon}^{*}(\epsilon^{2}\eta)\). Furthermore, \[\widetilde{\omega}_{\epsilon}=\omega_{\epsilon}+i\partial\overline{\partial}( \epsilon^{2n-2}\gamma_{1}(z)\Gamma(z)).\] In the case \(n=2\), we define \(\widetilde{\omega}_{\epsilon}\) in the same way: \[\widetilde{\omega}_{\epsilon}=i\partial\overline{\partial}\left(|z|^{2}+ \epsilon^{2}\log|\epsilon^{-1}z|+\gamma_{1}(z)(\epsilon^{2}\widetilde{\Gamma}( z)+f(z))\right), \tag{27}\] where \(\Gamma(z)=\log|z|+\widetilde{\Gamma}(z)=\log|z|+\operatorname{O}(|z|^{\tau})\) for \(\tau>0\) small, and \(g(\zeta)=\log|\zeta|\) so \(\widetilde{g}=0\). The equation we wish to solve is \[S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon})- \nabla_{\epsilon}\ell_{\epsilon}(f_{\epsilon}+s)\cdot\nabla_{\epsilon}\varphi _{\epsilon}=\ell_{\epsilon}(f_{\epsilon}+s),\] where \(\varphi_{\epsilon}\) is a \(T^{\prime}\)-invariant smooth Kahler potential and \(f_{\epsilon}\in\overline{\mathfrak{h}}\). As done previously we will now drop the ever cumbersome \(\epsilon\), favouring \[S_{v,w}(\omega+i\partial\overline{\partial}\varphi)-\nabla\ell(f+s)\cdot \nabla\varphi=\ell(f+s).\] For the original Kahler metric on \(M\) we will write \(\omega^{\prime}\) when needed. Expanding the scalar curvature \(\tilde{S}:=S_{v,w}\) at \(\omega_{\epsilon}\), we can rewrite this as \[\check{L}(\varphi)-\frac{1}{2}\widetilde{X}(\varphi)-\ell(f)=\ell(s)-\check{S} (\omega)-\check{Q}(\varphi)+\frac{1}{2}\nabla\ell(f)\cdot\nabla\varphi.\] In order to incorporate the metric \(\widetilde{\omega}_{\epsilon}\) later on, we define \[\varphi^{\prime}:=\varphi-\epsilon^{2n-2}\gamma_{1}\Gamma\] and \[f^{\prime}:=f-\epsilon^{2n-2}h,\] where \(\Gamma\) and \(h\) are from Lemma 7.1. The equation can then be written \[\check{L}(\varphi^{\prime})-\frac{1}{2}\widetilde{X}(\varphi^{ \prime})-\ell(f^{\prime}) =\ell(s)-\check{S}(\omega)-\check{Q}(\varphi)+\frac{1}{2}\nabla \ell(f)\cdot\nabla\varphi\] \[-\check{L}(\epsilon^{2n-2}\gamma_{1}\Gamma)+\frac{1}{2}X( \epsilon^{2n-2}\gamma_{1}\Gamma)+\epsilon^{2n-2}\ell(h).\] We now use the operator \(P\) from Proposition 6.1, which is a right-inverse for the operator on the left-hand side of this equation, to rewrite this as a fixed point problem \((\varphi^{\prime},f^{\prime})=\mathcal{N}(\varphi^{\prime},f^{\prime})\), where \[\mathcal{N}(\varphi^{\prime},f^{\prime}):=P\left( \ell(s)-\check{S}(\omega)-\check{Q}(\epsilon^{2n-2}\gamma_{1} \Gamma+\varphi^{\prime})\right.\] \[\qquad+\frac{1}{2}\nabla(\epsilon^{2n-2}\ell(h)+\ell(f^{\prime}) )\cdot\nabla(\epsilon^{2n-2}\gamma_{1}\Gamma+\varphi^{\prime})\] \[\qquad\left.-\check{L}(\epsilon^{2n-2}\gamma_{1}\Gamma)+\frac{1}{ 2}X(\epsilon^{2n-2}\gamma_{1}\Gamma)+\epsilon^{2n-2}\ell(h)\right).\] **Lemma 7.2**.: _For \(n>2\) let \(\delta\in(4-2n,0)\), and for \(n=2\) let \(\delta<0\) be sufficiently close to \(0\). 
Then there exist constants \(c_{0},\epsilon_{0}>0\) such that for all positive \(\epsilon<\epsilon_{0}\), if \(\varphi_{1}^{\prime},\varphi_{2}^{\prime}\in(C_{\delta}^{4,\alpha})^{T^{\prime}}\) satisfy \(\|\varphi_{j}^{\prime}\|_{C_{2}^{4,\alpha}}<c_{0}\) and \(f_{1}^{\prime},f_{2}^{\prime}\in\mathfrak{h}\) satisfy \(|f_{j}^{\prime}|<c_{0}\), then_ \[\|\mathcal{N}(\varphi_{1}^{\prime},f_{1}^{\prime})-\mathcal{N}(\varphi_{2}^{ \prime},f_{2}^{\prime})\|_{C_{\delta}^{4,\alpha}}\leq\frac{1}{2}(\|\varphi_{1 }^{\prime}-\varphi_{2}^{\prime}\|_{C_{\delta}^{4,\alpha}}+|f_{1}^{\prime}-f_{2} ^{\prime}|).\] Proof.: For \(n>2\), the operator \(P\) has norm uniformly bounded independent of \(\epsilon\), hence we must estimate the \(C_{\delta-4}^{0,\alpha}\)-norm of \[\check{Q}(\varphi_{2})-\check{Q}(\varphi_{1})+\frac{1}{2}\left(\nabla\ell(f_{1 })\cdot\nabla\varphi_{1}-\nabla\ell(f_{2})\cdot\nabla\varphi_{2}\right).\] By Lemma 5.2, \[\|\check{Q}(\varphi_{2})-\check{Q}(\varphi_{1})\|_{C_{\delta-4}^{0,\alpha}} \leq C(\|\varphi_{1}\|_{C_{2}^{4,\alpha}}+\|\varphi_{2}\|_{C_{2}^{4,\alpha}}) \|\varphi_{1}-\varphi_{2}\|_{C_{\delta}^{4,\alpha}}.\] Now, \(\varphi_{j}=\varphi_{j}^{\prime}+\epsilon^{2n-2}\gamma_{1}\Gamma\), and since \(\Gamma\) is \(\mathrm{O}(|z|^{4-2n})\), \[\|\epsilon^{2n-2}\gamma_{1}\Gamma\|_{C_{2}^{4,\alpha}}\leq C\|\epsilon^{2n-2 }\Gamma\|_{C_{2}^{4,\alpha}(M\setminus B_{r\epsilon})}\leq C\epsilon^{2n-2}r _{\epsilon}^{4-2n}r_{\epsilon}^{-2}\to 0\] as \(\epsilon\to 0\). Hence, choosing \(c_{0}\) and \(\epsilon_{0}\) sufficiently small, we can ensure \[C(\|\varphi_{1}\|_{C_{2}^{4,\alpha}}+\|\varphi_{2}\|_{C_{2}^{4,\alpha}})\leq \frac{1}{4}.\] Noting that \(\|\varphi_{1}-\varphi_{2}\|_{C^{4,\alpha}_{\delta}}=\|\varphi_{1}^{\prime}-\varphi _{2}^{\prime}\|_{C^{4,\alpha}_{\delta}}\), this implies \[\|\check{Q}(\varphi_{2})-\check{Q}(\varphi_{1})\|_{C^{0,\alpha}_{\delta-4}}\leq \frac{1}{4}\|\varphi_{1}^{\prime}-\varphi_{2}^{\prime}\|_{C^{4,\alpha}_{ \delta}}.\] For the remaining term, we use the uniform estimate \(\|\ell(f)\|_{C^{1,\alpha}_{0}}\leq C|f|\) from Lemma 3.12: \[\|\nabla\ell(f_{1})\cdot\nabla\varphi_{1}-\nabla\ell(f_{2})\cdot \nabla\varphi_{2}\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq \|\nabla\ell(f_{1})\cdot(\nabla\varphi_{1}-\nabla\varphi_{2})\|_ {C^{0,\alpha}_{\delta-4}}+\|(\nabla\ell(f_{1})-\nabla\ell(f_{2}))\cdot\nabla \varphi_{2}\|_{C^{0,\alpha}_{\delta-4}}\] \[\leq C|f_{1}|\cdot\|\varphi_{1}-\varphi_{0}\|_{C^{1,\alpha}_{\delta- 2}}+C|f_{1}-f_{2}|\cdot\|\varphi_{2}\|_{C^{1,\alpha}_{\delta-2}}.\] Again, choosing \(c_{0}\) sufficiently small, \[\|\nabla\ell(f_{1})\cdot\nabla\varphi_{1}-\nabla\ell(f_{2})\cdot\nabla\varphi _{2}\|_{C^{0,\alpha}_{\delta-4}}\leq\frac{1}{4}(\|\varphi_{1}^{\prime}-\varphi _{2}^{\prime}\|_{C^{4,\alpha}_{\delta}}+|f_{1}^{\prime}-f_{2}^{\prime}|).\] When \(n=2\), the norm of \(P\) is only bounded by \(C\epsilon^{\delta}\). However, since \(\Gamma\) is \(\mathrm{O}(\log|z|)\) we have \[\epsilon^{\delta}\|\epsilon^{2}\gamma_{1}\Gamma\|_{C^{4,\alpha}_{2}}\leq C \epsilon^{2+\delta}|\log r_{\epsilon}|r_{\epsilon}^{-2}\to 0\] as \(\epsilon\to 0\), so the proof still goes through in this case. **Proposition 7.3**.: _For \(n>2\) let \(\delta\in(4-2n,0)\) be sufficiently close to \(4-2n\), and for \(n=2\) let \(\delta<0\) be sufficiently close to \(0\). 
Then there exists \(C>0\) independent of \(\epsilon\) such that_ \[\|\mathcal{N}(0,0)\|_{C^{4,\alpha}_{\delta}}\leq Cr_{\epsilon}^{4-\delta} \epsilon^{\theta},\] _where \(\theta:=0\) for \(n>2\) and \(\theta:=\delta\) for \(n=2\)._ Proof.: Since the norm of \(P\) is uniformly bounded by \(C\epsilon^{\theta}\), it is enough to bound the \(C^{0,\alpha}_{\delta-4}\)-norm of \[F:= \ell(s)-\check{S}(\omega)-\check{Q}(\epsilon^{2n-2}\gamma_{1} \Gamma)+\frac{1}{2}\nabla(\epsilon^{2n-2}\ell(h))\cdot\nabla(\epsilon^{2n-2} \gamma_{1}\Gamma)\] \[-\check{L}(\epsilon^{2n-2}\gamma_{1}\Gamma)+\frac{1}{2}X(\epsilon ^{2n-2}\gamma_{1}\Gamma)+\epsilon^{2n-2}\ell(h)\] by \(Cr_{\epsilon}^{4-\delta}\). We first estimate \(F\) in the region \(\widetilde{B}_{r_{\epsilon}}\). Here the terms involving \(\gamma_{1}\) vanish, and we are left with \[F=\ell(s)-\check{S}(\omega)+\epsilon^{2n-2}\ell(h).\] By Lemma 3.12, \[\|\ell(s)\|_{C^{0,\alpha}_{0}}+\|\ell(h)\|_{C^{0,\alpha}_{0}}\leq C,\] which gives \[\|\ell(s)\|_{C^{0,\alpha}_{\delta-4}(\widetilde{B}_{r_{\epsilon}})}+\|\ell(h) \|_{C^{0,\alpha}_{\delta-4}(\widetilde{B}_{r_{\epsilon}})}\leq Cr_{\epsilon}^{ 4-\delta}\to 0\] as \(\epsilon\to 0\). For the term \(\check{S}(\omega)\), note that \(\omega=\epsilon^{2}\eta\) is scalar flat in this region, so we only need to estimate \(\Phi_{v,w}(\omega)\). The term \(\Phi_{v,w}(\omega)\) is a linear combination of terms of the form \(u_{a}(\mu)\Delta\mu^{a}\) and \(u_{ab}(\mu)g(\widetilde{\xi}_{a},\widetilde{\xi}_{b})\), where the \(u_{a}\) and \(u_{ab}\) are among finitely many fixed smooth functions on the moment polytope \(P\). From Sections 3.4 and 4 we have \(C_{0}^{0,\alpha}\)-bounds for \(u_{a}(\mu)\), \(u_{ab}(\mu)\), \(g\), \(\widetilde{\xi}_{a}\), and \(\Delta\mu_{\epsilon}^{a}\). Using these bounds, we get \[\|\Phi_{v,w}(\omega)\|_{C^{0,\alpha}_{\delta-4}(\widetilde{B}_{r})}\leq Cr_{ \epsilon}^{4-\delta}\to 0\] as \(\epsilon\to 0\). Next we estimate \(F\) on the region \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{2r_{\epsilon}}\). Here \(F\) reduces to \[\check{Q}_{\omega^{\prime}}(\epsilon^{2n-2}\Gamma)+\epsilon^{4n-4}\frac{1}{2} \nabla_{\omega^{\prime}}h\cdot\nabla_{\omega^{\prime}}\Gamma,\] where \(\omega^{\prime}\) is the original metric on \(M\). 
First note for \(n\geq 2\) that \(\Gamma\) is \(\operatorname{O}(|z|^{4-2n-\tau})\) for all \(\tau>0\) small, so \[\|\Gamma\|_{C^{4,\alpha}_{2}(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{ 2r_{\epsilon}})}\leq Cr_{\epsilon}^{4-2n-\tau}r_{\epsilon}^{-2}=Cr_{\epsilon} ^{2-2n-\tau}.\] In particular \(\|\epsilon^{2n-2}\Gamma\|_{C^{4,\alpha}_{2}(\operatorname{Bl}_{p}M\backslash \widetilde{B}_{2r_{\epsilon}})}\leq CR_{\epsilon}^{2-2n}r_{\epsilon}^{-\tau}\to 0\) as \(\epsilon\to 0\) for \(\tau\) sufficiently small, and we can apply Lemma 5.2 to get \[\|\check{Q}_{\omega^{\prime}}(\epsilon^{2n-2}\Gamma)\|_{C^{0, \alpha}_{\delta-4}(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{2r_{ \epsilon}})}\] \[\leq C\|\epsilon^{2n-2}\Gamma\|_{C^{4,\alpha}_{2}(\operatorname{Bl}_{p }M\backslash\widetilde{B}_{2r_{\epsilon}})}\|\epsilon^{2n-2}\Gamma\|_{C^{4, \alpha}_{\delta}(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{2r_{\epsilon} })}\] \[\leq C\epsilon^{4n-4}r_{\epsilon}^{8-4n-2\tau}r_{\epsilon}^{-2}r_{ \epsilon}^{-\delta}\] \[= Cr_{\epsilon}^{4-\delta}\epsilon^{4n-4}r_{\epsilon}^{2-4n-2\tau}\] \[\leq Cr_{\epsilon}^{4-\delta},\] where in the last line we used the explicit definition \(r_{\epsilon}:=\epsilon^{\frac{2n-1}{2n+1}}\) to conclude \(\epsilon^{4n-4}r_{\epsilon}^{2-4n-2\tau}=\epsilon^{\frac{4n-6}{2n+1}-2\tau \frac{2n-1}{2n+1}}\to 0\) as \(\epsilon\to 0\) for small \(\tau\). For the term \(\epsilon^{4n-4}\nabla_{\omega^{\prime}}h\cdot\nabla_{\omega^{\prime}}\Gamma\), \[\|\epsilon^{4n-4}\nabla_{\omega^{\prime}}h\cdot\nabla_{\omega^{ \prime}}\Gamma\|_{C^{0,\alpha}_{\delta-4}(\operatorname{Bl}_{p}M\backslash \widetilde{B}_{2r_{\epsilon}})} \leq C\epsilon^{4n-4}\|h\|_{C^{1,\alpha}_{\delta-3}(\operatorname{ Bl}_{p}M\backslash\widetilde{B}_{2r_{\epsilon}})}\] \[\leq C\epsilon^{4n-4}r_{\epsilon}^{4-2n-\tau}r_{\epsilon}^{3-\delta}\] \[\leq Cr_{\epsilon}^{4-\delta}.\] Here we used Lemma 3.12 to bound \(\|h\|_{C^{1,\alpha}_{0}}\), and the definition of \(r_{\epsilon}\) to conclude \(\epsilon^{4n-4}r_{\epsilon}^{7-2n-\tau}\leq r_{\epsilon}^{4}\). Lastly we estimate \(F\) on the annulus \(A_{\epsilon}:=\widetilde{B}_{2r_{\epsilon}}\backslash\widetilde{B}_{r_{ \epsilon}}\). Here, note that since \(\widetilde{\omega}_{\epsilon}=\omega_{\epsilon}+i\partial\overline{\partial} (\epsilon^{2n-2}\gamma_{1}(z)\Gamma(z))\), we have \[\check{S}(\omega)+\check{L}(\epsilon^{2n-2}\gamma_{1}\Gamma)+\check{Q}( \epsilon^{2n-2}\gamma_{1}\Gamma)=\check{S}(\widetilde{\omega}).\] So \(F\) is given by \[\ell(s)-\check{S}(\widetilde{\omega})+\frac{1}{2}\epsilon^{4n-4}\nabla\ell(h) \cdot\nabla(\gamma_{1}\Gamma)+\frac{1}{2}X(\epsilon^{2n-2}\gamma_{1}\Gamma)+ \epsilon^{2n-2}\ell(h).\] The same estimates as for the region \(\widetilde{B}_{r_{\epsilon}}\) will bound the terms \(\ell(s)\) and \(\epsilon^{2n-2}\ell(h)\), and the terms \(\frac{1}{2}\epsilon^{4n-4}\nabla\ell(h)\cdot\nabla(\gamma_{1}\Gamma)\) and \(\frac{1}{2}X(\epsilon^{2n-2}\gamma_{1}\Gamma)\) are bounded similarly as \(\epsilon^{4n-4}\nabla h\cdot\nabla\Gamma\) was on the region \(\operatorname{Bl}_{p}M\backslash\widetilde{B}_{r_{\epsilon}}\). The only term remaining to estimate is \(\check{S}(\widetilde{\omega})\). 
On the annulus \(A_{\epsilon}\) we can write \(\widetilde{\omega}_{\epsilon}=i\partial\overline{\partial}\left(|z|^{2}+\rho\right)\), where \[\rho:=-\epsilon^{2}|\epsilon^{-1}z|^{4-2n}+\gamma_{1}(z)(\epsilon^{2n-2} \widetilde{\Gamma}(z)+f(z))+\gamma_{2}(z)\epsilon^{2}\widetilde{g}(\epsilon^{ -1}z)\] for \(n>2\), and \[\rho:=\epsilon^{2}\log|\epsilon^{-1}z|+\gamma_{1}(z)(\epsilon^{2}\widetilde{ \Gamma}(z)+f(z))\] for \(n=2\). For each \(t\in[0,1]\) we define the metric \(\omega_{t}:=i\partial\overline{\partial}\left(|z|^{2}+t\rho\right)\) on the annulus, so \(\omega_{1}=\widetilde{\omega}_{\epsilon}\) and \(\omega_{0}\) is the Euclidean metric. To each \(\omega_{t}\) we associate a moment map as follows. First, for \(\omega_{1}=\widetilde{\omega}_{\epsilon}\) we take the moment map \(\mu_{1}:=\mu_{\epsilon}+d^{c}(\epsilon^{2n-2}\gamma_{1}\Gamma)\). For the Euclidean metric \(\omega_{0}\), we proceed as follows. First, recall the original metric \(\omega^{\prime}\) on \(M\) can be written \(\omega^{\prime}=i\partial\overline{\partial}\left(|z|^{2}+f(z)\right)\) near \(p\), where \(f\in\mathrm{O}(|z|^{4})\). Using the function \(\beta_{2}\) from the proof of Lemma 6.5, we define \[\omega^{\prime}_{s}:=\omega^{\prime}-si\partial\overline{\partial}(\beta_{2}( z)f(z))\] on all of \(M\). Since \(f\) is \(\mathrm{O}(|z|^{4})\), \(\omega^{\prime}_{s}\) is a Kahler metric in the same class as \(\omega\) for \(\epsilon\) sufficiently small, and \[\mu^{\prime}_{s}=\mu^{\prime}-sd^{c}(\beta_{2}(z)f(z))\] is a moment map for \(\omega^{\prime}_{s}\) with image in \(P\). Taking \(s=1\) and restricting to \(A_{\epsilon}\), we produce a moment map \(\mu_{0}\) for \(\omega_{0}\) on the annulus whose image lies in \(P\). By taking the convex combination of moment maps for \(\omega_{0}\) and \(\omega_{1}\), we have a moment map \(\mu_{t}\) for every \(\omega_{t}\) on the annulus with image lying in \(P\). It follows the functions \(v(\mu_{t})\) and \(w(\mu_{t})\) are well defined, and we can consider the operator \(S_{v,w}(\omega_{t})\) and its linearisation for all \(t\). By the mean value theorem, there exists \(t\in[0,1]\) such that \[\check{S}(\widetilde{\omega}_{\epsilon})=\check{S}(\omega_{0})+\check{L}_{ \omega_{t}}\rho\] on \(A_{\epsilon}\). We estimate each term on the right. First, since \(\omega_{0}\) is scalar flat, the term \(\check{S}(\omega_{0})\) is a sum of terms of the form \(u_{\alpha}(\mu_{0})\Delta_{0}\mu_{0}\) and \(u_{ab}(\mu_{0})g_{0}(\xi_{a},\xi_{b})\). These are fixed smooth functions independent of \(\epsilon\), so their \(C^{0,\alpha}_{\delta-4}\) norms over the region \(A_{\epsilon}\) are bounded by \(Cr^{4-\delta}_{\epsilon}\). For \(n>2\), \[\|\rho\|_{C^{4,\alpha}_{\delta}(A_{\epsilon})}\leq Cr^{-\delta}_{\epsilon}(\epsilon^{2n-2}r^{4-2n}_{\epsilon}+\epsilon^{2n-2}r^{5-2 n}_{\epsilon}+r^{4}_{\epsilon}+\epsilon^{2}R^{3-2n}_{\epsilon}) \tag{28}\] \[\leq Cr^{-\delta}_{\epsilon}(r^{3}_{\epsilon}+r^{4}_{\epsilon}+r^{4}_{ \epsilon})\] \[\leq Cr^{3-\delta}_{\epsilon}.\] Note to obtain sufficiently sharp bounds in the third line, we used the formula \(r_{\epsilon}:=\epsilon^{\frac{2n-1}{2n+1}}\). For example, writing \(\epsilon^{2n-2}r^{4-2n}_{\epsilon}=r^{a}_{\epsilon}\) for some \(a\), we solve \(a=5-\frac{2n+1}{2n-1}>3\) for all \(n>2\), so \(\epsilon^{2n-2}r^{4-2n}_{\epsilon}\leq r^{3}_{\epsilon}\). 
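Spelled out, that exponent computation is elementary: since \(r_{\epsilon}=\epsilon^{\frac{2n-1}{2n+1}}\), equivalently \(\epsilon=r_{\epsilon}^{\frac{2n+1}{2n-1}}\), we have \[\epsilon^{2n-2}r_{\epsilon}^{4-2n}=r_{\epsilon}^{\frac{(2n-2)(2n+1)}{2n-1}+4-2n}=r_{\epsilon}^{\left((2n+1)-\frac{2n+1}{2n-1}\right)+4-2n}=r_{\epsilon}^{5-\frac{2n+1}{2n-1}},\] and \(5-\frac{2n+1}{2n-1}>3\) is equivalent to \(2n+1<2(2n-1)\), which holds for all \(n>2\); since \(0<r_{\epsilon}<1\), a larger exponent gives a smaller power, so indeed \(\epsilon^{2n-2}r_{\epsilon}^{4-2n}\leq r_{\epsilon}^{3}\).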
For \(n=2\), \[\|\rho\|_{C^{4,\alpha}_{\delta}(A_{\epsilon})}\leq Cr^{-\delta}_{\epsilon}(\epsilon^{2}|\log R_{\epsilon}|+\epsilon^{2}r^{ \tau}_{\epsilon}+r^{4}_{\epsilon}) \tag{29}\] \[\leq Cr^{-\delta}_{\epsilon}(r^{3}_{\epsilon}+r^{4}_{\epsilon}+r^{4}_{ \epsilon})\] \[\leq Cr^{3-\delta}_{\epsilon},\] where we used \(\epsilon^{2}=r^{10/3}_{\epsilon}\) and \(\tau<4/5\) so \(\epsilon^{2}r^{\tau}_{\epsilon}\leq r^{4}_{\epsilon}\) for \(\tau\) sufficiently close to \(4/5\). Similarly, \(\|\rho\|_{C^{4,\alpha}_{\gamma}(A_{\epsilon})}\leq Cr^{3-2}_{\epsilon}=Cr_{\epsilon}\) for \(n\geq 2\). Writing \(\check{L}_{0}\) for the linearisation of the weighted scalar curvature operator at \(\omega_{0}\), \[\check{L}_{\omega_{t}}\rho=\check{L}_{0}\rho+(\check{L}_{\omega_{t}}\rho-\check {L}_{0}\rho).\] We now claim that the analogue of Proposition 5.1 applies for the metric \(\omega_{0}\) on the region \(A_{\epsilon}\) in place of \(\omega_{\epsilon}\). To see this is rather straightforward; in particular one needs estimates \(\|g_{0}\|_{C^{k,\alpha}_{0,\epsilon}(A_{\epsilon})}\leq C\) independent of \(\epsilon\), but this is even easier than the estimates for \(g_{\epsilon}\), since the metric is fixed independent of \(\epsilon\). The estimates on the metrics \(g_{\epsilon}\) were used to prove all of the corresponding moment map estimates in Section 4, so these also hold for \(\omega_{0}\). In turn, the moment map estimates were applied to prove Proposition 5.1, and so the Proposition holds for \(\omega_{0}\) in place of \(\omega_{\epsilon}\). Applying this, \[\|\check{L}_{\omega_{t}}\rho-\check{L}_{0}\rho\|_{C^{0,\alpha}_{S-4}(A_{ \epsilon})}\leq C\|\rho\|_{C^{4,\alpha}_{2}(A_{\epsilon})}\|\rho\|_{C^{4, \alpha}_{\delta}(A_{\epsilon})}\leq Cr_{\epsilon}^{4-\delta}\] on \(A_{\epsilon}\). The final term to estimate is \(\check{L}_{0}(\rho)\). The leading order term of \(\check{L}_{0}\) is \(D:=\frac{v(\mu_{0})}{w(\mu_{0})}\Delta_{0}^{2}\), which annihilates the leading order term \(-\epsilon^{2n-2}|z|^{4-2n}\) (or \(\epsilon^{2}\log|\epsilon^{-1}z|\) when \(n=2\)) of \(\rho\). Writing \(\widetilde{\rho}\) for \(\rho\) minus its leading order term, \[\|\check{L}_{0}(\rho)\|_{C^{0,\alpha}_{\delta-4}(A_{\epsilon})}\leq\|(\check{ L}_{0}-D)\rho\|_{C^{0,\alpha}_{\delta-4}(A_{\epsilon})}+\|D\widetilde{\rho}\|_{C^{0, \alpha}_{\delta-4}(A_{\epsilon})}.\] Now, from Lemma 2.12, \[(\check{L}_{0}-D)\rho=-\frac{2}{w(\mu_{0})}(\overline{\partial}^{*}_{0} \mathcal{D}_{0}\rho,\nabla^{1,0}_{0}(v(\mu_{0})))_{0}+D^{\prime}\rho,\] where \(D^{\prime}\) is a differential operator of order \(2\). For the first term on the right-hand side, \[\nabla^{1,0}_{0}(v(\mu_{0}))=\sum_{a}v_{,a}(\mu_{0})\xi_{a}.\] This has uniformly bounded \(C^{0,\alpha}_{1}\)-norm on \(A_{\epsilon}\), since \(\xi_{a}\) vanishes at \(p\) and the norm on this region can be computed as the weighted norm on \(M_{p}\). 
It follows that \[\left\|-\frac{2}{w(\mu_{0})}(\overline{\partial}^{*}_{0}\mathcal{ D}_{0}\rho,\nabla^{1,0}_{0}(v(\mu_{0})))_{0}\right\|_{C^{0,\alpha}_{\delta-4}(A_{ \epsilon})}\] \[\leq C\|\overline{\partial}^{*}_{0}\mathcal{D}_{0}\rho\|_{C^{0, \alpha}_{\delta-5}(A_{\epsilon})}\sum_{a}\|\xi_{a}\|_{C^{0,\alpha}_{1}(A_{ \epsilon})}\] \[\leq C\|\rho\|_{C^{3,\alpha}_{\delta-2}(A_{\epsilon})}\] \[\leq Cr_{\epsilon}^{2-\delta}r_{\epsilon}^{3}\] \[\leq Cr_{\epsilon}^{4-\delta}.\] Since \(D^{\prime}\) has order \(2\) with coefficients having bounded \(C^{k,\alpha}_{0}\)-norms, \[\|D^{\prime}\rho\|_{C^{0,\alpha}_{\delta-4}(A_{\epsilon})}\leq\|\rho\|_{C^{2, \alpha}_{\delta-2}(A_{\epsilon})}\leq Cr_{\epsilon}^{4-\delta}.\] Lastly, to estimate \(D\widetilde{\rho}\), we use parts of the bounds in (28) and (29) to get \[\|D\widetilde{\rho}\|_{C^{0,\alpha}_{\delta-4}(A_{\epsilon})} \leq\|\widetilde{\rho}\|_{C^{4,\alpha}_{\delta}(A_{\epsilon})}\] \[\leq Cr_{\epsilon}^{-\delta}r_{\epsilon}^{4}\] \[= Cr_{\epsilon}^{4-\delta}.\] We have bound all terms in \(F\) by \(Cr_{\epsilon}^{4-\delta}\), so the proof is complete. We can at last prove Proposition 3.6, which we recall implies Theorem 1.1. Proof of Proposition 3.6.: Given \(\delta\) so that Proposition 7.3 holds, we have \[\left\|\mathcal{N}_{\epsilon}(0,0)\right\|_{C_{\delta}^{4,\alpha}}\leq C_{1}r_{ \epsilon}^{4-\delta}\epsilon^{\theta}\] for all \(\epsilon>0\) sufficiently small, for a fixed constant \(C_{1}>0\). Define the set \[S_{\epsilon}:=\{(\varphi,f)\in(C_{\delta}^{4,\alpha})^{T^{\prime}}\times \overline{b}:\left\|\varphi\right\|_{C_{\delta}^{4,\alpha}}+|f|\leq 2C_{1}r_{ \epsilon}^{4-\delta}\epsilon^{\theta}\}.\] For \(\epsilon\) sufficiently small, we will have \(2C_{1}r_{\epsilon}^{4-\delta}\epsilon^{\theta}<c_{0}\), where \(c_{0}\) is the constant of Lemma 7.2. Hence for \((\varphi,f)\in S_{\epsilon}\), \[\left\|\mathcal{N}_{\epsilon}(\varphi,f)\right\|_{C_{\delta}^{4, \alpha}} \leq\left\|\mathcal{N}_{\epsilon}(0,0)\right\|_{C_{\delta}^{4, \alpha}}+\left\|\mathcal{N}_{\epsilon}(\varphi,f)-\mathcal{N}_{\epsilon}(0,0) \right\|_{C_{\delta}^{4,\alpha}}\] \[\leq C_{1}r_{\epsilon}^{4-\delta}\epsilon^{\theta}+\frac{1}{2}( \left\|\varphi\right\|_{C_{\delta}^{4,\alpha}}+|f|)\] \[\leq 2C_{1}r_{\epsilon}^{4-\delta}\epsilon^{\theta}.\] It follows that \(\mathcal{N}_{\epsilon}\) maps the set \(S_{\epsilon}\) to itself, and by Lemma 7.2, \(\mathcal{N}_{\epsilon}\) is a contraction on \(S_{\epsilon}\). By the contraction mapping theorem, there exists a unique fixed point \((\varphi_{\epsilon},f_{\epsilon})\) of \(\mathcal{N}_{\epsilon}\) on the set \(S\). By construction, this fixed point solves the approximate weighted extremal equation \[S_{v,w}(\omega_{\epsilon}+i\partial\overline{\partial}\varphi_{\epsilon})- \nabla_{\epsilon}\ell_{\epsilon}(s+f_{\epsilon}+\epsilon^{2n-2}h)\cdot\nabla_ {\epsilon}\varphi_{\epsilon}=\ell_{\epsilon}(s+f_{\epsilon}+\epsilon^{2n-2}h),\] where we recall \(h\) is the fixed function from Lemma 7.1. Thus, relating to Proposition 3.6, we define \(h_{p,\epsilon}:=s+f_{\epsilon}+\epsilon^{2n-2}h\), and we have solved the required equation; note the solution is smooth, by elliptic regularity. All that remains to be seen is that \(h_{p,\epsilon}\) has the expansion \[h_{p,\epsilon}=s+\epsilon^{2n-2}c_{n}\overline{\mu_{H}^{\#}(p)}+h_{p,\epsilon }^{\prime},\] where \(h_{p,\epsilon}^{\prime}\) satisfies \(|h_{p,\epsilon}^{\prime}|\leq C\epsilon^{\kappa}\) for some \(\kappa>2n-2\). 
By construction of \(h\) we have \(h=c_{n}\overline{\mu_{H}^{\#}(p)}\), and \(h_{p,\epsilon}^{\prime}=f_{\epsilon}\) satisfies \[|f_{\epsilon}|\leq 2C_{1}r_{\epsilon}^{4-\delta}\epsilon^{\theta}\leq C \epsilon^{\frac{2n-1}{2n+1}(4-\delta)+\theta}.\] We can choose \(\delta\) as close to \(4-2n\) as required so that \(\frac{2n-1}{2n+1}(4-\delta)+\theta>2n-2\). This completes the proof. ## 8. Examples We end by applying our theorem to specific choices of weight functions. Perhaps the best new result is in the extremal Sasaki case, where we can genuinely produce new extremal Sasaki metrics. 1. **Extremal metrics.** Our first example is that of constant weight functions \(v\) and \(w\). In this setting, our main theorem 1.1 recovers Szekelyhidi's refinement of the Arezzo-Pacard-Singer theorem, that the blowup of an extremal manifold at a relatively stable fixed point of the extremal field admits an extremal metric in classes making the exceptional divisor small [1, 2]. 2. **Extremal Sasaki metrics.** For our next example, we prove Corollary 1.3 on extremal Sasaki metrics. Proof of Corollary 1.3.: Consider the weight functions \[v:=(a+\ell_{\xi})^{-n-1},\quad w:=(a+\ell_{\xi})^{-n-3}.\] With this choice, a \((v,w)\)-weighted extremal metric on \(M\) in the class \(c_{1}(L)\) corresponds to an extremal Sasaki metric on the unit sphere bundle \(S\) of \(L^{*}\). Suppose we blow up \(M\) at a relatively stable fixed point \(p\) of the torus action and the extremal field. For \(\epsilon>0\) sufficiently small, by Theorem 1.1 there exists a weighted extremal metric in the class \([\pi^{*}L-\epsilon E]\), where \(E\) is the exceptional divisor of the blowup. If \(\epsilon\) is rational, the class \([\pi^{*}L-\epsilon E]\) will be that of a \(\mathbb{Q}\)-line bundle. The weighted extremal property is invariant under rescalings of the metric. Thus, if we rescale the class \([\pi^{*}L-\epsilon E]\) by an integer \(k\) such that \(k\epsilon\in\mathbb{Z}\), there will exist a \((v,w)\)-extremal metric in the class \([k\pi^{*}L-k\epsilon E]\), which is the first Chern class of an ample line bundle. It follows that the unit sphere bundle \(S_{\epsilon,k}\) of \((k\pi^{*}L-k\epsilon E)^{*}\) admits an extremal Sasaki metric. 3. **Kahler-Ricci solitons and \(\mu\)-cscK metrics.** Finally, let us consider Kahler-Ricci solitons. We mentioned in the introduction that our result can never produce a Kahler-Ricci soliton on the blowup. However, we can produce a weighted extremal metric with weights \(v=w=e^{\langle\xi,-\rangle}\). This is almost a \(\mu\)-cscK metric in the sense of Inoue [10], the only obstruction is that the extremal field for this metric might not equal \(\xi\). However, it is a small deformation of \(\xi\), since the weighted extremal metric on the blowup is a small deformation of \(\omega_{\epsilon}\). It would be interesting to know when this extremal field is equal to \(\xi\) itself, so that the blowup is genuinely \(\mu\)-cscK.
2307.09576
Feedback models in galaxy simulations and probing their impact by cosmological hydrodynamic simulations
Feedback effects generated by supernovae (SNe) and active galactic nuclei (AGNs) are pivotal in shaping the evolution of galaxies and their present-day structures. However, our understanding of the specific mechanisms operating at galactic scales, as well as their impact on circum-galactic medium (CGM) and intergalactic medium (IGM), remains incomplete. Galaxy formation simulations encounter challenges in resolving sub-parsec scales, necessitating the implementation of subgrid models to capture the physics occurring at smaller scales. In this article, we provide an overview of the ongoing efforts to develop more physically grounded feedback models. We discuss the pursuit of pushing simulation resolution to its limits in galaxy simulations and the rigorous testing of galaxy formation codes through participation in the AGORA code comparison project. Additionally, we delve into techniques for investigating the impact of feedback using cosmological hydrodynamic simulations, specifically through Lya absorption and CGM/IGM tomography. Furthermore, we outline our future research directions within this field and highlight the progress made by comparing our simulation results with observational data.
Kentaro Nagamine
2023-07-13T06:00:42Z
http://arxiv.org/abs/2307.09576v1
Feedback models in galaxy simulations and probing their impact by cosmological hydrodynamic simulations ###### Abstract Feedback effects generated by supernovae (SNe) and active galactic nuclei (AGNs) are pivotal in shaping the evolution of galaxies and their present-day structures. However, our understanding of the specific mechanisms operating at galactic scales, as well as their impact on circum-galactic medium (CGM) and intergalactic medium (IGM), remains incomplete. Galaxy formation simulations encounter challenges in resolving sub-parsec scales, necessitating the implementation of subgrid models to capture the physics occurring at smaller scales. In this article, we provide an overview of the ongoing efforts to develop more physically grounded feedback models. We discuss the pursuit of pushing simulation resolution to its limits in galaxy simulations and the rigorous testing of galaxy formation codes through participation in the AGORA code comparison project. Additionally, we delve into techniques for investigating the impact of feedback using cosmological hydrodynamic simulations, specifically through Ly\(\alpha\) absorption and CGM/IGM tomography. Furthermore, we outline our future research directions within this field and highlight the progress made by comparing our simulation results with observational data. cosmology: theory, galaxies: formation, galaxies: evolution, galaxies: ISM, galaxies: high-redshift, hydrodynamics 2022 Kentaro Nagamine] Kentaro Nagamine\({}^{1,2,3}\) 2023 119-122 IAUS 373 T. Wong & W.-T. Kim ## 1 Introduction Feedback processes from supernovae (SNe) and active galactic nuclei (AGNs) have a significant impact on the regulation of galaxy formation and evolution. The prevailing consensus suggests that AGN feedback primarily acts to suppress star formation in massive galaxies at lower redshifts, whereas SN feedback predominantly affects star formation in low-mass galaxies at higher redshifts. As a result, this interplay between feedback mechanisms contributes to the observed peak in the stellar-to-halo mass relation (Behroozi et al. 2013). High-redshift galaxies serve as excellent testbeds for examining feedback models due to the numerous intriguing physical processes taking place within them. In Figure 1, we present a schematic diagram encapsulates some of these processes. At high redshifts, low-metallicity gas streams into dark matter halos through narrow, cold flows, providing ample fuel for star formation (Keres et al. 2005; Wright et al. 2021). However, as we transition towards lower redshifts (\(z\lesssim 2\)), gas accretion shifts to the hot mode, and prominent cold streams diminishes in simulations (Faucher-Giguere et al. 2011; Nelson et al. 2016). The cosmic star formation rate density (SFRD) exhibits a broad peak around \(z\simeq 3-5\)(e.g., Nagamine et al. 2000; Nagamine et al. 2004, 2006; Kistler et al. 2009; Madau & Dickinson 2014, and references therein), with the rising SFRD at high redshifts driven by the active formation of dark matter halos and galaxies through gravitational instability (Schaye et al. 2010). Observations employing instruments like ALMA have provided valuable insights into the emission lines expected from high-redshift galaxies, including Ly\(\alpha\), [C ii], [O iii] lines (e.g., Smit et al. 2018; Hashimoto et al. 2018, 2019). These observations suggest the presence of very early onset of star formation at \(z\sim 15\). 
The escape fraction (\(f_{\rm esc}\)) of ionizing and ultraviolet (UV) photons stands as a critical physical parameter that profoundly influences the radiative characteristics of high-redshift galaxies. However, estimating \(f_{\rm esc}\) observationally is challenging, and only a limited number of approximate measurements have been made. Hence, it is desirable to directly predict \(f_{\rm esc}\) for different types of galaxies using hydrodynamic simulations of galaxy formation (e.g., Cen 2003; Razoumov & Sommer-Larsen 2006; Gnedin et al. 2008; Wise & Cen 2009; Yajima et al. 2017). To make reliable predictions of \(f_{\rm esc}\), accurate computations of interstellar medium (ISM) structure and, thus, the attainment of high resolution are indispensable. Furthermore, understanding the intricate details of \(f_{\rm esc}\) holds significant importance in determining whether the reionization of the universe favors the "Early" or "Late" scenarios (e.g., Finkelstein et al. 2019; Naidu et al. 2020). By conducting in-depth studies on the escape fraction, we can gain valuable insights into the processes contributing to the ionization state of the universe and the timing of reionization events. Such investigations provide crucial information for comprehending the complex interplay between galaxies, their radiation, and the overall evolution of the early universe. ## 2 Feedback models in galaxy formation simulations In early cosmological hydrodynamic simulations, SN feedback was often simplified by injecting thermal energy on large scales (\(>\) kpc) (Cen & Ostriker 1992; Katz 1992; Katz et al. 1996; Cen & Ostriker 1999). However, this approach faced challenges in low-resolution simulations, as the injected thermal energy would rapidly dissipate through radiation due to the inability to resolve the detailed Sedov-Taylor phase of each SN or collective superbubble. This issue, known as the overcooling problem, posed a significant hurdle. To overcome the overcooling problem, effective models of SN feedback have been developed, employing various strategies within galaxy formation simulations. These strategies include: (i) Ignoring and bypassing unresolved scales; (ii) Scaling up the energy dynamics to a resolvable scale by considering cumulative energies; or (iii) Modeling physics on unresolved scales via subgrid models. As examples of method (i), the "delayed cooling" model temporarily ignores the cooling after a supernova event to enhance the impact of thermal feedback (Thacker & Couchman 2000; Stinson et al. 2006). The "constant velocity wind" model of Springel & Hernquist (2003) stochastically kicks gas particles and disables hydro forces until the wind particles exit the galaxies, within a smoothed particle hydrodynamics (SPH) code. In method (ii), the "stochastic thermal feedback" model increases the temperature of neighboring fluid elements by a certain value, \(\Delta T\) (Kay et al. 2003; Dalla Vecchia & Schaye 2012), so that the subsequent evolution of the hot bubble can be solved by a hydro solver with efficient thermal feedback. However, the choice of \(\Delta T\) remains somewhat arbitrary and uncertain. Method (iii) includes the "multiphase ISM model," where a single SPH particle is treated as a multiphase gas, and energy exchange between the hot and cold phases is accounted for using a subgrid equilibrium model (Yepes et al. 1997; Springel & Hernquist 2003; Keller et al. 2014). 
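Returning briefly to method (ii), the snippet below is a minimal, purely illustrative sketch of a stochastic \(\Delta T\) heating step as it might appear in an SPH code: each neighbouring gas particle is heated by a fixed \(\Delta T\) with a probability chosen so that the injected energy matches the available SN energy in expectation. The particle-data layout, the default \(\Delta T\), and the function name are assumptions made for this sketch and do not reproduce any specific published prescription.

```python
import numpy as np

# Physical constants (cgs); the setup below is a toy, not a production scheme.
K_B = 1.380649e-16      # Boltzmann constant [erg/K]
M_H = 1.6726e-24        # hydrogen mass [g]
GAMMA = 5.0 / 3.0
MU = 0.6                # mean molecular weight of ionized gas (assumption)

def stochastic_thermal_feedback(u, m, neighbor_idx, e_sn,
                                delta_T=10**7.5, rng=None):
    """Toy sketch of stochastic thermal SN feedback (method (ii) above).

    u            : specific internal energies of all gas particles [erg/g]
    m            : gas particle masses [g]
    neighbor_idx : integer indices of the star particle's gas neighbours
    e_sn         : SN energy available for this event [erg]
    delta_T      : temperature jump given to a heated neighbour [K]
    """
    rng = rng or np.random.default_rng()
    idx = np.asarray(neighbor_idx)
    # Specific internal energy corresponding to a temperature jump delta_T
    du_heat = K_B * delta_T / ((GAMMA - 1.0) * MU * M_H)   # [erg/g]
    # Energy needed to heat every neighbour deterministically
    e_full = np.sum(m[idx]) * du_heat
    # Heating probability per neighbour: expected injected energy = e_sn
    p_heat = min(1.0, e_sn / e_full)
    heated = rng.random(len(idx)) < p_heat
    u[idx[heated]] += du_heat
    return u, int(heated.sum())
```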
Another approach, as part of method (iii), involves injecting the terminal momentum of a single SN explosion based on the Sedov-Taylor solution (Kimm & Cen, 2014; Hopkins et al., 2018). For further discussions on different feedback treatments, refer to Nagamine (2018) and Oku et al. (2022). In Shimizu et al. (2019), we employed a combination of the delayed cooling model and kinetic feedback using the Sedov-Taylor self-similar solution within the GADGET3-Osaka code. Figure 2 presents an example of testing different SN feedback models. In the fiducial model (K30T70), we injected 30% of the SN energy as kinetic energy and 70% as thermal energy, following approaches in previous studies (e.g., Chevalier, 1974; Durier & Dalla Vecchia, 2012). To ensure the effectiveness of thermal feedback, cooling was temporarily disabled for the neighboring particles that received the thermal feedback energy. The gas distribution image in the face-on view of the "No-feedback" (No-FB) run exhibits a highly clumpy, cold, and dense gas distribution, with a noticeable absence of hot gas above the disk. As the fraction of thermal energy injection increases, the spiral arms become more diffuse compared to the Fiducial run. The "Stochastic thermal" (Stc-TH) and "Stochastic constant wind velocity" (Sto-CW) runs also display clumpy knots within the disk. However, the feedback effects above the disk are more pronounced in these runs, resulting in the presence of hot outflowing gas that excessively enriches the CGM. When cooling is activated in the "Cool-on" run, the gas heated by feedback rapidly Figure 1: Schematic illustration of key physical processes in high-redshift galaxies. Gas inflow occurs via cold streams with \(T\sim 10^{4}\,\)K, leading to the condensation of the gas into dense clouds (\(T\sim 10^{2}-10^{3}\,\)K) through radiative cooling. Subsequent star formation gives rise to massive stars that ionize the surrounding ISM, creating H ii regions. The UV radiation from massive stars impinges on the ISM surface, leading to the formation of photo-dissociation regions (PDRs), which emit infrared lines like [C ii] as reprocessed radiation. These emissions from high-\(z\) galaxies have been computed using cosmological zoom-in hydrodynamic simulations (e.g., Arata et al., 2019; Katz et al., 2022; Pallottini et al., 2022). In addition, prominent Ly\(\alpha\) emissions are also observed from high-\(z\) galaxies. After several million years, the massive stars die, resulting in the ejection of gas as galactic outflows. cools down and remains confined within the disk. Consequently, the Cool-on run exhibits similar disk features to the No-FB run. The Osaka feedback model presented in Shimizu et al. (2019) offers a valuable comparison of various SN feedback treatments. This model successfully achieves self-regulation of star formation and naturally generates galactic outflows. However, it still contained some unphysical treatments, such as the temporary disabling of cooling for effective thermal feedback. In Oku et al. (2022), we revisited the concept of single SN remnant (SNR) and superbubble, drawing inspiration from earlier studies (Chevalier, 1974; Weaver et al., 1977; Tomisaka & Ikeuchi, 1986; Ostriker & McKee, 1988; Martizzi et al., 2015; Kim & Ostriker, 2015; Kim et al., 2017), and investigated the metallicity dependence of the terminal moment of the SN shell. 
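As a concrete, highly simplified illustration of the momentum-based variant of method (iii), the sketch below distributes an unresolved SN remnant's terminal momentum radially among the gas neighbours of a star particle. The normalization \(p_{0}\) and the density/metallicity power-law slopes are placeholders of roughly the right order of magnitude, not the calibrated values of the studies cited above, and the equal-share weighting ignores the solid-angle (e.g. Voronoi) weighting discussed in the next paragraph.

```python
import numpy as np

M_SUN = 1.989e33        # g
KM_S = 1.0e5            # cm/s

def terminal_momentum(n_H, Z_rel, p0=3.0e5 * M_SUN * KM_S,
                      a_n=-0.14, a_Z=-0.12):
    """Terminal (snowplow-phase) momentum of one SN remnant [g cm/s].

    n_H   : ambient hydrogen number density [cm^-3]
    Z_rel : metallicity in solar units (floored to avoid divergence at Z=0)
    p0, a_n, a_Z : illustrative normalization and power-law slopes.
    """
    return p0 * n_H**a_n * max(Z_rel, 1e-3)**a_Z

def momentum_feedback(vel, m, pos, neighbor_idx, star_pos, n_H, Z_rel):
    """Distribute the terminal momentum radially among the neighbours."""
    idx = np.asarray(neighbor_idx)
    p_term = terminal_momentum(n_H, Z_rel)
    r = pos[idx] - star_pos                       # vectors star -> neighbour
    r_hat = r / np.linalg.norm(r, axis=1, keepdims=True)
    dp = (p_term / len(idx)) * r_hat              # equal momentum share
    vel[idx] += dp / m[idx, None]                 # kick each neighbour
    return vel
```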
Using the Eulerian hydrodynamic code ATHENA++, we extended the analytic solution for the SNR shell-formation time by (Kim & Ostriker, 2015) to incorporate the effect of metallicity. Additionally, we obtained an analytic solution for the formation time of the superbubble shell. We found a universal scaling relations for the temporal evolution of momentum and radius for a superbubble when scaled by their values at the shell-formation time. Building upon these findings, we developed a SN feedback model based on the ATHENA++ simulation results. This involved employing Voronoi tessellation around each star particle and integrating it into the GADGET3-Osaka SPH code. We examined the mass/energy/metal loading factors and found that our stochastic thermal feedback model generated galactic outflows capable of transporting metals above the galactic plane while exhibiting a modest suppression of star formation. Incorporating additional mechanical feedback further suppressed star formation and improved Figure 2: An example of testing SN feedback models in isolated galaxy simulations, showing the projected gas density, the mass-weighted temperature, and the metallicity from top to bottom rows, respectively (Shimizu et al., 2019). Within each row, the top panels show a face-on view, while the lower panel presents an edge-on view. the agreement between simulation results and observations of the Kennicutt-Schmidt relation, considering the uncertainties in the observed data. We argued that both thermal and mechanical feedback are crucial in the SN feedback model of galaxy evolution, particularly in SPH simulations where individual SN bubbles remain unresolved. Some simulations have made significant progress in pushing the resolution limits by focusing on low-mass galaxies. For example, Hu (2019) employed the GADGET-3 SPH simulation to study SN feedback in a dwarf galaxy residing in a dark matter halo with a virial mass of \(M_{\rm vir}=10^{10}\,M_{\odot}\). They achieved a remarkable resolution of \(m_{\rm gas}=1\,M_{\odot}\), along with 0.3 pc for gravitational softening length and SPH smoothing length. Despite this high resolution, certain assumptions were still necessary, such as determining the fraction of energy given as kinetic energy and determining the number of SPH particles that receive the feedback energy. Nonetheless, they successfully simulated the formation of superbubbles with sizes of a few hundred parsecs and investigated their breakout from the galactic disk. Similarly, Ma et al. (2020) conducted simulations of dwarf galaxy formation during the reionization epoch using cosmological zoom-in simulations with the FIRE-2 GIZMO. They focused on a halo with a virial mass of \(M_{\rm vir}=3.7\times 10^{10}\,M_{\odot}\) and achieved a mass resolution of \(m_{\rm gas}=100\,M_{\odot}\) and a spatial resolution of approximately one parsec. They examined the location of star formation within these high-redshift dwarf galaxies in relation to the superbubbles walls and observed significant spatio-temporal variations in the escape fraction of ionizing photons. These findings are in line with previous studies by Wise and Cen (cf. 2009); Kimm and Cen (cf. 2014). As simulations continue to improve in mass resolution, approaching a "star-by-star" level, it becomes necessary to devise methods for stochastically sampling the initial mass function (IMF) for the star formation model. Several studies, including Ploeckinger et al. (2014); Hu (2019); Hirai et al. 
(2021), have explored this approach, but further investigation is needed to understand its impact on feedback processes as well as the overall galaxy evolution. ## 3 The AGORA code comparison project Conducting code comparison projects is indeed an important approach to test and improve galaxy formation codes. Several notable projects have been undertaken in this regard, including the Santa Barbara cluster comparison project (Frenk et al., 1999), the Aquila project (Scannapieco et al., 2012), the nIFTy project (Knebe et al., 2015), and the AGORA project (Kim et al., 2014, 2016; Roca-Fabrega et al., 2021). These initiatives bring together different research groups to systematically compare their simulation results, exchange ideas, identify strengths and weaknesses, and foster improvements in galaxy formation modeling. The Santa Barbara cluster comparison project played a crucial role in highlighting the diverse results obtained from different hydrodynamic schemes and the issue of spurious entropy generation in SPH codes. This project had a profound impact on the development of new SPH schemes that could better resolve shocks, such as the density-independent scheme (Saitoh and Makino, 2013) and more general formulations based on Lagrangian-based derivations (Springel and Hernquist, 2002; Hopkins, 2013). These improved SPH schemes offer enhanced stability and shock resolution compared to traditional versions. The incorporation of these new schemes has led to significant improvements in the fidelity of SPH simulations, addressing some of the earlier challenges and limitations associated with spurious entropy generation. The Aquila comparison project focused on investigating code-to-code variations in galactic properties at \(z=0\), including stellar mass, size, morphology, and gas content. The project concluded that, due to different feedback prescriptions employed in the simulations, the models were not yet capable of uniquely predicting galactic properties, even when the assembly history of dark matter halos was the same. This highlights the importance of refining and calibrating feedback models to achieve more accurate and consistent predictions. The AGORA project (Assembling Galaxies of Resolved Anatomy) (1) was designed to enable a more controlled environment for galaxy formation simulations. The project brought together multiple simulation codes, encompassing SPH, AMR, and moving mesh methods, and aimed to achieve consistent results by implementing common astrophysics setups. To establish a common baseline for comparison, the AGORA project initiated rigorous calibration steps that included adopting a common star formation recipe and utilizing the same Grackle cooling module (Smith et al., 2017) across all participating codes. With these standardized astrophysics setup, the AGORA project demonstrated more consistent behaviors among the different simulation codes. Kim et al. (2016) concluded that modern high-resolution galaxy formation simulations are primarily influenced by the input physics, such as feedback prescriptions, rather than intrinsic differences in numerical schemes. Building upon these findings, Roca-Fabrega et al. (2021) extended the comparison to cosmological zoom-in hydro simulations, and seven contemporary astrophysical simulation codes (ART-I, ENZO, RAMSES, CHANG, GADGET-3, GEAR, and GIZMO) were compared. 
The comparison process involved four systematic calibration steps, starting from a simple adiabatic run without cooling and star formation, and gradually incorporating cooling, heating, and star formation in subsequent steps. In the final step, each code was tasked to reproduce a stellar mass of \(\sim 10^{9}\,M_{\odot}\) at \(z=4\) within a halo that would grow to \(10^{12}\,M_{\odot}\) by \(z=0\), employing code-specific SN feedback recipes. With a physical resolution of \(\lesssim 100\,\)pc at \(z=4\), the participating codes demonstrated a general agreement on gas and stellar properties. However, interesting differences emerged in the temperature and chemical enrichment of the CGM due to variations in feedback treatments, as illustrated in Fig. 3, and Strawn et al. (2023, in preparation). These results emphasized the need for further refinement and constraint of SN and AGN feedback models through comprehensive comparisons with a wide range of observations.
Figure 3: Comparison of projected (density-square-weighted) metallicity of seven different codes from \(z=8\) to \(z=4\). Adapted from Fig. 18 of Roca-Fabrega et al. (2021).
Overall, the AGORA project provides a valuable framework for comparing simulation codes, promoting a deeper understanding of their similarities, differences, and areas of improvement. It highlighted the importance of standardized astrophysics setups and the ongoing development of more accurate and constrained feedback models for advancing our understanding of galaxy formation and evolution. ## 4 Probing the impact of feedback by cosmological simulations It is crucial to complement studies of feedback in isolated galaxies and zoom-in simulations with large-scale cosmological hydrodynamic simulations. These simulations offer the advantage of larger box sizes and a broader sample of galaxies, enabling researchers to investigate important galaxy statistics such as the galaxy stellar mass/luminosity functions and the stellar-to-halo-mass ratio (see the contribution by R. Somerville in these proceedings). In addition to galaxies, we would also like to probe the distribution of diffuse baryons via absorption and emission lines. For example, the distribution of neutral hydrogen (Hi) probed by the Ly\(\alpha\) forest (e.g., Weymann et al. 1981; Cowie et al. 1995; Rauch 1998) reflects the strength of the UV background radiation field and the local ionizing radiation. We anticipate that the impact of feedback is imprinted in the CGM/IGM (Cen et al. 1994; Hernquist et al. 1996; Miralda-Escude et al. 1996; Zhang et al. 1997, 1998; Theuns et al. 2002; Cen et al. 2005; Kollmeier et al. 2006). The Ly\(\alpha\) forest serves as a powerful tool for cosmological studies and has been used to constrain cosmological parameters and the matter power spectrum (Weinberg et al. 1998; Croft et al. 1998; McDonald et al. 2006; Irsic et al. 2017), as well as to investigate the mass of warm dark matter particles or neutrinos (e.g. Viel et al. 2005, 2013; Palanque-Delabrouille et al. 2015). This line of research is also related to the unresolved 'Missing baryon problem' (Cen & Ostriker 1999; Nicastro et al. 2005; Shull et al. 2012; de Graaff et al. 2019), which refers to the challenge of observationally accounting for the entire cosmic baryon content. IGM tomography represents an enhanced version of the Ly\(\alpha\) forest technique, enabling the generation of three-dimensional contour maps of Hi density in the IGM.
This advanced approach utilizes a larger sample of star-forming galaxies as background sources, in addition to quasar sight-lines, to reconstruct the spatial distribution of Hi gas in three dimensions. Several groups have already demonstrated the feasibility of this approach (e.g., Lee et al. 2014, 2018; Cai et al. 2016; Mukae et al. 2020), and massive protoclusters at redshifts \(z=2-3\) have been identified (Lee et al. 2016; Cai et al. 2017). Moreover, this technique enables the derivation of the correlation between galaxy overdensity and Hi overdensity, providing valuable insights into the relationship between galaxies/AGNs and the surrounding Hi gas (Mukae et al. 2017; Liang et al. 2021; Momose et al. 2021). The scientific objectives of IGM tomography are: (i) to characterize the cosmic web at \(z>2\), (ii) to study the association between galaxies/AGNs and Hi gas, and (iii) to identify protoclusters and voids in an _unbiased_ manner. As a pathfinder to the IGM tomography studies by the Subaru PFS (Takada et al. 2014; Greene et al. 2022) and upcoming observations by the JWST/TMT/ELT, Nagamine et al. (2021) investigated the impact of feedback on basic Ly\(\alpha\) forest statistics by creating a light-cone data set at \(z=2-3\) and generating a mock Ly\(\alpha\) forest data. They used five cosmological hydro simulations by GADGET3-Osaka code with different models of feedback and UVB treatment (comoving boxsize \(L_{\rm box}\)=147.6 Mpc, particle number \(N=2\times 512^{3}\)), and examined the 1D flux probability distribution function, 1D flux power spectrum, flux contrast vs. impact parameter from galaxies, and Hi-galaxy cross-correlation. The flux contrast is defined as \(\eta_{F}\equiv-\delta_{F}=1-\frac{F}{\langle F\rangle}\), where \(F\) is the transmitted flux (\(F=e^{-\tau}\)), and \(\langle F\rangle\) is the average effective Ly\(\alpha\) optical depth adjusted to the observed value (Becker & Bolton 2013). Higher \(\eta_{F}\) in the vicinity of galaxies means stronger absorption, i.e., more Hi (left panel of Fig. 4). In other words, they found stronger Hi absorption with decreasing impact parameter from galaxies, consistently with earlier simulation results (e.g., Bruscoli et al. 2003; Kollmeier et al. 2003, 2006; Meiksin et al. 2015; Turner et al. 2017; Meiksin et al. 2017; Sorini et al. 2018). Their simulation results demonstrated overall agreement with current observational data, but with some interesting discrepancies of about 30% on small scales that are due to different treatments of feedback and UVB, or varying observational conditions (right panel of Fig. 4). The massive galaxies with \(M_{\star}\geqslant 10^{10}\,M_{\odot}\) strongly contribute to the flux contrast signal (left panel of Fig. 4), while lower-mass galaxies in the range of \(M_{\star}\approx 10^{8}-10^{10}\,M_{\odot}\) dilute the flux contrast signal from massive galaxies when averaged over the entire galaxy sample. The variations in \(\eta_{F}\) on scales of \(<1\) Mpc can be probed with future IGM tomography surveys with dense background source sampling by JWST/ELT/TMT. On larger scales, the average flux contrast smoothly connects to the IGM level, supporting the spherical infall model and concordance \(\Lambda\) cold dark matter model, as also found by Meiksin et al. (2017); Sorini et al. (2018). Interestingly, Sorini et al. (2020) reported negligible impact of AGN feedback on the flux contrast, suggesting that stellar feedback primarily determines the average physical properties of CGM at \(z=2-3\). 
However, further investigation in simulations incorporating AGN feedback is warranted to confirm this finding (cf. Tillman et al. 2022). In addition to Hi distribution, metal distribution can also be probed by emission and Figure 4: _Left:_ Ly\(\alpha\) flux contrast as a function of impact parameter from nearby galaxies. The data points are from Font-Ribera et al. (2013, orange filled circle; F13) and Prochaska et al. (2013, orange filled square; P13). _Right:_ Relative difference in the flux contrast from the Fiducial model, showcasing runs with different feedback and UVB treatments. Both figures are adapted from Nagamine et al. (2021). absorption lines. For example, the MEGAFLOW project has observed Mg ii lines in both absorption and emission in the galactic wind region of \(z\sim 0.7\) galaxy (Zabl et al. 2020, 2021). In a different study, Nelson et al. (2021) computed the resonantly scattered Mg ii emission from the TNG50 simulation. Their analysis indicates that the simulated galaxies exhibit somewhat steeper profiles (i.e., a faster decline with increasing radii) compared to the observed data points (see their Fig. 3). However, the currently observed sources are especially bright ones that are easily detectable, therefore, further comparisons with lower mass systems are necessary in the future. Interestingly, similar trends are observed in other emission lines, such as [C ii] (Arata et al. 2020; Fujimoto et al. 2019), and Ly\(\alpha\) (Zhang et al. 2020), where the simulated galaxies fail to reproduce the observed extended emission profiles. These discrepancies have interesting implications for the feedback efficiencies, such as the mass-loading factor of metals in galactic outflows (Pizzati et al. 2020), and therefore warrant further studies to constrain the efficiencies of chemical enrichment in CGM and IGM. ## 5 Summary In this article, we reviewed various feedback treatments in galaxy simulations, and discussed our development of physically-based SN feedback models and the tests within the AGORA project using isolated galaxies and zoom-in cosmological hydrodynamic simulations. We argued that considering both thermal and kinetic modes of SN feedback is important at the current resolution level (\(\gtrsim 10\) pc). In our recent work, Oku et al. (2022) showed that the kinetic feedback suppresses star formation while stochastic thermal feedback drives strong metal outflows. Further studies on both small and large scales are crucial to fully understand the role of feedback in galaxy evolution and the chemical enrichment of CGM/IGM. Notably there are indications that very high-resolution simulations (\(\lesssim\) pc scale) exhibit weaker winds compared to larger-scale simulations that capture the physics of galactic winds on supergalactic scales (see contributions by E. Ostriker and C.-G. Kim in this proceedings, as well as Hu 2019). Additionally, the inclusion of additional physics, such as cosmic rays and magnetic fields, will be essential for constructing more physically plausible models of star formation and feedback (e.g., Hopkins et al. 2022). It might also be possible to constrain the physics of feedback at larger scales of circumgalactic and intergalactic scales utilizing the Ly\(\alpha\) absorption by neutral hydrogen (commonly referred to as IGM tomography) and distribution of metals and dust. For example, Nagamine et al. (2021) have shown that SN feedback influences the radial distribution of H i gas and the Ly\(\alpha\) flux contrast signal at \(\sim\)30% level. 
Future comparisons between simulations and the CGM/IGM tomography surveys by WEAVE, MOONS, Subaru PFS, JWST, ELT, and TMT will provide valuable insights and further constrain the physics of feedback. I am grateful to all of my recent collaborators on the research results discussed in this article, including S. Arata, R. Cen, K. G. Lee, R. Momose, Y. Oku, L. Romano, I. Shimizu, K. Tomida, H. Yajima, and everyone in the AGORA project.
2305.11540
Efficient Cross-Lingual Transfer for Chinese Stable Diffusion with Images as Pivots
Diffusion models have made impressive progress in text-to-image synthesis. However, training such large-scale models (e.g. Stable Diffusion), from scratch requires high computational costs and massive high-quality text-image pairs, which becomes unaffordable in other languages. To handle this challenge, we propose IAP, a simple but effective method to transfer English Stable Diffusion into Chinese. IAP optimizes only a separate Chinese text encoder with all other parameters fixed to align Chinese semantics space to the English one in CLIP. To achieve this, we innovatively treat images as pivots and minimize the distance of attentive features produced from cross-attention between images and each language respectively. In this way, IAP establishes connections of Chinese, English and visual semantics in CLIP's embedding space efficiently, advancing the quality of the generated image with direct Chinese prompts. Experimental results show that our method outperforms several strong Chinese diffusion models with only 5%~10% training data.
Jinyi Hu, Xu Han, Xiaoyuan Yi, Yutong Chen, Wenhao Li, Zhiyuan Liu, Maosong Sun
2023-05-19T09:20:27Z
http://arxiv.org/abs/2305.11540v1
# Efficient Cross-Lingual Transfer for Chinese Stable Diffusion ###### Abstract Diffusion models have made impressive progress in text-to-image synthesis. However, training such large-scale models (e.g. Stable Diffusion), from scratch requires high computational costs and massive high-quality text-image pairs, which becomes unaffordable in other languages. To handle this challenge, we propose IAP, a simple but effective method to transfer English Stable Diffusion into Chinese. IAP optimizes only a separate Chinese text encoder with all other parameters fixed to align Chinese semantics space to the English one in CLIP. To achieve this, we innovatively treat images as pivots and minimize the distance of attentive features produced from cross-attention between images and each language respectively. In this way, IAP establishes connections of Chinese, English and visual semantics in CLIP's embedding space efficiently, advancing the quality of the generated image with direct Chinese prompts. Experimental results show that our method outperforms several strong Chinese diffusion models with only 5% \(\sim\) 10% training data. ## 1 Introduction In recent years, diffusion models (Ho et al., 2020) have emerged as a promising generative model for various tasks, such as image generation (Dharibal and Nichol, 2021), speech synthesis (Chen et al., 2020), molecular generation (Xu et al., 2021), and text-to-image synthesis (Ramesh et al., 2022). Specifically, large-scale text-to-image diffusion models, such as DALL-E 2 (Ramesh et al., 2022), Imagen (Saharia et al., 2022), and Stable Diffusion (Rombach et al., 2022), have gained significant attention for their powerful ability to produce highly realistic and relevant images given a text prompt. Despite their promising performance, large-scale diffusion models require massive training resources. For example, Stable Diffusion, the state-of-the-art open-source English text-to-image synthesis model, was trained on billions of text-image pairs. The high resource consumption of training makes it necessary to develop efficient methods for applying these models in various scenarios. Researchers have taken inspiration from the utilization of large-scale pre-trained language models and have designed finetuning methods for few-shot tasks, such as Textual Inversion (Gal et al., 2022), and DreamBooth (Ruiz et al., 2022). To boost the development of text-to-image synthesis around the world, previous works have attempted to train text-to-image models for languages other than English based on Stable Diffusion. In the Chinese community, Taiyi Diffusion (Wang et al., 2022) and AltDiffusion (Chen et al., 2022) were developed by training a new Chinese text encoder with the main parameters of Stable Diffusion fixed. However, using vanilla training methods, which only optimize the objective function Eq. (1) used in the original training of diffusion model, fails to establish a connection between the new text encoder and the CLIP encoder that provides text embeddings for Stable Diffusion. As a result, even though these models were trained on large datasets, the lack of interaction between the Chinese and CLIP text encoder leads to poor alignment between Chinese, English, and images. Cross-lingual transfer aims to apply models developed for a language with abundant resources to a relatively low-resource language. 
Previous studies have utilized the CLIP model (Radford et al., 2021), a powerful text-image representation model, to learn bilingual or multilingual vision-language models through contrastive objectives (Ko and Gu, 2022; Lee et al., 2022) or knowledge distillation (Carlsson et al., 2022). Unlike simple translation approaches, transferred end-to-end models enable direct alignment of image and target language features, allowing for controllable image generation or editing (Hertz et al., 2022). However, achieving cross-lingual transfer on Stable Diffusion remains an unsolved challenge. The main difficulty lies in aligning two sequences of vectors with dynamic sequence lengths, rather than aligning two single pooled vectors. To the best of our knowledge, our work is the first to systematically explore cross-lingual transfer in diffusion-based text-to-image models. In this work, we propose IAP1, a simple yet effective approach to transfer Stable Diffusion into Chinese. Similar to the previous work, the main component of Stable Diffusion is kept intact, and only a new Chinese text encoder is trained. Differently, we employ triplets (_image_, _English caption_, _Chinese caption_) as training instances and utilize the image as a pivot to minimize the distance of attentive features between _image_ to _English caption_ and _image_ to _Chinese caption_. This objective promotes the model to learn a representation that is similar to the CLIP text encoder for semantically identical caption pairs. Our experiments demonstrate that our method can achieve superior performance with minimal data compared to various existing Chinese text-to-image models. Footnote 1: IAP: Images as Pivots ## 2 Related Work **Cross-Lingual Transfer** Cross-Lingual Transfer has been proven effective in many NLP tasks, such as machine translation (Zoph et al., 2016; Ji et al., 2020), multilingual pretrained models (Conneau et al., 2020) and question answering (Lewis et al., 2020). In the case of multi-modal models, some work focused on the cross-lingual transfer of GAN-based text-to-image models (Jung et al., 2022; Zhang et al., 2022). Additionally, several works attempted to transfer the powerful vision-language representation model CLIP into other languages (Chen et al., 2022; Carlsson et al., 2022) or benefit multimodal generation (Dai et al., 2022). However, these methods only involve the final single vector representation, while in Stable Diffusion, we need to align sequences of vectors. To solve this problem, we propose IAP. **Text-to-Image Synthesis** Text-to-image synthesis has been a subject of interest for a long time. In the early stage, GAN was a popular choice as the architecture for text-to-image synthesis models (Zhu et al., 2019; Li et al., 2019). With the advent of Transformer (Vaswani et al., 2017), researchers began utilizing its capabilities for modeling sequences and proposed auto-regressive text-to-image models such as VQGAN (Esser et al., 2021), Cogview (Ding et al., 2021), DALLE (Ramesh et al., 2021) and Parti (Yu et al., 2022). Recently, large-scale diffusion models have greatly improved the quality and alignment of generated images, including DALLE-2 (Ramesh et al., 2022), Imagen (Saharia et al., 2022), and Stable Diffusion (Rombach et al., 2022). 
## 3 Method ### Preliminaries The architecture of Stable Diffusion is built on the latent diffusion model (Rombach et al., 2022) that comprises a CLIP text encoder \(f_{en}\), a pretrained image autoencoder \(\mathcal{E}\) and \(\mathcal{D}\), and a conditional denoising network \(\epsilon_{\theta}(z_{t},y,t)\). The pretrained image autoencoder encodes image \(x\) into a lower-resolution latent representation \(z=\mathcal{E}(x)\) and decodes latent representation \(z\) back to image \(\hat{x}=\mathcal{D}(z)\). During Figure 1: Architecture of IAP. We fix the parameters of Stable Diffusion and learn a separate Chinese text encoder by considering the images as pivots and minimizing the distance of cross-attention layer output between the images and each language. training, the CLIP text encoder and image autoencoder are frozen, and the conditional denoising network learns a generative process in the latent space conditioned on text prompt representation: \[\mathcal{L}_{LDM}=\mathbb{E}_{\mathcal{E}(x),y,\epsilon\sim\mathcal{N}(0,1),t}|| \epsilon_{\theta}(z_{t},f_{en}(y),t)-\epsilon||_{2}^{2}, \tag{1}\] The specifics of training and sampling for diffusion models can be found in Appendix A. The conditional denoising network \(\epsilon_{\theta}(z_{t},y,t)\) is essentially implemented by incorporating a cross-attention mechanism (Vaswani et al., 2017) into an underlying UNet architecture (Ronneberger et al., 2015). Formally, at the \(i\)-th layer, we compute cross-attention as follows: \[\mathrm{Attn}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)V, \tag{2}\] where the flattened intermediate representation of the image \(\varphi_{i}(z_{t})\in\mathbb{R}^{d_{i}^{t}\times h}\) is projected into the query \(Q=W_{Q}^{i}\varphi_{i}(z_{t})\), and the representation of text \(f_{en}(y)\in\mathbb{R}^{d\times l}\) is projected into key and value \(K=W_{K}^{i}f_{en}(y),V=W_{V}^{i}f_{en}(y)\). \(W_{Q}^{i}\in\mathbb{R}^{d\times d_{t}^{i}},W_{K}^{i}\in\mathbb{R}^{d\times d_{ t}},W_{V}^{i}\in\mathbb{R}^{d\times d_{t}}\) are parameters to be learned. ### Image As Pivots **Problem Formulation** Given a set of triplet training instances \(\mathcal{S}=\{(x,y,s)_{n}\}\), where \(x,y,s\) is the image, English caption, and translated Chinese caption, our goal is to learn a new Chinese text encoder \(f_{zh}\) while freezing the Stable Diffusion. Ideally, for a Chinese text prompt \(s\), the representation \(f_{zh}(s)\in\mathbb{R}^{l_{s}\times d_{t}}\) is aligned with the representation \(f_{en}(y)\in\mathbb{R}^{l_{y}\times d_{t}}\). Then, the denoising network can generate a relevant image with \(\epsilon_{\theta}(z_{t},f_{zh}(s),t)\). As we need to align two sequences of vector \(f_{zh}(s)\in\mathbb{R}^{l_{s}\times d_{t}}\) and \(f_{en}(y_{z})\in\mathbb{R}^{l_{y}\times d_{t}}\) with different length, commonly used methods, such as contrastive learning or distillation, which are designed for aligning single vector representations at the sentence-level, are not suitable in this scenario. To solve this problem, we propose IAP that considers images as pivots for two languages and minimizes the cross-attention output between images and two languages. Formally, given a triplet \((x,y,s)\), we feed two representations \(f_{en}(y)\) and \(f_{zh}(s)\) into each cross-attention layer of UNet, respectively. 
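To make the role of each cross-attention layer concrete, the computation in Eq. (2) can be sketched as below. This is a minimal, single-head illustration (batching, multi-head splitting, and layer indices are omitted); tensor names follow the notation above, and the exact shapes are assumptions of this sketch rather than the released implementation.

```python
import torch

def cross_attention(phi_z, text_feats, W_Q, W_K, W_V):
    """Single-head cross-attention of Eq. (2), batching and heads omitted.

    phi_z:      flattened image features at one UNet layer, shape (n_pix, h)
    text_feats: text-encoder output f_en(y) or f_zh(s),     shape (l, d_t)
    W_Q, W_K, W_V: projection matrices mapping both inputs to dimension d.
    """
    Q = phi_z @ W_Q       # queries from the image, shape (n_pix, d)
    K = text_feats @ W_K  # keys from the text,     shape (l, d)
    V = text_feats @ W_V  # values from the text,   shape (l, d)
    d = Q.shape[-1]
    attn = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)  # (n_pix, l)
    # Output shape (n_pix, d) does not depend on the text length l.
    return attn @ V
```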
As the output shape of the cross-attention layer is identical regardless of sequence length, we can calculate the element-wise mean squared error between the two outputs: \[\mathcal{L}_{IAP}\!=\!||\mathrm{Attn}(\!Q,\!K_{en},\!V_{en})\!-\!\mathrm{Attn }(\!Q,\!K_{zh},\!V_{zh})||_{2}^{2}, \tag{3}\] where \(Q=W_{Q}^{i}\varphi_{i}(z_{t}),K_{en}=W_{K}^{i}f_{en}(y),K_{zh}=W_{V}^{i}f_{zh}( s),V_{en}=W_{V}^{i}f_{en}(y),V_{zh}=W_{V}^{i}f_{zh}(s)\). In this way, the Chinese text encoder is optimized to generate similar keys and values with the CLIP text encoder's to represent images. We sum up the mean squared error of all layers of UNet as our final loss objective. ## 4 Experiment ### Implementation Details We train IAP on the CC3M dataset (Sharma et al., 2018) that consists of around 3 million image-text pairs. The Chinese captions were translated from the original English captions in the dataset using \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & Training & \multicolumn{2}{c|}{MS-COCO (30K)} & \multicolumn{2}{c|}{COCO-CN (1k)} & \multicolumn{2}{c}{AIC-ICC (30k)} \\ \cline{3-8} & \multirow{2}{*}{Data} & FID\(\downarrow\) & CLIP\(\uparrow\) & FID\(\downarrow\) & CLIP\(\uparrow\) & FID\(\downarrow\) & CLIP\(\uparrow\) \\ \hline Cogview (Ding et al., 2021) & 10M & 18.14 & 18.2 & - & - & - & - \\ Cogview2 (Ding et al., 2022) & 30M & 24.0 & 22.4 & - & - & - & - \\ ERNIE-ViLG (Zhang et al., 2021a) & 145M & 14.70 & 22.4 & - & - & - & - \\ \hline AltDiffusion (Chen et al., 2022) & 100M & 18.14 & 34.56 & 74.04 & 34.67 & 23.07 & 34.09 \\ Taiyi (Wang et al., 2022) & 120M & 17.09 & 33.27 & 69.64 & 33.54 & 22.51 & 33.22 \\ SD + Translation (Wang et al., 2022) & - & 14.88 & **35.76** & 66.15 & **36.04** & **19.63** & 34.12 \\ \hline IAP & 3M & **13.43** & 35.35 & **65.43** & 35.65 & 20.49 & **34.55** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results for text-to-image synthesis on MS-COCO, COCO-CN, and AIC-ICC datasets. The results for Cogview, Cogview2 and ERNIE-ViLG are copied from their papers. AltDiffusion and Taiyi Diffusion are evaluated with the same setting as our model. SD + Translation means first translating Chinese prompts into English and then feeding them into Stable Diffusion. machine translation. Following AltDiffusion and Taiyi Diffusion, we load Stable Diffusion v1-4 checkpoint 2. We initialize the Chinese text encoder with Taiyi Chinese CLIP 3Wang et al. (2022), same as Taiyi Diffusion used. More details are listed in Appendix B.1. We used the default scheduler PNDMScheduler Liu et al. (2022) with a guidance scale of 7.5 to sample images for 50 steps. Footnote 2: [https://huggingface.co/CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) Footnote 3: [https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese](https://huggingface.co/IDEA-CCNL/Taiyi-CLIP-RoBERTa-102M-ViT-L-Chinese) ### Automatic Evaluation We evaluate IAP on the MS-COCO Lin et al. (2014), COCO-CN Li et al. (2019) and AIC-ICC dataset Wu et al. (2019) using zero-shot FID score Heusel et al. (2017), and CLIP Score Radford et al. (2021). FID evaluates the quality and diversity of the generated images, while CLIP Score evaluates the relevance of the generated images to the text. We calculated the FID Score using the torch-fidelity package Obukhov et al. (2020) and computed the CLIP Score based on Chinese CLIP Yang et al. (2022). The evaluation process is detailed in Appendix B.2. The experiment results are presented in Table 1. 
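For reference, the per-layer objective of Eq. (3) amounts to the following minimal sketch, reusing the `cross_attention` helper above. The loss is accumulated over all cross-attention layers of the UNet and gradients flow only into the Chinese text encoder; this is illustrative pseudocode, not the authors' released implementation.

```python
import torch.nn.functional as F

def iap_layer_loss(phi_z, f_en_y, f_zh_s, W_Q, W_K, W_V):
    """Eq. (3) at a single cross-attention layer: the image acts as the pivot."""
    out_en = cross_attention(phi_z, f_en_y, W_Q, W_K, W_V)  # frozen CLIP branch
    out_zh = cross_attention(phi_z, f_zh_s, W_Q, W_K, W_V)  # trainable branch
    # Element-wise mean squared error between the two attentive features;
    # summing this term over all UNet layers gives the final objective.
    return F.mse_loss(out_zh, out_en)
```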
IAP achieves satisfactory FID and CLIP scores on all datasets. It outperforms previous methods and performs comparably to a strong translation-pipeline baseline. This demonstrates that IAP can align Chinese and English semantics explicitly, retaining and transferring the powerful capabilities of Stable Diffusion into Chinese with minimal training resources. ### Human Evaluation Following Chang et al. (2023), we conduct a side-by-side human evaluation to assess the performance of IAP, AltDiffusion and Taiyi Diffusion. We sample 200 prompts from PartiPrompts Yu et al. (2009), a curated collection to evaluate model abilities across various categories. We invite five independent annotators and ask them to select the preferable one in each group. More details are presented in Appendix B.3. As shown at the top of Fig. 2, IAP was selected as the best one in 43% of input prompts. These results are consistent with the IAP's superiority on the automatic metrics. ### Analysis **Dataset Size** We compare the performance of IAP training on only partial data pairs with different sizes. As the bottom of Fig. 2 shows, IAP achieves satisfactory even with only 0.1M training pairs. Also, with the increasing training data, IAP consistently improves the FID and CLIP scores, indicating its potential capability for large-scale training. **Initialization** We assess the performance of IAP on MS-COCO using various initialization methods, including random initialization, loading a text-pretrained model (Mengti-T5-encoder (Zhang et al., 2021)), and loading a multi-modal pretrained model (Taiyi Chinese CLIP). The results in Table 2 reveal two key findings. Firstly, loading a pretrained model significantly enhances performance. When comparing the multi-modal encoder and the pure text model encoder, Taiyi CLIP slightly outperforms Mengti-T5 encoder, suggesting that pretraining on image-text pairs benefits the text-to-image synthesis task. Secondly, even when trained with random initialization, IAP still achieves a satisfactory FID score, comparable to AltDiffusion. This indicates the robustness of IAP to model initialization and its ability to provide incremental improvements with better initialization. **Dataset Overlap** To ensure that the improvement in performance on the zero-shot dataset is not due to overlap between the training set (CC3M) and the test set, we trained the text encoder with loss function Eq. (1) on CC3M. As shown in Table 2, compared with IAP, vanilla training has very poor performance on the zero-shot testset. This indicates that CC3M does not have a similarity overlap with testset. IAP shows strong generalization ability in generating images for unseen text prompts. ### Case Study We select two groups of generated images from the human evaluation set. As shown in Fig. 3, for the first case, the other two models failed to generate two colors of flowers. For the second case, the details of images from IAP have the best fidelity. We present more cases in Appendix C. ## 5 Conclusion In this work, we introduce IAP, an efficient cross-lingual transfer method for the state-of-the-art text-to-image model, Stable Diffusion. By treating images as pivots and minimizing the cross-attentive feature between images and two languages, our new text encoder is able to quickly align with the CLIP text encoder. Despite using fewer data, our model outperforms several Chinese text-to-image models trained with large-scale data. 
We believe that IAP can be easily applied to other languages and other diffusion models with similar architectures to Stable Diffusion as well. ## 6 Limitations While IAP achieves impressive performance in Chinese text-to-image synthesis, it still has some limitations. First, due to space and resource constraints, we only conducted experiments in Chinese, but we believe IAP is a general method that can be applied to other languages as well. Second, we observed that the capability for compositional generation is still limited, which is a challenge for all text-to-image synthesis models. We plan to further improve IAP's ability through structured guidance to tackle this limitation. ## 7 Ethics Statement The research and development team behind Stable Diffusion has considered the potential ethical issues that could arise with the use of this model. To ensure that the images generated by text-to-image models are safe for public consumption, they have developed a safety filter to prevent the inclusion of any NSFW information. Similarly, we will follow this process with IAP to ensure that it is used responsibly and in a healthy manner. ## Acknowledgement This work is supported by the National Key R&D Program of China (No. 2020AAA0106502) and Institute Guo Qiang at Tsinghua University.
2306.08541
Fine-Tuned but Zero-Shot 3D Shape Sketch View Similarity and Retrieval
Recently, encoders like ViT (vision transformer) and ResNet have been trained on vast datasets and utilized as perceptual metrics for comparing sketches and images, as well as multi-domain encoders in a zero-shot setting. However, there has been limited effort to quantify the granularity of these encoders. Our work addresses this gap by focusing on multi-modal 2D projections of individual 3D instances. This task holds crucial implications for retrieval and sketch-based modeling. We show that in a zero-shot setting, the more abstract the sketch, the higher the likelihood of incorrect image matches. Even within the same sketch domain, sketches of the same object drawn in different styles, for example by distinct individuals, might not be accurately matched. One of the key findings of our research is that meticulous fine-tuning on one class of 3D shapes can lead to improved performance on other shape classes, reaching or surpassing the accuracy of supervised methods. We compare and discuss several fine-tuning strategies. Additionally, we delve deeply into how the scale of an object in a sketch influences the similarity of features at different network layers, helping us identify which network layers provide the most accurate matching. Significantly, we discover that ViT and ResNet perform best when dealing with similar object scales. We believe that our work will have a significant impact on research in the sketch domain, providing insights and guidance on how to adopt large pretrained models as perceptual losses.
Gianluca Berardi, Yulia Gryaditskaya
2023-06-14T14:40:50Z
http://arxiv.org/abs/2306.08541v2
# Fine-Tuned but Zero-Shot 3D Shape Sketch View Similarity and Retrieval ###### Abstract Recently, encoders like ViT (vision transformer) and ResNet have been trained on vast datasets and utilized as perceptual metrics for comparing sketches and images, as well as multi-domain encoders in a zero-shot setting. However, there has been limited effort to quantify the granularity of these encoders. Our work addresses this gap by focusing on multi-modal 2D projections of individual 3D instances. This task holds crucial implications for retrieval and sketch-based modeling. We show that in a zero-shot setting, the more abstract the sketch, the higher the likelihood of incorrect image matches. Even within the same sketch domain, sketches of the same object drawn in different styles, for example by distinct individuals, might not be accurately matched. One of the key findings of our research is that meticulous fine-tuning on one class of 3D shapes can lead to improved performance on other shape classes, reaching or surpassing the accuracy of supervised methods. We compare and discuss several fine-tuning strategies. Additionally, we delve deeply into how the scale of an object in a sketch influences the similarity of features at different network layers, helping us identify which network layers provide the most accurate matching. Significantly, we discover that ViT and ResNet perform best when dealing with similar object scales. We believe that our work will have a significant impact on research in the sketch domain, providing insights and guidance on how to adopt large pretrained models as perceptual losses. ## 1 Introduction As image vision algorithms rapidly advance, we see a recent surge of interest in sketch understanding [54, 33, 23, 10, 35, 26] and generation [48, 5]. Sketch is the earliest form of visual communication for humanity as a whole, as well as for each individual. However, for vision algorithms, it poses a number of challenges caused by the diversity of sketching styles, skills, and sketch sparsity. Sketches can be very abstract and visually different from photos. Each sketching scenario results in visually very different renditions. People can easily interpret sketches from very abstract to highly detailed or stylized, but is there an algorithm or model that can reliably handle all styles and scenarios? Inspired by the success of so-called foundation models trained on large datasets in a range of zero-shot applications in the image domain [30, 38, 31, 37, 25], several works exploit its application in the sketch domain. There are a number of inspiring attempts of adapting CLIP (Contrastive Language-Image Pre-Training) [36] as a perceptual loss for deep model training [15, 43, 48, 47], for model performance evaluation [60], or as a way to alleviate the need for the sketch data during training for a downstream task [40]. However, are the used encoders able to discriminate fine-grained differences within a sketch domain or across sketch and image domains? Several works indicate that these models do not necessarily perform that well in a zero-shot sketch-to-image comparison [11, 39]. The works are then either residing to fine-tuning existing models [11, 39] or training from scratch by adding additional losses or sketch-targeted solutions [26, 42]. With our work, firstly, we aim to shed light on the ability of the popular pretrained models to discriminate individual 3D instances in their multi-modal 2D projections. 
To achieve this, we evaluate encoders trained with CLIP [36] and via a classification task training on the ImageNet dataset [2]. Namely, we study their performance in matching viewpoints and object identities in sketches and images. Secondly, we investigate alternative fine-tuning strategies, inspired by [4, 20, 61]. In our work, we compare visual prompt learning [20], layer normalization weights learning [14, 39] with a careful fine-tuning of all weights. Thirdly, we show that well-designed fine-tuning on a single shape class can lead to improved performance on other shape classes, sometimes surpassing the accuracy of supervised methods. Importantly, we show that fine-tuning can be done on synthetically-generated sketches for a set of 3D shapes without the requirement to use freehand sketches. We demonstrate the generalization of our approach to rel atively abstract freehand sketches from the AmateurSketch dataset [32]. We refer to this scenario as _fine-tuned but zero-shot_. As a test application, we consider sketch-based 3D shape retrieval. Effectively, we introduce the first sketch-based 3D shape retrieval method with state-of-the-art performance that does not require per-class training or fine-tuning. Our fine-tuning only requires a set of 3D shapes of just one category. It is a reasonable assumption for the 3D shape retrieval task. Fourthly, we perform a detailed performance analysis of different layers of ViT and ResNet-based encoders pre-trained either with CLIP training or with a classification task on the ImageNet dataset. We study how the line width and object scale affect performance and find that similar settings can be considered optimal for ViT and ResNet-based encoders. We note that most works [11, 39, 26] use the activation of the final layer of an encoder. Yael et al. [48] observed that while these features excel at capturing semantic meaning, intermediate layers are more suitable when comparing spatial structures. In our research, we offer an in-depth analysis with regard to our specific problem. In summary, our key contributions are: * A comprehensive study of the ability of the popular pretrained encoders to discriminate individual 3D instances in their multi-modal 2D projections; * Extensive analysis of the similarity estimation performance using various layer features and exploration of the impact of the object's scale; * Comparison of various fine-tuning strategies on the task of matching sketches in distinctive sketch styles; * Fine-tuning approach that requires as little as a set of 3D shapes of a single category, and generalizes to freehand sketches and other shape class categories, reaching the performance of state-of-the-art fully supervised methods. ## 2 Related work ### Sketch-based 3D shape retrieval #### 2.1.1 Category-level and fine-grained retrieval Most of the works in sketch-based 3D model retrieval [13, 55, 27, 22, 49, 58, 24, 62, 51, 12, 19, 34, 21, 7, 53, 9, 59, 52] focus on the problem of _category level_ retrieval: They aim to retrieve any instance of a particular object category. In other words, the retrieval is considered to be successful if, given a sketch of an object, the retrieved top \(N\) 3D models belong to the same category. Only two works [32, 8] addressed fine-grained sketch-based 3D model retrieval in a supervised setting. Qi et al. [32] collected the first dataset of instance-level paired freehand sketches and 3D models, which we also use to test our model. 
They use triplet loss training [50], classic for retrieval tasks, and represent 3D models using multi-view RGB renderings. The main novelty of their paper lies in learning view attention vectors. In concurrent to our work, Chen et al. [8] train and test on the data by Qi et al. [32]. Unlike [32], they learn to project all sketch views to the same latent representation. The main performance gain is caused by dividing the images into three parts and learning to match the features of each part individually. Unlike both of these works, we represent 3D shapes using NPR renderings rather than RGB images. To reduce the domain gap between sketch queries and 3D models, several works [29, 28] study fine-grained retrieval from a 3D sketch created while wearing a virtual reality headset. Our model aims for much more accessible inputs that can be created with a computer mouse or on paper. #### 2.1.2 Multi-view feature aggregation Like majority of the works on sketch-based 3D model retrieval, we use multi-view shape representation, however many of these works differ in how they aggregate features across viewpoints. Thus, Xie et al. [51] use the Wasserstein barycentric of 3D shapes projections in the CNN feature space to represent 3D shapes. He et al. [19] follow MVCNN [46] and aggregate views features with element-wise maximum operation across the views in the view-pooling layer. Lei et al. [21] proposed a representative view selection module that aims to merge redundant features for similar views. Chen et al. [7] learn multi-view feature scaling vectors which are applied prior to average pooling vector, in order to deal with non-aligned 3D shape collections. Qi et al. [32] learn view attention vectors conditioned on the input sketch, which allow to reduce the domain gap between a sketch and multi-view projections of a 3D shape. Zhao et al. [59] leverages spatial attention [56] to exploit view correlations for more discriminative shape representation. In our work, we focus on learning view features that can be used to find the correct shape identity and view across different sketch styles: e.g. freehand and NPR. ### Multi-modal retrieval Multi-modal retrieval is not directly related to our work, but two concurrent works [44, 42] are worth mentioning as they rely on encoders pretrained with the CLIP model. They explore CLIP embeddings for retrieval from multi-modal inputs such as 2D sketches or images and text. Sangkloy et al. [42] study image retrieval and focus on fine-tuning CLIP using triplets of synthetic sketches, images, and their captions. They rely on the availability of textual descriptions matching their images, while we require only the availabil ity of 3D shapes from just one 3D shape class. Similarly to us, Schlachte et al. [44] study zero-shot 3D model retrieval using the CLIP model, but only explore the weighted fusion of CLIP features from multiple inputs for artistic control. Unlike them, we perform an in-depth study analyzing object scale, feature layers, and fine-tuning strategies. ### Sketch datasets With the advent of sketch datasets [13, 27, 3, 41, 18, 16, 32, 11] the research on sketching thrives. However, it is costly and challenging to collect a dataset of free-hand sketches, especially when there is a requirement for instance-level pairing between several domains. The common practice is to let participants study a reference image for a short period of time and then let them draw from memory [13, 3, 41, 11, 42]. 
This task becomes increasingly challenging when the pairing is required to be between 3D shapes and sketches, as one has to ensure that the viewpoints are representative of those that people are more likely to sketch from [27, 58, 53, 32, 16]. To the best of our knowledge, there is only one dataset [32] of freehand sketches by participants with no prior art experience paired with 3D shapes, that takes views into account and follows the protocol of sketching from memory. The small dataset collected by Zhang et al. [57] for each object contains only one sketch viewpoint, and the viewpoints are non-representative, they are uniformly sampled around 3D shapes. It contains too few examples and is too noisy for retrieval performance evaluation. The recent dataset of paired sketches and 3D models of cars [17] similarly was collected without taking into account viewpoints preferences, and the sketches are drawn directly on top of image views and mostly contain outer shape contours. We, therefore, evaluate our approach on the dataset by Qi et al. [32], as the only existing representative dataset with instance-level pairing between sketches and 3D shapes. ## 3 Method In this and the following sections, we present our method for zero-shot sketch-based 3D retrieval. We then provide a comparison to alternative strategies in Sec. 6. To enable sketch-based 3D shape retrieval, we represent 3D shapes using their multi-view projections, commonly used in sketch-based retrieval [51, 19, 21, 7, 59, 27, 58, 53]. To reduce the domain gap, we use NPR renderings instead of RGB renderings for multi-view 3D shape representation. In the supplemental, we provide a detailed study of the ability of the popular pretrained models to discriminate individual 3D instances in their multi-modal 2D projections: we compare RGB renderings, NPR renderings, and freehand sketches. ### Zero-shot Given an encoder, trained on a pretext task, we first compute embeddings of a Query sketch \(Q\) and Gallery 3D shape \(G\) views using features of a chosen encoder's layer. We then assign the similarity between a sketch and a 3D shape as the maximum cosine similarity between a sketch embedding and individual 3D shape views embeddings. Formally, this can be written as follows: \[\mathrm{sim}(Q,G)=\max_{v\in views}\mathrm{d}(\mathrm{E}_{\ell}(Q),\mathrm{E}_ {\ell}(G_{v})), \tag{1}\] \(G_{v}\) is a 3D shape view, \(E_{l}(\cdot)\) denotes layer \(\ell\) features extracted with the encoder \(E\) and \(d\) stands for the cosine similarity1. Footnote 1: We experimented with the Mean Squared Error (MSE) distance, taking the minimum MSE distance between a query and shape individual views. We have not observed an obvious advantage of one other another. We center and scale 3D objects in query and shape views to fit the same bounding box in both representations. ### Fine-tuned but zero-shot We propose a contrastive view-based fine-tuning approach that leverages synthetically-generated sketches of single or multiple 3D shape classes. We represent all available 3D shapes with \(V\) views, using two different approaches to synthetic sketch generation: model-based [1] and view-based [5] NPR algorithms. We then adapt CLIP contrastive loss [36, 45] to match identical shape views in these two synthetic sketch styles. Namely, given a batch with \(B\) objects, we randomly select one view in two styles for each. 
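(As an aside, the zero-shot ranking rule of Eq. (1) is straightforward once per-view features are available. The sketch below assumes the layer-\(\ell\) features have already been pooled or flattened into a single vector per image, which is a simplification introduced here for illustration.)

```python
import torch
import torch.nn.functional as F

def shape_similarity(query_emb, view_embs):
    """Eq. (1): score a gallery 3D shape against a query sketch.

    query_emb: layer-l embedding of the query sketch,            shape (d,)
    view_embs: layer-l embeddings of the shape's V NPR views,    shape (V, d)
    """
    sims = F.cosine_similarity(view_embs, query_emb.unsqueeze(0), dim=-1)
    return sims.max()  # the best-matching view defines the shape's score
```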
We then compute the pairwise weighted dot product between any two views in two different styles: \[s_{i,j}:=s(G_{i}^{st1},G_{j}^{st2}):=e^{t}<\mathrm{E}_{\ell}(G_{i}^{st1}), \mathrm{E}_{\ell}(G_{j}^{st2})>, \tag{2}\] \(<\cdot,\cdot>\) is a dot product, \(G_{i}^{st}\) is some view of the \(i\)-th object in the mini-batch in one of two styles, and \(t\) is a learned parameter. We provide details on styles and views in Sec. 4.3.1. Figure 1: Examples of sketches in our datasets. Please see Sec. 4.2 for details. We then compute the following contrastive loss: \[\mathcal{L}=-\frac{1}{2B}\sum_{i=1}^{B}\left(\log\frac{\exp(s_{i,i})} {\sum_{j=1}^{B}\exp(s_{i,j})}+\right.\] \[\left.\log\frac{\exp(s_{i,i})}{\sum_{j=1}^{B}\exp(s_{j,i})}\right). \tag{3}\] Due to our batch construction, this objective trains the network to produce features such that the same views of the same object in different styles have similar embeddings. This objective neither pushes different views of the same object to have identical embeddings nor pushes them apart. Fine-tuning updates the weights of the visual encoder and the temperature parameter \(t\). Note that Eqs. (1) and (2) can be computed based on the features from any layer and not only the final one. In this case, we only updated the weights up to the layer whose features we use to compute similarity. ## 4 Implementation details ### Encoder In the default setting, as an encoder, we use ViT pre-trained with CLIP. We compute similarity using the 6-th layer. ### Datasets We use two types of datasets: (1) the dataset of freehand sketches by participants without art experience, and (2) the dataset of synthetically generated sketches in two styles for 11 classes of the ShapeNet 3D shape dataset [6]. Different styles are shown in Fig. 1, and described in detail below. #### 4.2.1 Freehand sketches We use the dataset of freehand sketches by Qi et al. [32] to evaluate the models' performance. This dataset contains sketches for two shape categories: _chair_ and _lamp_, representing 1,005 and 555 3D shapes from the respective class of the ShapeNet dataset [6]. The sketches are created by participants without any prior sketching experience, and fit well the scenario we are targeting. The sketches are drawn from a viewpoint with a zenith angle of around 20 degrees. For each category three settings of azimuth angles are used. For the _chair_ category, they are \(0^{\circ}\), \(30^{\circ}\) and \(75^{\circ}\), while for the _lamp_ category they are \(0^{\circ}\), \(45^{\circ}\) and \(90^{\circ}\). These particular viewpoints are selected based on sketching literature and pilot studies, as the most likely viewpoints. The dataset provides a split to training, validation, and test data. To facilitate comparison with previous supervised work, we only use a test set of sketches to test models. The test set consists of 201 and 111 sketch-3D shape quadruplets for the _chair_ and _lamp_ categories, respectively. We do not use any freehand sketches for training. Prior to testing, we re-scale and center objects' projections in freehand sketches to occupy the central image area of \(129\times 129\). #### 4.2.2 Synthetic sketches Additionally, we create a dataset of synthetic sketches in two styles, representing 3D shapes from the ShapeNetCore 3D shape dataset [6]. We select 11 of the 13 ShapeNetCore classes, discarding two classes with the lowest number of 3D shapes. Views and camera settingWe follow camera settings used to collect sketches in the dataset of freehand sketches [32]. 
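(For completeness, the fine-tuning objective of Eqs. (2)-(3) is a symmetric, CLIP-style contrastive loss over a batch of \(B\) shapes, each rendered once in the two synthetic styles. The sketch below again assumes single-vector layer-\(\ell\) embeddings and is only illustrative.)

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(emb_s1, emb_s2, log_t):
    """Eqs. (2)-(3): symmetric contrastive loss between two sketch styles.

    emb_s1, emb_s2: layer-l embeddings of the same B objects, one random view
                    each, in style 1 (NPR) and style 2 (anime); shape (B, d).
    log_t:          learned scalar temperature parameter t.
    """
    logits = torch.exp(log_t) * emb_s1 @ emb_s2.T        # s_{i,j}, shape (B, B)
    targets = torch.arange(logits.shape[0], device=logits.device)
    # Matching views of the same object should win both row- and column-wise.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```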
In particular, we use for all shape classes viewpoints with the following azimuth angles: \(0^{\circ},30^{\circ},45^{\circ},75^{\circ}\) and \(90^{\circ}\). We set the camera distance to an object to \(2.5\) and the camera zenith angle to \(20^{\circ}\). The size of rendered views is \(224\times 224\) unless specified otherwise. NPR (style-1)We render views using silhouettes and creases lines in Blender Freestyle [1]. We render views as SVGs and then re-scale and center objects' projections in freehand sketches to occupy the central image area of \(129\times 129\). Prior to rasterization, we assign each stroke a uniform stroke width of \(2.2px\). Anime (style-2)We obtain the second synthetic sketch style by first rendering RGB images of 3D shapes using Blender Freestyle with the same camera settings as for the first NPR synthetic style. We then re-scale and center objects' projections in RGB renderings to occupy the central image area of \(129\times 129\). Finally, we generate synthetic sketches in the second style, using the pre-trained network [5] in _anime_ style. ### Data usage #### 4.3.1 3D Shape representation We represent a 3D shape with its multi-view NPR projections: We use the set of \(0^{\circ},30^{\circ},45^{\circ},75^{\circ}\) and \(90^{\circ}\), common for chair and lamp category sketches from the _AmateurSketch_ dataset [32], to represent 3D models. #### 4.3.2 Fine-tuning We split 3D shapes in each class into training (70 %), validation (15%), and test (15%) sets. We set the learning rate to \(10^{-7}\), the batch size to 64, and use the Adam optimizer. _We note that the choice of the learning rate is critical, as larger learning rates will result in overfitting harming the performance._ Data augmentationWhile fine-tuning, we augment synthetic sketches in the anime style with random affine transformation, translation, rotation, and scaling operations. This augmentation simulates the type of distortions that we can encounter in freehand query sketches. Even if we scale and center objects in freehand sketches in processing, sketches might contain small rotations. The translation moves an image along the \(x\) and \(y\) axes for a random number of pixels in the range \([-10\%,+10\%]\) of the image size. The rotation is sampled between \([-10,+10]\) degrees. Finally, we increase or decrease the object's bounding box size by a random value in the range [-10%, +10%] of the image size. Checkpoint selectionWe train our fine-tuning model for 500 epochs. At test time we use the weights from the last epoch. #### 4.3.3 Test time We test our retrieval models on the freehand sketches. We also test on synthetic sketches to show generalization to other shape classes. By default, we use sketches in the anime style with azimuth angles set to \(0^{\circ}\), \(45^{\circ}\), and \(90^{\circ}\) as queries. To facilitate comparisons with performance on freehand sketches, for each shape class we form the final test sets by randomly selecting just 200 3D shapes non-overlapping with training or validation sets. ## 5 Results To evaluate retrieval accuracy, we use the standard for retrieval tasks Top-1 (Acc@1), and Top-5 (Acc@5) accuracy measures. They evaluate the percentage of times the ground-truth is returned among the top 1 and top 5 ranked retrieval results, respectively. Tab. 1 (_ViT-CLIP L-6_) shows the retrieval accuracy of our zero-shot setting on the freehand sketches and synthetic sketches in anime style. 
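(Referring back to the augmentation of Sec. 4.3.2, one way to realize it is sketched below, assuming torchvision is available; the fill value and interpolation are illustrative choices, not the exact settings used in our training code.)

```python
from torchvision import transforms

# Random affine augmentation of anime-style training sketches (Sec. 4.3.2):
# rotation in [-10, +10] degrees, translation of up to 10% of the image size
# on each axis, and scaling of roughly +/-10%, applied on the fly.
sketch_augment = transforms.Compose([
    transforms.RandomAffine(degrees=10,
                            translate=(0.1, 0.1),
                            scale=(0.9, 1.1),
                            fill=255),   # white background (assumption)
    transforms.ToTensor(),
])
```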
We then perform three individual fine-tuning experiments on three classes: _boat_, _airplane_, and _bench_, using synthetic sketches, and report an average accuracy over the three experiments in Tab. 1 (_ViT-CLIP* L-6_). We compare with two supervised works by Qi et al. [32] and Chen et al.[8] who train on one class at a time and use freehand sketches from [32]. As no code is available for the competitors, we report the numbers from their re \begin{table} \begin{tabular}{l|c c|c c|c c} & \multicolumn{3}{c|}{Chairs} & \multicolumn{3}{c|}{Lamps} & \multicolumn{3}{c}{Avg. score.} \\ & & & & & Anime \(\rightarrow\) NPR \\ \hline Method & acc@1 & acc@5 & acc@1 & acc@5 & acc@1 & acc@5 \\ \hline [32] & 56.72 & 87.06 & 57.66 & 87.39 & n.a. & n.a. \\ [8] & **83.08** & **97.01** & 78.08 & **95.50** & n.a. & n.a. \\ \hline ViT-CLIP L-6 & 74.79 & 89.39 & 73.27 & 89.49 & 82.48 & 93.82 \\ ViT-CLIP* L-6 & 77.11 & 92.32 & **78.38** & 92.39 & **87.84** & **97.13** \\ \end{tabular} \end{table} Table 1: Our zero-shot results versus supervised methods: [32] and concurrent to our work [8]. Neither [32] nor [8] provide code, therefore, we use the numbers provided in their respective papers. For the _ViT-CLIP_ methods, we center and scale objects in reference and query views according to optimal scaling. _L-6_ indicates the layer whose features we use for similarity computation. _ViT-CLIP*_ represent the average performance results of three individual fine-tuning experiments on the three classes: _boat_, _airplane_, and _bench_, using synthetic sketches. _Avg. score. anime_ represents average results on 11 classes where queries are in anime style and gallery shapes are represented using multi-view NPR projections. The boldface font highlights the best results, and the underscore highlights the second-best results. Figure 2: Qualitative results obtained with features of the 6th layer of the ViT encoder pretrained on CLIP and fine-tuned using our method. The queries are freehand sketches from the AmateurSketch dataset [32]. Green boxes highlight groundtruth shapes. (a) shows retrieved shapes and the best matching view according to Eq. (1); (b) shows retrieval results without our fine-tuning and with fine-tuning on each of the free classes: boats, benches, and airplanes. spective papers. Our zero-shot models are able to achieve remarkable results, surpassing [32] in all respects. This shows the generalization ability of our method to different styles and diverse shape classes. Compared to concurrent to our work [8], we can see that accuracy of our zero-shot method can be further improved. Note that on the _lamp_ category we outperform the concurrent supervised method in top-1 accuracy, while our method is zero-shot! Our _fine-tuning but zero-shot_ improves top-1/5 retrieval accuracy on average by \(4.3\) and \(3\) points, respectively, over zero-shot performance. The visual results for our method are shown in Fig. 2. ## 6 Ablation studies ### Choice of an encoder and pretext task In this section, we compare (1) two types of encoders: ViT and ResNet, and (2) two types of pretext tasks: CLIP training and classification task training on the ImageNet dataset [2]. All the models share the same input image size of \(224\times 224\) except for ViT pre-trained on ImageNet. In the latter case, the image size is \(384\times 384\), and we re-scale and center objects' projections in freehand and synthetic sketches to occupy the central area of \(291\times 291\). Tab. 
2 shows the comparison in retrieval accuracy in a zero-shot setting without fine-tuning for several best-performing layers. It shows that ViT encoder pretrained with CLIP model achieves the best results, and justifies the use of it as our default in most of the experiments. Interestingly, training on the ImageNet for the RestNet encoder gives slightly better performance than training with the CLIP model. ### Object projection area, line width and feature layer In our preliminary experiments, we observed that scaling sketches and 3D model projections to fit the same bounding box area results in improved retrieval accuracy (See Tab. 3). These findings also align with the experiments in [8]. We then are interested in how sensitive different backbones (ViT and ResNet) are to (1) the scale of the object in the image plane; (2) the line width, and (3) how the accuracy of feature similarities according to features from different layers varies with object scale. \begin{table} \begin{tabular}{l|c c|c c|c c} & \multicolumn{2}{c|}{Chairs \(\rightarrow\) NPR} & \multicolumn{2}{c|}{Lamps \(\rightarrow\) NPR} & \multicolumn{2}{c}{Avg.score} \\ & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c}{Anime \(\rightarrow\) NPR} \\ \hline Method & acc@1 & acc@5 & acc@1 & acc@5 & acc@1 & acc@5 \\ \hline ViT CLIP L-6 & **74.79** & **89.39** & **73.27** & **89.49** & 82.48 & 93.82 \\ ViT ImageNet L-5 & 63.35 & 82.92 & 66.67 & 87.99 & 81.77 & 94.14 \\ \hline ResNet CLIP L-3 & 65.17 & 84.08 & 69.97 & 87.99 & 76.97 & 90.36 \\ ResNet ImageNet L-3 & 66.50 & 82.09 & 65.77 & 88.89 & **83.82** & **95.29** \\ \end{tabular} \end{table} Table 2: Comparison of ResNet and ViT encoders trained either with CLIP model or classification task on the ImageNet dataset. See Sec. 6.1 for the details. In all cases, objects in sketches are optimally scaled and centered. Figure 3: Role of the object projection area, line width, and feature layer in the ability to predict similarity between views in different domains. Please see Sec. 6.2 for the details. \begin{table} \begin{tabular}{l|c c|c c} & \multicolumn{2}{c|}{Chair} & \multicolumn{2}{c}{Lamp} \\ & acc@1 & acc@5 & acc@1 & acc@5 \\ \hline ViT-CLIP L-6 w/o alignment & 69.82 & 86.40 & 67.27 & 87.69 \\ \hline ViT-CLIP L-6 & **74.93** & **89.39** & **73.27** & **89.49** \\ \end{tabular} \end{table} Table 3: Comparison of the zero-shot retrieval performance of the ViT encoder trained with CLIP on the datasets without objects centering and rescaling vs. on the datasets where objects in sketches are centered and scaled as described in Sec. 4.3. #### 6.2.1 Object bounding box size & line width We first obtain an initial common bounding box size (\(170\times 170\)) by taking the smallest square bounding box that fully encompasses objects in all sketches in the dataset of free-hand sketches in the form they are provided by Qi et al. [33]. We rescale and center all object projections in all freehand and synthetic SVG sketch versions to this bounding box. We then use two settings of line width: thick (set to 2.2px) and thin (set to 1.0px) that we assign to all strokes (Fig. 3: 1st vs. 2nd rows), and rasterize the sketches. We evaluate varying scaling of the original \(170\times 170\) bounding box size, by rescaling raster images so that the object projections are within varying bounding box sizes from \(85\times 85\) to \(187\times 187\) with 60 uniform steps (Fig. 3: scale in horizontal axes). 
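(The centering-and-rescaling step used throughout these experiments, cf. Secs. 4.2, 4.3 and 6.2.1, can be sketched as follows; the ink threshold and resampling filter are assumptions of this sketch rather than the exact settings used.)

```python
import numpy as np
from PIL import Image

def center_and_scale(sketch, canvas=224, box=129, bg=255):
    """Fit a sketch's ink into a box x box area centered on a white canvas."""
    gray = sketch.convert("L")
    arr = np.array(gray)
    ys, xs = np.nonzero(arr < 250)   # pixels containing ink (assumed threshold)
    if xs.size == 0:
        return Image.new("L", (canvas, canvas), bg)
    crop = gray.crop((int(xs.min()), int(ys.min()),
                      int(xs.max()) + 1, int(ys.max()) + 1))
    scale = box / max(crop.size)
    new_w = max(1, round(crop.size[0] * scale))
    new_h = max(1, round(crop.size[1] * scale))
    crop = crop.resize((new_w, new_h), Image.BILINEAR)
    out = Image.new("L", (canvas, canvas), bg)
    out.paste(crop, ((canvas - new_w) // 2, (canvas - new_h) // 2))
    return out
```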
First, we observe that among the two considered line settings, thicker lines usage results in better retrieval accuracy. For freehand sketches, scaling between 0.7 and 0.8 that represents bounding boxes with sizes \(119\times 119\) and \(136\times 136\), respectively, result on average in top performance across encoder architectures and feature layers. For synthetic sketches, the large the object in a sketch is, the more accurate is the prediction. We believe that is caused by two factors (1) the great degree of spatial alignment between two types of synthetic sketches and (2) the presence of very thin lines in anime style sketches at smaller object scales. #### 6.2.2 Feature layers We study how retrieval accuracy varies when feature similarity is computed on features from different layers for different object projections bounding box sizes. In Fig. 3, we plot accuracy for similarity in Eq. (1) computed with features from layers \(4\), \(6\) and \(11\) of the ViT encoder, and layers \(2\), \(3\) and \(4\) of the ResNet, both trained with CLIP (Fig. 3: first three columns vs. last three columns). Fig. 3 shows that on the two categories of the dataset of freehand sketches features from mid-layers - ViT layer 6 and ResNet layer 3 - result in the best performance for both architectures. On synthetic sketches, slightly better performance is achieved with features from lower layers: layer 4 of ViT and layer 2 of ResNet. It can be also observed that for lower layers (ViT layer 4 and ResNet layer 2) the performance is increasing as object area is decreasing, while for higher layers (ViT layer 11 and ResNet layer 4) the behavior is opposite. The intuition is that the features from higher layers are better suited for more abstract sketches and extracting sketch semantic meaning, while lower layers focus more on spatial details. Indeed, as NPR and anime synthetic sketches are spatially more similar than NPR and freehand sketches, the lower layers result in better performance when anime sketch is used as a query. ### Fine-tuning vs. training from scratch To show the advantage of fine-tuning in the zero-shot scenario, we compare our approach with training from scratch on a single _bench_ class. We use our fine-tuning training objective to train ViT encoder from scratch. Features from the 6th layer are used. Therefore, we keep only the network part up to the 6th layer including. Since we train from scratch, we set a larger starting learning rate of \(10^{-5}\) for the Adam optimizer. Fig. 4 shows that training from scratch is prone to overfitting: It results in a drop in Top-1 retrieval accuracy on the test set of the _bench_ class starting from the 150th epoch. After the 80th epoch, the accuracy improves very slowly for the _lamp_ class and does not improve anymore for the _chair_ class. Note that during training the contrastive loss Eq. (3) decreases over all 500 epochs. Moreover, for all three considered classes, the retrieval accuracy is quite low: it is below \(30\%\), while the Top-1 retrieval accuracy of our approach surpasses \(70\%\). The overfitting result is similar to observations in [8] when they use only one branch to represent both sketch and image modalities. #### 6.3.1 3D shape representation: NPR or anime As we have two synthetic sketch styles (Sec. 4.2.2), we evaluate our choice of representing 3D shapes with NPR views against views in the anime style. Tab. 
4 shows a clear advantage of representing 3D shapes using NPR renderings in \begin{table} \begin{tabular}{l|c|c|c|c||c|c|c||c||c|c||c} & \multicolumn{2}{c|}{Chairs \(\rightarrow\) NPR} & \multicolumn{2}{c||}{Chairs \(\rightarrow\) Anime} & \multicolumn{2}{c||}{Lamps \(\rightarrow\) NPR} & \multicolumn{2}{c||}{Lamps \(\rightarrow\) Anime} & \multicolumn{2}{c||}{Avg.score} & \multicolumn{2}{c}{Avg.score} \\ & \multicolumn{2}{c|}{Chairs \(\rightarrow\) NPR} & \multicolumn{2}{c||}{Chairs \(\rightarrow\) Anime} & \multicolumn{2}{c||}{Lamps \(\rightarrow\) NPR} & \multicolumn{2}{c||}{Lamps \(\rightarrow\) Anime} & \multicolumn{2}{c||}{Anime \(\rightarrow\) NPR} & \multicolumn{2}{c}{NPR \(\rightarrow\) Anime} \\ \hline Method & acc@1 & acc@5 & acc@1 & acc@5 & acc@1 & acc@5 & acc@1 & acc@5 & acc@1 & acc@5 \\ \hline ViT CLIP L-6 & **74.79** & **89.39** & 63.35 & 80.93 & **73.27** & **89.49** & 62.16 & 82.88 & **82.48** & **93.82** & 77.03 & 90.94 \\ \end{tabular} \end{table} Table 4: NPR vs. Anime 3D shape representation. In the notation \(X\to Y\), \(X\) is a query domain and \(Y\) is a 3D shape representation domain. Figure 4: Top-1 retrieval accuracy vs. epoch number, when ViT encoder is trained from scratch as described in Sec. 6.3 on synthetic sketches of the _bench_ class. both considered cases: when query sketches are freehand sketches or synthetic sketches. #### 6.3.2 Feature aggregation strategy We evaluate our similarity computation strategy between a query sketch and 3D shape, given by Eq. (1), against an alternative strategy of computing the cosine similarity between the query sketch embedding and the average of 3D shape views embeddings: \[\mathrm{sim}(Q,G)=\mathrm{d}\left(\mathrm{E}_{\ell}(Q),\frac{1}{V}\sum_{v\in views }\mathrm{E}_{\ell}(G_{v})\right), \tag{4}\] where, as in Eq. (1), \(Q\) and \(G\) denote a query sketch and a gallery shape; \(G_{v}\) is a 3D shape view, \(V\) is the number of views for an object (5 in our case), \(E_{\ell}(\cdot)\) denotes \(\ell\)-th layer features of the encoder \(E\), and \(d\) stands for the cosine similarity. Tab. 5 shows the comparison of the two similarity computations strategies for the ViT encoder trained with the CLIP model in zero-shot or our fine-tuned setting. It can be seen that in all settings our strategy is superior to this alternative strategy, with a gap of almost \(3\) points in Top-1 retrieval accuracy on _chairs_, and of more than \(10\) points in both Top-1 and Top-5 on _lamps_. #### 6.3.3 Fine-tuning strategies We compare our fine-tuning strategy with two alternative strategies of fine-tuning only the weights of layer normalization layers [14] and Visual Prompt Tuning (VPT) [20], which we refer to as _ViT-CLIP LayerNorm_ and _ViT-CLIP VPT_, respectively. We train the two additional strategies under the same conditions and loss as our fine-tuning strategy but set a higher learning rate of \(10^{-5}\). The VPT approach consists in adding learnable tokens to the attention layers of the feature extractor. During training, all the original network weights are fixed and only the new tokens are updated. We use the deep prompt setting and add 5 additional tokens on the first 6 layers of ViT. As we observe that with VPT the performance on the validation set of the freehand sketch dataset starts to decrease after 100 epochs, we stop the training at 100th epoch and use the last checkpoint. Tab. 
6 shows that both, the layer normalization layer tuning (ViT CLIP LayerNorm L-6) and VPT (ViT CLIP VPT L-6), allow for increased performance compared to the zero-shot ViT (ViT-CLIP L-6) without fine-tuning. However, our fine-tuning strategy (ViT-CLIP* L-6) achieves the best performance. ## 7 Limitations and Future Work While the ViT transformer was proven to be a very efficient encoder for an image domain, it might be not the best for sparse sketches. In the case of sketches, non-overlapping patches can contain too little meaningful information and alternative encoder designs should be considered. One such design was recently proposed by Lin et al. [26]. Another direction to explore is to combine vector and raster sketch encoders. To achieve zero-shot performance, the models with tailored encoders then can be trained in a multi-modal setting. Next, our fine-tuning strategy can be expanded to include multi-modal training. For example, if textual descriptions of 3D shapes are available, they can be seamlessly integrated into our fine-tuning process. ## 8 Conclusion In this work, we introduced an effective zero-shot sketch-based 3D shape retrieval method. We demonstrated how to efficiently adapt models pretrained on different pre-text tasks, like CLIP, to the studied problem. We show that it is possible to fine-tune a model leveraging only synthetic sketches of a single shape category and demonstrated generalization to freehand abstract sketches of other shape categories. We also showed that performance is similar independently of the choice of a shape category for fine-tuning. We bring insights into the role of object scale in the image plane and provide recommendations taking into account query abstraction. We compare the performance of two popular image encoders ViT and ResNet and show that the same object scale is beneficial for the two encoders under consideration regardless of the pretext task used. We also carefully study the role of object scale in the image plane and provide recommendations taking into account query abstraction. We believe that our work provides valuable information for methods aimed at assessing the perceptual similarity between sketches in different styles. \begin{table} \begin{tabular}{l|c c|c c} & \multicolumn{2}{c|}{Chair} & \multicolumn{2}{c}{Lamp} \\ & acc@1 & acc@5 & acc@1 & acc@5 \\ \hline ViT-CLIP L-6 & 74.93 & 89.39 & 73.27 & 89.49 \\ \hline ViT-CLIP LayerNorm L-6 & 74.96 & 90.71 & 73.87 & 91.59 \\ ViT-CLIP VPT L-6 & 73.80 & 90.22 & 73.57 & 90.99 \\ ViT-CLIP* L-6 (Ours) & **77.17** & **92.32** & **78.38** & **92.39** \\ \hline \end{tabular} \end{table} Table 6: Comparison with the alternative fine-tuning strategies on the test set of the dataset of freehand sketches. \begin{table} \begin{tabular}{l|c c|c c} & \multicolumn{2}{c|}{Chair} & \multicolumn{2}{c}{Lamp} \\ & acc@1 & acc@5 & acc@1 & acc@5 \\ \hline Avg. - ViT-CLIP L-6 & 70.32 & **89.72** & 63.06 & 78.08 \\ Max. - ViT-CLIP L-6 & **74.93** & 89.39 & **73.27** & **89.49** \\ \hline Avg. - ViT-CLIP* L-6 & 74.72 & 90.71 & 66.97 & 82.28 \\ Max. - ViT-CLIP* L-6 & **77.11** & **92.32** & **78.38** & **92.39** \\ \hline \end{tabular} \end{table} Table 5: Comparison of feature selection strategies on the test set of the freehand sketch dataset.
2302.04854
A discrete-time averaging theorem and its application to zeroth-order Nash equilibrium seeking
In this paper we present an averaging technique applicable to the design of zeroth-order Nash equilibrium seeking algorithms. First, we propose a multi-timescale discrete-time averaging theorem that requires only that the equilibrium is semi-globally practically stabilized by the averaged system, while also allowing the averaged system to depend on "fast" states. Furthermore, sequential application of the theorem is possible, which enables its use for multi-layer algorithm design. Second, we apply the aforementioned averaging theorem to prove semi-global practical convergence of the zeroth-order information variant of the discrete-time projected pseudogradient descent algorithm, in the context of strongly monotone, constrained Nash equilibrium problems. Third, we use the averaging theory to prove the semi-global practical convergence of the asynchronous pseudogradient descent algorithm to solve strongly monotone unconstrained Nash equilibrium problems. Lastly, we apply the proposed asynchronous algorithm to the connectivity control problem in multi-agent systems.
Suad Krilašević, Sergio Grammatico
2023-02-09T18:55:21Z
http://arxiv.org/abs/2302.04854v1
# A discrete-time averaging theorem and its application to zeroth-order Nash equilibrium seeking ###### Abstract In this paper we present an averaging technique applicable to the design of zeroth-order Nash equilibrium seeking algorithms. First, we propose a multi-timescale discrete-time averaging theorem that requires only that the equilibrium is semi-globally practically stabilized by the averaged system, while also allowing the averaged system to depend on "fast" states. Furthermore, sequential application of the theorem is possible, which enables its use for multi-layer algorithm design. Second, we apply the aforementioned averaging theorem to prove semi-global practical convergence of the zeroth-order information variant of the discrete-time projected pseudogradient descent algorithm, in the context of strongly monotone, constrained Nash equilibrium problems. Third, we use the averaging theory to prove the semi-global practical convergence of the asynchronous pseudogradient descent algorithm to solve strongly monotone unconstrained Nash equilibrium problems. Lastly, we apply the proposed asynchronous algorithm to the connectivity control problem in multi-agent systems. A + Footnote †: footnote]This work was partially supported by the ERC under research project COSMOS (802348). E-mail addresses: {s.krilasevic-1, s.grammatico}@tudelft.nl. veraging theorem, equilibrium seeking, asynchronous algorithm ## 1 Introduction Given a complex dynamical system, averaging techniques are used to construct a simpler system, called the _averaged system_, that is easier to analyze than the given one. Ideally, the averaged system should satisfy certain properties so that it is possible to infer stability properties of the original system based on the averaged one. These techniques are used extensively in extremum seeking results, in continuous-time systems [11], [6], [17], discrete-time systems [4], [23], and hybrid systems [20], [19]. _Literature review:_ Discrete-time averaging techniques have received intense attention over the years. In [1], the authors show that the original dynamics render the equilibrium exponentially stable under the assumption of exponential stability of the equilibrium for the averaged dynamics. Furthermore, they prove a similar result with similar assumptions for a mixed time-scale system where the fast dynamics converge to zero. The requirement for exponential stability is relaxed in [27] to just semi-global practical asymptotic stability, for single time-scale systems. Furthermore, the authors include noise into their analysis and provide input-to-state stability results. In [29], the authors provide upper bounds for the time-scale separation parameter in the case of linear switched averaged systems by using a time-delay approach and similar assumptions as in [1]. The previously mentioned single time-scale results assume that the jump mapping is time-varying and that this dependence gets "smoothed out" using the averaging technique. Thus, the main source of "perturbations" in the original system is this time dependence. Likewise, it is possible to assume that jump mapping is a function of some stochastic perturbations and that the goal of the averaging is to "smooth out" the dependence on these perturbations. 
In [13], the authors prove that under certain technical assumptions, the discrete-time stochastic algorithm can be approximated by its continuous counterpart, and that equilibrium of the original dynamics is weakly exponentially stable if the equilibrium of the continuous counterpart is exponentially stable. The usual approach to design of extremum seeking algorithms consists of choosing a well-behaved full-information gradient-based algorithm in the case of optimization, or pseudogradient-based in the case of games, integrated with a (pseudo)gradient zeroth-order information estimation scheme [11], [6], [19], [18]. The produced estimate then replaces the real value of the (pseudo)gradient in the algorithm. A typical estimation technique it that of injecting sinusoidal perturbations into the inputs of a cost function, whose output is then correlated with the same perturbations. Via averaging techniques, it can be proven that this estimation behaves as the (pseudo)gradient, on average. The theory of averaging and singular perturbations for continuous and hybrid systems [21], [28] enables the adaptation of a wide spectrum of algorithms. In [11], the authors adapt the gradient descent algorithm for the zeroth-order information case, together with the additional high-pass and low-pass filters to improve performance. An extremum seeking variant of the pseudogradient descent algorithm used for solving unconstrained games is presented in [6]. Recently, the authors in [18] propose a fixed-time zeroth-order algorithm for solving games, based on a similar full-information fixed-time algorithm. An accelerated first-order algorithm has been adapted for optimization problems in [19]. Unfortunately, the same variety of extremum seeking algorithms in not available in discrete-time due to the limitations of the discrete-time averaging theory. In [4], the authors prove exponential convergence to the optimum of a quadratic function under the zeroth-order variant of the gradient descent algorithm with filtering. The authors in [30] prove ultimate boundness in a similar setup where the plant is assumed to be general dynamic nonlinear and the trajectories of the averaged system ultimately bounded. A similar approach is used in [23] to prove convergence to the Nash equilibrium in a game without constraints. In [13], the authors prove stability of its stochastic variant. On the other hand, zeroth-order methods that use other approaches for gradient estimation appear to be more successful and a recent overview for methods in optimization can be found here [14]. The authors in [16] solve an N-coalition game without local constraints for strongly monotone games by using Gaussian smoothing to estimate the pseudogradient, while the authors in [24] propose an algorithm for solving cooperative multi-agent cost minimization problem with local constraints, also with Gaussian smoothing. Both papers assume synchronous sampling of the agents, albeit with possible information delay. Similar approach to Gaussian smoothing is the residual feedback estimator that uses a previous evaluation of the cost function for the second point of the pseudogradient approximation, thus reducing the numbers of cost functions samples that need to be taken in one iteration. Using this approach, the authors in [8] adapt two extra-gradient algorithms and prove convergence to the Nash equilibrium in pseudo-monotone plus games for diminishing step sizes and query radiuses. 
Authors in [25] and [26] estimate the pseudogradient using the idea of continuous action-set learning automaton and prove convergence for strictly monotone games and merely monotone games, respectively, via diminishing step sizes, and Tikhonov regularization. Asynchronous zeroth-order optimization algorithms have been well studied and an overview can be found here [12]. For example, the authors in [22] use the residual feedback estimator in an asynchronous gradient descent scheme to prove convergence. In the current state of the art, zeroth-order discrete-time Nash equilibrium seeking algorithms based on averaging use pseudogradient descent without projections, while algorithms based on other methods are more general, yet still assume synchronous sampling. _Contribution_: Motivated by the above literature and open research problems, to the best of our knowledge, we consider an averaging technique for mixed time-scale discrete-time systems and merely semi-globally practically convergent averaged systems, with the application to the problem of learning Nash equilibria via zeroth-order discrete-time algorithms, in the cases of locally constrained agents, and asynchronous sampling. Specifically, our main technical contributions are summarized next: * We extend the current results on averaging theory by using a mixed time-scale formulation of the original system and requiring that the averaged systems renders the equilibrium set SGPAS, unlike [27, Thm. 2], where a single time-scale, time-variant system is used, and differently from [1, Thm. 2.2.4], [15, Thm. 8.2.28] where exponential stability is needed and the fast subsystem state is assumed to converge to the origin. Furthermore, we allow certain types of additive perturbation dynamics to interfere with the nominal averaging dynamics, and that the averaged jump mapping is a function of the fast states, thus enabling easier _consecutive_ application of the averaging theorem and the design of more complex algorithms. * Enabled by our extended averaging theory, we propose two novel zeroth-order algorithms for game equilibrium seeking in discrete time. The first algorithm solves the equilibrium in games with local constraints, differently from [23], [16] where agents have no constraints; while the second one solves the problem in the case where the agents are asynchronous, i.e. the agents do not sample at the same time, nor do they coordinate in any way, differently from [23], [16] where the agents sample synchronously. _Notation_: The set of real numbers and the set of non-negative real numbers are denoted by \(\mathbb{R}\) and \(\mathbb{R}_{+}\), respectively. Given a set \(\mathcal{Z}\), \(\mathcal{Z}^{n}\) denotes the Cartesian product of \(n\) sets \(\mathcal{Z}\). For a matrix \(A\in\mathbb{R}^{n\times m}\), \(A^{\top}\) denotes its transpose. For vectors \(x,y\in\mathbb{R}^{n}\) and \(M\in\mathbb{R}^{n\times n}\) a positive semi-definite matrix and \(\mathcal{A}\subset\mathbb{R}^{n}\), \(\langle x\mid y\rangle,\|x\|,\|x\|_{M}\) and \(\|x\|_{\mathcal{A}}\) denote the Euclidean inner product, norm, weighted norm and distance to set respectively. Given \(N\) vectors \(x_{1},\ldots,x_{N}\), possibly of different dimensions, \(\operatorname{col}\left(x_{1},\ldots x_{N}\right)\coloneqq\left[x_{1}^{\top}, \ldots,x_{N}^{\top}\right]^{\top}\). 
Collective vectors are denoted in bold, i.e., \(\boldsymbol{x}\coloneqq\operatorname{col}\left(x_{1},\ldots,x_{N}\right)\) and, for each \(i=1,\ldots,N\), \(\boldsymbol{x}_{-i}\coloneqq\operatorname{col}\left(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{N}\right)\), as they collect vectors from multiple agents. Given \(N\) matrices \(A_{1},A_{2},\ldots,A_{N}\), \(\mathrm{blkdiag}(A_{1},\ldots,A_{N})\) denotes the block diagonal matrix with \(A_{i}\) on its diagonal. Given a vector \(x\), \(\mathrm{diag}(x)\) represents a diagonal matrix whose diagonal elements are equal to the elements of the vector \(x\). For a function \(v:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}\) differentiable in the first argument, we denote the partial gradient vector as \(\nabla_{x}v(x,y)\coloneqq\mathrm{col}\left(\frac{\partial v(x,y)}{\partial x_{1}},\ldots,\frac{\partial v(x,y)}{\partial x_{N}}\right)\in\mathbb{R}^{n}\). We use \(\mathbb{S}^{1}:=\left\{z\in\mathbb{R}^{2}:z_{1}^{2}+z_{2}^{2}=1\right\}\) to denote the unit circle in \(\mathbb{R}^{2}\). The set-valued mapping \(\mathrm{N}_{S}:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) denotes the normal cone operator for the set \(S\subseteq\mathbb{R}^{n}\), i.e., \(\mathrm{N}_{S}(x)=\varnothing\) if \(x\notin S\), \(\left\{v\in\mathbb{R}^{n}|\sup_{z\in S}v^{\top}(z-x)\leq 0\right\}\) otherwise. Id is the identity operator; \(I_{n}\) is the identity matrix of dimension \(n\) and \(\mathbf{0}_{n}\) is the column vector of \(n\) zeros; their index is omitted where the dimensions can be deduced from context. The unit ball of appropriate dimensions depending on context is denoted with \(\mathbb{B}\). A continuous function \(\gamma:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is of class \(\mathcal{K}\) if it is zero at zero and strictly increasing. A continuous function \(\alpha:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is of class \(\mathcal{L}\) if it is non-increasing and converges to zero as its argument grows unbounded. A continuous function \(\beta:\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) is of class \(\mathcal{KL}\) if it is of class \(\mathcal{K}\) in the first argument and of class \(\mathcal{L}\) in the second argument. UGES and SGPAS refer to uniform global exponential stability and semi-global practical asymptotic stability, respectively, as defined in [19, Def. 2.2, 2.3]. ## 2 Discrete-time averaging We consider the following discrete-time system written in hybrid system notation [7, Eq. 1.1, 1.2] \[\left\{\begin{array}{ll}u^{+}&=u+\varepsilon G(u,\mu)\\ \mu^{+}&=M(u,\mu)\end{array}\right.,\,(u,\mu)\,\in\mathcal{U}\times\Omega, \tag{1}\] where \(u\) and \(\mu\) are the state variables, \(\mathcal{U}\subset\mathbb{R}^{m}\), \(\Omega\subset\mathbb{R}^{l}\), \(G:\mathcal{U}\times\Omega\to\mathbb{R}^{m}\) and \(M:\mathcal{U}\times\Omega\to\Omega\) are the state jump functions for the states \(u\) and \(\mu\) respectively, and \(\varepsilon>0\) is a small parameter. Furthermore, the mapping \(G\) is parametrized by a small parameter \(\gamma>0\), i.e. \(G=G_{\gamma}\), but for notational convenience, this dependence is not written explicitly in the equations. Next, we consider the case in which the trajectories of the state \(\mu\) oscillate indefinitely and in turn create oscillations in the trajectory of the state \(u\). An equivalent system that produces trajectories \(\tilde{u}\) without oscillations should be easier to analyze. We refer to such systems as averaged systems [1, Eq. 2.2.12], [15, Eq. 
8.33], and we focus on those of the following form: \[\left\{\begin{array}{ll}\tilde{u}^{+}=\tilde{u}+\varepsilon G_{\mathrm{avg}}(\tilde{u},\tilde{\mu})\\ \tilde{\mu}^{+}=M(\tilde{u},\tilde{\mu})\end{array}\right.,\,(\tilde{u},\tilde{\mu})\,\in\mathcal{U}\times\Omega, \tag{2}\] where \(G_{\mathrm{avg}}:\mathcal{U}\times\Omega\to\mathbb{R}^{m}\) is also parametrized by \(\gamma>0\). Unlike [1, Thm. 2.2.4], [15, Thm. 8.2.28], we take into consideration the case where the function \(G_{\mathrm{avg}}\) depends on the fast state \(\tilde{\mu}\), not only on \(\tilde{u}\). To postulate the required relation between the function \(G\) and the mapping \(G_{\mathrm{avg}}\), we introduce an auxiliary system that describes the behaviour of system (1) when the state \(u\) is kept constant, i.e. \(\varepsilon=0\), the so-called boundary layer system [28, Eq. 6]: \[\left\{\begin{array}{ll}u_{\mathrm{bl}}^{+}=u_{\mathrm{bl}}\\ \mu_{\mathrm{bl}}^{+}=M(u_{\mathrm{bl}},\mu_{\mathrm{bl}})\end{array}\right.,(u_{\mathrm{bl}},\mu_{\mathrm{bl}})\in\mathcal{U}\times\Omega. \tag{3}\] Thus, a function \(G_{\mathrm{avg}}\) is called an average of the mapping \(G\) with the boundary layer dynamics in (3) if the following condition holds true: **Assumption 1**.: _For any compact set \(K\subset\mathcal{U}\) and any solution \((u_{\mathrm{bl}},\mu_{\mathrm{bl}})\) of (3) where \(u_{\mathrm{bl}}\) is contained in the compact set \(K\), it holds that:_ \[\left\|\frac{1}{N}\sum_{i=0}^{N-1}\left[G\left(u_{\mathrm{bl}}(i),\,\mu_{\mathrm{bl}}(i)\right)-G_{\mathrm{avg}}\left(u_{\mathrm{bl}}(i),\mu_{\mathrm{bl}}(i)\right)\right]\right\|\] \[\leq\sigma\left(N\right), \tag{4}\] _for some function \(\sigma:\mathbb{R}_{+}\to\mathbb{R}_{+}\) of class \(\mathcal{L}\). \(\Box\)_ In plain words, in Assumption 1 we postulate that by using more samples over time, our approximation of the mapping \(G\) becomes better. Furthermore, let us assume local Lipschitz continuity of the mappings as in [15, Assum. 8.2.13], [27, Eq. 1, Def. 1] and compactness: **Assumption 2**.: _The functions \(G\), \(M\) and \(G_{\mathrm{avg}}\) in (1), (2) are continuous in their arguments and locally bounded; the mapping \(G_{\mathrm{avg}}\) is locally Lipschitz continuous in its first argument. The set \(\Omega\) is compact. \(\Box\)_ The averaging method can be used in unison with other algorithms via time-scale separation. In such cases, often the averaged system does not exponentially or asymptotically stabilize the equilibrium as in [1, Thm. 2.22], [15, Thm. 8.2.28], due to the introduction of perturbations from other dynamics. Here, we assume the weaker property of semi-global practical stability of the set \(\mathcal{A}\times\Omega\) under \(v\)-perturbed dynamics of the averaged system, where the perturbations are given by the dynamical system \[v^{+}=U(\tilde{u},\tilde{\mu},v) \tag{5}\] with \(v\in\mathbb{R}^{m}\), \(U:\mathcal{U}\times\Omega\times\mathbb{R}^{m}\to\mathbb{R}^{m}\) being a function parametrized by some \(\varepsilon>0\). **Assumption 3**.: _Consider the system in (2) and the perturbation dynamics in (5). For any set \(K\subset\mathcal{U}\) and all trajectories \((\tilde{u},\tilde{\mu})\) contained in \(K\times\Omega\), there exists a function of class \(\mathcal{K}\), \(\overline{v}\), such that \(\max_{k\in\mathrm{dom}(v)}\|v(k)\|\leq\overline{v}(\varepsilon)\). 
\(\Box\)_ **Assumption 4**: _The set \(\mathcal{A}\times\Omega\) is SGPAS as \(\gamma\to 0\) for the dynamics in (2), perturbations in (5), and the corresponding Lyapunov function \(V_{\mathrm{a}}\) satisfies:_ \[\underline{\alpha}_{\mathrm{a}}\left(\left\|z\right\|_{\mathcal{A }}\right)\leq V_{\mathrm{a}}(z,\mu)\leq\overline{\alpha}_{\mathrm{a}}\left( \left\|z\right\|_{\mathcal{A}}\right) \tag{6a}\] \[V_{\mathrm{a}}(z^{+},\mu^{+})-V_{\mathrm{a}}(z,\mu)\leq-\tilde{ \alpha}_{\varepsilon}\left(\varepsilon\right)\alpha_{\mathrm{a}}\left(\left\|z \right\|_{\mathcal{A}}\right)\] \[\text{for }\left\|z\right\|_{\mathcal{A}}\geq\alpha_{\gamma}( \gamma), \tag{6b}\] _where \(z=\tilde{u}+v\), \(\underline{\alpha}_{\mathrm{a}},\overline{\alpha}_{\mathrm{a}},\tilde{\alpha }_{\varepsilon},\alpha_{\mathrm{a}},\alpha_{\gamma}\) are functions of class \(\mathcal{K}\), and the function \(\frac{\varepsilon}{\tilde{\alpha}_{\varepsilon}(\varepsilon)}\) is bounded for \(\varepsilon\in(0,\overline{\varepsilon})\). \(\Box\)_ Under these assumptions, we claim that the original system is semi-global practically asymptotically stable, as formalized next: **Theorem 1**: _Let Assumptions 1, 2, 3 and 4 hold. The set \(\mathcal{A}\times\Omega\) is SGPAS as \((\varepsilon,\gamma)\to 0\) for the discrete dynamics in (1) with perturbations in (5). The corresponding Lyapunov function \(V_{\mathrm{a}}\) satisfies:_ \[\underline{\alpha}_{\mathrm{a}}\left(\left\|\xi\right\|_{\mathcal{ A}}\right)\leq V_{\mathrm{a}}(\xi,\mu)\leq\overline{\alpha}_{\mathrm{a}}\left( \left\|\xi\right\|_{\mathcal{A}}\right)\] \[V_{\mathrm{a}}(\xi^{+},\mu^{+})-V_{\mathrm{a}}(\xi,\mu)\leq- \hat{\alpha}_{\varepsilon}\left(\varepsilon\right)\alpha_{\mathrm{a}}\left( \left\|\xi\right\|_{\mathcal{A}}\right)\] \[\text{for }\left\|\xi\right\|_{\mathcal{A}}\geq\max\{\alpha_{ \gamma}(\gamma),\alpha_{\varepsilon}(\varepsilon)\},\] _where \(\xi\coloneqq u+v+\eta\), \(\eta\) is the perturbation state with dynamics_ \[\eta^{+}=(1-\varepsilon)\eta+\varepsilon[G_{\mathrm{avg}}(u,\mu) -G(u,\mu)],\] \[\max_{k\in\mathbb{N}}\|\eta(k)\|\leq\overline{\eta}(\varepsilon), \tag{7}\] _the \(v\) dynamics are given by (5), and \(\hat{\alpha}_{\varepsilon}\), \(\alpha_{\varepsilon}\), \(\overline{\eta}\) are functions of class \(\mathcal{K}\). \(\Box\)_ [Proof.] See Appendix A. \(\blacksquare\) ## 3 Applications of the averaging theorem In this section, we apply our averaging theorem, Theorem 1, to derive two novel convergence results for NEPs. First, we propose a zeroth-order algorithm for solving strongly monotone NEPs with local constraints _in discrete time_. Secondly, we propose an algorithm for solving strongly monotone unconstrained NEPs where the agents sample their states _asynchronously_. ### Zeroth-order discrete time forward-backward algorithm Let us consider a multi-agent system with \(M\) agents indexed by \(i\in\mathcal{I}\coloneqq\{1,2,\ldots M\}\), each with cost function \[J_{i}(x_{i},\mathbf{x}_{-i}), \tag{8}\] where \(x_{i}\in\Omega_{i}\subset\mathbb{R}^{m_{i}}\) is the decision variable, \(J_{i}:\mathbb{R}^{m_{i}}\times\mathbb{R}^{m_{-i}}\to\mathbb{R}\), \(m\coloneqq\sum_{j\in\mathcal{I}}m_{j}\), \(m_{-i}\coloneqq\sum_{j\neq i}m_{j}\), \(\Omega\coloneqq\Omega_{i}\times\cdots\times\Omega_{N}\). Formally, let the goal of each agent be to reach a steady state that minimizes their own cost function, i.e., \[\forall i\in\mathcal{I}:\min_{x_{i}\in\Omega_{i}}J_{i}(x_{i},\mathbf{x}_{-i}). 
\tag{9}\] A popular solution concept for this problem is the Nash equilibrium: **Definition 1** (Nash equilibrium): _A set of decision variables \(\mathbf{x}^{*}\coloneqq\mathrm{col}\left(x_{i}^{*}\right)_{i\in\mathcal{I}}\) is a Nash equilibrium if, for all \(i\in\mathcal{I}\),_ \[x_{i}^{*}\in\operatorname*{argmin}_{v_{i}\in\Omega_{i}}J_{i}\left(v_{i},\mathbf{x}_{-i}^{*}\right).\qquad\Box \tag{10}\] A fundamental mapping in NEPs is the pseudogradient mapping \(F:\mathbb{R}^{m}\to\mathbb{R}^{m}\), which is defined as: \[F(\mathbf{x}):=\mathrm{col}\left(\left(\nabla_{x_{i}}J_{i}\left(x_{i},\mathbf{x}_{-i}\right)\right)_{i\in\mathcal{I}}\right). \tag{11}\] Let us also define \(C_{\mathrm{F}}:=\overline{\mathrm{co}}\{F(\Omega)\}\), the convex hull of the image of the pseudogradient. To ensure existence and uniqueness of the Nash equilibrium, we assume certain regularity properties [2, Thm. 4.3]: **Assumption 5**: _For each \(i\in\mathcal{I}\), the function \(J_{i}\) in (8) is continuously differentiable in \(x_{i}\) and continuous in \(\mathbf{x}_{-i}\); the function \(J_{i}\left(\cdot,\mathbf{x}_{-i}\right)\) is strictly convex for every fixed \(\mathbf{x}_{-i}\). \(\Box\)_ Furthermore, let us assume that no agent can compute their part of the pseudogradient \(F\) directly, but they can only measure their instantaneous cost \(h_{i}=J_{i}(x_{i},\mathbf{x}_{-i})\), a common assumption in extremum-seeking problems [10], [17], [20], [23]. The full-information problem where \(F\) is known can be solved in many ways, depending on the technical assumptions on the problem data. Here we choose to study a simple forward-backward algorithm [3, Equ. 26.14]: \[\mathbf{x}^{+}=(1-\lambda)\mathbf{x}+\lambda\mathrm{proj}_{C}\left(\mathbf{x}-\gamma F(\mathbf{x})\right), \tag{12}\] for which the Lyapunov function \(V(\mathbf{x})=\left\|\mathbf{x}-\mathbf{x}^{*}\right\|^{2}\) satisfies the inequality \[V(\mathbf{x}^{+})-V(\mathbf{x})\leq-\lambda(1-c)(2-\lambda c)V(\mathbf{x}), \tag{13}\] where \(c:=\frac{\sqrt{1+\gamma^{2}L^{2}}}{1+\gamma\mu_{F}}\) and \(\mathbf{x}^{*}\) is the Nash equilibrium of the game in (8). We note that this Lyapunov function satisfies Assumption 4. A naive approach to adapting the algorithm in (12) for a zeroth-order implementation would be to use a gradient estimation scheme as in [4], [23] and plug the estimate directly into (12). However, because of the projection, Assumption 1 would not be satisfied. Thus, an additional time-scale separation is hereby proposed: \[\left\{\begin{array}{ll}\boldsymbol{x}^{+}&=(1-\alpha\beta)\boldsymbol{x}+ \alpha\beta\mathrm{proj}_{C}\left(\boldsymbol{x}-\gamma\boldsymbol{\xi} \right)\\ \boldsymbol{\xi}^{+}&=(1-\alpha)\boldsymbol{\xi}+\alpha 2A^{-1}J(\boldsymbol{x}+A \mathbb{D}\boldsymbol{\mu})\mathbb{D}\boldsymbol{\mu}\\ \boldsymbol{\mu}^{+}&=\mathcal{R}\boldsymbol{\mu}\end{array}\right. 
\tag{14}\] where \(\boldsymbol{\xi}\in\mathbb{R}^{m}\) are filter states, \(\boldsymbol{\mu}\in\mathbb{S}^{m}\) are the oscillator states, \(\alpha,\beta>0\) are small time-scale separation parameters, \(\mathcal{R}\coloneqq\mathrm{blkdiag}\left((\mathcal{R}_{i})_{i\in\mathcal{I}}\right)\), \(\mathcal{R}_{i}\coloneqq\mathrm{blkdiag}\left(\left[\begin{smallmatrix}\cos(\omega_{i}^{j})&-\sin(\omega_{i}^{j})\\ \sin(\omega_{i}^{j})&\cos(\omega_{i}^{j})\end{smallmatrix}\right]_{j\leq m_{i}}\right)\), \(\omega_{i}^{j}>0\) for all \(i\) and \(j\), \(\mathbb{D}\in\mathbb{R}^{m\times 2m}\) is a matrix that selects every odd row from the vector of size \(2m\), \(a_{i}>0\) are small perturbation amplitude parameters, \(A\coloneqq\mathrm{diag}\left((a_{i})_{i\leq m}\right)\) and \(J(\boldsymbol{x})=\mathrm{blkdiag}\left((J_{i}(x_{i},\boldsymbol{x}_{-i})I_{m_{i}})_{i\in\mathcal{I}}\right)\). We claim that the dynamics in (14) render the set \(\{\boldsymbol{x}^{*}\}\times C_{\mathrm{F}}\times\mathbb{S}^{m}\) practically stable. To the best of our knowledge, it is not possible to prove convergence of the algorithm in (14) using the current averaging theory for discrete-time systems, since [15, Thm. 8.2.28], [1, Thm. 2.22] require exponential stability of the origin via the averaged system, and [27, Thm. 2] does not incorporate boundary-layer dynamics. We claim that under the strong monotonicity assumption on the pseudogradient, and a proper choice of the perturbation frequencies, the algorithm in (14) converges to a Nash equilibrium. **Assumption 6**.: _The pseudogradient mapping \(F\) is \(\mu_{f}\)-strongly monotone and \(L\)-Lipschitz continuous, i.e. \(\langle\boldsymbol{x}-\boldsymbol{y}\mid F(\boldsymbol{x})-F(\boldsymbol{y})\rangle\geq\mu_{f}\|\boldsymbol{x}-\boldsymbol{y}\|^{2}\), \(\|F(\boldsymbol{x})-F(\boldsymbol{y})\|\leq L\left\|\boldsymbol{x}-\boldsymbol{y}\right\|\), for all \((\boldsymbol{x},\boldsymbol{y})\in\mathbb{R}^{2m}\). \(\Box\)_ **Assumption 7**.: _The sets \((\Omega_{i})_{i\in\mathcal{I}}\) are convex, closed and bounded. \(\Box\)_ **Assumption 8**.: _The rotational frequencies of each agent \(i\), \(\boldsymbol{\omega}_{i}=\mathrm{col}\left((\omega_{i}^{j})_{j\leq m_{i}}\right)\), are chosen so that \(\omega_{i}^{j}\pm\omega_{u}^{v}\neq 2\pi z^{\prime},z^{\prime}\in\mathbb{Z}\), for every \(u\in\mathcal{I}\), for every \(j\in\{1,\ldots,m_{i}\}\), for every \(v\in\{1,\ldots,m_{u}\}\), except for the case when \(i=u\) and \(j=v\). \(\Box\)_ **Theorem 2**.: _Let Assumptions 5, 6, 7 and 8 hold. The set \(\{\boldsymbol{x}^{*}\}\times C_{\mathrm{F}}\times\mathbb{S}^{m}\) is SGPAS as \((\alpha,\overline{a},\beta)\to 0\) for the dynamics in (14). \(\Box\)_ [Proof.] See Appendix B. \(\blacksquare\) ### Asynchronous zeroth-order discrete time forward algorithm #### 3.2.1 First-order information feedback We now consider the same NEP as in Section 3.1 with \(\Omega_{i}\coloneqq\mathbb{R}^{m_{i}}\), but where the agents are _asynchronous_, i.e. each agent samples their states independently of the others, without a global clock for synchronization. For ease of exposition, we assume that the initial conditions are chosen so that simultaneous sampling never occurs. 
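Before turning to the asynchronous setting, the following Python sketch (our own illustration, not the authors' code) shows how the three nested time scales of (14) interact in practice for a toy two-player quadratic game with box constraints: a slow projected update of the decisions, a faster filter tracking the pseudogradient estimate, and oscillator states generating the probing signals. All numerical values are illustrative assumptions.

```python
import numpy as np

# Sketch of the zeroth-order scheme in (14) for two players with scalar
# decisions; oscillator angles are used instead of the (cos, sin) pairs mu.
J = [lambda x: (x[0] - 1.0) ** 2 + 0.3 * x[0] * x[1],   # cost of agent 1
     lambda x: (x[1] + 2.0) ** 2 + 0.3 * x[0] * x[1]]   # cost of agent 2
lo, hi = np.array([-1.5, -1.5]), np.array([1.5, 1.5])   # local constraint sets
alpha, beta, gamma = 0.05, 0.05, 0.4                    # time-scale parameters
a = np.array([0.1, 0.1])                                # perturbation amplitudes
omega = np.array([1.0, 1.6])                            # distinct frequencies (Assumption 8)

x = np.zeros(2)        # decision variables
xi = np.zeros(2)       # filter states, estimate of F(x)
theta = np.zeros(2)    # oscillator angles

for _ in range(200_000):
    probe = a * np.cos(theta)                             # A * D * mu
    cost = np.array([J[0](x + probe), J[1](x + probe)])   # measured costs only
    x = (1 - alpha * beta) * x + alpha * beta * np.clip(x - gamma * xi, lo, hi)
    xi = (1 - alpha) * xi + alpha * (2.0 / a) * cost * np.cos(theta)
    theta = theta + omega                                 # mu+ = R mu

print(x)   # close to the Nash equilibrium of this toy game
```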
In the full-information case, such algorithm can be represented in the following form: \[\mathrm{col}\left(\dot{x}_{i},\dot{\tau}_{i},\dot{\kappa_{i}}, \dot{t}\right)=\mathrm{col}\left(\boldsymbol{0},\tfrac{1}{T_{i}},0,1\right)\] \[\mathrm{if}\,\,\mathrm{col}\left(x_{i},\tau_{i},\kappa_{i},t \right)\in\mathbb{R}^{m_{i}}\times[0,1]\times\mathbb{N}\times\mathbb{R}\] (15a) \[\left\{\begin{array}{ll}x_{i}^{+}&=x_{i}-\,\alpha\nabla_{x_{i}}J_{i}(x_{i}, \boldsymbol{x}_{-i})\\ \tau_{i}^{+}&=0\\ \kappa_{i}^{+}&=\kappa_{i}+1\\ t^{+}&=t\end{array}\right.\] \[\mathrm{if}\,\,\mathrm{col}\left(x_{i},\tau_{i},\kappa_{i},t \right)\in\mathbb{R}^{m_{i}}\times\{1\}\times\mathbb{N}\times\mathbb{R},\] (15b) which in collective form reads as \[\mathrm{col}\left(\dot{\boldsymbol{x}},\dot{\boldsymbol{\tau}}, \dot{\boldsymbol{\kappa}},\dot{k},\dot{t}\right)=\mathrm{col}\left(\boldsymbol{0 },\boldsymbol{T}^{-1},\boldsymbol{0},0,1\right)\] \[\mathrm{if}\,\,\mathrm{col}\left(\boldsymbol{x},\boldsymbol{\tau}, \boldsymbol{\kappa},k,t\right)\in\mathbb{R}^{m}\times\mathcal{T}\times\mathbb{N}^ {N+1}\times\mathbb{R}\] (16a) \[\left\{\begin{array}{ll}\boldsymbol{x}^{+}&=\boldsymbol{x}- \alpha S_{x}(\boldsymbol{\tau})F(\boldsymbol{x})\\ \boldsymbol{\tau}^{+}&=(I-S_{\tau}(\boldsymbol{\tau}))\boldsymbol{\tau}\\ \boldsymbol{\kappa}^{+}&=\boldsymbol{\kappa}+S_{\tau}(\boldsymbol{\tau})\\ k^{+}&=k+1\\ t^{+}&=t\end{array}\right.\] \[\mathrm{if}\,\,\mathrm{col}\left(\boldsymbol{x},\boldsymbol{\tau}, \boldsymbol{\kappa},k,t\right)\in\mathbb{R}^{m}\times\mathcal{T}_{\mathrm{R}} \times\mathbb{N}^{N+1}\times\mathbb{R},\] (16b) where \(\tau_{i}\) are timer states, \(t\) is the "experienced" global time, \(\boldsymbol{T}^{-1}\coloneqq\mathrm{col}\left((\tfrac{1}{T_{i}})_{i\in\mathcal{I}}\right)\) is the vector of inverse sampling times, \(\mathcal{T}\subset[0,1]^{N}\) is a closed invariant set in which all of the timers evolve and it excludes the initial conditions and their neighborhood for which we have concurrent sampling, \(\mathcal{T}_{\mathrm{R}}\coloneqq\left(\cup_{i\in\mathcal{I}}[0,1]^{i-1}\times \{1\}\times[0,1]^{N-i}\right)\cap\mathcal{T}\) is the set of timer intervals where at least one agent has triggered its sampling, \(\kappa_{i}\) are private event counters, \(k\) is the global event counter, \(S_{x}:\mathcal{T}\to\mathbb{R}^{m\times m}\) and \(S_{\tau}:\mathcal{T}\to\mathbb{R}^{N\times N}\) are continuous functions that output diagonal matrices with ones on the positions that correspond to states and timers of agents with \(\tau_{i}=1\), respectively, while other elements are equal to zero, when evaluating at \(\boldsymbol{\tau}\in\mathcal{T}_{\mathrm{R}}\). We note that the functions \(S_{x},S_{\tau}\) are introduced only to write down the algorithm in the collective form, while the agents themselves do not require them for their dynamics and in fact just follow (15). Furthermore, the counter states \(\kappa_{i},k\) and global time \(t\) are not necessary for the algorithm convergence, yet they help with understanding the setup of the algorithm. We choose to represent the algorithm in the hybrid dynamical system framework to replicate the behaviour of sampled systems with different sampling periods, and to see its effects on the functions \(S_{x},S_{\tau}\). Later, we represent and model the system as a fully discrete-time system in order to study convergence. 
First, we show that the solution \(\boldsymbol{\tau}(t,j)\) is periodic, which implies that \(S_{x}(\boldsymbol{\tau}(t,j))\) and \(S_{\tau}(\boldsymbol{\tau}(t,j))\) are also periodic. We make the following assumption: **Assumption 9**: _There exist natural numbers \(\left(p_{i}\right)_{i\in\mathcal{I}}\), such that the proportion \(T_{1}:T_{2}:\cdots:T_{N}=p_{1}:p_{2}:\cdots:p_{N}\) holds, where \(\left(T_{i}\right)_{i\in\mathcal{I}}\) are the sampling times. \(\Box\)_ **Lemma 1**: _Let Assumption 9 hold. Denote \(r_{i}=\frac{p}{p_{i}}\) and \(r=\sum_{i\in\mathcal{I}}r_{i}\), where \(p\) is the least common multiple of \((p_{i})_{i\in\mathcal{I}}\). For any trajectory \(S_{x}(\boldsymbol{\tau}(t,j))\) and \(S_{\tau}(\boldsymbol{\tau}(t,j))\), where \(\boldsymbol{\tau}(t,j)\) is a solution of the system in (16), there exists \(T>0\) such that \(S_{x}(\boldsymbol{\tau}(t,j))=S_{x}(\boldsymbol{\tau}(t+T,j+r))\) and \(S_{\tau}(\boldsymbol{\tau}(t,j))=S_{\tau}(\boldsymbol{\tau}(t+T,j+r))\) for all \((t,j)\in\mathrm{dom}(\boldsymbol{\tau})\) such that a jump occurred at time \(t\). \(\Box\)_ [Proof.] See Appendix F. Because the values of \(S_{x}\) and \(S_{\tau}\) are used only during jumps, we define \[\hat{S}_{x}(k;\boldsymbol{\tau}(0,0))=S_{x}\big(\max_{t\in\mathrm{dom}(\boldsymbol{\tau}(\cdot,k))}t,\,k\big) \tag{17}\] \[\hat{S}_{\tau}(k;\boldsymbol{\tau}(0,0))=S_{\tau}\big(\max_{t\in\mathrm{dom}(\boldsymbol{\tau}(\cdot,k))}t,\,k\big), \tag{18}\] where the functions \(\hat{S}_{x}:\mathbb{N}\to\mathbb{R}^{m\times m}\) and \(\hat{S}_{\tau}:\mathbb{N}\to\mathbb{R}^{N\times N}\) are parametrized by the vector of initial conditions of the timers, since different initial conditions can change the order in which the agents sample their actions. Due to Lemma 1, for every initial condition \(\boldsymbol{\tau}(0,0)=\boldsymbol{\tau}^{0}\), it follows that \(\hat{S}_{x}(k;\boldsymbol{\tau}^{0})=\hat{S}_{x}(k+r;\boldsymbol{\tau}^{0})\) for all \(k\in\mathbb{N}\). Now we consider the following discrete-time systems \[x_{i}(k+1)=x_{i}(k)-\,\alpha\hat{S}_{x}^{i}(k;\boldsymbol{\tau}_{0})\nabla_{x_{i}}J_{i}(x_{i}(k),\boldsymbol{x}_{-i}(k)) \tag{19}\] which in collective form read as \[\boldsymbol{x}(k+1)=\boldsymbol{x}(k)-\alpha\hat{S}_{x}\left(k;\boldsymbol{\tau}_{0}\right)F(\boldsymbol{x}(k)), \tag{20}\] where the function \(\hat{S}_{x}^{i}:\mathbb{N}\to\mathbb{R}^{m_{i}\times m_{i}}\) returns the rows of \(\hat{S}_{x}(k;\boldsymbol{\tau}_{0})\) corresponding to agent \(i\). We can show that for every solution of (20) there exists a corresponding solution of (16) and vice versa. We claim that under the strong monotonicity assumption, an additional regularity assumption due to the unboundedness of the decision set, and a proper choice of the parameter \(\alpha\), the dynamics in (20) converge to the solution of the game, with the same minimal convergence rate, regardless of the initial conditions of the timers. **Assumption 10**: _For each \(i\in\mathcal{I}\), the function \(J_{i}(\cdot,\boldsymbol{x}_{-i})\) in (8) is radially unbounded for every fixed \(\boldsymbol{x}_{-i}\). \(\Box\)_ **Theorem 3**: _Let Assumptions 5, 6, 9 and 10 hold. Then, for all vectors of initial conditions \(\boldsymbol{\tau}_{0}\), there exists \(\alpha^{*}\), such that for any \(\alpha\in(0,\alpha^{*})\), the NE solution \(\boldsymbol{x}^{*}\) is UGES for the dynamics in (20). Furthermore, the corresponding Lyapunov function satisfies Assumption 4. \(\Box\)_ [Proof.] See Appendix E. 
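A minimal numerical sketch (our own, with an illustrative two-player quadratic game) of the asynchronous update in (19)-(20): the timers flow at rates \(1/T_i\), and at each event only the agents whose timers have expired take a pseudogradient step. Sampling periods, offsets and step size are arbitrary example values.

```python
import numpy as np

# Sketch (not the authors' code) of the asynchronous full-information
# update (19)-(20): at event k only agents whose timers reach 1 move.
grad = [lambda x: 2 * (x[0] - 1.0) + 0.3 * x[1],    # d J_1 / d x_1
        lambda x: 2 * (x[1] + 2.0) + 0.3 * x[0]]    # d J_2 / d x_2
T = np.array([0.01, 0.015])        # sampling periods with rational ratio (Assumption 9)
tau = np.array([0.0, 0.002])       # timer states, distinct offsets (no simultaneous sampling)
alpha = 0.05                       # step size
x = np.zeros(2)

for _ in range(5000):
    dt = np.min((1.0 - tau) * T)           # flow until the first timer reaches 1
    tau = tau + dt / T
    active = tau >= 1.0 - 1e-12            # diagonal entries of the selection matrix S_x
    for i in np.flatnonzero(active):
        x[i] = x[i] - alpha * grad[i](x)   # only the sampled agent updates
    tau[active] = 0.0                      # reset expired timers

print(x)   # approaches the unconstrained Nash equilibrium of this toy game
```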
Moreover, for the hybrid system representation in (16), since the trajectories of \((\boldsymbol{\tau},\boldsymbol{\kappa},k,t)\) are invariant to the set \(\mathcal{T}\times\mathbb{N}^{N+1}\times\mathbb{R}\), and by the structure of the flow and jump sets in (16) that assures complete solutions with unbounded time and jump domains, it follows that the dynamics in (16) render the set \(\{\boldsymbol{x}^{*}\}\times\mathcal{T}\times\mathbb{N}^{N+1}\times\mathbb{R}\) UGES, as formalized next. **Corollary 1**: _Let the Assumptions 5, 6, 9 and 10 hold. Then, the set \(\{\boldsymbol{x}^{*}\}\times\mathcal{T}\times\mathbb{N}^{N+1}\times\mathbb{R}\) is UGES for the dynamics in (16). Furthermore, the corresponding Lyapunov function satisfies Assumption 4. \(\Box\)_ #### 3.2.2 Zeroth-order information feedback Now, consider that each agent only has access to the measurements of the cost function. They can modify the algorithm in (15) by implementing a pseudogradient estimation scheme, similar to the one in Equation (14): \[\mathrm{col}\left(\dot{x}_{i},\dot{\xi}_{i},\dot{\mu}_{i},\dot{ \tau}_{i},\dot{\kappa}_{i},\dot{t}\right)=\mathrm{col}\left(\boldsymbol{0}, \boldsymbol{0},\boldsymbol{0},\frac{1}{T_{i}},0,1\right) \tag{21a}\] \[\mathrm{if}\ \mathrm{col}\left(x_{i},\xi_{i},\mu_{i},\tau_{i},\kappa_{i}, t\right)\in\mathbb{R}^{2m_{i}}\times\mathbb{S}^{m}\times[0,1]\times\mathbb{N} \times\mathbb{R},\] \[\left\{\begin{aligned} & x_{i}^{+}&=x_{i}-\alpha\beta\xi_{i} \\ &\xi_{i}^{+}&=(1-\alpha)\xi+\alpha\frac{2}{a_{i}}J_{i}(x+A \mathbb{D}\mu)\mathbb{D}_{i}\mu_{i}\\ &\mu_{i}^{+}&=\mathcal{R}_{i}\mu_{i}\\ &\tau_{i}^{+}&=0\\ &\kappa_{i}^{+}&=\kappa_{i}+1\\ & t^{+}&=t\end{aligned}\right.\] (21b) \[\mathrm{if}\ \mathrm{col}\left(x_{i},\xi_{i},\mu_{i},\tau_{i}, \kappa_{i},t\right)\in\mathbb{R}^{2m_{i}}\times\mathbb{S}^{m}\times\{1\} \times\mathbb{N}\times\mathbb{R},\] which in the collective form reads as: \[\mathrm{col}(\dot{x},\dot{\boldsymbol{\xi}},\dot{\boldsymbol{\mu}},\dot{ \boldsymbol{\tau}},\dot{\boldsymbol{\kappa}},\dot{k},\dot{t})=\mathrm{col} \left(\boldsymbol{0},\boldsymbol{0},\boldsymbol{0},\boldsymbol{T}^{-1}, \boldsymbol{0},0,1\right) \tag{22a}\] \[\mathrm{if}\ \mathrm{col}\left(x,\boldsymbol{\xi},\boldsymbol{\mu}, \boldsymbol{\tau},\boldsymbol{\kappa},k,t\right)\in\mathbb{R}^{2m}\times \mathbb{S}^{m}\times\mathcal{T}\times\mathbb{N}^{N+1}\times\mathbb{R},\] \[\begin{cases}\mathbf{x}^{+}=\mathbf{x}-\alpha\beta S_{x}(\mathbf{\tau})\mathbf{\xi}\\ \mathbf{\xi}^{+}=\mathbf{\xi}+\alpha S_{x}(\mathbf{\tau})\left(2A^{-1}J(\mathbf{x}+A\mathbb{D}\bm {\mu})\mathbb{D}\mathbf{\mu}-\mathbf{\xi}\right)\\ \mathbf{\mu}^{+}=(I-S_{\mu}(\mathbf{\tau}))\mathbf{\mu}+S_{\mu}(\mathbf{\tau}))\mathcal{R}\mathbf{ \mu}\\ \mathbf{\tau}^{+}=(I-S_{\tau}(\mathbf{\tau}))\mathbf{\tau}\\ \mathbf{\kappa}^{+}=\mathbf{\kappa}+S_{\tau}(\mathbf{\tau})\\ k^{+}=k+1\\ t^{+}=t\end{cases} \tag{22b}\] \[\text{if}\;\operatorname{col}\left(\mathbf{x},\mathbf{\xi},\mathbf{\mu},\mathbf{\tau},\mathbf{ \kappa},k,t\right)\in\mathbb{R}^{2m}\times\mathbb{S}^{m}\times\mathcal{T}_{ \text{R}}\times\mathbb{N}^{N+1}\times\mathbb{R},\] where \(S_{\mu}:\mathcal{T}\to\mathbb{R}^{2m\times 2m}\) is a continuous functions that outputs a diagonal matrix with ones on the positions that correspond to oscillator states of agents with \(\tau_{i}=1\), while other elements are equal to zero, when evaluating at \(\mathbf{\tau}\in\mathcal{T}_{\text{R}}\), and other notation is defined as in (14) and (16). 
Under the assumption of properly chosen perturbation frequencies, we claim semi-global practical stability of the set of solutions. **Assumption 11**.: _The rotational frequencies of every agent \(i\), \(\mathbf{\omega}_{i}=\operatorname{col}\left((\omega_{i}^{j})_{j\leq m_{i}}\right)\), are chosen so that \(\omega_{i}^{j}r_{i}\pm\omega_{u}^{v}r_{u}\neq 2\pi z^{\prime},z^{\prime}\in\mathbb{Z}\), with \(r_{i}=\frac{p}{p_{i}},r_{u}=\frac{p}{p_{u}}\), for every \(u\in\mathcal{I}\), for every \(j\in\{1,\ldots,m_{i}\}\), for every \(v\in\{1,\ldots,m_{u}\}\), except for the case when \(i=u\) and \(j=v\). \(\Box\)_ **Theorem 4**.: _Let Assumptions 5, 6, 9 and 11 hold. The set \(\{x^{*}\}\times\mathbb{R}^{m}\times\mathbb{S}^{m}\times\mathcal{T}\times\mathbb{N}^{N+1}\times\mathbb{R}\) is SGPAS as \((\alpha,\overline{a},\beta)\to 0\) for the dynamics in (22). \(\Box\)_ [Proof.] The result is proven by following the same steps as the proof of Theorem 2 and by using the system in (16), with an additional filtering state \(\mathbf{\xi}\) like in (15), as the second averaged system. \(\blacksquare\) ## 4 Illustrative example The connectivity control problem has been considered in [23] as a Nash equilibrium problem. In many practical scenarios, multi-agent systems, besides their primary objective, are designed to uphold certain connectivity as their secondary objective. In what follows, we consider a similar problem in which each agent is tasked with finding a source of an unknown signal while maintaining certain connectivity. Unlike [23], we only consider the case without vehicle dynamics. Consider a system consisting of multiple agents indexed by \(i\in\mathcal{I}\coloneqq\{1,\ldots N\}\). Each agent is tasked with locating the source of a unique unknown signal. The strength of all signals abides by the inverse-square law, i.e. proportional to \(1/r^{2}\). Therefore, the inverse of the signal strength can be used as a cost function. Additionally, the agents must not drift apart from each other too much, as they should provide quick assistance to each other in case of critical failure. This is enforced by incorporating the signal strength of the fellow agents in the cost functions. Thus, we design the cost \[\forall i\in\mathcal{I}:J_{i}(\mathbf{x})=\|x_{i}-x_{i}^{s}\|^{2}+c\sum_{j\in\mathcal{I}_{-i}}\|x_{i}-x_{j}\|^{2}, \tag{23}\] where \(\mathcal{I}_{-i}\coloneqq\mathcal{I}\setminus\{i\}\), \(c>0\) and \(x_{i}^{s}\) represents the position of the source assigned to agent \(i\). The goal of each agent is to minimize its own cost function, and the solution to this problem is a Nash equilibrium. Furthermore, the agents are mutually independent, so their sampling times are not synchronized. To solve this problem, we use the asynchronous pseudogradient descent algorithm in (22). For our numerical simulations, we choose the parameters: \(x_{1}^{s}=(-4,-8)\), \(x_{2}^{s}=(-12,-3)\), \(x_{3}^{s}=(1,7)\), \(x_{4}^{s}=(16,8)\), \(c=0.04\), \(\gamma=0.1\), \(\alpha=0.1\), \(\beta=0.003\), \(a_{i}=0\) for all \(i\), \(T=(0.01,0.015,0.02,0.01)\), \(\mathbf{\tau}(0,0)=(0,0.002,0.004,0.006)\); the perturbation frequencies \(\omega_{i}^{j}\) were chosen as different natural numbers with added random numbers of maximal amplitude \(0.5\). The numerical results are illustrated in Figures 1 and 2. We note that the trajectories converge to a small neighborhood of the Nash equilibrium. This can be partially attributed to the robustness properties of the pseudo-gradient descent with strongly monotone operators. 
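As a sanity check on the example above, the Nash equilibrium of the game in (23) can be computed in closed form, since each partial gradient is affine in the positions; the following sketch (our own illustration, not the paper's code) solves the resulting linear system using the source positions and \(c\) given above.

```python
import numpy as np

# Setting the partial gradients of (23) to zero gives, for each agent i,
#   (1 + c*(N-1)) * x_i - c * sum_{j != i} x_j = x_i^s,
# a linear system in the stacked 2-D positions (one row per agent).
sources = np.array([[-4.0, -8.0], [-12.0, -3.0], [1.0, 7.0], [16.0, 8.0]])
c, N = 0.04, sources.shape[0]

A = (1.0 + c * (N - 1)) * np.eye(N) - c * (np.ones((N, N)) - np.eye(N))
x_star = np.linalg.solve(A, sources)   # Nash equilibrium positions

print(x_star)   # the NE locations that the trajectories in Figure 1 approach
```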
Figure 1: State trajectories in the \(x_{1}-x_{2}\) plane. Circle symbols represent locations of the sources, while the \(\times\) symbols represent locations of the NE. Perturbation signals are added to the states.

## 5 Conclusion Averaging theory can be adapted for use in discrete-time systems with multiple time scales. Furthermore, strongly monotone Nash equilibrium problems with constrained action sets, or with asynchronous action sampling, can be solved via zeroth-order discrete-time algorithms that leverage the novel averaging theory results.
2310.14856
Inheritance of the exciton geometric structure from Bloch electrons in two-dimensional layered semiconductors
We theoretically studied the exciton geometric structure in layered semiconducting transition metal dichalcogenides. Based on a three-orbital tight-binding model for Bloch electrons which incorporates their geometric structures, an effective exciton Hamiltonian is constructed and solved perturbatively to reveal the relation between the exciton and its electron/hole constituent. We show that the electron-hole Coulomb interaction gives rise to a non-trivial inheritance of the exciton geometric structure from Bloch electrons, which manifests as a valley-dependent center-of-mass anomalous Hall velocity of the exciton when two external fields are applied on the electron and hole constituents, respectively. The obtained center-of-mass anomalous velocity is found to exhibit a non-trivial dependence on the fields, as well as the wave function and valley index of the exciton. These findings can serve as a general guide for the field-control of the valley-dependent exciton transport, enabling the design of novel quantum optoelectronic and valleytronic devices.
Jianju Tang, Songlei Wang, Hongyi Yu
2023-10-23T12:25:38Z
http://arxiv.org/abs/2310.14856v2
# Inheritance of the exciton geometric structure from Bloch electrons ###### Abstract We theoretically studied the exciton geometric structure in layered semiconducting transition metal dichalcogenides. Using the well-developed three-orbital tight-binding models for the electron and hole constituents, an effective exciton Hamiltonian can be constructed and solved perturbatively. We show that the electron-hole Coulomb interaction gives rise to a non-trivial inheritance of the exciton geometric structure from Bloch electrons, which manifests as a center-of-mass anomalous Hall velocity of the exciton when two external fields are applied on the electron and hole constituents, respectively. The form of the center-of-mass anomalous velocity is obtained, which is found to exhibit a non-trivial dependence on the fields as well as the exciton wave function. \({}^{1}\) Guangdong Provincial Key Laboratory of Quantum Metrology and Sensing & School of Physics and Astronomy, Sun Yat-Sen University (Zhuhai Campus), Zhuhai 519082, China \({}^{2}\) State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-Sen University (Guangzhou Campus), Guangzhou 510275, China * E-mail: [email protected] ## 1 Introduction Atomically thin layers of semiconducting transition metal dichalcogenides (TMDs) have gained substantial interest owing to their potential as versatile platforms for exploring condensed matter phases and their promising applications in optoelectronic devices [1-5]. The direct band gap of monolayer TMDs is located at the hexagonal Brillouin zone corners, labelled as \(\pm\)**K** valleys. Due to the large effective masses at band edges and reduced dielectric screening in two-dimensional (2D) systems, these materials exhibit an exceptionally strong Coulomb interaction between charged carriers. As a result, the bound state of an electron-hole pair through the Coulomb interaction, called the exciton, plays a crucial role in photonic and optoelectronic properties of TMDs. The excitons can be viewed as a two-body system comprising a center-of-mass (CoM) motion and an electron-hole (e-h) relative motion. The e-h relative motion manifests as a discrete series of Rydberg states (\(1s\), \(2s\), \(2p_{z}\),...) [6, 7], akin to the 2D hydrogen atom [8]. Excitons in different valleys of TMDs are endowed with valley optical selection rules [9-12], implying that the valley degree-of-freedom can be manipulated optically. In bilayer TMDs, excitons can be classified into intralayer and interlayer excitons [13], depending on whether the electron and hole reside in the same or different constituent monolayers. With the versatile tunability of monolayer and bilayer TMDs, the properties and dynamics of excitons can be tailored by various control knobs, such as in-plane electric fields, interlayer twisting, gate fields and so on [1-5]. Bloch electrons in \(\pm\)**K** valleys can have non-trivial geometric structures quantified by Berry curvatures in momentum space, which play crucial roles in many exotic quantum phenomena including the valley orbital magnetic moments [14-16] and spin/valley Hall effects [17-23]. As a composite particle, the exciton can inherit geometric structures from its Bloch
2301.02975
Traditional Readability Formulas Compared for English
Traditional English readability formulas, or equations, were largely developed in the 20th century. Nonetheless, many researchers still rely on them for various NLP applications. This phenomenon is presumably due to the convenience and straightforwardness of readability formulas. In this work, we contribute to the NLP community by 1. introducing New English Readability Formula (NERF), 2. recalibrating the coefficients of old readability formulas (Flesch-Kincaid Grade Level, Fog Index, SMOG Index, Coleman-Liau Index, and Automated Readability Index), 3. evaluating the readability formulas, for use in text simplification studies and medical texts, and 4. developing a Python-based program for the wide application to various NLP projects.
Bruce W. Lee, Jason Hyung-Jong Lee
2023-01-08T04:33:43Z
http://arxiv.org/abs/2301.02975v3
# Traditional Readability Formulas Compared for English ###### Abstract Traditional English readability formulas, or equations, were largely developed in the 20th century. Nonetheless, many researchers still rely on them for various NLP applications. This phenomenon is presumably due to the convenience and straightforwardness of readability formulas. In this work, we contribute to the NLP community by 1. introducing New English Readability Formula (NERF), 2. recalibrating the coefficients of "old" readability formulas (Flesch-Kincaid Grade Level, Fog Index, SMOG Index, Coleman-Liau Index, and Automated Readability Index), 3. evaluating the readability formulas, for use in text simplification studies and medical texts, and 4. developing a Python-based program for the wide application to various NLP projects. ## 1 Introduction Readability Assessment (RA) quantitatively measures the ease of understanding or comprehension of any written text (Feng et al., 2010; Klare, 2000). Understanding text readability, or difficulty, is essential for research on any originated, studied, or shared ideas (Collins-Thompson, 2014). Such inherent property leads to RA's close applications to various areas of healthcare (Wu et al., 2013), education (Dennis, 2018), communication (Zhou et al., 2017), and Natural Language Processing (NLP), such as text simplification (Aluisio et al., 2010). Machine learning (ML) or transformer-based methods have been reasonably successful in RA. The RoBERTa-RF-T1 model by Lee et al. (2021) achieves a \(99\%\) classification accuracy on OneStopEnglish dataset (Vajjala and Lucic, 2018) and a BERT-based ReadNet model from Meng et al. (2020) achieves about \(92\%\) accuracy on WeeBit dataset (Vajjala and Meurers, 2012). However, "traditional readability formulas" still seem to be actively used throughout the research published in popular NLP venues like ACL or EMNLP (Uchendu et al., 2020; Shardlow and Nawaz, 2019; Scarton and Specia, 2018; Schwartz et al., 2017; Xu et al., 2016). The tendency to opt for traditional readability formulas is likely due their convenience and straightforwardness. In this work, we hope to assist the NLP community by recalibrating five traditional readability formulas - originally developed upon 20th-century military or technical documents. The formulas are adjusted for the modern, standard U.S. education curriculum. We utilize the appendix B (Text Examples and Sample Performance Tasks) dataset, provided by the U.S. Common Core State Standards1. Then, we evaluate the performances and applications of these formulas. Lastly, we develop a Python-based program for convenient application of the recalibrated versions. Footnote 1: corestandards.org But traditional readability formulas lack wide linguistic coverage (Feng et al., 2010). Therefore, we create a _new formula_ that is mainly motivated by lexico-semantic and syntactic linguistic branches, as identified by Collins-Thompson (2014). From each, we search for the representative features. The resulting formula is named the New English Readability Formula, or simply **NERF**, and it aims to give the most generally and commonly accepted approach to calculating English readability. To sum up, we make the contributions below. The related public resources are in appendix A. **1.** We recalibrate five traditional readability formulas to show higher prediction accuracy on modern texts in the U.S. curriculum. **2.** We develop NERF, a generalized and easy-to-use readability assessment formula. 
**3.** We evaluate and cross-compare six readability formulas on several datasets. These datasets are carefully selected to collectively represent diverse audiences, education curricula, and reading levels. **4.** We develop <Anonymous>, a fast open-source readability assessment software based on Python. ## 2 Related Work The earliest attempt to "calculate" text readability was by Lively and Pressey (1923), in response to their practical problem of selecting science textbooks for high school students DuBay (2004). In the following years, many well-known readability formulas were developed, including Flesch Kincaid Grade Level (Kincaid et al., 1975), Gunning Fog Count (or Index) Gunning et al. (1952), SMOG Index (McLaughlin, 1969), Coleman-Liau Index (Coleman and Liau, 1975), and Automated Readability Index (Smith and Senter, 1967). These formulas are mostly linear models with two or three variables, largely based on superficial properties concerning words or sentences Feng et al. (2010). Hence, they can easily be combined with other systems without the burden of a large trained model Xu et al. (2016). Such a property has also proved helpful in research fields outside computational linguistics, with some applications directly related to public medical knowledge - measuring the difficulty of patient materials Gaeta et al. (2021); van Ballegooie and Hoang (2021); Bange et al. (2019); Haller et al. (2019); Hansberry et al. (2018); Kiwanuka et al. (2017). ## 3 Datasets ### Common Core - Appendix B (CCB) We use the CCB corpus to calibrate formulas. The article excerpts included in CCB are divided into the categories of story, poetry, informational text, and drama. To simplify our approach, we limit our research to story-type texts. This left us with only 69 items to train with. But those are directly from the U.S. Common Core Standards. Hence, we assume with confidence that the item classification is generally agreeable in the U.S. CCB is the only dataset that we use in the calibration of our formulas. All the datasets below are used mainly for feature selection purposes. ### WeeBit (WBT) WBT, the largest native dataset available in RA, contains articles targeted at readers of different age groups from the Weekly Reader magazine and the BBC-Bitesize website. In table 1, we translate those age groups into U.S. schools' K-* format. We downsample to \(625\frac{\text{item}}{\text{class}}\) as per common practice. ### Cambridge English (CAM) CAM Xia et al. (2016) classifies 300 items in the Common European Framework of Reference (CEFR) Verhelst et al. (2001). The passages are from past reading tasks in the five main-suite Cambridge English Exams (KET, PET, FCE, CAE, CPE), targeted at learners at A2-C2 levels of CEFR. ### Corpus of the Korean ELT (English Lang. Train.) Curriculum (CKC) CKC Lee and Lee (2020a) is less-explored. It was developed from the reading passages appearing in the Korean English education curriculum. These passages' classifications are from official sources of the Korean Ministry. CKC represents a non-native country's official ESL education curriculum. ### OneStopEnglish (OSE) OSE is a recently developed dataset in RA. It aims at ESL (English as a Second Language) learners and consists of three paraphrased versions of an article from The Guardian Newspaper. Along with the original OSE dataset, we created a paired version (OSE-Pair). This variation has 189 items and each item has advanced-intermediate-elementary pairs. 
In addition, OSE-Sent is a sentence-paired version of OSE. The dataset consists of three parts: adv-ele (1674 pairs), adv-int (2166), int-ele (2154). ### Newsela (NSL) NSL Xu et al. (2015) is a dataset particularly developed for text simplification studies. The dataset consists of 1,130 articles, with each item re-written 4 times for children at different grade levels. We create a paired version (NSL-Pair) (2125 pairs). ### Asset ASSET Alva-Manchego et al. (2020) is a paired sentence dataset. The dataset consists of 360 sentences, with each item simplified 10 times. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Properties** & **CCB** & **WBT** & **CAM** & **CKC** & **OSE** & **NSL** \\ \hline audience & Ntve & Ntve & ESL & ESL & ESL & Ntve \\ grade & K1-12 & K2-10 & A2-C2 & S7-12 & N/A & N/A \\ curriculum? & Yes & No & Yes & Yes & No & No \\ balanced? & No & Yes & Yes & No & Yes & No \\ \#class & 6 & 5 & 5 & 6 & 3 & 5 \\ \#item/class & 11.5 & 625 & 60.0 & 554 & 189 & 2125 \\ \#word/item & 362 & 213 & 508 & 117 & 669 & 752 \\ \#sent/item & 25.8 & 17.0 & 28.4 & 54.0 & 35.6 & 50.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Modified data. These stats are based on respective original versions. S: S.Korea Grade, Ntve: Native. ## 4 Recalibration ### Choosing Traditional Read. Formulas We start by recalibrating five readability formulas. We considered Zhou et al. (2017) and the number of Google Scholar citations to sort out the most popular traditional readability formulas. Further, to make a fair performance comparison with our adjusted variations, we choose formulas that were originally intended to output U.S. school grades but are based on 20th-century texts and test subjects. Flesch-Kincaid Grade Level (FKGL) was primarily developed for U.S. Navy personnel. The readability level of 18 passages from Navy technical training manuals was calculated. The criterion was that \(50\%\) of subjects with reading abilities at the specific level had to score \(\geq 35\%\) on a cloze test for a text item to be classified as the specific reading level. Responses from 531 Navy personnel were used. \[\text{FKGL}=a\cdot\frac{\text{\#word}}{\text{\#sent}}+b\cdot\frac{\text{\#syllable}}{\text{\#word}}+c\] where sent is sentence, and # refers to "count of." The genius of the Gunning Fog Index (FOGI) is the idea that word difficulty highly correlates with the number of syllables. Such a conclusion was deduced from the inspection of Dale's list of easy words (Zhou et al., 2017; Dale and Chall, 1948). However, the shortcoming of FOGI is the over-generalization that "all" words with more than two syllables are difficult. Indeed, "banana" is quite an easy word. \[\text{FOGI}=a\cdot\left(\frac{\text{\#word}}{\text{\#sent}}+b\cdot\frac{\text{\#difficult word}}{\text{\#word}}\right)+c\] Simple Measure of Gobbledygook (SMOG) Index, known for its simplicity, resembles FOGI in that both use the number of syllables to classify a word's difficulty. But SMOG sets its criterion a little higher, at more than three syllables per word. Additionally, SMOG incorporates a square-root approach instead of a linear regression model. \[\text{SMOG}=a\cdot\sqrt{b\cdot\frac{\text{\#polysyllable word}}{\text{\#sent}}}+c\] Coleman-Liau Index (COLE) is a lesser-used variant among the five. But we could find multiple studies outside computational linguistics that still partly depend on COLE (Kue et al., 2021; Szmuda et al., 2020; Joseph et al., 2020; Powell et al., 2020). 
The novelty of COLE is that it calculates readability without counting syllables, which was viewed as a time-consuming approach. \[\text{COLE}=a\cdot 100\cdot\frac{\text{\#letter}}{\text{\#word}}+b\cdot 100\cdot\frac{\text{\#sent}}{\text{\#word}}+c\] Automated Readability Index (AUTO) was developed for the U.S. Air Force to handle more technical documents than textbooks. Like COLE, AUTO relies on the number of letters per word, instead of the more commonly-used syllables per word. Another quirk is that non-integer scores are all rounded up. \[\text{AUTO}=a\cdot\frac{\text{\#letter}}{\text{\#word}}+b\cdot\frac{\text{\#word}}{\text{\#sent}}+c\] ### Recalibration & Performance #### 4.2.1 Traditional Formulas, Other Text Types We only recalibrate formulas on the CCB dataset. As stated in section 2.1, we limit ourselves to CCB's story-type items. In a preliminary investigation, we obtained low r2 scores (\(<0.3\), before and after recalibration) between the traditional readability formulas and poetry, informational text, and drama. #### 4.2.2 Details on Recalibration We started with a large feature extraction software package, LingFeat (Lee et al., 2021), and expanded it to include more necessary features. From CCB texts, we extracted the surface-level features used in traditional readability formulas (i.e. \(\frac{\text{\#letter}}{\text{\#word}}\), \(\frac{\text{\#word}}{\text{\#sent}}\), \(\frac{\text{\#syllable}}{\text{\#word}}\)) and put them in a dataframe. CCB has 6 readability classes, but they are in the form of ranges: K1, K2-3, K4-5, K6-8, K9-10, and K11-CCR (college and above). During calibration and evaluation, we estimated readability classes to K1, K2.5, K4.5, K7, K9.5, or K12 to model the general trend of CCB. Using the class estimations as true labels and the created dataframe as features, we ran an optimization function to calculate the best coefficients (a, b, c in Section 4.1). We used non-linear least squares as the fitting function (Virtanen et al., 2020). Additional details are available in appendix B. #### 4.2.3 Coefficients & Performances Table 2-a shows the original coefficients and the adjusted variations, rounded to match significant figures. The adjusted traditional readability formulas can be obtained by simply plugging these values into the formulas in section 4.1. ## 5 The New English Readability Formula ### Criteria Considering the value of traditional readability formulas as essentially the generalized definition of readability for non-experts (section 1), what really matters is the included features. The coefficients (or weights) can be recalibrated anytime to fit a specific use. Therefore, it is important to first identify handcrafted linguistic features that universally affect readability. Additionally, to ensure breadth and usability, we set the following guides: **1.** We avoid surface-level features that lack linguistic value (Feng et al., 2010). They include \(\frac{\text{\#letter}}{\text{\#word}}\). **2.** We include at most one linguistic feature from each linguistic subgroup. We use the classifications from Lee et al. (2021); Collins-Thompson (2014). **3.** We stick to a simplistic linear equation format. ### Feature Extraction & Ranking We utilize LingFeat for feature extraction. It is public software that supports 255 handcrafted linguistic features in the branches of advanced semantic, discourse, syntactic, lexico-semantic, and shallow traditional. They are further classified into 14 subgroups. 
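As a concrete illustration of the recalibration procedure in Section 4.2.2, the sketch below (our own, not the released <Anonymous> software) writes two of the formulas from Section 4.1 as parametric functions of surface counts and re-fits the coefficients (a, b, c) with non-linear least squares; the per-document statistics and grade labels are placeholders standing in for the CCB features and the estimated classes (K1 to K12).

```python
import numpy as np
from scipy.optimize import curve_fit

def fkgl(counts, a, b, c):
    words_per_sent, syll_per_word = counts
    return a * words_per_sent + b * syll_per_word + c

def smog(counts, a, b, c):          # shown only for the square-root form; unused below
    polysyll_per_sent = counts
    return a * np.sqrt(b * polysyll_per_sent) + c

# placeholder per-document statistics and estimated K-* labels (stand-ins for CCB)
words_per_sent = np.array([8.0, 12.0, 15.0, 18.0, 22.0, 25.0])
syll_per_word  = np.array([1.2, 1.3, 1.4, 1.5, 1.6, 1.7])
grades         = np.array([1.0, 2.5, 4.5, 7.0, 9.5, 12.0])

coefs, _ = curve_fit(fkgl, (words_per_sent, syll_per_word), grades,
                     p0=[0.39, 11.8, -15.59])   # original FKGL values as a starting point
print(coefs)   # recalibrated (a, b, c) for this toy data
```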
We study the linguistically-meaningful branches: discourse (entity density, entity grid), syntax (phrasal, tree structure, part-of-speech), and lexico-semantics (variation ratio, type token ratio, psycholinguistics, word familiarity). After extracting the features from CCB, WBT, CAM, CKC, and OSE, we first create feature performance rankings by Pearson's correlation. We used Scikit-Learn Pedregosa et al. (2011). We take extra measures (Approach A & B) to model the features' general performances across datasets. Each approach runs under differing premises: **Premise A**: "Human experts' dataset creation and labeling are partially faulty. The weak performance of a feature in a dataset does not necessarily indicate its weak performance in other data settings". **Premise B**: "All datasets are perfect. The weak performance of a feature in a dataset indicates the feature's weakness for universal use." After 78 hours of running, we decided not to extract features from NSL. Computing details are in appendix E. Among the features included in LingFeat, there are traditional readability formulas, like FKGL and COLE. These formulas performed generally well, but a single killer feature, like type token ratio (TTR), often outperformed them. Traditional readability formulas and shallow traditional features are excluded from the rankings. ### Approach A - Comparative Ranking Under premise A, each dataset poses a different linguistic environment for feature performance. Further, premise A takes human error into consideration and agrees that data labeling is most likely inconsistent in some way. The literal correlation value itself is not too important under premise A. Rather, we look for features that perform better than the others, under the same test settings. Thus, approach A's rewarding system is rank-dependent. In a dataset, features that rank 1-10 are rewarded 10 points, rank 11-20 get 9 points,... and rank 91-100 get 1 point. Since there are five feature correlation rankings (one per dataset), the maximum score is 50. The results are in Table 3, in the order of score. ### Approach B - Absolute Correlation Under premise B, the weak correlation of a feature in a dataset is solely due to the feature's inability to generalize. This is because all datasets are supposedly perfect. Hence, we only measure the feature's absolute correlation across datasets. Approach B's rewarding system is correlation-dependent. In a dataset, features that show a correlation value between 0.9-1.0 are rewarded 10 points, values between 0.8-0.89 get 9 points,... and values between 0.0-0.09 get 1 point. Like approach A, the maximum score is 50. The result is in Table 4. ### Analysis & Manual Feature Selection First and most noticeably, the top features under premise A & B are similar. In fact, the two results are almost replications of each other except for minor changes in order. 
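For concreteness, the two reward schemes of Approach A and Approach B can be sketched as follows (our own illustration; the feature names and correlation values are placeholders, not results from Tables 3 and 4).

```python
import numpy as np

# `corr` holds absolute Pearson correlations of features on one dataset.
corr = {"feat_1": 0.72, "feat_2": 0.55, "feat_3": 0.31}

# Approach A: rank-dependent points (ranks 1-10 -> 10 pts, 11-20 -> 9 pts, ...)
ranked = sorted(corr, key=corr.get, reverse=True)
points_a = {f: max(10 - (rank // 10), 1) for rank, f in enumerate(ranked)}

# Approach B: correlation-dependent points (0.9-1.0 -> 10 pts, 0.8-0.89 -> 9 pts, ...)
points_b = {f: min(10, int(np.floor(r * 10)) + 1) for f, r in corr.items()}

# summing the per-dataset points over the five datasets gives the final scores
print(points_a, points_b)
```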
We initially set two \begin{table} \begin{tabular}{l c c c c c} \hline \hline **a) Coef.s** & **FKGL** & **FOG1** & **SMOG** & **COLE** & **AUTO** \\ \hline original-a & 0.390 & 0.4000 & 1.043 & 0.05880 & 4.710 \\ **adjusted-a** & **0.1014** & **0.1229** & **2.694** & **0.03993** & **6.000** \\ original-b & 11.80 & 100.0 & 30.00 & -0.2960 & 0.5000 \\ **adjusted-b** & **20.89** & **415.7** & **8.815** & **-0.4976** & **0.1035** \\ original-c & -15.59 & 0.0000 & 3.129 & -15.80 & -21.43 \\ **adjusted-c** & **-21.94** & **1.866** & **3.367** & **-5.747** & **-19.61** \\ \hline **b) Perf.** & **FKGL** & **FOG1** & **SMOG** & **COLE** & **AUTO** \\ \hline r2 score & -0.03835 & -0.3905 & 0.1613 & 0.4341 & -0.5283 \\ **r2 score** & **0.4423** & **0.4072** & **0.3192** & **0.4830** & **0.4263** \\ Pearson r & 0.5698 & 0.5757 & 0.5649 & 0.6800 & 0.5684 \\ **Pearson r** & **0.6651** & **0.6381** & **0.5649** & **0.6949** & **0.6529** \\ \hline \hline \end{tabular} \end{table} Table 2: a) Original & adjusted coefficients. b) Perform-ance on CCB. Measured on U.S. Standard Curriculum’s K-* Output. Bold refers to our new, adjusted versions. premises to introduce differing views (and hence the results) to feature rankings. Then, we would choose the features that perform well in both. But there seems to be an inseparable correlation between ranking-based (premise A) and correlation-based (premise B) approaches. CorrNoV_S (Corrected Noun Variation) was the only new top feature introduced under premise B. Second, discourse-based features (mostly entity-related) performed poorly for use in our final NERF. As an exception, ra_NNToT_C (noun-noun transitions : total) scored 28 under premise A and 26 under premise B. On the other hand, a majority of lexico-semantic and syntactic features performed well throughout. This strongly suggests that a possible discovery of universally-effective features for readability is in lexico-semantics or syntax. Third, the difficulty of a document heavily depended on the difficulty of individual words. In detail, as_AAKu_C, as_AAKu_C, to_AAKu_C, to_AAKuW_C showed consistently high correlations across the five datasets. As shown in Section 2, these five datasets have different authors, target audience, average length, labeling techniques, and the number of classes. Each dataset had at least one of these features among the top 5 performances. The four features come from age-of-acquisition research by Kuperman et al. (2012), which now prove to be an important resource for RA. Such direct classification of word difficulties always outperformed frequency-based approaches like Sub-texUS Brysbaert and New (2009). Back to feature selection, we follow the steps below. **1.** From top to bottom, go through ranking (table 3 & 4) to sort out the features that performed the best in each linguistic subgroup. **2.** Conduct step 1 to both datasets and compare the results to each other. 
Through this process, we keep only the features that appear in both rankings.

\begin{table} \begin{tabular}{l l l l l|r r|r r|r r|r r|r r} \hline \hline & & & & & \multicolumn{2}{c|}{**CCB**} & \multicolumn{2}{c|}{**WBT**} & \multicolumn{2}{c|}{**CAM**} & \multicolumn{2}{c|}{**CKC**} & \multicolumn{2}{c}{**OSE**} \\ **Score** & **Branch** & **Subgroup** & **LingFeat Code** & **Brief Explanation** & r & rk & r & rk & r & rk & r & rk & r & rk \\ \hline 35 & LxSem & Psycholinguistic & as\_AAKuL\_C & Kuperman Lemma AoA per Sent & 0.540 & 25 & 0.505 & 1 & 0.722 & 42 & 0.711 & 4 & 0.601 & 25 \\ 35 & LxSem & Psycholinguistic & as\_AAKuW\_C & Kuperman Word AoA per Sent & 0.537 & 28 & 0.503 & 2 & 0.722 & 43 & 0.711 & 6 & 0.602 & 24 \\ 32 & LxSem & Psycholinguistic & at\_AAKuL\_C & Kuperman Lemma AoA per Word & 0.723 & 2 & 0.323 & 35 & 0.785 & 24 & 0.650 & 22 & 0.453 & 67 \\ 33 & LxSem & Psycholinguistic & at\_AAKuW\_C & Kuperman Word AoA per Word & 0.703 & 5 & 0.308 & 36 & 0.784 & 20 & 0.643 & 21 & 0.455 & 66 \\ \hline \hline \end{tabular} \end{table} Table 3: Feature performance ranking under Approach A (excerpt of the top-scoring psycholinguistic features).

The steps above produce the same results for both approach A and B. The final selected features are as_AAKuL_C (psycholinguistic), as_TreeH_C (tree structure), as_ContW_C (part-of-speech), as_NoPhr_C (phrasal), as_SbL1C_C (word familiarity), and CorrTTR_S (type token ratio). CorrNoV_S (variation) only appeared under approach B, and we did not include it. ### More on NERF & Calibration The final NERF (section 4.5) comes in three parts. The first is lexico-semantics, which measures lexical difficulty. It adds the total sum of each word's age-of-acquisition (Kuperman's) and the sum of word familiarity scores (Lg10CD in SubtlexUS). The sum is divided by the number of sentences. The second is syntactic complexity, which deals with how each sentence is structured. We look at the number of content words, the number of noun phrases, and the total sum of sentence tree heights. Here, content words (CW) are words that possess semantic content and contribute to the meaning of the specific sentence. Following LingFeat, we consider a word to be a content word if it has "NOUN", "VERB", "NUM", "ADJ", or "ADV" as a POS tag. Also, a sentence's tree height (TH) is calculated from a constituency-parsed tree, which we obtain with the CRF parser of Zhang et al. (2020). The related algorithms from NLTK Bird et al. (2009) were used in calculating tree height. The same CRF parser was also used to count the number of noun phrase (NP) occurrences. The third is lexical richness, given through the type token ratio (TTR). This is the only part of NERF that is averaged on the word count. TTR measures how many unique vocabularies appear with respect to the total word count.
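A minimal sketch of how these three ingredients can be computed for one document is given below. The input format, the lookup tables, and the exact normalizations are assumptions on our part (in particular, the corrected type token ratio is taken here as the number of types divided by the square root of twice the token count); the calibrated formula itself ships with the released software.

```python
import math
from nltk import Tree  # used only to measure constituency-tree height and count NP nodes

CONTENT_TAGS = {"NOUN", "VERB", "NUM", "ADJ", "ADV"}   # universal POS tags, as listed above

def nerf_ingredients(sentences, aoa, familiarity):
    """Raw NERF ingredients for one document.

    sentences:   list of dicts, each with "tokens" (list of (word, universal_pos) pairs)
                 and "parse" (bracketed constituency parse of the sentence as a string).
    aoa:         word -> age-of-acquisition score (assumed preloaded from Kuperman's norms).
    familiarity: word -> Lg10CD score (assumed preloaded from SubtlexUS).
    Words missing from the lookups simply contribute nothing, as described above.
    """
    n_sent = len(sentences)
    words = [w.lower() for s in sentences for w, _ in s["tokens"]]

    # 1) Lexico-semantics: summed AoA and familiarity, divided by the number of sentences.
    aoa_sum = sum(aoa.get(w, 0.0) for w in words)
    fam_sum = sum(familiarity.get(w, 0.0) for w in words)

    # 2) Syntactic complexity: content words, noun phrases, and summed tree height.
    content_words = sum(1 for s in sentences for _, pos in s["tokens"] if pos in CONTENT_TAGS)
    trees = [Tree.fromstring(s["parse"]) for s in sentences]
    noun_phrases = sum(len(list(t.subtrees(lambda st: st.label() == "NP"))) for t in trees)
    tree_height = sum(t.height() for t in trees)

    # 3) Lexical richness: corrected type-token ratio, the only part averaged on word count.
    corrected_ttr = len(set(words)) / math.sqrt(2 * len(words))

    return {"aoa_per_sent": aoa_sum / n_sent,
            "familiarity_per_sent": fam_sum / n_sent,
            "content_words_per_sent": content_words / n_sent,
            "noun_phrases_per_sent": noun_phrases / n_sent,
            "tree_height_per_sent": tree_height / n_sent,
            "corrected_ttr": corrected_ttr}

# Tiny hand-made example with one sentence.
doc = [{"tokens": [("the", "DET"), ("cat", "NOUN"), ("sat", "VERB")],
        "parse": "(S (NP (DT the) (NN cat)) (VP (VBD sat)))"}]
print(nerf_ingredients(doc, aoa={"cat": 4.1, "sat": 3.9}, familiarity={"the": 4.0, "cat": 3.4}))
```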
TTR is often used as a measure of lexical richness Malvern and Richards (2012) and ranked the best performance on two native datasets (CCB and CAM). Importantly, these two datasets represent US and UK school curriculums, and TTR seems a good evaluator. What was interesting is that out of the five TTR variations from Lee et al. (2021); Vajjala and Meurers (2012), corrected TTR generalized particularly well. Like section 3, we use the non-linear least fitting method on CCB to calibrate NERF. The results match what we expected. For example, the coefficient for word familiarity, which measures how frequently the word is used in American English, is negative since common words often have faster lexical comprehension times Brysbaert et al. (2011). ## 6 Evaluation, against Human Here, we check the human-perceived difficulty of each item in CCB. We used Amazon Mechanical Turk to ask U.S. Bachelor's degree holders, "Which U.S. grade does this text belong to?" Every item was answered by \(10\) different workers to ensure breadth. Details on survey & datasets are in appendix B, C. Table 5 gives a performance comparison of NERF against other traditional readability formulas and human performances. The human predictions were made by the U.S. Bachelor's degree holders living in the U.S. Ten human predictions were averaged to obtain the final prediction for each item, for comparison against CCB. The calibrated formulas show a particularly great \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Metric** & **Human** & **NERF** & **FKGL** & **FOG1** & **SMOG** & **COLE** & **AUTO** \\ \hline MAE & N.A. & N.A. & 2.844 & 3.413 & 3.114 & 2.537 & 3.377 \\ **MAE** & 3.509 & 2.154 & 2.457 & 2.516 & 2.728 & 2.378 & 2.514 \\ r2 score & N.A. & N.A. & -0.03835 & -0.3905 & 0.1613 & 0.4341 & -0.5283 \\ **r2 score** & -0.0312 & 0.5536 & 0.4423 & 0.4072 & 0.3192 & 0.4830 & 0.4263 \\ Pearson & N.A. & N.A. & 0.5698 & 0.5757 & 0.5649 & 0.6800 & 0.5684 \\ **Pearson** & 0.0838 & 0.7440 & 0.6651 & 0.6381 & 0.5649 & 0.6949 & 0.6520 \\ \hline \hline \end{tabular} \end{table} Table 5: Scores on CCB. Measured on U.S. Standard Curriculum’s K-* Output. Bold for new or adjusted. increase in r2 score. This likely means that the new recalibrated formulas can capture the variance of the original CCB classifications much better when compared to the original formulas. We believe that such an improvement stems from the change in datasets. The original formulas are mostly built on human tests of 20th century's military or technical documents, whereas the recalibration dataset (CCB) are from the student-targeted school curriculum. Further, CCB is classified by trained professionals. Hence, the standards for readability can differ. The new recalibrated versions are more suitable for analyzing the modern general documents and giving K-* output by modernized standards. MAE (Mean Absolute Error), r2 score, and Pearson's r improve once more with NERF. Even though the same dataset, same fitting function, and same evaluation techniques (no split, all train) were used, the critical difference was in the features. The shallow surface-level features from the traditional readability formulas also showed top rankings across all datasets but lacked linguistic coverage. Hence, NERF could capture more textual properties that led to a difference in readability. Lastly, we observe that it is highly difficult for the general human population to exactly guess the readability of a text. Out of 690 predictions, only 286 were correct. 
We carefully posit that this is because: 1. the concept of "readability" is vague and 2. everyone goes through varying education. It could be easier to choose which item is more readable, instead of guessing how readable an item is. Given the general population, it is always better to use some quantified models than trust human. ## 7 Evaluation, for Application ### Text Simplification - Passage-based All readability formulas, whether recalibrated or not, show near-perfect performances in ranking the simplicity of texts. On both OSE-Pair & NSL-Pair, we designed a simple task of ranking the simplicity of an item. Both paired datasets include multiple simplified versions of an original item. Each row consists of various simplifications. A correct prediction is the corresponding readability formula output matching simplification level (e.g. original: highest prediction,..., simplest: lowest prediction). In OSE-Pair, a correct prediction must properly rank three simplified items. NERF showed a meaningfully improved performance than the other five traditional readability formulas before recalibration. NERF correctly classified 98.7% pairs, while the others stayed \(\leq\)95% (FKGL: 93.4%, FOGI: 92.6%, SMOG: 94.4%, COLE: 94.9%, AUTO: 92.6%). Recalibration generally helped the traditional readability formulas but NERF still showed better performance (FKGL: 97.8%, FOGI: 97.1%, SMOG: 94.4%, COLE: 89.9%, AUTO: 95.8%). In NSL-Pair, a correct prediction must properly rank five simplified items, which is a more difficult task than the previous. Nonetheless, all six formulas achieved 100% accuracies. The same results were achieved before and after CCB-recalibration. This hints that NSL-Pair is thoroughly simplified. Readability formulas seem to perform well in ranking several simplifications on a passage-level. But there certainly are limits. First, one must understand that calculating "how much simple" is a much difficult task (Table 5). Second, the good results could be because sufficient simplification was done. For more fine grained simplifications, readability formulas could not be enough. ### Text Simplification - Sentence-based We were surprised that some existing text simplification studies are directly using traditional readability formulas for sentence difficulty evaluation. Our results show that using a formula-based approach is particularly useless in evaluating a sentence. We tested both CCB-recalibrated and original formulas on ASSET. Here, a correct prediction must properly rank eleven simplified items. Despite the task difficulty, we anticipated seeing some correct predictions as there were 360 pairs. SMOG guessed 37 (after recalibration) and 89 (before recalibration) correct out of 360. But all the other formulas failed to make any correct prediction. OSE-Sent poses an easier task. Since the dataset is divided into adv-int, adv-ele, and int-ele, the readability formulas now had to guess which is \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **a) Adv-Ele** & **NERF** & **FKGL** & **FOG1** & **SMOG** & **COLE** & **AUTO** \\ \hline Accuracy & N.A. & 74.2\% & 64.9\% & 11.4\% & 66.0\% & 78.0\% \\ **Accuracy** & 77.4\% & 62.7\% & 51.8\% & 11.4\% & 71.1\% & 65.2\% \\ \hline **b) Adv-Int** & **NERF** & **FKGL** & **FOG1** & **SMOG** & **COLE** & **AUTO** \\ \hline Accuracy & N.A. 
& 70.2\% & 63.0\% & 12.2\% & 63.6\% & 74.7\% \\ **Accuracy** & 77.8\% & 60.4\% & 51.3\% & 12.2\% & 67.7\% & 65.9\% \\ \hline **c) Int-Ele** & **NERF** & **FKGL** & **FOG1** & **SMOG** & **COLE** & **AUTO** \\ \hline Accuracy & N.A. & 69.8\% & 61.3\% & 9.02\% & 61.9\% & 73.2\% \\ **Accuracy** & 73.1\% & 59.7\% & 48.9\% & 9.02\% & 66.5\% & 62.1\% \\ \hline \hline \end{tabular} \end{table} Table 6: Scores on OSE-Sent. Bold for new or adjusted. more difficult, out of the given two. We do obtain some positive results, showing that readability formulas can be useful in the cases where only two sentences are compared. On ranking two sentences, NERF performs better by a large margin. ### Medical Documents We argue that NERF is effective in fixing the over-inflated prediction of difficulty on medical texts. Such sudden inflation is widely-reported (Zheng and Yu, 2017) as the common weaknesses of traditional readability formulas on medical documents. The U.S. National Institute of Health (NIH) guides that patient documents be \(\leq\)K-6 of difficulty. The most distinct characteristic of medical documents is the use of lengthy medical terms, like otolaryngology, urogynecology, and rheumatology. This makes traditional formulas, based on syllables, unreliable. But NERF uses familiarity and age-of-acquisition to penalty and reward word difficulty. A medical term not found in Kuperman's and SubtlexUS will have no effect. Instead, it will simply be labeled a content word. But in traditional formulas, the repetitive use of medical terms (which is likely the case) results in an insensible aggregation of text difficulty. In case various medical terms appear, NERF rewards each as a unique word. Among recent studies is Haller et al. (2019), which analyzed the readability of urogynecology patient education documents in FKGL, SMOG, and Fry Readability. We also analyze the same 18 documents from the American Urogynecologic Society (AUGS) by manual OCR-based scraping. As Figure 1 shows, it is evident that NERF helps regulate the traditional readability formulas' tendencies to over-inflate on medical texts. An example of the collected resource is given in appendix B. ## 8 Conclusion So far, we have recalibrated five traditional readability formulas and assessed their performances. We evaluated them on CCB and proved that the adjusted variations help traditional readability formulas give output more in align with CCB, a common English education curriculum used throughout the United States. Further, we evaluated the recalibrated formulas' application on text simplification research. On ranking passage difficulty, our recalibrated formulas showed good performance. However, the formulas lacked performance on ranking sentence difficulty because they were calibrated on passage-length instances. We leave sentence difficulty ranking as an open task. Apart from recalibration traditional readability formulas, we also develop a new, linguistically-rich readability formulas named NERF. We prove that NERF can be much more useful when it comes to text simplification studies and analyzing the readability of medical documents. Also, our paper serves as a cross-comparison among readability metrics. Lastly, we develop a public Python-based software, for the fast dissemination of the results. ## 9 Limitations Our work's limitations mainly come from CCB. It is manifestly difficult to obtain solid, gold readability-labelled dataset from an officially accredited organization. 
CCB, the main dataset that we used to calibrate the traditional readability formulas, has only 69 items available. Thus, we reasonably anticipate that variation in dialect, individual differences, and general ability cannot be captured. However, we highlight that NERF is developed upon several more datasets that represent diverse backgrounds, audiences, and reading levels. Hence, we believe that NERF can counter some of the shallowness of the traditional readability formulas, despite its own remaining weaknesses. One aspect of readability formulas that has not been deeply investigated is how the output changes depending on the text length. As we show in section 7, readability formulas fail to perform well on sentence-level items. But how about a passage of three sentences? Or does the performance have to do with the average number of words in the recalibration dataset? Is there some sensible range of lengths for which the readability formulas work well? These are open questions that we do not address in this work. Figure 1: On medical texts. NERF, against five others.
2306.09887
CANDID: Correspondence AligNment for Deep-burst Image Denoising
With the advent of mobile phone photography and point-and-shoot cameras, deep-burst imaging is widely used for a number of photographic effects such as depth of field, super-resolution, motion deblurring, and image denoising. In this work, we propose to solve the problem of deep-burst image denoising by including an optical flow-based correspondence estimation module which aligns all the input burst images with respect to a reference frame. In order to deal with varying noise levels the individual burst images are pre-filtered with different settings. Exploiting the established correspondences one network block predicts a pixel-wise spatially-varying filter kernel to smooth each image in the original and prefiltered bursts before fusing all images to generate the final denoised output. The resulting pipeline achieves state-of-the-art results by combining all available information provided by the burst.
Arijit Mallick, Raphael Braun, Hendrik PA Lensch
2023-06-16T14:55:44Z
http://arxiv.org/abs/2306.09887v1
# CANDID: Correspondence AlignNment for Deep-burst Image Denoising ###### Abstract With the advent of mobile phone photography and point-and-shoot cameras, deep-burst imaging is widely used for a number of photographic effects such as depth of field, super-resolution, motion deblurring, and image denoising. In this work, we propose to solve the problem of deep-burst image denoising by including an optical flow-based correspondence estimation module which aligns all the input burst images with respect to a reference frame. In order to deal with varying noise levels the individual burst images are pre-filtered with different settings. Exploiting the established correspondences one network block predicts a pixel-wise spatially-varying filter kernel to smooth each image in the original and prefiltered bursts before fusing all images to generate the final denoised output. The resulting pipeline achieves state-of-the-art results by combining all available information provided by the burst. Burst photography; Denoising; Image alignment; ## I Introduction Due to recent development of faster and lightweight portable CPUs especially in mobile and point-and-shoot cameras, burst photography has been gaining further prominence because of the noise reduction and motion blur removal capabilities. Burst photography [1, 2] can also be understood as multi-frame image restoration task [3, 4, 5, 6] which has a wider range of applications even in satellite photography [7, 8] for remote sensing. The sensors and lenses in smartphones are much smaller and more lightweight than those of professional cameras but they collect less light per pixel which leads to noisier images. Compensating this by longer exposures could introduce motion blur. As an alternative, a burst of many short and noisy images could be computationally combined into one sharp image. Camera phone APIs already provide denoising algorithms but they are not optimized for burst denoising. Current burst denoising methods perform the alignment simply based on homographies estimated between the burst images [9, 10], followed by some pixel denoising. Our proposed pixel-wise alignment based on optical flow is significantly more powerful in compensating for scenes with complex depth and camera or object motion. We still expect the motion between the burst images to be rather small. Our overall architecture is depicted in Figure 1. First, we generate enhanced burst inputs by applying the pre-trained self-guided filtering network (SGN) [11] for each image to generate pre-denoised bursts. Both, the original and the denoised images are aligned with respect to the reference frame using the RAFT [12] optical flow network. Based on the aligned images and extracted features a network block predicts a per-pixel adaptive filter kernel to denoise every pixel in every image. A final fusion block merges all predictions across all bursts into a single output. Secondly, we estimate pixel adaptive filter-kernels which per pixel describe where to collect color information from the aligned input bursts. The decoder then only applies those kernels, thus produces weighted averages over neighbouring pixels from all aligned images. We demonstrate the importance of each module in an ablation study. Our contributions are as follows: a) optical flow-based alignment of multiple pre-denoised burst images, b) adaptive per-pixel filtering of aligned burst images followed by cross-burst fusion, c) improved denoising performance, especially in low-noise scenarios. 
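To make the adaptive per-pixel filtering of contribution (b) concrete, the sketch below applies a softmax-normalized k×k kernel, predicted for every pixel, as a weighted average over that pixel's neighborhood. This is only our minimal illustration of the operation; the kernel-prediction CNN of the actual pipeline is replaced here by a random tensor, and the function name and tensor shapes are our own choices.

```python
import torch
import torch.nn.functional as F

def apply_pixel_kernels(image, kernel_logits):
    """Smooth `image` [B, C, H, W] with a spatially-varying kernel.

    kernel_logits: [B, H, W, k, k] raw per-pixel kernel weights (in the paper these are
    predicted by a CNN from the aligned feature stack; here they are random stand-ins).
    """
    b, c, h, w = image.shape
    k = kernel_logits.shape[-1]
    weights = F.softmax(kernel_logits.reshape(b, h, w, k * k), dim=-1)  # normalize per pixel
    weights = weights.permute(0, 3, 1, 2).unsqueeze(1)                  # [B, 1, k*k, H, W]
    patches = F.unfold(image, kernel_size=k, padding=k // 2)            # [B, C*k*k, H*W]
    patches = patches.reshape(b, c, k * k, h, w)
    return (patches * weights).sum(dim=2)                               # weighted neighborhood average

frame = torch.rand(1, 3, 64, 64)            # one aligned burst frame
logits = torch.randn(1, 64, 64, 5, 5)       # stand-in for the predicted 5x5 kernels
filtered = apply_pixel_kernels(frame, logits)
print(filtered.shape)                        # torch.Size([1, 3, 64, 64])
```

In the full pipeline, such kernels are estimated collaboratively from the aligned feature stack of the whole burst and applied to every aligned frame before the fusion step.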
## II Related Work Related work covers single image denoising, homography-based alignment, and deep-burst imaging. In the following section, we discuss related works pertaining to our problem statement, starting with single image denoising, followed by homography-based and optical flow-based alignment, and finally contemporary progress on deep-burst imaging. Single image denoisingMost photography hardware companies take advantage of the recently developed lightweight neural network denoising models; exploiting the significant increase in mobile computation power. In the early days of CNNs, models such as [11] improved performance compared to classical image denoising models based on Markov random fields but they could not compete with BM3D [13] which introduced a new denoising paradigm by combining 3D block matching and domain transform. They are later surpassed by a sparse denoising autoencoder models [14, 15]. Simple multi-layer perceptron-based models [16], residual link networks [17] and later deeper residual networks [18] and persistent memory-based networks [19] have shown superior performance due to enhanced receptive fields. All these models have the advantage of being trainable end-to-end exploiting simple to generate training data. For a multitude of image processing tasks, training can be accelerated using pre-trained models and transfer learning [20]. In this spirit, we incorporate the pre-trained self-guided network (SGN) [11] to enrich the burst input with smooth priors. SGN extracts large-scale contextual information and gradually propagates it to the higher resolution sub-networks for feature self-guidance and denoising at multiple scales. This efficient multiscale local features extraction property allows it to efficiently recover denoised images. Deep-burst DenoisingWhile single image denoising relies on learned image priors, deep-burst denoising assimilates features from multiple noisy frames to predict a better image. A similar idea is used in burst motion deblurring [21] where a sharp image is recurrently extracted from a burst of blurry ones. Similarly, recurrent neural networks have also been used for burst denoising. Bhat et al. [10] reparametrize the image formation process in the latent space, and integrate learned image priors for the denoised prediction. Kernel prediction networks [22, 23] leverage the localized pixel neighborhood weighted filters to predict a denoised image from multiple inputs. Dudhane et al.[24] proposed to extract pre-processed features from each burst frame following an edge-boosting burst alignment module. The pseudo-burst features are then enriched using multi-scale contextual information, which is followed by adaptively aggregate information from the corresponding features. Our novel burst denoising model also applies adaptive pixel-neighborhood filters but first performs an explicit alignment step. Correspondence AlignmentMultiple frame denoising usually involves some sort of alignment [5] of the frames in the burst for superior feature assimilation. Tico [25] demonstrates a block matching approach within the reference and the neighboring frames to support multiple frame denoising. VBM4D [26] and VBM3D [27] take the BM3D algorithm further to video denoising with faster homography flow-based alignment. We instead estimate per pixel correspondences for a more fine-grained alignment. When capturing a burst of images of a potentially dynamic scene with a handheld camera each image will show slightly different content. 
In order to effectively utilize information from those multiple frames for denoising, the frames need to be aligned [5]. Neural network optical flow models can leverage information beyond patch-level correspondences to predict dense correspondences, i.e. estimating pixel motion between consecutive frames of a video [28]. Some of the first learning-based optical flow methods used simple CNN architectures [29, 30, 31]. Recently they were superseded by recurrent techniques like RAFT [12] or transformer-based architectures like FlowFormer [32]. These current state-of-the-art techniques produce flow estimates that come very close to the ground truth [33]. In our approach we build on this progress in the optical flow field by using the pretrained RAFT implementation provided in torchvision [34]. RAFT provides the high-quality pixel-wise correspondence alignment that we rely on for our denoising approach. ## III Method The core idea of our burst denoising method is to first spatially align the pixels in the burst stack. Afterwards, each aligned image is denoised by a content-adaptive spatially-varying filter step, followed by an adaptive fusion of all processed images (see Figure 1, left). ### _Prefiltering with SGN_ Our method starts by filtering the burst images. The amount of noise in input bursts can vary significantly, even within the same burst. Because of varying degrees of noise and blur due to abrupt camera motion, precise alignment might be difficult. We therefore duplicate the input burst into three processing streams. The first stream B uses the original burst, the second stream B\({}_{10}\) (\(\sigma=10\)) a mildly pre-denoised version of the burst, and the last one B\({}_{30}\) (\(\sigma=30\)) a strongly denoised version (see Figure 2). We apply the pretrained SGN [11] to each individual frame, but any single-frame denoising algorithm could be used. The intermediate results from the different streams will be fused in the last step of our pipeline. Figure 1: Method overview. The input image burst is pre-filtered twice using SGN [11] with different filter strengths. For each stack we extract features, align both features and images and then apply a content-adaptive spatial filter with weights derived from the aligned features. The results from all three bursts are fused to predict the denoised output.
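Since the alignment relies on the pretrained RAFT model shipped with torchvision, a minimal sketch of the flow estimation and backward warping used to bring a secondary frame (and, identically, its feature map) onto the reference frame could look as follows. The image sizes, the assumption that inputs are already scaled to [-1, 1], and the `warp` helper are simplifications on our part.

```python
import torch
import torch.nn.functional as F
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

def warp(image, flow):
    """Backward-warp `image` [B, C, H, W] with a dense `flow` [B, 2, H, W] given in pixels."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow  # where to sample from
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0                    # normalize x to [-1, 1]
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0                    # normalize y to [-1, 1]
    return F.grid_sample(image, grid.permute(0, 2, 3, 1), align_corners=True)

model = raft_large(weights=Raft_Large_Weights.DEFAULT).eval()

# Random stand-ins for the reference and one secondary burst frame, already scaled to [-1, 1].
reference = torch.rand(1, 3, 256, 256) * 2 - 1
secondary = torch.rand(1, 3, 256, 256) * 2 - 1

with torch.no_grad():
    flows = model(reference, secondary)      # RAFT returns a list of iteratively refined flows
aligned = warp(secondary, flows[-1])         # warp the secondary frame onto the reference
print(aligned.shape)
```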
The estimated flow computed from the reference and secondary images is used to warp the secondary image frames and their corresponding feature maps with respect to the reference image frame and the reference feature frame respectively. The effectiveness of the RAFT-based alignment is visualized in Figure 3. ### _Collaborative Content-adaptive Spatial Filtering_ At this point, the images and features in the bursts are all aligned with respect to the reference frame. The next step is to filter the images spatially and combine the results pixel-wise for the final result. The spatial filtering is implemented with content-dependent per-pixel kernels. Those kernels are estimated by a CNN from the aligned feature stack, i.e. collaboratively considering all feature maps at the same time. The output activations of this CNN are reshaped into \(3\times 3\) and \(5\times 5\) filter kernels for all images and all pixels. The result is two kernels of shape \([N,H,W,3,3]\) and \([N,H,W,5,5]\) with the number of images \(N\), height \(H\) and width \(W\). The kernels are normalized via _softmax_ and applied to each image in the burst individually, effectively computing a weighted average color over the \(3\times 3\) and \(5\times 5\) neighborhood of each pixel as shown in Figure 4. ### _Burst Fusion_ Remember in Section III-A the burst was split into three processing streams B, B\({}_{10}\) and B\({}_{30}\), which are all processed individually in the same way so far. This means at this stage we have aligned and spatially filtered images and the corresponding aligned image features for each stream. The final step is to fuse all information from the different bursts into a single denoised image \(I_{\text{pred}}\). This denoised result is computed as a weighted average over the spatially filtered images from all three processing streams. As indicated in Figure 5, we concatenate the aligned features of the streams with the spatially filtered images and process them together in a 4-layer CNN. This CNN produces the weight volume. This volume contains a weight for every pixel and color of every image. A softmax over the image dimension is applied to the weights in order to ensure that summing up the weights over this dimension yields 1 for every color channel. The result \(I_{\text{pred}}\) is finally computed as a weighted sum per pixel. This is implemented as element-wise multiplication between weight volume and Fig. 4: The spatial content-adaptive filter kernels for every pixel are estimated by a CNN based on all aligned features. They are applied individually to the aligned images to produce the spatially filtered burst. Fig. 3: Pixel-wise alignment Fig. 2: Prefiltering with SGN spatially filtered images followed by a sum over the burst dimension. Every channel for every input image is therefore weighted individually, which is more powerful than just mixing the existing colors of the spatially filtered images. ### _Training_ Some components of the denoising pipeline like the SGN and the alignment module are pretrained. We stopped the gradients from going through the SGN networks, which effectively turns the three burst streams B, B\({}_{10}\) and B\({}_{30}\) into separate inputs. The RAFT network in the alignment module was frozen and used as fixed differentiable operation. The remaining trainable weights are in the CNNs for the feature extractor, the content-adaptive cooperative spatial filter, and the burst fusion module. 
We train end-to-end with ADAM [35] using a simple \(L1\)-loss \(\mathcal{L}=\left\|I_{\text{pred}}-I_{\text{gt}}\right\|_{1}\) on the ground truth \(I_{\text{gt}}\). ## IV Experiments We evaluate our method by comparing to the state of the art and validate our architecture choices with an ablation study. ### _Training and experimental setup_ For the pre-denoising we use the SGN pre-trained with \(\sigma=10,30\) [36]. For the burst denoising training, both the SGN pre-denoising and the RAFT alignment model are frozen. We trained on the OpenImages [37] dataset and evaluated on the grayscale burst benchmark [22] and the RGB burst benchmark following the usual conventions [10, 23, 24]. The ground truth images are shifted and corrupted by adding heteroscedastic Gaussian noise [38] with variance \(\sigma_{r}^{2}+\sigma_{s}^{2}x\). Here \(x\) is the clean pixel value, while \(\sigma_{r}\) and \(\sigma_{s}\) denote the readout and shot noise parameters, respectively. Those noise parameters are assumed to be known both during training and testing, and are used in the feature extractor. During training they are sampled uniformly in the log-domain from the range \(\log(\sigma_{r})\in[-3,-1.5]\) and \(\log(\sigma_{s})\in[-4,-2]\). The comparisons are evaluated at two different noise levels, lv.1 and lv.2, corresponding to noise parameters (-2.2, -2.6) and (-1.8, -2.2), respectively. Training was done on 2 TITAN Xp GPUs and took about 96 hours to converge. ### _Results_ The quantitative comparison with other methods in Table IV-B shows that our model delivers overall state-of-the-art performance on the aforementioned benchmark when evaluated on data unseen by the model. On deeper introspection, we can say that due to SGN and the subsequent multiple kernel-based filtering, the model successfully recovers the image even in heavy-noise scenarios. In the future, one could add additional SGN-based denoising stages pre-trained for other noise levels to analyze whether further boosting of _lvl.1_ and _lvl.2_ would be possible. Additionally, larger filter kernels can be added to the model in order to enhance the results for higher noise scenarios. Exemplar qualitative results on individual images are shown in Figure 6. ### _Ablation Study_ Since our model consists of several pretrained blocks and trainable sub-modules, we analyse the effectiveness of each of the components with the corresponding ablation in Table II. Here, we removed individual parts of the pipeline and trained the network from scratch. Removing all SGN blocks effectively suppresses pre-denoising of the input. Although the low-noise evaluation performs comparatively well, image quality deteriorates as the noise increases due to the lack of cleaner proposals at the initial stages. Without the alignment module, the final fusion step is impaired and we see lower performance on all noise levels. Particularly _lvl.1_ is impacted. Finally, the cooperative content-adaptive filtering adds almost equally to the reconstruction quality at all noise levels. Fig. 5: Fusion Network. The aligned features from all three bursts are concatenated with the spatially filtered images and processed by a CNN to obtain a weight volume. The weights are used to compute the denoised result as a weighted per-pixel sum over the spatially filtered images.
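As a companion to the training setup above, the heteroscedastic noise synthesis can be sketched as follows. The burst shape is illustrative, the logarithm is assumed to be base-10, and the real data generation additionally applies the random shifts mentioned above.

```python
import torch

def sample_noise_params(batch_size):
    """Sample readout/shot noise gains uniformly in the log-domain (base-10 assumed here),
    with log(sigma_r) in [-3, -1.5] and log(sigma_s) in [-4, -2]."""
    log_sigma_r = torch.empty(batch_size).uniform_(-3.0, -1.5)
    log_sigma_s = torch.empty(batch_size).uniform_(-4.0, -2.0)
    return 10.0 ** log_sigma_r, 10.0 ** log_sigma_s

def add_heteroscedastic_noise(clean_burst, sigma_r, sigma_s):
    """clean_burst: [B, N, C, H, W] in [0, 1]; per-pixel noise variance is sigma_r^2 + sigma_s^2 * x."""
    sigma_r = sigma_r.view(-1, 1, 1, 1, 1)
    sigma_s = sigma_s.view(-1, 1, 1, 1, 1)
    variance = sigma_r ** 2 + sigma_s ** 2 * clean_burst
    return clean_burst + torch.randn_like(clean_burst) * variance.sqrt()

clean = torch.rand(2, 8, 3, 128, 128)                    # a batch of two 8-frame bursts (illustrative)
sigma_r, sigma_s = sample_noise_params(clean.shape[0])   # known during training and testing
noisy = add_heteroscedastic_noise(clean, sigma_r, sigma_s)
print(noisy.shape)
```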
### _Qualitative Results_ In Figure 6 we demonstrate the improved performance of our pipeline on a number of example images. Even for drastically different amounts of noise, our approach outperforms BIPNET [24] on every image. The denoised image is significantly closer to the ground truth result, as is evident from the error maps. A failure case is shown in Figure 7. In this example, apart from the high motion, the image consists of sharp features, which are preserved by our network rather than being detected as noise. ## V Conclusions We propose a deep-burst denoising model based on optical flow guided alignment and cooperative filtering. A well-established single image denoising module generates pre-denoised burst input images for two different assumed noise levels. Alignment to the reference frame is performed using a state-of-the-art optical flow network. Providing the original input burst and the pre-denoised stacks ensures the good performance of the optical flow alignment. Based on the aligned features and images, a set of content-adaptive spatially-varying filter kernels is predicted to smooth each input image individually. A fusion block finally combines all intermediate results into the final denoised output. In the future, one can also compare the effect of different state-of-the-art optical flow based correspondence alignment methods on the quality of burst image denoising. Our approach yields state-of-the-art results across low noise levels on the standard benchmark data sets. In higher-noise scenarios, working on differently pre-denoised images shows a comparable benefit. ## Acknowledgement This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC number 2064/1 - project number 390727645 and SFB 1233 - project number 276693517. It was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A and Cyber Valley. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Arijit Mallick.
2307.00552
Adaptive reinforcement learning of multi-agent ethically-aligned behaviours: the QSOM and QDSOM algorithms
The numerous deployed Artificial Intelligence systems need to be aligned with our ethical considerations. However, such ethical considerations might change as time passes: our society is not fixed, and our social mores evolve. This makes it difficult for these AI systems; in the Machine Ethics field especially, it has remained an under-studied challenge. In this paper, we present two algorithms, named QSOM and QDSOM, which are able to adapt to changes in the environment, and especially in the reward function, which represents the ethical considerations that we want these systems to be aligned with. They associate the well-known Q-Table to (Dynamic) Self-Organizing Maps to handle the continuous and multi-dimensional state and action spaces. We evaluate them on a use-case of multi-agent energy repartition within a small Smart Grid neighborhood, and prove their ability to adapt, and their higher performance compared to baseline Reinforcement Learning algorithms.
Rémy Chaput, Olivier Boissier, Mathieu Guillermin
2023-07-02T12:22:02Z
http://arxiv.org/abs/2307.00552v1
Adaptive Reinforcement Learning of Multi-Agent Ethically-aligned Behaviours: the QSOM and QDSOM algorithms ###### Abstract The numerous deployed Artificial Intelligence systems need to be aligned with our ethical considerations. However, such ethical considerations might change as time passes: our society is not fixed, and our social mores evolve. This makes it difficult for these AI systems; in the Machine Ethics field especially, it has remained an under-studied challenge. In this paper, we present two algorithms, named QSOM and QDSOM, which are able to adapt to changes in the environment, and especially in the reward function, which represents the ethical considerations that we want these systems to be aligned with. They associate the well-known Q-Table to (Dynamic) Self-Organizing Maps to handle the continuous and multi-dimensional state and action spaces. We evaluate them on a use-case of multi-agent energy repartition within a small Smart Grid neighborhood, and prove their ability to adapt, and their higher performance compared to baseline Reinforcement Learning algorithms. Machine Ethics Artificial Moral Agents Multi-Agent Systems Reinforcement Learning Multi-Agent Reinforcement Learning ## 1 Introduction With the increasing deployment of systems using Artificial Intelligence (AI) techniques, questions are being raised within civil society and the scientific community about their impact on our lives. One of the most pressing questions is that of value alignment (Dignum, 2019; World Economic Forum, 2015): how can we ensure that these systems act in line with the moral values that are important to us? The field of Machine Ethics has proposed numerous approaches, based on a variety of techniques, from symbolic implementation to machine learning. However, the property of Continual Learning, which we believe is important, has not been studied enough. Continual Learning concerns the ability of artificial agents to learn continuously and therefore to change their behaviour as a function of the environment. This is a particularly critical property in Machine Ethics, because ethics are not fixed: our currently accepted social mores evolve over time. In this paper, we propose in Section 3 two reinforcement learning algorithms, QSOM and QDSOM, that can adapt to changes in the reward function, representing these "changes in ethics". These algorithms are then evaluated on an application case of multi-agent energy repartition within a small Smart Grid, described in Section 4. A discussion of their advantages and drawbacks is finally presented in Section 5. ## 2 State of the Art In this section, we introduce the necessary knowledge, and explore the state of the art in the fields related to our work: _Machine Ethics_ and _(Multi-Agent) Reinforcement Learning_. This exploration allows us to compare the existing approaches, their advantages, but also their limitations, and to define some concepts necessary to the understanding of our work. ### 2.1 Machine Ethics The field of Machine Ethics is relatively recent among the other fields of Artificial Intelligence. A book published in 2011 gathers different essays on the nature of _Machine Ethics_, its importance, the difficulties and challenges to be solved, and also a few first approaches (Anderson and Anderson 2011).
This book defines this new field of research: The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas we might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. (Anderson and Anderson 2011) Being a recent field, several articles have sought to position themselves, or to offer a philosophical background. For example, Moor (2009) proposes a definition of what might be an "ethical robot", and differentiates 4 different kinds of robots, ranging from those with the least ethical considerations to those which have near-human ethical reasoning abilities: _ethical impact agents_, _implicit ethical agents_, _explicit ethical agents_, and _full ethical agents_. The goal, for Machine Ethics designers and researchers, is to attain _explicit_ ethical agents, as it is still unsure whether artificial _full_ ethical agents can be built. In the following, we briefly list a few approaches, and present a set of "properties" that we argue are important to design such ethical agents. **Discrete or continuous domains**. In order to implement ethical considerations into an artificial agent, these considerations must be represented. This includes, e.g., data about the current situation, and the potential actions or decisions that are available to the agent. The choice of this representation must allow both for use-case richness, and for the agent's ability to correctly use these representations. Two types of representations are commonly used: either _discrete_ domains, which use a discrete set of symbols and discrete numbers, or _continuous_ domains, which use continuous numbers that lead to an infinite set of symbols. So far, discrete domains seem prevalent in _Machine Ethics_. For example, the emblematic Trolley Dilemma (Foot 1967) describes a situation where an uncontrolled trolley is driving on tracks towards a group of 5 persons. These persons, depending on the exact specification, are either unaware of the trolley, or unable to move. An agent may save this group by pulling up a lever, which would derail the trolley towards a single person. It can be seen that the representation of both the situation and the available actions are discrete in this dilemma: 2 actions are proposed, _pull the lever_ or _do nothing_, and on the tracks are present \(1\) and \(5\) persons, respectively. Similarly, the now defunct DilemmaZ database listed a plethora of moral dilemmas, proposed by the community, of which many apply to Artificial Intelligence and IT systems in general, e.g., smart homes, robots. Although a formal description of these dilemmas is not available, most of the natural language descriptions seem to imply discrete features. This is particularly clear for the definition of actions; for example, the "Smart home - Someone smoking marijuana in a house" dilemma, by Louise A. Dennis, offers the following 3 actions: "a) do nothing, b) alert the adults and let them handle the situation or c) alert the police". A final example is the _Morral Gridworlds_ idea of Haas (2020) to train a Reinforcement Learning agent "to attribute subjective rewards and values to certain'moral' actions, states of affairs, commodities, and perhaps even abstract representations". Moral Gridworlds are based on gridworlds, which represent the environment as a 2-dimensional grid of cells. 
A RL agent is placed in one of these cells, and may either act in its cell, or move to one of the adjacent cells. Again, the environment uses discrete features, both for perception, i.e., a discrete set of cells, and for actions, i.e., either act, move up, left, right, or down. Perhaps the ubiquitous use of discrete representations in Machine Ethics can be at least partially explained by their simplicity of usage within AI techniques. These "discrete dilemmas" are important, because they may very well happen one day in our society. We need systems that are able to make the best decision, with respect to our moral values, in such situations. However, there are other situations that cannot be easily described by discrete representations. For example, foretelling the Smart Grid use-case that we describe in Section 4, when considering an energy distribution system, we may transition from a closed question "Should the agent consume energy? yes/no" to a more open question "What power should the agent request during a given time step?". Arguably, such an action could be represented as a discrete set, by _discretizing_ the continuous domain into a set, e.g., \(\{0\text{Wh},1\text{Wh},\cdots,1000\text{Wh}\}\), which contains \(1001\) actions. But this solution is harder to leverage when considering multi-dimensional domains: in addition to "how much energy should it consume", we may also ask "What power should the agent buy?". In this case, discretizing the continuous and multi-dimensional domain would result in a combinatorial explosion. The set of discrete actions may be represented as \(\{(0\text{Wh},0\text{Wh}),(0\text{Wh},1\text{Wh}),(1\text{Wh},0\text{Wh}),(1 \text{Wh},1\text{Wh}),\cdots,(1000\text{Wh},1000\text{Wh})\}\), which contains \(1001\times 1001\) different actions, where each action is represented as a pair \((\text{consumed},\text{bought})\). We already see, on 2 dimensions and with a grain of 1WM, that a million actions would require too much time and computational resources to explore and analyze, in order to find the best one. The same argument can be made for perceptions as well: for example, instead of having a perception "the situation is fair", or "the situation is unfair", we may want to have an indicator of how fair the situation is, e.g., through well-known measures such as the Gini index, which is a real number comprised between \(0\) (perfect equality) and \(1\) (perfect inequality) (Gini 1936). Such situations, which imply a large, continuous and multi-dimensional domain, are as likely to happen in our society as the discrete ones. **Mono- or Multi-agent**. According to a survey (Yu et al. 2018), many works consider a single agent isolated in its environment. This is the case, to give some examples, of GenEth (Anderson, Anderson, and Berenz 2018), or the _ethics shaping_ technique (Wu and Lin 2018). Other approaches, such as Ethicaa (Cointe, Bonnet, and Boissier 2016), use multiple agents, which take actions and have an impact in a common, shared environment. As Murukannaiah et al. (2020) put it: Ethics is inherently a multiagent concern -- an amalgam of (1) one party's concern for another and (2) a notion of justice. In Ethicaa (Cointe, Bonnet, and Boissier 2016), a judgment process is defined to allow agents to both 1) select the best ethical action that they should make, and 2) judge the behaviour of other agents so as to determine whether they can be deemed as "ethical", with respect to one's own preferences and upheld moral values. 
One long-term objective of this second point can be to define and compute a trust indicator for other agents; if an agent acts ethically, we may trust it. This raises an interesting rationale for exploring Machine Ethics in Multi-Agent Systems: even if we manage to somehow create a full ethical agent, which is guaranteed to take moral values and ethical stakes into account, it will have to work with other agents. We cannot guarantee that these agents will follow the same ethical preferences, nor even that they will consider ethical stakes at all. Our own agent must therefore take this into account. Based on the previous reasons, we argue that the multi-agent case is important. Indeed, it corresponds to a more realistic situation: such artificial agents are bound to be included in our society, and thus to have to interact with other agents, whether artificial or human, or at least to live in an environment impacted by these other agents, and not in a perfectly isolated world. The question of the impact of other agents on an agent's decision-making is thus of primary importance. **Top-Down, Bottom-Up, and Hybrid approaches**. Approach type is probably the most discussed property in _Machine Ethics_. It characterizes the way designers implement ethical considerations into artificial agents. Similarly to the usual classification in AI, works are divided into 3 categories (Allen, Smit, and Wallach 2005): _Top-Down, Bottom-Up_, and _Hybrid_ approaches. Top-Down approaches are interested in formalizing existing ethical principles from moral philosophy, such as Kant's Categorical Imperative, or Aquinas' Doctrine of Double Effect. The underlying idea is that, if these moral theories could be transformed into an algorithm that agents could follow to the letter, surely these agents' behaviour would be deemed as ethical by human observers. This formalization is often done through symbolic representation and reasoning, e.g., through logic, rules-based techniques, or even ontologies. Reasoning over these symbolic representations can rely upon expert knowledge, a priori injected. They also offer a better readability, of both the injected knowledge, and the resulting behaviour. One of the advantages of Top-Down approaches is this ability to leverage such existing ethical principles from moral philosophy. Intuitively, it seems indeed better to rely on theories proposed by moral philosophers, which have been tested and improved over time. Another advantage, emphasized by the work of Bremner et al. (2019), is the ability to use formal verification to ensure that agents' behaviours stay within the limits of the specified rules. To do so, the _Ethical Layer_ they propose includes a planning module that creates plans, i.e., sequences of actions, and an ethical decision module to evaluate the plans, prevent unethical ones, and proactively ask for new plans if necessary. This formal verification ability is an important strength, as there are worries about agents malfunctioning. An agent that could be formally verified to stay within its bounds, could be said to be "ethical", with respect to the chosen ethical principle or theory. However, there are some weaknesses to _Top-Down_ approaches. For example, conflicts between different rules may arise: a simple conflict could be, for example, between the "Thou shalt not kill" rule, and another "You may kill only to defend yourself". The second one should clearly define when it is allowed to take precedence over the first one. 
A more complicated conflict would be two rules that commend different, non-compatible actions. For example, let us imagine two missiles attacking two different buildings in our country: the first one is a hospital, the second one is a strategic, military building, hosting our defense tools. An autonomous drone can intercept and destroy one of the two missiles, but not the two of them; which one should be chosen? A rule may tell us to protect human lives, whereas another encourages us to defend our arsenal, in order to be able to continue protecting our country. These two rules are not intrinsically in conflict, unlike our previous example: we would like to follow both of them, and to destroy the two missiles. Unfortunately, we are physically constrained, and we must make a choice. Thus, a rule has to be preferred to the other. _Ethicaa_(Cointe, Bonnet, and Boissier, 2016) agents make a distinction between the moral values and ethical principles, and they consider multiple ethical principles. Each ethical principle determines whether an action is ethical, based on the permissible and moral evaluations. Multiple actions can thus be evaluated as ethical by the ethical principles, and, in many cases, there is no single action satisfying all ethical principles. To solve this issue, agents also include a priority order over the set of ethical principles known to them. In this way, after an agent determines the possible, moral, and ethical actions, it can choose an action, even if some of its rules disagree and commend different actions. To do so, they filter out the actions that are not evaluated as ethical, and thus should not be selected, by their most preferred ethical principle, according to the ethical priority order. As long as multiple actions remain considered, they move on to the next preferred ethical principle, and so on, until a single action remains. Finally, another drawback is the lack of adaptability of these approaches. Indeed, due to their explicit but fixed knowledge base, they cannot adapt to an unknown situation, or to an evolution of the ethical consensus within the society. We argue that this capability to adapt is particularly important. It is similar to what Nallur (2020) calls the _Continuous Learning_ property: Any autonomous system that is long-lived must adapt itself to the humans it interacts with. All social mores are subject to change, and what is considered ethical behaviour may itself change. We further note that, in his landscape, only 1 out of 10 considered approaches possesses this ability (Nallur, 2020, Table 2). Bottom-Up approaches try to learn a behaviour through experience, e.g., from a dataset of labeled samples, or trial and error interactions. For example, GenEth (Anderson, Anderson, and Berenz, 2018) uses ethicists' decisions in multiple situations as a dataset representing the ethical considerations that should be embedded in the agent. This dataset is leveraged through Inductive Logic Programming (ILP) to learn a logical formula that effectively drives the agent's behaviour, by determining the action to be taken in each situation. ILP allows creating a logical formula sufficiently generic to be applied to other situations, not encountered in the dataset. An advantage of this approach is that it learns directly from ethicists' decisions, without having to program it by hand. 
The resulting formula may potentially be understandable, provided that it is not too complex, e.g., composed of too many terms or terms that in themselves are difficult to understand. Another approach proposes to use Reinforcement Learning RL (Wu and Lin, 2018). Reinforcement Learning relies on rewards to reinforce, or on contrary, to mitigate a given behaviour. Traditionally, rewards are computed based on the task we wish to solve. In the work of Wu and Lin (2018), an ethical component is added to the reward, in the form of a difference between the agent's behaviour, and the behaviour of an average human, obtained through a dataset of behaviours, and supposedly exhibiting ethical considerations. The final reward, which is sent to agents, is computed as the sum of the "task" reward, and the "ethical" reward. Agents thus learn to solve their task, while exhibiting the ethical considerations that are encoded in the human samples. One advantage of this approach is that the "ethical" part of the behaviour is mostly task-agnostic. Thus, only the task-specific component of the reward has to be crafted by designers for a new task. Nevertheless, one may wonder to which extent does this dataset really exhibit ethical considerations? We humans do not always respect laws or moral values, e.g., we sometimes drive too fast, risking others' lives, or we act out of spite, jealousy, etc. To determine whether this dataset is appropriate, an external observer, e.g., a regulator, an ethicist, or even a concerned citizen, has to look at its content, and understand the data points. These 2 approaches, although based on learning, have not considered the question of long-term adaptation to changing situations and ethical moves. Indeed, if the current society norms with regard to ethics change, these agents' behaviours will have to change as well. It will probably require to create a new dataset, and to learn the agents again, from scratch, on these new data. Moreover, Bottom-Up approaches are harder to interpret than Top-Down ones. For example, a human regulator or observer, willing to understand the expected behaviour, will have to look at the dataset, which might be a tedious task and difficult to apprehend, because of both its structure and the quantity of data. This is all the more true with Deep Learning approaches, which require an enormous amount of data (Marcus, 2018), making datasets exploration even more daunting. Finally, Hybrid approaches combine both Top-Down and Bottom-Up, such that agents are able to learn ethical behaviours by experience, while being guided by an existing ethical framework to enforce constraints and prevent them from diverging. As Dignum (2019) points out: By definition, hybrid approaches have the potential to exploit the positive aspects of the top-down and bottom-up approaches while avoiding their problems. As such, these may give a suitable way forward. (Dignum, 2019, 81) One of such hybrid works is the approach by Honarvar and Ghasem-Aghaee (2009) to combine BDI agents with Case-based Reasoning and an Artificial Neural Network. Faced with a given situation, the agent proposes an action to perform, and then searches its database of already known cases for similar situations and similar actions. If a close enough case is found, and the action was considered as ethical in this case, the action is taken. However, if in this close enough case, the action was considered as unethical, a new action is requested, and the agent repeats the same algorithm. 
If the agent does not have a sufficiently close case, it performs the action, and uses its neural network to evaluate the action's consequences and determine whether it was effectively aligned with the ethical considerations. This evaluation is memorized in the case database, to be potentially reused during the next decision step. This approach indeed combines both reasoning and learning capabilities; however, it may be difficult to apply. Case-based reasoning allows grouping close situations and actions, but requires to specify how to group them, i.e., what is the distance function, and how to adapt an evaluation when either the situation or the action differs. For example, let us assume that, in a situation \(s\), the agent's action was to consume \(500\)Wh of energy, and the action was evaluated as ethical. In a new situation, \(s^{\prime}\), which is deemed as similar to \(s\) by the case-based reasoner, another action is proposed, which is to consume \(600\)Wh. Is this action ethical? How can we translate the difference between \(600\) and \(500\) in terms of ethical impact? This requires specifying an "adaptation knowledge" that provides the necessary knowledge and tools. Still, Hybrid approaches offer the possibility of learning a behaviour, thus adapting to any change in the environment, while still guiding or constraining the agent through symbolic reasoning and knowledge, thus injecting domain expert knowledge, more easily understandable and modifiable than datasets of examples. ### Reinforcement Learning We propose to use Reinforcement Learning (RL) as a method to learn behaviours aligned with moral values, and provide here the background knowledge and concepts that are necessary to understand the rest of the paper. We detail motivations for using RL, definitions of core concepts, and equations. RL is a method to learn a behaviour, mainly by using trial-and-error. Sutton and Barto (2018) define it as follows: Reinforcement learning problems involve learning what to do -- how to map situations to actions -- so as to maximize a numerical reward signal. (Sutton and Barto, 2018, 2) To do so, learning agents are placed in a closed-loop with an environment, with which they interact. Through the environment, they have knowledge of which state they are in, and they take actions to change the state. One of the key points of RL is that learning agents are not told which action is the correct one; the feedback they receive, or _reward_, merely tells them to which degree the action was satisfying. Learning agents must discover the best action, i.e., the one that yields the highest reward, by accumulating enough experience, that is by repetitively trying each action in each situation, and observing the received rewards. As we mentioned, RL agents receive feedback, which differentiates them from the _unsupervised_ paradigm. However, unlike the _supervised_ paradigm, this feedback does not clearly indicate which was the correct answer. This removes the assumption that we know the correct answer to each input. Instead, we provide a reward function, and thus optimize the agent's output step by step, by improving the proposed action based on the reward. The goal of a RL algorithm is to learn a policy, or strategy, denoted \(\pi\), such that the agent knows which action to take in each situation. \(\pi\) is often defined as \(\pi:\mathbb{S}\to\mathbb{A}\) in the case of a _deterministic_ policy, where \(\mathbb{S}\) is the space of possible states, and \(\mathbb{A}\) the space of possible actions. 
To each state \(s\) is associated a single action \(\pi(s)=a\), which the agent should take in order to maximize its reward. Another formulation is \(\pi:\mathbb{S}\times\mathbb{A}\to[0,1]\), in the case of a _stochastic_ policy. To each combination of state-action \((s,a)\) is associated a probability \(\pi(s,a)\) of taking action \(a\) in the state \(s\), such that \(\forall s\in\mathbb{S}:\sum_{\forall a\in\mathbb{A}}\pi(s,a)=1\). There are several challenges in RL, one of the best known and perhaps most important being the _exploration-exploitation trade-off_. In order to facilitate learning the policy function, RL researchers often rely on the notion of _values_1, in aptly-named _value-based_ methods, such as the well-known Q-Learning (Watkins and Dayan 1992). The value of a state, or of a state-action pair, represents the long-term interest of being in this state, whereas the reward is short-term feedback. The agent could receive a high reward for taking an action \(a\) in a state \(s\), but end up in a state \(s^{\prime}\) in which only low rewards can be obtained. In this case, we will say that the value of state \(s^{\prime}\), denoted as \(\mathbb{V}(s^{\prime})\), is low. By extension, the agent has little interest in performing action \(a\) while in state \(s\), since it will lead it to a low-interest state. Footnote 1: We use value here in a different sense than the moral value used earlier. To avoid confusion, we will always specify _moral_ value when referring to this first meaning. In the previous paragraph, we derived the interest of action \(a\), in a state \(s\), from the value \(\mathbb{V}(s^{\prime})\) which it leads to. It is also possible to directly learn the value of state-action pairs, which is the main idea of the Q-Learning algorithm. To retain the different interests of all state-action pairs, a table, named the _Q-Table_, is created, having the states as rows and the actions as columns. The _Q-Value_ \(\mathbb{Q}(s,a)\) is thus defined as the interest of the state-action pair \((s,a)\), i.e., the interest of taking action \(a\) in state \(s\). Additionally, the value of a state as a whole is defined as \(\mathbb{V}(s)=max_{a}\mathbb{Q}(s,a)\). Based on these definitions, the agent is able to learn the Q-Table by iteratively collecting experiences from the environment, in the form of \(\langle s,a,s^{\prime},r\rangle\) tuples, updating the interest \(\mathbb{Q}(s,a)\) based on both the short-term reward \(r\), and the long-term interest \(\mathbb{V}(s^{\prime})\) of arriving in state \(s^{\prime}\). Mathematically, this can be solved through dynamic programming, by applying the Bellman equation on the Q-Values (Bellman 1966): \[\mathbb{Q}_{t+1}(s_{t},a_{t})\leftarrow\alpha\left[r_{t}+\gamma\max_{a^{\prime}}\mathbb{Q}_{t}(s_{t+1},a^{\prime})\right]+(1-\alpha)\mathbb{Q}_{t}(s_{t},a_{t}) \tag{1}\] where \(r_{t}\) is the reward received at step \(t\), \(s_{t}\) is the state at step \(t\), \(a_{t}\) is the action chosen by the agent, and \(s_{t+1}\) is the new state resulting from performing \(a_{t}\) in \(s_{t}\). As the values are updated by taking the difference between the old value and a new value, this type of method is called _Temporal Difference_ learning, or TD-Learning.

### Multi-Agent Reinforcement Learning

Although Reinforcement Learning was originally concerned with the learning of a single agent, there are numerous cases where a multi-agent system can, or must, be considered.
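Before turning to the multi-agent setting, the following minimal sketch illustrates the tabular update of Equation 1 in Python; the numbers of states and actions and the learning parameters are illustrative assumptions, not values used later in this work.

```python
import numpy as np

# Tabular Q-Learning update (Equation 1); sizes and parameters are illustrative.
n_states, n_actions = 10, 4
alpha, gamma = 0.6, 0.9
Q = np.zeros((n_states, n_actions))  # rows: states, columns: actions

def td_update(s, a, r, s_next):
    """Blend the old Q-Value with the new target, as in the Bellman equation."""
    target = r + gamma * np.max(Q[s_next])      # short-term reward + long-term interest
    Q[s, a] = alpha * target + (1 - alpha) * Q[s, a]

# One experience tuple <s, a, s', r> collected from the environment:
td_update(s=0, a=2, r=1.0, s_next=3)
```

The same blending of an old value with a new target reappears later, weighted by (D)SOM neighborhoods, in Equation 8.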
For example, let us consider a virtual agent dedicated to helping a human user in their day-to-day tasks, such as booking appointments. The diversity of human users implies a diversity of virtual agents, which will have to communicate and interact together, in order to solve the tasks of their users. In this example, the multiplicity of agents is a necessity that stems from the social system in which we live. One of the most important challenges that additionally arises in multi-agent systems is the "Multi-Agent Credit Assignment Problem" (MA-CAP). Several definitions of the MA-CAP have been given in the literature, which are all very similar. We particularly appreciate the formulation of Yliniemi and Tumer (2014): Each agent seeks to maximize its own reward; with a properly designed reward signal, the whole system will attain desirable behaviors. This is the science of credit assignment: determining the contribution each agent had to the system as a whole. Clearly quantifying this contribution on a per-agent level is essential to multiagent learning. (Yliniemi and Tumer 2014, 2) The survey of Panait and Luke (2005, 8) summarizes several methods to assign rewards. The _Global reward_ approach considers the contribution of the whole team. Usually, the same reward is given to all agents, either by taking the sum of contributions, or by dividing the sum of contributions by the number of learners. In any case, a consequence is that _all_ learners' rewards depend on each agent. When an agent's contribution decreases (resp. increases), all learners see their reward decrease as well (resp. increase). This is a simple approach that intuitively fosters collaboration, since all agents need to perform well in order to receive a high reward. However, this approach does not send accurate feedback to the learners. Let us consider a situation in which most agents have exhibited a good behaviour, although another one has failed to learn correctly, and has exhibited a rather bad (or uninteresting) behaviour. As the individual reward depends on the team's efforts, the "bad" agent will still receive a gratifying reward. It will therefore have little incentive to change its behaviour. On the contrary, the "good" agents could have received a higher reward, if it were not for their "bad" colleague. Their behaviour does not necessarily need to change, however they will still try to improve it, since they expect to improve their received rewards. At the opposite extreme, the _Local reward_ approach considers solely the contribution of an individual agent to determine its reward. For example, if the agents' task is to take waste to the bin, an agent's reward will be the number of waste products that this specific agent brought. An advantage of this approach is to discourage laziness, as the agent cannot rely upon others to effectively achieve the task. By definition, agents receive feedback that is truer to their actual contribution. A problem of _local rewards_ is that they incentivize greedy behaviours and do not always foster collaboration. Indeed, as agents are rewarded based on their own contribution, without taking the others into account, they have no reason to help other agents, or even to let them do their task. In the waste example, an agent could develop a stealing behaviour to take out more waste products. Another common example is that of a narrow bridge that two agents must cross to achieve their task.
They both arrive at the bridge at the same time, and none of them is willing to let the other one cross first, since that would reduce their own reward, or, phrased differently, would prevent them from getting an even higher reward. Thus, they are both stuck in a non-interesting situation, both in the collective and individual sense, due to their maximizing of the individual interest only. Another method to determine an agent's contribution to the team is to imagine an environment in which the agent had not acted. This method is sometimes called _Difference Rewards_(Yliniemi and Tumer 2014). The idea of this method is to reward agents if their contribution was helpful for the team, and to force a high impact of an agent's action on its own reward. It is computed as follows: \[\texttt{D}_{i}(z)=\texttt{G}(z)-\texttt{G}(z_{-i}) \tag{2}\] where \(\texttt{D}_{i}(z)\) is the reward of an agent \(i\), based on the context \(z\), which is both the state and the joint-action of all agents in the environment; \(\texttt{G}(z)\) is the global reward for the context \(z\), and \(\texttt{G}(z_{-i})\) is an hypothetical reward, which would have been given to the team, if the agent \(i\) had not acted in the environment. In other words, if the current environment is better than the hypothetical one, this means the agent's action has improved the environment. It should be rewarded positively so as to reinforce its good behaviour. As \(\texttt{G}(z)>\texttt{G}(z_{-i})\), the reward will effectively be positive. Conversely, if the current environment is worse than the hypothetical one, this means the agent's action has deteriorated the environment, or contributed negatively. The agent should therefore receive a negative reward, or punishment, in order to improve its behaviour. In this case, as \(\texttt{G}(z)<\texttt{G}(z_{-i})\), the result will be negative. If the agent did not contribute much, its reward will be low, to encourage it to participate more, although without impairing the team's effort, as in the bridge example. Otherwise, the global reward \(\texttt{G}(z)\) would diminish, and the agent's reward would therefore decrease as well. Finally, it can be noted that the other agents' actions have a low impact on an agent reward. ## 3 The QSOM and QDSOM algorithms As stated in the State of the Art, the algorithms that we propose need to handle continuous and multi-dimensional state-action spaces. They are based on an existing work (Smith 2002a, 2002b) that we extend and evaluate in a more complex use-case. Smith's initial work proposed, in order to handle such domains, to associate Self-Organizing Maps (SOMs) to a Q-Table. We first briefly explain what is a Q-Table from the Q-Learning algorithm, and its limitations. We then present Self-Organizing Maps, the Dynamic Self-Organizing Map variation, and how we can use them to solve the Q-Table's limitations. We combine these components to propose an extension of Smith's algorithm that we name _QSOM_, which leverages a Q-Table and Self-Organizing Maps (SOMs), and a new algorithm named _QDSOM_, which leverages a Q-Table and Dynamic SOMs (DSOMs). Figure 1 presents a summarizing schema of our proposed algorithms. It includes multiple learning agents that live within a shared environment. This environment sends observations to agents, which represent the current state, so that agents may choose an action and perform it in the environment. 
In response, the environment changes its state, and sends them new observations, potentially different for each agent, corresponding to this new state, as well as a reward indicating how correct the performed action was. Learning agents leverage the new observations and the reward to update their internal model. This observation-action-reward cycle is then repeated so as to make learning agents improve their behaviour, with respect to the considerations embedded in the reward function. The decision process relies on 3 structures: a State (Dynamic) Self-Organizing Map, also named the State-(D)SOM, an Action (Dynamic) Self-Organizing Map, also named the Action-(D)SOM, and a Q-Table. They take observations as input and output an action, both of which are vectors of continuous values. The learning process updates these same structures, and takes the reward as an input, in addition to observations.

### Q-Table

The _Q-Table_ is the central component of the well-known Q-Learning algorithm (Watkins and Dayan 1992). It is tasked with learning the _interest_ of a state-action pair, i.e., the expected horizon of received rewards for taking an action in a given state. The Q-Table is a tabular structure, where rows correspond to possible states, and columns to possible actions, such that the row \(\texttt{Q}(s,\cdot)\) gives the interests of taking every possible action in state \(s\), and, more specifically, the cell \(\texttt{Q}(s,a)\) is the interest of taking action \(a\) in state \(s\). These cells, also named _Q-Values_, can be learned iteratively by collecting experiences of interactions, and by applying the Bellman equation. We recall that the interests take into account not only the short-term immediate reward, but also the interest of the following state \(s^{\prime}\), resulting from the application of \(a\) in \(s\). Thus, an action that leads to a state where any action yields a low reward, or in other words an unattractive state, would have a low interest, regardless of its immediate reward. Assuming that the Q-Values have converged towards the "true" interests, the optimal policy can be easily obtained through the Q-Table, by selecting the action with the maximum interest in each state. By definition, this "best action" will lead to states with high interests as well, thus yielding, in the long-term, the maximum expected horizon of rewards. Figure 1: Architecture of the QSOM and QDSOM algorithms, which consist of a decision process and a learning process. The processes rely on a State-(D)SOM, an Action-(D)SOM, and a Q-Table. An additional advantage of the Q-Table is the ability to access the interests directly, in comparison to other approaches, such as _Policy Gradient_, which typically manipulate actions' probabilities, increasing and decreasing them based on received rewards. These interests can be conveyed to humans to support or detail the algorithm's decision process, an advantage that could be exploited for explainability. Nevertheless, Q-Tables have an intrinsic limitation: they are defined as a tabular structure. This structure works flawlessly in simple environments, e.g., those with a few discrete states and actions. Yet, in more complex environments, especially those that require continuous representations of states and actions, it is not sufficient any more, as it would require an infinite number of rows and columns, and therefore an infinite amount of memory.
Additionally, because of the continuous domains' nature, it would be almost impossible to obtain twice the exact same state: the cells, or Q-Values, would almost always get at most a single interaction, which does not allow for adequate learning and convergence towards the true interests. To counter this disadvantage, we rely on the use of Self-Organizing Maps (SOMs) that handle the continuous domains. The mechanisms of SOMs are explained in the next section, and we detail how they are used in conjunction with a Q-Table in Section 3.3. ### (Dynamic) Self-Organizing Maps A Self-Organizing Map (SOM) (Kohonen 1990) is an artificial neural network that can be used for unsupervised learning of representations for high-dimensional data. SOMs contain a fixed set of neurons, typically arranged in a rectangular 2D grid, which are associated to a unique identifier, e.g., neuron #1, neuron #2, etc., and a vector, named the _prototype vector_. Prototype vectors lie in the latent space, which is the highly dimensional space the SOM must learn to represent. The goal is to learn to represent as closely as possible the distribution of data within the latent space, based on the input data set. To do so, prototype vectors are incrementally updated and "moved" towards the different regions of the latent space that contain the most data points. Each time an input vector, or data point, is presented to the map, the neurons compete for attention: the one with the closest prototype vector to the input vector is named the _Best Matching Unit_ (BMU). Neurons' prototypes are then updated, based on their distance to the BMU and the input vector. By doing this, the neurons that are the closest to the input vector are moved towards it, whereas the farthest neurons receive little to no modification, and thus can focus on representing different parts of the latent space. As the number of presented data points increases, the distortion, i.e., the distance between each data point and its closest prototype, diminishes. In other words, neurons' prototypes are increasingly closer to the real (unknown) distribution of data. When the map is sufficiently learned, it can be used to perform a mapping of high dimensional data points into a space of lower dimension. Each neuron represents the data points that are closest to its prototype vector. Conversely, each data point is represented by the neuron whose prototype is the closest to its own vector. This property of SOMs allows us to handle continuous, and multi-dimensional state and action spaces. Figure 2 summarizes and illustrates the training of a SOM. The blue shape represents the data distribution that we wish to learn, from a 2D space for easier visualization. Typically, data would live in higher dimension spaces. Within the data distribution, a white disc shows the data point that is presented to the SOM at the current iteration step. SOM neurons, represented by black nodes, and connected to their neighbors by black edges, are updated towards the current data point. Among them, the Best Matching Unit, identified by an opaque yellow disc, is the closest to the current data point, and as such receives the most important update. The closest neighbors of the BMU, belonging to the larger yellow transparent disc, are also slightly updated. Farther neurons are almost not updated. The learned SOM is represented on the right side of the figure, in which neurons correctly cover the data distribution. 
The update received by a neuron is determined by Equation 3, where \(v\) is the index of the neuron, \(\mathbf{W_{v}}\) is the prototype vector of neuron \(v\), and \(\mathbf{D_{t}}\) is the data point presented to the SOM at step \(t\); \(u\) is the index of the Best Matching Unit, i.e., the neuron that satisfies \(u=\operatorname*{argmin}_{\forall v}\|\mathbf{D_{t}}-\mathbf{W_{v}}\|\). \[\mathbf{W_{v}^{t+1}}\leftarrow\mathbf{W_{v}^{t}}+\theta(u,v,t)\alpha(t)\left(\mathbf{D_{t}}-\mathbf{W_{v}^{t}}\right) \tag{3}\] In this equation, \(\theta\) is the neighborhood function, which is typically a gaussian centered on the BMU (\(u\)), such that the BMU is the most updated, its closest neighbors are slightly updated, and farther neurons are not updated. The learning rate \(\alpha\) and the neighborhood function \(\theta\) both depend on the time step \(t\): they are often monotonically decreasing, in order to force neurons' convergence and stability. One of the numerous extensions of the Self-Organizing Map is the Dynamic Self-Organizing Map (DSOM) (Rougier and Boniface 2011). The idea behind DSOMs is that self-organization should offer both stability, when the input data does not change much, and dynamism, when there is a sudden change. This stems from neurological inspiration, since the human brain is able to both stabilize after the early years of development, and dynamically re-organize itself and adapt when lesions occur. As we mentioned, the SOM enforces stability through decreasing parameters (learning rate and neighborhood), however this also prevents dynamism. Indeed, as the parameters approach \(0\), the vectors' updates become negligible, and the system does not adapt any more, even when faced with an abrupt change in the data distribution. DSOMs propose to replace the time-dependent parameters by a time-invariant one, named the _elasticity_, which determines the coupling of neurons. Whereas SOMs and other similar algorithms try to learn the density of data, DSOMs focus on the structure of the data space, and the map will not try to place several neurons in a high-density region. In other words, if a neuron is considered as sufficiently close to the input data point, the DSOM will not update the other neurons, assuming that this region of the latent space is already quite well represented by this neuron. "Sufficiently close" is determined through the elasticity parameter: with high elasticity, neurons are tightly coupled with each other, whereas a lower elasticity lets neurons spread out over the whole latent space. DSOMs replace the update equation with the following: \[\mathbf{W_{i}^{t+1}}\leftarrow\mathbf{W_{i}^{t}}+\alpha\left\|\mathbf{D_{t}}-\mathbf{W_{i}^{t}}\right\|h_{\eta}(i,u,\mathbf{D_{t}})\left(\mathbf{D_{t}}-\mathbf{W_{i}^{t}}\right) \tag{4}\] \[h_{\eta}(i,u,\mathbf{D_{t}})=\exp\left(-\frac{1}{\eta^{2}}\frac{\left\|\mathbb{P}(i)-\mathbb{P}(u)\right\|^{2}}{\left\|\mathbf{D_{t}}-\mathbf{W_{u}}\right\|^{2}}\right) \tag{5}\] where \(\alpha\) is the learning rate, \(i\) is the index of the currently updated neuron, \(\mathbf{D_{t}}\) is the current data point, \(u\) is the index of the best matching unit, \(\eta\) is the elasticity parameter, \(h_{\eta}\) is the neighborhood function, and \(\mathbb{P}(i),\mathbb{P}(u)\) are respectively the positions of neurons \(i\) and \(u\) in the grid (not in the latent space). Intuitively, the distance between \(\mathbb{P}(i)\) and \(\mathbb{P}(u)\) is the minimal number of consecutive neighbors that form a path between \(i\) and \(u\).
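As an illustration of these two update rules (Equations 3 to 5), here is a minimal sketch on a small 2D grid; the grid shape, learning rate, radius and elasticity are illustrative assumptions rather than the hyperparameters used later in the experiments.

```python
import numpy as np

# Minimal sketch of the SOM (Eq. 3) and DSOM (Eq. 4-5) updates on a 3x3 grid.
grid_shape, dim = (3, 3), 2
positions = np.array([(r, c) for r in range(grid_shape[0]) for c in range(grid_shape[1])])
W = np.random.rand(len(positions), dim)          # prototype vectors in the latent space

def bmu(x):
    """Index of the Best Matching Unit: the neuron whose prototype is closest to x."""
    return int(np.argmin(np.linalg.norm(x - W, axis=1)))

def som_update(x, alpha=0.5, radius=1.0):
    """Eq. 3: move every prototype towards x, weighted by a gaussian on grid distance to the BMU."""
    u = bmu(x)
    grid_dist = np.linalg.norm(positions - positions[u], axis=1)
    theta = np.exp(-grid_dist**2 / (2 * radius**2))        # neighborhood, centered on the BMU
    W[:] += (theta * alpha)[:, None] * (x - W)

def dsom_update(x, alpha=0.5, elasticity=1.0):
    """Eq. 4-5: update scaled by the latent-space distance ||x - W_i||; no time-decaying parameters."""
    u = bmu(x)
    dist_to_bmu = np.linalg.norm(x - W[u])
    if dist_to_bmu == 0:                                   # the BMU already matches x exactly
        return
    grid_dist = np.linalg.norm(positions - positions[u], axis=1)
    h = np.exp(-(1 / elasticity**2) * grid_dist**2 / dist_to_bmu**2)
    W[:] += (alpha * np.linalg.norm(x - W, axis=1) * h)[:, None] * (x - W)

som_update(np.array([0.2, 0.8]))
dsom_update(np.array([0.9, 0.1]))
```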
Figure 2: Training of a SOM, illustrated on several steps. Image extracted from Wikipedia.

### The learning and decision algorithms

We take inspiration from Decentralized Partially-Observable Markov Decision Processes (DecPOMDPs) to formally describe our proposed algorithms. DecPOMDPs are an extension of the well-known Markov Decision Process (MDP) that considers multiple agents taking repeated decisions in multiple states of an environment, while receiving only partial observations about the current state. In contrast with the original DecPOMDP as described by Bernstein et al. (2002), we explicitly define the set of learning agents, and we assume that agents receive (different) individual rewards, instead of a team reward. **Definition 1**.: _A Decentralized Partially-Observable Markov Decision Process is a tuple \(\langle\mathcal{L},\mathbb{S},\mathbb{A},\mathcal{T},\mathbb{O},\mathcal{O},\mathcal{R},\gamma\rangle\), where:_ * \(\mathcal{L}\) _is the set of learning agents, of size_ \(n=|\mathcal{L}|\)_._ * \(\mathbb{S}\) _is the state space, i.e., the set of states that the environment can possibly be in. States are not directly accessible to learning agents._ * \(\mathbb{A}_{l}\) _is the set of actions accessible to agent_ \(l\)_,_ \(\forall l\in\mathcal{L}\)_, as all agents take individual actions. We consider multi-dimensional and continuous actions, thus we have_ \(\mathbb{A}_{l}\subseteq\mathbb{R}^{d}\)_, with_ \(d\) _the number of dimensions, which depends on the case of application._ * \(\mathbb{A}\) _is the action space, i.e., the set of joint-actions that can be taken at each time step. A joint-action is the combination of all agents' actions, i.e., \(\mathbb{A}=\mathbb{A}_{l_{1}}\times\cdots\times\mathbb{A}_{l_{n}}\)._ * \(\mathcal{T}\) _is the transition function, defined as_ \(\mathcal{T}:\mathbb{S}\times\mathbb{A}\times\mathbb{S}\to[0,1]\)_. In other words,_ \(\mathcal{T}(s^{\prime}|s,\mathbf{a})\) _is the probability of obtaining state_ \(s^{\prime}\) _after taking the joint-action_ \(\mathbf{a}\) _in state_ \(s\)_._ * \(\mathbb{O}\) _is the observation space, i.e., the set of possible observations that agents can receive. An observation is partial information about the current state. Similarly to actions, we define_ \(\mathbb{O}_{l}\) _as the observation space for learning agent_ \(l\)_,_ \(\forall l\in\mathcal{L}\)_. Like actions, observations are multi-dimensional and continuous, thus we have_ \(\mathbb{O}_{l}\subseteq\mathbb{R}^{g}\)_, with_ \(g\) _the number of dimensions, which depends on the use case._ * \(\mathcal{O}\) _is the observation probability function, defined as_ \(\mathcal{O}:\mathbb{O}\times\mathbb{S}\times\mathbb{A}\to[0,1]\)_, i.e.,_ \(\mathcal{O}(\mathbf{o}|s^{\prime},\mathbf{a})\) _is the probability of receiving the observations_ \(\mathbf{o}\) _after taking the joint-action_ \(\mathbf{a}\) _and arriving in state_ \(s^{\prime}\)_._ * \(\mathcal{R}\) _is the reward function, defined as_ \(\forall l\in\mathcal{L}\quad R_{l}:\mathbb{S}\times\mathbb{A}_{l}\to\mathbb{R}\)_. Typically, the reward function itself will be the same for all agents; however, agents are rewarded individually, based on their own contribution to the environment through their action. 
In other words,_ \(R_{l}(s,\mathbf{a}_{l})\) _is the reward that learning agent_ \(l\) _receives for taking action_ \(\mathbf{a}_{l}\) _in state_ \(s\)_._ * \(\gamma\) _is the discount factor, to allow for potentially infinite horizon of time steps, with_ \(\gamma\in[0,1[\)_._ The RL algorithm must learn a stochastic strategy \(\pi_{l}\), defined as \(\pi_{l}:\mathbb{O}_{l}\times\mathbb{A}_{l}\to[0,1]\). In other words, given the observations \(\mathbf{o}_{l}\) received by an agent \(l\), \(\pi(\mathbf{o}_{l},\mathbf{a})\) is the probability that agent \(l\) will take action \(\mathbf{a}\). We recall that observations and actions are vectors of floating numbers, the RL algorithm must therefore handle this accordingly. However, it was mentioned in Section 3.1 that the Q-Table is not suitable for continuous data. To solve this, we take inspiration from an existing work (Smith 2002a, 2002b) and propose to use variants of Self-Organizing Maps (SOMs) (Kohonen 1990). We can leverage SOMs to learn to handle the observation and action spaces: neurons learn the topology of the latent space and create a discretization. By associating each neuron with a unique index, we are able to discretize the multi-dimensional data: each data point is recognized by the neuron with the closest prototype vector, and thus is represented by a discrete identifier, i.e., the neuron's index. The proposed algorithms are thus based on two (Dynamic) SOMs, a State-SOM, and an Action-SOM, which are associated to a Q-Table. To navigate the Q-Table and access the Q-Values, we use discrete identifiers obtained from the SOMs. The Q-Table's dimensions thus depend on the (D)SOMs' number of neurons: the Q-Table has exactly as many rows as the State-(D)SOM has neurons, and exactly as many columns as the Action-(D)SOM has neurons, such that each neuron is represented by a row or column, and reciprocally. Our algorithms are separated into two distinct parts: the _decision_ process, which chooses an action from received observations about the environment, and the _learning_ process, which updates the algorithms' data structures, so that the next decision step will yield a better action. We present in details these two parts below. #### 3.3.1 The decision process We now explain the decision process that allows an agent to choose an action from received observations, which is described formally in Algorithm 1 and represented in Figure 3. First, we need to obtain a discrete identifier from an observation \(\mathbf{o}\) that is a vector \(\in\mathbb{O}_{l}\subseteq\mathbb{R}^{g}\), in order to access the Q-Table. To do so, we look for the Best Matching Unit (BMU), i.e., the neuron whose prototype vector is the closest to the observations, from the State-SOM, which is the SOM tasked with learning the observation space. The unique index of the BMU is used as the state identifier \(s\) (line 9). We call this identifier a "state hypothesis", and we use it to navigate the Q-Table and obtain the expected interest of each action, assuming we have correctly identified the state. Knowing these interests \(\mathbb{Q}(s,.)\) for all actions, we can assign a probability of taking each one, using a Boltzmann distribution (line 10). Boltzmann is a well-known and used method in RL that helps with the exploration-exploitation dilemma. Indeed, agents should try to maximize their expectancy of received rewards, which means they should _exploit_ high-rewarding actions, i.e., those with a high interest. 
However, the true interest of each action is not known to the agents: they have to discover it incrementally, by trying actions in the environment, in various situations, and memorizing the associated rewards. If they only choose the action with the maximum interest, they risk focusing on a few actions and not exploring the others. By not sufficiently exploring, they maintain the phenomenon, as unexplored actions will stay at a low interest, reducing their probability of being chosen, and so on. Using Boltzmann mitigates this problem, by giving similar probabilities to similar interests, and yet a non-zero probability of being chosen even for actions with low interests. The Boltzmann probability of an action \(j\) being selected is computed based on the action's interest, in the current state, relatively to all other actions' interests, as follows: \[P(X=j)=\frac{\exp\left(\mathbb{Q}(s,j)/\tau\right)}{\sum_{k=1}^{|\mathcal{W}|}\exp\left(\mathbb{Q}(s,k)/\tau\right)} \tag{6}\]

```
0:  \(\mathcal{U}\) the neurons in the State-(D)SOM
1:  \(\mathbf{U_{i}}\) the prototype vector associated to neuron \(i\) in the State-(D)SOM
2:  \(\mathcal{W}\) the neurons in the Action-(D)SOM
3:  \(\mathbf{W_{j}}\) the prototype vector associated to neuron \(j\) in the Action-(D)SOM
4:  \(\mathbb{Q}(s,a)\) the Q-Value of action \(a\) in state \(s\)
5:  \(\tau\) the Boltzmann temperature
6:  \(\epsilon\) the noise control parameter
7:
8:  function Decision(Observations \(\mathbf{o}\))
9:      \(s\leftarrow\operatorname*{argmin}_{i\in\mathcal{U}}\|\mathbf{o}-\mathbf{U_{i}}\|\)
10:     Let \(P\) be the Boltzmann distribution over the Q-Values \(\mathbb{Q}(s,\cdot)\). We draw a random variable \(X\) from \(P\), and denote the probability that \(X\) equals a given value \(j\) as \(P(X=j)\).
11:     Draw \(j\sim P(X=j)=\frac{\exp(\mathbb{Q}(s,j)/\tau)}{\sum_{k=1}^{|\mathcal{W}|}\exp(\mathbb{Q}(s,k)/\tau)}\)
12:     Let \(\mathbf{W_{j}}\) be the chosen action's parameters
13:     for \(k\in\) all dimensions of \(\mathbf{W_{j}}\) do
14:         \(noise\sim\texttt{random}(\epsilon)\)
15:         \(W_{j,k}^{\prime}\gets W_{j,k}+noise\)
16:     end for
17:     Return action \(\mathbf{a}\leftarrow\mathbf{W_{j}^{\prime}}\)
18: end function
```
**Algorithm 1** Decision algorithm

Figure 3: Dataflow of the Q-(D)SOM decision process.

Traditionally, the Boltzmann parameter \(\tau\) should be decreasing over the time steps, such that the probabilities of high-interest actions rise, whereas low-interest actions converge towards a probability of \(0\). This mechanism ensures the convergence of the agents' policy towards the optimal one, by reducing exploration in later steps, in favour of exploitation. However, and as we have already mentioned, we chose to disable the convergence mechanisms in our algorithms, because they prevent, by principle, continuous learning and adaptation. We draw an action identifier \(j\) from the list of possible actions, according to the Boltzmann probabilities (line 11). From this discrete identifier, we get the action's parameters from the Action-SOM, which is tasked with learning the action space. We retrieve the neuron with identifier \(j\), and take its prototype vector as the proposed action's parameters (line 12). We can note that this is somewhat symmetrical to what is done with the State-SOM. To learn the State-SOM, we use the data points, i.e., the observations, that come from the environment; to obtain a discrete identifier, we take the neuron with the closest prototype.
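The selection steps just described (lines 9 to 12 of Algorithm 1) can be sketched as follows; `state_prototypes`, `action_prototypes`, `q_table` and `tau` are illustrative placeholders, and the exploration noise added to the retrieved parameters (lines 13 to 16) is discussed just after.

```python
import numpy as np

# Minimal sketch of the decision core: State-SOM BMU lookup, then a Boltzmann
# draw over the corresponding row of the Q-Table. All arguments are assumptions.
def decide(observations, state_prototypes, action_prototypes, q_table, tau=0.4):
    # Line 9: state hypothesis = index of the closest State-SOM prototype
    s = int(np.argmin(np.linalg.norm(state_prototypes - observations, axis=1)))
    # Lines 10-11: Boltzmann probabilities over the interests Q(s, .)
    logits = q_table[s] / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    j = int(np.random.choice(len(probs), p=probs))
    # Line 12: the proposed action's parameters are the prototype of neuron j
    return s, j, action_prototypes[j].copy()
```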
For the Action-SOM, we start with a discrete identifier, and we take the prototype of the neuron with this identifier. However, we need to learn what those prototype vectors should be. We do not have data points as for the State-SOM, since we do not know what the "correct" action is in each situation. In order to learn better actions, we apply an exploration step after choosing an action: the action's parameters are perturbed by random noise (lines 13-16). In the original work of Smith (2002a), the noise was taken from a uniform distribution \(\mathcal{U}_{[-\epsilon,+\epsilon]}\), which we will call the _epsilon_ method in our experiments. However, in our algorithms, we implemented a normal, or _gaussian_, random distribution \(\mathcal{N}(\mu,\sigma^{2})\), where \(\mu\) is the mean, which we set to \(0\) so that the distribution ranges over both negative and positive values, \(\sigma^{2}\) is the variance, and \(\sigma\) is the standard deviation. \(\epsilon\) and \(\sigma^{2}\) are the "noise control parameters" of their respective distributions. The advantage over the uniform distribution is to have a higher probability of a small noise, thus exploring very close actions, while still allowing for a few rare but longer "jumps" in the action space. These longer jumps may help to escape local extrema, but should be rare, so as to slowly converge towards optimal actions most of the time, without overshooting them. This was not permitted by the uniform distribution, as the probability is the same for each value in the range \([-\epsilon,+\epsilon]\). The noised action's parameters are considered as the chosen action by the decision process, and the agent executes this action in the environment (line 17).

#### 3.3.2 The learning process

After all agents have executed their actions, and the environment has simulated the new state, agents receive a reward signal which indicates to what degree their action was a "good one". From this reward, agents should improve their behaviour so that their next choice will be better. The learning process that makes this possible is formally described in Algorithm 2, and we detail it below. First, we compute the Action-(D)SOM and State-(D)SOM neighborhoods (lines 11-13 and 14-16). Then, we update the Action-(D)SOM. Remember that we do not have the ground truth for actions: we do not know which parameters yield the best rewards. Moreover, we explored the action space by randomly noising the proposed action; it is possible that the perturbed action is actually worse than the learned one. In this case, we do not want to update the Action-(D)SOM, as this would worsen the agent's performance. We thus determine whether the perturbed action is better than the proposed action by comparing the received reward with the memorized interest of the proposed action, using the following equation: \[r+\gamma\max_{j^{\prime}}\mathbb{Q}(s^{\prime},j^{\prime})\overset{?}{>}\mathbb{Q}(s,j) \tag{7}\] If the perturbed action is deemed better than the proposed one, we update the Action-(D)SOM towards the perturbed action (lines 17-21). To do so, we assume that the Best Matching Unit (BMU), i.e., the center of the neighborhood, is the neuron that was selected at the decision step, \(j\). We then apply the corresponding update equation, Equation 3 for a SOM, or Equation 4 for a DSOM, to move the neurons' prototypes towards the perturbed action. Secondly, we update the actions' interests, i.e., the Q-Table (line 22). To do so, we rely on the traditional Bellman equation.
However, Smith's algorithm introduces a difference in this equation to increase the learning speed. Indeed, the State- and Action-(D)SOMs offer additional knowledge about the states and actions: as they are discrete identifiers mapping to continuous vectors in a latent space, we can define a notion of _similarity_ between states (resp. actions) by measuring the distance between the states' vectors (resp. actions' vectors). Similar states and actions will most likely have a similar interest, and thus each Q-Value is updated at each time step, instead of only the current state-action pair, by taking into account the neighborhoods of the State- and Action-(D)SOMs (computed on lines 11-13 and 14-16). Equation 8 shows the resulting formula: \[\mathtt{Q}_{t+1}(s,j)\leftarrow\alpha\psi_{U}(s)\psi_{W}(j)\left[r+\gamma\max_{j^{\prime}}\mathtt{Q}_{t}(s^{\prime},j^{\prime})\right]+\left(1-\alpha\psi_{U}(s)\psi_{W}(j)\right)\mathtt{Q}_{t}(s,j) \tag{8}\] where \(r\) is the received reward and \(s^{\prime}\) is the state hypothesis at step \(t+1\) (from the new observations); the update is applied to every pair \((s,j)\), and \(\psi_{U}(s)\) and \(\psi_{W}(j)\) represent, respectively, the neighborhoods of the State- and Action-(D)SOMs, centered on the state hypothesis at step \(t\) and on the chosen action identifier. Intuitively, the equation takes into account the interest of arriving in this new state, based on the maximum interest of the actions available in the new state. This means that an action could yield a medium reward by itself, but still be very interesting because it allows the agent to take actions with higher interests. On the contrary, an action with a high reward, but leading to a state with only catastrophic actions, would have a low interest. Finally, we learn the State-SOM, which is a very simple step (lines 23-25). Indeed, we have already mentioned that we know data points, i.e., observations, that have been sampled from the distribution of states by the environment. Therefore, we simply update the neurons' prototypes towards the observation received at the previous step. Prototype vectors are updated based on both their own distance to the data point, within the latent space, and the distance between their neuron and the best matching unit, within the 2D grid neighborhood (using the neighborhood computed on lines 11-13). This ensures that the State-SOM learns to represent states which appear in the environment. _Remark_.: In the presented algorithm, the neighborhood and update formulas correspond to a DSOM. When using the QSOM algorithm, these formulas must be replaced by their SOM equivalents. The general structure of the algorithm, i.e., the steps and the order in which they are taken, stays the same. _Remark_.: Compared to Smith's algorithm, our extensions differ in the following aspects: * DSOMs can be used in addition to SOMs. * Hyperparameters are not annealed, i.e., they are constant throughout the simulation, so that agents can continuously learn instead of slowly converging. * Actions are chosen through a Boltzmann distribution of probabilities based on their interests, instead of using the \(\epsilon\)-greedy method. * The random noise to explore the actions' space is drawn from a gaussian distribution instead of a uniform one. * The neighborhood functions of the State- and Action-(D)SOMs are gaussian instead of linear. * The number of dimensions of the actions' space in the following experiments is greater (6) than in Smith's original experiments (2). This particularly prompted the need to explore other ways to randomly noise actions, e.g., the gaussian distribution. Note that some other methods have been tried, such as applying noise to a single dimension at each step, or randomly determining for each dimension whether it should be noised at each step; they are not reported in the results as they performed slightly below the gaussian method. Searching for better hyperparameters could yield better results for these methods.
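As a summary of this learning step, the following minimal sketch applies the acceptance test of Equation 7 and the neighborhood-weighted update of Equation 8; the neighborhood vectors, the Action-(D)SOM update callback, and the hyperparameters are illustrative placeholders rather than the actual implementation.

```python
import numpy as np

# Minimal sketch of one learning step (Equations 7 and 8). `state_neigh` and
# `action_neigh` stand for the neighborhood weights psi_U(.) and psi_W(.),
# centered on the state hypothesis and the chosen action identifier; they, the
# `update_action_som` callback, alpha and gamma are illustrative assumptions.
def learn_step(q_table, s, j, reward, s_next,
               state_neigh, action_neigh,
               update_action_som, perturbed_action,
               alpha=0.6, gamma=0.9):
    target = reward + gamma * q_table[s_next].max()
    # Equation 7: update the Action-(D)SOM only if the perturbed action
    # seems better than the memorized interest of the proposed action.
    if target > q_table[s, j]:
        update_action_som(bmu=j, target=perturbed_action)
    # Equation 8: every Q-Value moves towards the target, weighted by the
    # product of the two neighborhoods (close to 1 for the current pair,
    # close to 0 for pairs far from it).
    weights = alpha * np.outer(state_neigh, action_neigh)
    q_table[:] = weights * target + (1.0 - weights) * q_table
```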
## 4 Experiments and results

In order to validate our proposed algorithms, we ran experiments on a Smart Grid use-case. First, let us apply the algorithms and formal model to this specific use-case. The observation space, \(\mathbb{O}\), is composed of the information that agents receive: the time (hour), the available energy, their personal battery storage,... The full list of observations was defined in Section @ref(positioning-smartgrid-observations). These values range from 0 to 1, and we have 11 such values, thus we define \(\mathbb{O}_{l}=[0,1]^{11}\). Similarly, actions are defined by multiple parameters: consume energy from the grid, consume from the battery, sell,... These actions were presented in Section @ref(positioning-smartgrid-actions). To simplify the learning of actions, we constrain these parameters to the \([0,1]\) range; they are scaled to the agent's true action range outside the learning and decision processes. For example, for an agent with an action range of \(6,000\) and an action parameter of \(0.5\), as outputted by the decision process, the scaled action parameter will be \(0.5\times 6,000=3,000\). We have 6 action parameters, and thus define \(\mathbb{A}_{l}=[0,1]^{6}\). In the sequel, we present the reward functions that we implemented to test our algorithms, as well as the experiments' scenarii. Finally, we quickly describe the 2 algorithms that we chose as baselines: _DDPG_ and _MADDPG_.

### The Smart-Grid use-case

To evaluate the QSOM and QDSOM algorithms, we use a Smart-Grid use case in which multiple producer-consumer (_prosumer_) agents learn to consume energy to satisfy their needs. The use-case is represented in Figure 4. Learning agents receive _observations_ \(\in\mathbb{R}^{11}\) that describe the current state of the environment: they consist of _shared_ observations \(\in\mathbb{R}^{8}\) that are shared among all agents, such as the current hour or the amount of available energy, and _local_ observations \(\in\mathbb{R}^{3}\) that are individual to each agent, and not accessible to others, such as the agent's personal battery. Splitting between shared and local observations helps preserve the privacy of agents, by not sharing personal data. From these observations, agents must take _actions_, represented by vectors of parameters \(\in\mathbb{R}^{6}\), which govern the amounts of energy to exchange: how much to consume from the smart grid, how much to buy, etc. In practice, these observations and actions are interpolated to the \([0,1]\) domain, so as to facilitate the learning algorithms, and especially the Self-Organizing Maps. Indeed, a dimension with a higher range than another would have a greater importance and would risk biasing the learning. The simulated Smart Grid is connected to a national grid, which allows agents to buy and sell energy, although from more polluting sources; it is also connected to a hydropower plant, which is considered to be local to the Smart Grid.
This power plant generates the energy that is available to all agents at each time step of the simulation. Agents additionally produce a small quantity (e.g., from solar panels), which is kept in their personal battery. They may share this energy with other agents when necessary (e.g., to increase equity), consume it directly, or sell it to the national grid for some (monetary) profit. Different _profiles_ of prosumer agents are present in the grid, each representing a specific kind of building: a (small) Household, an (medium) Office, or a (large) School. Buildings' profiles determine several characteristics, such as the _needs_ that these buildings have, i.e., how much energy they would like to consume at each hour. These needs are taken from a public dataset of energy consumption in the United States (Ong and Clark 2014). Profiles also impact the range of action parameters: larger buildings may consume more energy than the smaller ones. In practice, the range was determined to be slightly higher than the maximum need over all hours, so that the buildings can decide to consume as much as they need (yet, during the simulation, this might be a bad idea due to the environment's state!). Similarly, the battery capacity depends on the profile, with larger buildings having access to higher capacities. Agents make decisions based on the rewards they receive, which drive them towards the respect of one or several ethical considerations. The reward functions are described in the next section, and concern some considerations that are classical for smart grid: consuming energy to satisfy their needs and increase their comfort, ensuring the equity of comforts among agents, avoiding to over-consume. ### Reward functions We implemented multiple reward functions that each focus on different ethical stakes. Most of them are based on the principle of Difference Reward (Yliniemi and Tumer 2014) to facilitate the Credit Assignment. Additionally, two functions focus on multiple objectives, but with a rather naive approach to scalarize, and another two focus on adaptation, i.e., the agents' capacity to adapt their behaviour to changing mores, by making the reward function artificially change at a fixed point in time. We give an intuitive definition and a mathematical formula for each of these reward functions below. * Determine the agent's contribution to the society's equity, by comparing the current equity with the equity if the agent did not act. The agent's goal is thus to maximize the society's equity. \[\mathtt{R}_{eq}(agent)=(1-\mathtt{Hoover}(Comforts))-(1-\mathtt{Hoover}( Comforts\setminus\{agent\}))\] * Determine the agent's contribution to over-consumption, by comparing the current over-consumed amount of energy, with the amount that would have been over-consumed if the agent did not act. The agent's goal is thus to minimize society's over-consumption. \[\mathtt{R}_{oc}(agent)=1-\frac{OC}{\sum_{\forall a}(\mathit{Consumed}_{a}+ \mathit{Stored}_{a})}-\frac{OC-(\mathit{Consumed}_{agent}+\mathit{Stored}_{ agent})}{\sum_{\forall a\neq agent}(\mathit{Consumed}_{a}+\mathit{Stored}_{a})}\] Figure 4: Illustration of the Smart Grid use-case. Multiple learning agents receive observations, and must decide to take actions, in order to exchange energy. * Simply return the agent's comfort, so that agents aim to maximize their comfort. This intuitively does not seem like an ethical stake, however it can be linked to Schwartz "hedonistic" value, and therefore is an ethical stake, focused on the individual aspect. 
We will mainly use this reward function in combination with others that focus on the societal aspect, to demonstrate the algorithms' capacity to learn opposed moral values. * A first, simple reward function that combines multiple objectives, namely the limitation of over-consumption and comfort. The goal of agents is thus to minimize the society's over-consumption while maximizing their own comfort. This may be a difficult task, because the simulation is designed so that there is a scarcity of energy most of the time, and agents will most likely over-consume if they all try to maximize their comfort. On the contrary, reducing the over-consumption means they need to diminish their comfort. There is thus a trade-off to be achieved between over-consumption and comfort. * A second, but also simple, multi-objective reward function. Instead of using a weighted sum, we multiply the rewards together. This function is more punitive than the sum, as a low reward cannot be "compensated". For example, let us consider a vector of reward components \([0.1,0.9]\). Using the weighted sum, the result depends on the weights: if the first component has a low coefficient, then the result may actually be high. On the contrary, the product will return \(0.1\times 0.9=0.09\), i.e., a very low reward. Any low component will penalize the final result. * A reward function that simulates a change in its definition after 2000 time steps, as if society's ethical mores had changed. During the first 2000 steps, it behaves similarly to the Over-Consumption reward function, whereas for later steps it returns the mean of the Over-Consumption and Equity rewards. * Similar to Adaptability1, this function simulates a change in its definition. We increase the difficulty by making 2 changes, one after 2000 time steps, and another after 6000 time steps, and by considering a combination of 3 rewards after the second change. As we can see, the various reward functions have different aims. Some simple functions, such as _equity_, _overconsumption_, or _comfort_, serve as baselines and building blocks for other functions. Nevertheless, they may be easy to optimize: for example, by consuming absolutely nothing, the _overconsumption_ function can be satisfied. On the contrary, the _comfort_ function can be satisfied by consuming the maximum amount of energy, such that the comfort is guaranteed to be close to \(1\). The two _multi-objective_ functions thus try to force agents to learn several stakes at the same time, especially when they are contradictory, such as _overconsumption_ and _comfort_. The agent thus cannot learn a "trivial" behaviour and must find the optimal behaviour that manages to satisfy both as much as possible. Finally, the _adaptability_ functions go a step further and evaluate agents' ability to adapt when the considerations change.

### Scenarii

In order to improve the richness of our experiments, we designed several scenarii. These scenarii are defined by two variables: the agents' consumption profile, and the environment's size, i.e., the number of agents. The prosumer (learning) agents are instantiated with a profile, determining their battery capacity, their action range, and their needs, i.e., the quantity of energy they want to consume at each hour. These needs are extracted from real consumption profiles; we propose two different versions, the _daily_ and the _annual_ profiles.
In the _daily_ version, needs are averaged over every day of the year, thus yielding a need for each hour of a day: this is illustrated in Figure 5. This is a simplified version, averaging the seasonal differences; its advantages are a reduced size, thus decreasing the required computational resources, a simpler learning, and an easier visualization for humans. On the other hand, the _annual_ version is more complete, contains seasonal differences, which improve the environment's richness and force agents to adapt to important changes. The second property is the environment size. We wanted to test our algorithms with different sets of agents, to ensure the scalability of the approach, in the sense that agents are able to learn a correct behaviour and adapt to many other agents in the same environment. This may be difficult as the number of agents increases, since there will most certainly be more conflicts. We propose a first, _small_ environment, containing \(20\) Households agents, \(5\) Office agents, and \(1\) School agent. The second environment, _medium_, contains roughly \(4\) times more agents than in the _small_ case: \(80\) Household agents, \(19\) Office agents, and \(1\) School. #### 4.3.1 DDPG and MADDPG baselines In order to prove our algorithms' advantages, we chose to compare them to the well-known DDPG (Lillicrap et al. 2015) and its multi-agent extension, MADDPG (Lowe et al. 2017). DDPG (_Deep Deterministic Policy Gradient_) is one of the algorithms that extended the success of Deep Reinforcement Learning to continuous domains (Lillicrap et al. 2015). It follows the quite popular Actor-Critic architecture, which uses two different Neural Networks: one for the Actor, i.e., to decide which action to perform at each time step, and another for the Critic, i.e., to evaluate whether an action is interesting. We chose it as a baseline since it focuses on problems with similar characteristics, e.g., continuous domains, and is a popular baseline in the community. MADDPG (_Multi-Agent Deep Deterministic Policy Gradient_), extends the idea of DDPG to the multi-agent setting (Lowe et al. 2017), by relying on the _Centralized Training - Decentralized Execution_ idea. It is one of the most used methods to improve multi-agent learning, by sharing data among agents during the learning phase. This helps agents make a model of other agents and adapt to their respective behaviours. However, during execution, sharing data in the same manner is often impracticable or undesirable, as it would impair privacy and require some sort of communication between agents; thus, data is not shared any more at this point (Decentralized Execution). As such, Centralized Training - Decentralized Execution makes a distinction between training and execution, and is thus inadequate for continuous learning, and constant adaptation to changes. On the other hand, if we were to make agents continuously learn with centralized data sharing, even in the execution phase, we would impair privacy of users that are represented or impacted by the agents. These reasons are why we chose not to use this setting for our own algorithms QSOM and QDSOM. While we do not use centralized training, we want to compare them to an algorithm that uses it, such as MADDPG, in order to determine whether there would be a performance gain, and what would be the trade-off between performance and privacy. 
Figure 5: The agents' needs for every hour of the day in the _daily_ profile.

In MADDPG, the Centralized Training is simply done by using a centralized Critic network, which receives observations, actions, and rewards from all agents, and evaluates all agents' actions. The Actor networks, however, are still individualized: each agent has its own network, which the other agents cannot access. During the training phase, the Critic network is updated thanks to the globally shared data, whereas Actor networks are updated through local data and the global Critic. Once the learning is done, the networks are frozen: the Critic does not require receiving global data any more, and the Actors do not rely on the Critic any more. Only the decision part, i.e., which action to take, is kept, by using the trained Actor network as-is.

### Results

Several sets of experiments were performed: * First, numerous experiments were launched to search for the best hyperparameters of each algorithm, to ensure a fair comparison later. Each set of hyperparameters was run \(10\) times to obtain average results and better statistical significance. In order to limit the number of runs and thus the computational resources required, we decided to focus on the _adaptability2_ reward for these experiments. This function is difficult enough so that the algorithms will not reach almost 100% immediately, which would make the hyperparameter search quite useless, and it is one of the 2 functions that interest us the most, along with _adaptability1_, so it makes sense that our algorithms are optimized for this one. The _annual_ consumption profile was used to increase the richness, but the environment size, i.e., the number of agents, was set to _small_ in order to once again reduce the computational power and time. * Then, the 4 algorithms, configured with their best hyperparameters, were compared on multiple settings: both _annual_ and _daily_ consumption profiles, both _small_ and _medium_ sizes of environment, and all the reward functions. This resulted in \(2\times 2\times 7\) scenarii, which we ran \(10\) times for each of the \(4\) algorithms. In the following results, we define a run's _score_ as the average of the global rewards per step. The global reward corresponds to the reward computed without focusing on a specific agent. For example, the _equity_ reward compares the Hoover index of the whole environment to a hypothetical environment without the agent. The global reward, in this case, is simply the Hoover index of the entire environment. This represents, intuitively, how the society of agents performed, globally. Taking the average is one of the simplest methods to get a single score for a given run, which allows comparing runs easily.

#### 4.4.1 Searching for hyperparameters

Table 1, Table 2, Table 3, and Table 4 summarize the best hyperparameters that have been found for each algorithm, based on the average score over runs obtained when using these parameters.
\begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Value & Description \\ \hline State-SOM shape & 12x12 & Shape of the neurons’ grid \\ State-SOM learning rate & 0.5 & Update speed of State-SOM neurons \\ Action-SOM shape & 3x3 & Shape of the neurons’ grid \\ Action-SOM learning rate & 0.2 & Update speed of Action-SOM neurons \\ Q Learning rate & 0.6 & Update speed of Q-Values \\ Discount rate & 0.9 & Controls the horizon of rewards \\ Action perturbation & gaussian & Method to randomly explore actions \\ Action noise & 0.06 & Parameter for the random noise distribution \\ Boltzmann temperature & 0.4 & Controls the exploration-exploitation \\ \hline \hline \end{tabular} \end{table} Table 1: Best hyperparameters on 10 runs for the _QSOM_ algorithm, using the _annual small_ scenario and _adaptability2_ reward function. \begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Value & Description \\ \hline State-DSOM shape & 12x12 & Shape of the neurons’ grid \\ State-DSOM learning rate & 0.8 & Update speed of State-DSOM neurons \\ State-DSOM elasticity & 1 & Coupling between State-DSOM neurons \\ Action-DSOM shape & 3x3 & Shape of the neurons’ grid \\ Action-DSOM learning rate & 0.7 & Update speed of Action-DSOM neurons \\ Action-DSOM elasticity & 1 & Coupling between Action-DSOM neurons \\ Q Learning rate & 0.8 & Update speed of Q-Values \\ Discount rate & 0.95 & Controls the horizon of rewards \\ Action perturbation & gaussian & Method to randomly explore actions \\ Action noise & 0.09 & Parameter for the random noise distribution \\ Boltzmann temperature & 0.6 & Controls the exploration-exploitation \\ \hline \hline \end{tabular} \end{table} Table 2: Best hyperparameters on 10 runs for the _QDSOM_ algorithm, using the _annual small_ scenario and _adaptability_2 reward function. \begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Value & Description \\ \hline Batch size & 256 & Number of samples to use for training at each step \\ Learning rate & 5e-04 & Update speed of neural networks \\ Discount rate & 0.99 & Controls the horizon of rewards \\ Tau & 5e-04 & Target network update rate \\ Action perturbation & gaussian & Method to randomly explore actions \\ Action noise & 0.11 & Parameter for the random noise distribution \\ \hline \hline \end{tabular} \end{table} Table 3: Best hyperparameters on 10 runs for the _DDPG_ algorithm, using the _annual small_ scenario and _adaptability_2 reward function. \begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Value & Description \\ \hline Batch size & 128 & Number of samples to use for training at each step \\ Buffer size & 50000 & Size of the replay memory. \\ Actor learning rate & 0.01 & Update speed of the Actor network \\ Critic learning rate & 0.001 & Update speed of the Critic network \\ Discount rate & 0.95 & Controls the horizon of rewards \\ Tau & 0.001 & Target network update rate \\ Noise & 0.02 & Controls a gaussian noise to explore actions \\ Epsilon & 0.05 & Controls the exploration-exploitation \\ \hline \hline \end{tabular} \end{table} Table 4: Best hyperparameters on 10 runs for the _MADDPG_ algorithm, using the _annual small_ scenario and _adaptability_2 reward function. Figure 6: Distribution of scores per learning algorithm, on every scenario, for 10 runs with each reward function. #### 4.4.2 Comparing algorithms The results presented in Figure 6 and Table 5 show that the QSOM algorithm performs better. 
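Before detailing the comparison, the following minimal sketch (an illustration, not the authors' code) shows how a run's score, as defined above, can be computed from the per-step global rewards. For the _equity_ function we assume that the global reward is taken as one minus the Hoover index, so that 1 corresponds to perfectly equal comforts; this convention is an assumption on our part.

```python
# Sketch of the run score: mean of the global rewards over all time steps.
import numpy as np

def hoover_index(values):
    """Hoover (Robin Hood) index: 0 = perfect equality, 1 = maximal inequality."""
    values = np.asarray(values, dtype=float)
    total = values.sum()
    if total == 0:
        return 0.0
    return 0.5 * np.abs(values / total - 1.0 / len(values)).sum()

def run_score(global_rewards_per_step):
    """Score of a run = average of the global rewards per step."""
    return float(np.mean(global_rewards_per_step))

# Example: comforts of the agents at one step, then a toy run of 3 steps.
comforts = [0.9, 0.8, 0.95, 0.7]
global_equity_reward = 1.0 - hoover_index(comforts)   # assumed convention
print(run_score([global_equity_reward, 0.97, 0.99]))
```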
We use the Wilcoxon statistical test, which is the non-parametric equivalent of the well-known T-test, to determine whether there is a statistically significant difference in the means of runs' scores between different algorithms. Wilcoxon's test, when used with the _greater_ alternative, assumes as a null hypothesis that the 2 algorithms have similar means, i.e., that the observed difference is negligible and only due to chance. The Wilcoxon method returns the _p-value_, i.e., the probability of observing a difference at least as large as the measured one if the null hypothesis were true. When \(p<\alpha=0.05\), we reject the null hypothesis and retain the alternative hypothesis instead. The alternative hypothesis, in this case, is that the QSOM algorithm obtains better results than its opposing algorithm.

\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
**RewardFunction** & **QSOM** & **QDSOM** & **DDPG** & **MADDPG** \\
\hline
**Scenario: daily / small** & & & & \\
equity & 1.00 (+/- 0.00 ) & 0.99 (+/- 0.00 ) & 0.99 (+/- 0.01 ) & 0.56 (+/- 0.11 ) \\
overconsumption & 0.88 (+/- 0.05 ) & 0.78 (+/- 0.08 ) & 0.87 (+/- 0.04 ) & 0.52 (+/- 0.15 ) \\
multiobj-sum & 0.91 (+/- 0.01 ) & 0.87 (+/- 0.04 ) & 0.87 (+/- 0.03 ) & 0.76 (+/- 0.11 ) \\
multiobj-prod & 0.85 (+/- 0.01 ) & 0.82 (+/- 0.02 ) & 0.84 (+/- 0.01 ) & 0.70 (+/- 0.07 ) \\
adaptability1 & 0.90 (+/- 0.02 ) & 0.87 (+/- 0.05 ) & 0.79 (+/- 0.07 ) & 0.68 (+/- 0.09 ) \\
adaptability2 & 0.89 (+/- 0.02 ) & 0.86 (+/- 0.02 ) & 0.82 (+/- 0.04 ) & 0.72 (+/- 0.08 ) \\
**Scenario: daily / medium** & & & & \\
equity & 1.00 (+/- 0.00 ) & 0.99 (+/- 0.00 ) & 1.00 (+/- 0.00 ) & 0.54 (+/- 0.06 ) \\
overconsumption & 0.89 (+/- 0.02 ) & 0.70 (+/- 0.05 ) & 0.90 (+/- 0.05 ) & 0.41 (+/- 0.03 ) \\
multiobj-sum & 0.88 (+/- 0.01 ) & 0.82 (+/- 0.02 ) & 0.84 (+/- 0.02 ) & 0.68 (+/- 0.03 ) \\
multiobj-prod & 0.85 (+/- 0.01 ) & 0.81 (+/- 0.01 ) & 0.84 (+/- 0.01 ) & 0.69 (+/- 0.02 ) \\
adaptability1 & 0.87 (+/- 0.01 ) & 0.83 (+/- 0.03 ) & 0.81 (+/- 0.03 ) & 0.64 (+/- 0.04 ) \\
adaptability2 & 0.88 (+/- 0.01 ) & 0.84 (+/- 0.02 ) & 0.79 (+/- 0.02 ) & 0.68 (+/- 0.02 ) \\
**Scenario: annual / small** & & & & \\
equity & 1.00 (+/- 0.00 ) & 0.99 (+/- 0.00 ) & 0.99 (+/- 0.01 ) & 0.54 (+/- 0.06 ) \\
overconsumption & 0.87 (+/- 0.05 ) & 0.70 (+/- 0.08 ) & 0.68 (+/- 0.14 ) & 0.37 (+/- 0.11 ) \\
multiobj-sum & 0.89 (+/- 0.02 ) & 0.81 (+/- 0.02 ) & 0.85 (+/- 0.04 ) & 0.62 (+/- 0.05 ) \\
multiobj-prod & 0.81 (+/- 0.00 ) & 0.78 (+/- 0.03 ) & 0.79 (+/- 0.02 ) & 0.66 (+/- 0.08 ) \\
adaptability1 & 0.87 (+/- 0.03 ) & 0.80 (+/- 0.07 ) & 0.75 (+/- 0.04 ) & 0.60 (+/- 0.09 ) \\
adaptability2 & 0.89 (+/- 0.02 ) & 0.84 (+/- 0.03 ) & 0.77 (+/- 0.04 ) & 0.63 (+/- 0.09 ) \\
**Scenario: annual / medium** & & & & \\
equity & 1.00 (+/- 0.00 ) & 0.99 (+/- 0.00 ) & 1.00 (+/- 0.00 ) & 0.53 (+/- 0.05 ) \\
overconsumption & 0.80 (+/- 0.04 ) & 0.63 (+/- 0.06 ) & 0.78 (+/- 0.10 ) & 0.33 (+/- 0.02 ) \\
multiobj-sum & 0.84 (+/- 0.01 ) & 0.77 (+/- 0.02 ) & 0.79 (+/- 0.02 ) & 0.63 (+/- 0.01 ) \\
multiobj-prod & 0.81 (+/- 0.01 ) & 0.76 (+/- 0.02 ) & 0.80 (+/- 0.01 ) & 0.65 (+/- 0.03 ) \\
adaptability1 & 0.82 (+/- 0.02 ) & 0.76 (+/- 0.02 ) & 0.74 (+/- 0.06 ) & 0.58 (+/- 0.02 ) \\
adaptability2 & 0.83 (+/- 0.02 ) & 0.77 (+/- 0.02 ) & 0.71 (+/- 0.06 ) & 0.62 (+/- 0.01 ) \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Average score for 10 runs of each algorithm, on each reward function and each scenario. The standard deviation is shown inside parentheses.
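The test itself is readily reproduced, e.g., with SciPy. The sketch below assumes the ten run scores of two algorithms are paired by run index; the numbers shown are purely illustrative and are not the paper's raw data.

```python
# Illustrative one-sided Wilcoxon signed-rank test between two algorithms' run scores.
import numpy as np
from scipy.stats import wilcoxon

qsom_scores   = np.array([0.89, 0.91, 0.88, 0.90, 0.87, 0.92, 0.89, 0.88, 0.90, 0.91])
maddpg_scores = np.array([0.63, 0.60, 0.65, 0.62, 0.64, 0.61, 0.66, 0.59, 0.63, 0.62])

stat, p_value = wilcoxon(qsom_scores, maddpg_scores, alternative='greater')
print(f"p-value = {p_value:.3g}")   # p < 0.05 -> QSOM scores are significantly higher
```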
We thus compare algorithms 2-by-2, on each reward function and scenario. The statistics, presented in Table 6, show that the QSOM algorithm statistically outperforms the other algorithms, in particular DDPG and MADDPG, on most scenarii and reward functions, except for a few cases, indicated by the absence of * next to the _p-value_. For example, DDPG obtains similar scores on the _daily / small overconsumption_ and _multiobj-prod_ cases, as well as _daily / medium overconsumption_, and _annual / medium overconsumption_. QDSOM is also quite on par with QSOM on the _daily / small adaptability1_ case. Yet, MADDPG is consistently outperformed by QSOM.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
 & \multicolumn{3}{c}{**Wilcoxon’s p-value (QSOM vs...)**} \\
\hline
**RewardFunction** & **QDSOM** & **DDPG** & **MADDPG** \\
\hline
**Scenario: daily / small** & & & \\
equity & 5.41e-06*** & 5.41e-06*** & 5.41e-06*** \\
overconsumption & 0.005748 ** & 0.289371 & 0.000103*** \\
multiobj-sum & 0.011615 * & 0.001045 ** & 0.000525*** \\
multiobj-prod & 0.000162*** & 0.071570 & 0.000752*** \\
adaptability1 & 0.061503 & 6.50e-05*** & 6.50e-05*** \\
adaptability2 & 0.001440 ** & 0.000363*** & 5.41e-06*** \\
**Scenario: daily / medium** & & & \\
equity & 5.41e-06*** & 5.41e-06*** & 5.41e-06*** \\
overconsumption & 5.41e-06*** & 0.973787 & 5.41e-06*** \\
multiobj-sum & 3.79e-05*** & 0.000162*** & 5.41e-06*** \\
multiobj-prod & 5.41e-06*** & 0.005748 ** & 5.41e-06*** \\
adaptability1 & 0.000363*** & 5.41e-06*** & 5.41e-06*** \\
adaptability2 & 1.08e-05*** & 5.41e-06*** & 5.41e-06*** \\
**Scenario: annual / small** & & & \\
equity & 5.41e-06*** & 5.41e-06*** & 5.41e-06*** \\
overconsumption & 3.79e-05*** & 0.000363*** & 5.41e-06*** \\
multiobj-sum & 5.41e-06*** & 0.009272 ** & 5.41e-06*** \\
multiobj-prod & 0.000363*** & 0.000525*** & 0.000752*** \\
adaptability1 & 0.007345 ** & 2.17e-05*** & 5.41e-06*** \\
adaptability2 & 0.000363*** & 5.41e-06*** & 5.41e-06*** \\
**Scenario: annual / medium** & & & \\
equity & 5.41e-06*** & 5.41e-06*** & 5.41e-06*** \\
overconsumption & 5.41e-06*** & 0.342105 & 5.41e-06*** \\
multiobj-sum & 5.41e-06*** & 5.41e-06*** & 5.41e-06*** \\
multiobj-prod & 5.41e-06*** & 5.41e-06*** & 5.41e-06*** \\
adaptability1 & 5.41e-06*** & 0.000103*** & 5.41e-06*** \\
adaptability2 & 3.79e-05*** & 5.41e-06*** & 5.41e-06*** \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Comparison of the _QSOM_ algorithm with the others, using a Wilcoxon statistical test, with the ‘greater’ alternative.

Figure 7 shows the evolution of individual rewards received by agents over the time steps, in the _annual_ / _small_ scenario, using the _adaptability2_ reward function. We chose to focus on this combination of scenario and reward function as they are, arguably, the most interesting. _Daily_ scenarii are perhaps too easy for the agents as they do not include as many variations as the _annual_ ones; additionally, _small_ scenarios are easier to visualize and explore, as they contain fewer agents than _medium_ scenarios. Finally, the _adaptability2_ reward function is retained for the same reasons that made us choose it for the hyperparameters search.
\begin{table}
\begin{tabular}{l l l l}
\hline \hline
 & \multicolumn{3}{c}{**Wilcoxon’s p-value (QDSOM vs...)**} \\
\hline
**RewardFunction** & **QSOM** & **DDPG** & **MADDPG** \\
\hline
**Scenario: daily / small** & & & \\
equity & 1.000 & 0.999475 & 5.41e-06*** \\
overconsumption & 0.996 & 0.995535 & 0.000752*** \\
multiobj-sum & 0.991 & 0.684736 & 0.014403 * \\
multiobj-prod & 1.000 & 0.998560 & 0.000752*** \\
adaptability1 & 0.947 & 0.007345 ** & 0.000363*** \\
adaptability2 & 0.999 & 0.007345 ** & 5.41e-06*** \\
**Scenario: daily / medium** & & & \\
equity & 1.000 & 0.999935 & 5.41e-06*** \\
overconsumption & 1.000 & 0.999989 & 5.41e-06*** \\
multiobj-sum & 1.000 & 0.982269 & 5.41e-06*** \\
multiobj-prod & 1.000 & 1.000000 & 5.41e-06*** \\
adaptability1 & 1.000 & 0.082747 & 5.41e-06*** \\
adaptability2 & 1.000 & 2.17e-05*** & 5.41e-06*** \\
**Scenario: annual / small** & & & \\
equity & 1.000 & 0.990728 & 5.41e-06*** \\
overconsumption & 1.000 & 0.369682 & 2.17e-05*** \\
multiobj-sum & 1.000 & 0.997402 & 5.41e-06*** \\
multiobj-prod & 1.000 & 0.938497 & 0.001943 ** \\
adaptability1 & 0.994 & 0.037628 * & 6.50e-05*** \\
adaptability2 & 1.000 & 0.000363*** & 0.000103*** \\
**Scenario: annual / medium** & & & \\
equity & 1.000 & 0.999475 & 5.41e-06*** \\
overconsumption & 1.000 & 0.999637 & 5.41e-06*** \\
multiobj-sum & 1.000 & 0.992655 & 5.41e-06*** \\
multiobj-prod & 1.000 & 1.000000 & 5.41e-06*** \\
adaptability1 & 1.000 & 0.369682 & 5.41e-06*** \\
adaptability2 & 1.000 & 0.001440 ** & 5.41e-06*** \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Comparison of the _QDSOM_ algorithm with the others, using a Wilcoxon statistical test, with the ‘greater’ alternative.

We show a moving average of the rewards in order to smooth out the small, local variations and highlight the larger trend of the rewards' evolution. We can see from the results that the _small_ scenarii seem to yield a slightly better score than the _medium_ scenarii. Thus, agents are impacted by the increased number of other agents, and have difficulties learning the whole environment dynamics. Still, the results reported for the _medium_ scenarii are near the _small_ results, and very close to \(1\). Even though there is indeed an effect of the environment size on the score, this hints towards the scalability of our approach, as the agents managed to learn a "good" behaviour that yields high rewards.

## 5 Discussion

In this article, we presented two new reinforcement learning algorithms, QSOM and QDSOM. We recall that the principal aspects and limitations identified in the State of the Art were the following:

* Using continuous and multi-dimensional domains to improve the environment's richness.
* Continuously learning and adapting to changes in the environment, including in the reward function, i.e., the structure that encodes and captures the ethical considerations that agents should learn to exhibit.
* Learning in a multi-agent setting, by taking into account the difficulties posed by the presence of other agents.

The continuous and multi-dimensional aspect was solved by design, thanks to the SOMs and DSOMs that we use in our algorithms. They learn to handle the complex observation and action domains, while advantageously offering a discrete representation that can be leveraged with the Q-Table, permitting a modular approach. This modular approach, and the use of Q-Tables, allow, for example, comparing different actions, which is not always possible in end-to-end Deep Neural Networks.
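As an illustration of this modular decision loop, the following sketch (our reconstruction from the description above, not the authors' code) shows a QSOM-style action selection: the State-SOM discretizes the observation, the Q-Table scores the Action-SOM prototypes, Boltzmann exploration picks one, and a Gaussian perturbation is applied. The temperature and noise values come from Table 1; the clipping of actions to \([0,1]\) is an assumption.

```python
# Schematic QSOM decision step (reconstruction, not the original implementation).
import numpy as np

rng = np.random.default_rng(0)
n_state_units, n_action_units, obs_dim, act_dim = 144, 9, 11, 6   # 12x12 and 3x3 grids
state_som  = rng.random((n_state_units, obs_dim))    # prototype vectors of the State-SOM
action_som = rng.random((n_action_units, act_dim))   # prototype vectors of the Action-SOM
q_table    = np.zeros((n_state_units, n_action_units))

def act(observation, temperature=0.4, noise=0.06):
    s = np.argmin(np.linalg.norm(state_som - observation, axis=1))   # best-matching state unit
    probs = np.exp(q_table[s] / temperature)
    probs /= probs.sum()                                             # Boltzmann exploration
    a = rng.choice(n_action_units, p=probs)
    action = np.clip(action_som[a] + rng.normal(0.0, noise, act_dim), 0.0, 1.0)
    return s, a, action

print(act(rng.random(obs_dim)))
```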
The continuous adaptation was also handled by our design choices, notably by disabling traditional convergence mechanisms. The use of (D)SOMs also helps, as the representation may shift over time by moving the neurons. Additionally, our experiments highlight the ability of our algorithms to adapt, especially when compared to other algorithms, through the specific _adaptability1_ and _adaptability2_ functions. Finally, challenges raised by the multi-agent aspect were partially answered by the use of Difference Rewards to create the reward functions. On the other hand, the agents themselves have no specific mechanism that helps them learn a behaviour while taking into account the other agents in the shared environment, e.g., contrary to Centralized Training algorithms such as MADDPG. Nevertheless, our algorithms managed to perform better than MADDPG on the proposed scenarii and reward functions, which means that this limitation is not crippling.

Figure 7: Agents Rewards

Our algorithms still suffer from a few limitations that we highlight here.

* As we already mentioned, the multi-agent aspect could be improved, for example by adding communication mechanisms between agents. Indeed, by being able to communicate, agents could coordinate their actions so that the joint-action could be even better. Let us assume that an agent, which approximately learned the environment dynamics, believes that there is not much consumption at 3AM, and chooses the strategy of replenishing its battery at this moment, so as to have a minimal impact on the grid. Another agent may, at some point, face an urgent situation that requires it to consume exceptionally at 3AM this day. Without coordination, the 2 agents will both consume an important amount of energy at the same time, thus impacting the grid and potentially over-consuming. On the other hand, if the agents could communicate, the second one may inform other agents of its urgency. The first one would perhaps choose to consume only at 4AM, or they would both negotiate an amount of energy to share, in the end proposing a better joint-action than the uncoordinated sum of their individual actions. However, such communication should be carefully designed in a privacy-respectful manner.
* The algorithms have not been tested on other (baseline) environments. This limits the generality of their results: it might happen that their success, compared to DDPG and MADDPG especially, is due to some specificities of the Smart Grid environment. In particular, recent Deep Reinforcement Learning algorithms target use-cases with huge state-action spaces, e.g., taking input directly from the screen (pixels), or emitting physical actions concerning dozens of joints. Although our use-case used more dimensions (11 for states and 6 for actions) than Smith's original experiments, it does not compare to such environments. The performance of QSOM and QDSOM on them thus remains uncertain.
2306.17307
Interference mitigation with block diagonalization for IRS-aided MU-MIMO communications
This work investigates interference mitigation techniques in multi-user multiple input multiple output (MU-MIMO) Intelligent Reflecting Surface (IRS)-aided networks, focusing on the base station end. Two methods of precoder design based on block diagonalization are proposed. The first method does not consider the interference caused by the IRS, seeking to mitigate only the multi-user interference. The second method mitigates both the IRS-caused interference and the multi-user interference. A comparison of both methods with a no-IRS MU-MIMO network with strong direct links is provided. The results show that, although in some circumstances IRS interference can be neglected, treating it can improve system capacity and provide higher spectral efficiency.
Wilker de O. Feitosa, Igor M. Guerreiro, Fco. Rodrigo P. Cavalcanti, Tarcisio F. Maciel, Maria Clara R. Lobão, Fazal-E-Asim, Behrooz Makki, Gábor Fodor
2023-06-29T21:31:19Z
http://arxiv.org/abs/2306.17307v1
# Interference mitigation with block diagonalization for IRS-aided MU-MIMO communications

###### Abstract

This work investigates interference mitigation techniques in multi-user multiple input multiple output (MU-MIMO) Intelligent Reflecting Surface (IRS)-aided networks, focusing on the base station end. Two methods of precoder design based on block diagonalization are proposed. The first method does not consider the interference caused by the IRS, seeking to mitigate only the multi-user interference. The second method mitigates both the IRS-caused interference and the multi-user interference. A comparison of both methods with a no-IRS MU-MIMO network with strong direct links is provided. The results show that, although in some circumstances IRS interference can be neglected, treating it can improve system capacity and provide higher spectral efficiency. MU-MIMO, Interference Mitigation, Block Diagonalization, Intelligent Reflecting Surfaces.

## I Introduction

The sixth generation (6G) of cellular networks is expected to present significant advances in terms of system capacity, energy efficiency, number of supported users and spectral efficiency (SE) compared to the fifth generation (5G) [1]. To accomplish this goal, new physical layer technologies are investigated to take more advantage of the propagation features of the environment, such as intelligent reflecting surfaces (IRSs) [2], sub-Terahertz bands, distributed multiple input multiple output (MIMO), among others [3]. Multi-user MIMO (MU-MIMO), as a well-established key technology in mobile wireless systems due to its advantages in spatial diversity and multiplexing, will also play an important role in 6G, where its beamforming gains and improvements in SE are desired and enhanced when combined with the aforementioned technologies. One of the challenges in MU-MIMO systems is to deal with multi-user interference. Various methods for mitigating interference on the receiver end, as well as at the transmitter, have been developed over the past years, for instance, the design of robust decoding and precoding filters using methods like zero-forcing (ZF) [4] and block diagonalization (BD) [5]. In 5G and beyond systems, the use of millimeter wave (mmWave) bands is highly desirable due to the large bandwidth available in the spectrum. However, since high frequencies are bound to severe pathloss and penetration loss, communication in those bands is much more susceptible to blockage and poor link conditions without a line of sight (LOS) component. Such effects can even interrupt the connection, thus limiting the capacity of the system. In order to overcome these limitations, the use of mmWave is usually combined with other technologies like massive MIMO (mMIMO) and ultra-dense networks [6]. In these circumstances, the concept of smart radio environment (SRE) can be introduced, which states that the wireless environment can be partially turned into an optimization variable that, jointly with transmitter and receiver properties, can be used to maximize the overall network performance [7]. The concept of IRSs is a candidate enabler for SRE since IRSs can modify the environment in a controlled manner. For instance, an IRS can create a virtual LOS component, higher-rank channels, or attenuate undesired signals [8]. IRSs also act as antenna arrays, improving signal quality by applying beamforming to the desired signal.
Thus, SRE aided by IRSs can be leveraged to diminish the effects of propagation losses, improve the coverage and increase the SE by optimizing the environment between transmitter and receiver to achieve better link conditions [9]. Considering the high pathloss and blockage probability at mmWave bands, the use of IRSs can provide beamforming gains and additional paths to users under poor propagation conditions, thus improving the capacity of MU-MIMO systems. Nonetheless, it is important to notice that even a fully passive IRS, i.e., an IRS without radio frequency (RF) chains, can introduce interference to untargeted users and base stations (BSs), e.g., other nodes near the intended user. This raises the question about the need to mitigate such interference and the means to do it. To manage the interference introduced by IRSs, the authors in [10] employ an orthogonalization scheme based on BD in a MU-MIMO scenario. However, their BD approach demands the use of at least one IRS per user. The authors in [11] also address this problem by minimizing the symbol error rate (SER) using 1-bit analog-to-digital converters (ADCs) on the BS side. In this paper, we study the problem of interference management in IRS-assisted MU-MIMO networks with a single IRS. We propose two precoding methods based on BD. Compared to the solution in [10], our proposed methods mitigate the interference caused by IRSs while not only being less demanding in terms of computational complexity, but also using fewer RF chains at both the transmitter and the receiver. The first method considers the interference caused by the IRS as negligible and focuses only on multi-user interference mitigation. The second method takes both types of interference into account and also uses part of the IRS signal towards untargeted users as useful signal. The results show that interference can be mitigated with both methods, and in particular, the second method presents the highest SE in comparison with the other method and the state-of-the-art.

## II System model

Consider the downlink of a system composed of a BS with \(M\) antennas serving simultaneously two co-channel user equipments (UEs), as shown in Fig. 1. UE1 is equipped with \(Q\) antennas and has no direct path to the BS due to, e.g., a strong blockage effect. Therefore, the BS serves UE1 _via_ an IRS with \(N\) reflecting elements. UE2, with \(P\) antennas, has a strong direct link to the BS; thus, the use of the IRS for UE2 is optional. The IRS phase-shift controller is assumed to work ideally with the BS. While serving UE1 _via_ IRS, UE2 receives from the BS an unintended signal due to beam leakage, whose intensity depends on the propagation conditions and on the BS transmit power.
The received signal model is given as: \[\begin{bmatrix}\mathbf{y}_{1}\\ \mathbf{y}_{2}\end{bmatrix}=\begin{bmatrix}\mathbf{W}_{1}^{\text{H}}&\mathbf{0}\\ \mathbf{0}&\mathbf{W}_{2}^{\text{H}}\end{bmatrix}\begin{bmatrix}\bar{\mathbf{H}}_{1}\\ \bar{\mathbf{H}}_{2}\end{bmatrix}\begin{bmatrix}\mathbf{F}_{1}&\mathbf{F}_{2}\end{bmatrix}\begin{bmatrix}\mathbf{x}_{1}\\ \mathbf{x}_{2}\end{bmatrix}+\begin{bmatrix}\bar{\mathbf{n}}_{1}\\ \bar{\mathbf{n}}_{2}\end{bmatrix}, \tag{1}\] where \(\mathbf{W}_{1}\in\mathbb{C}^{Q\times N_{s}^{(1)}}\) represents the combiner for UE1, \(\mathbf{F}_{1}\in\mathbb{C}^{M\times N_{s}^{(1)}}\) is the digital baseband precoder for UE1, \(\mathbf{n}_{1}\sim\mathcal{CN}\left(\mathbf{0},\sigma_{1}^{2}\mathbf{I}_{Q}\right)\) is the circularly symmetric additive white Gaussian noise vector with variance \(\sigma_{1}^{2}\), and \(\mathbf{x}_{1}\in\mathbb{C}^{N_{s}^{(1)}\times 1}\) is the data vector for UE1. Similarly, \(\mathbf{W}_{2}\in\mathbb{C}^{P\times N_{s}^{(2)}}\) is the combiner for UE2, \(\mathbf{F}_{2}\in\mathbb{C}^{M\times N_{s}^{(2)}}\) is the digital baseband precoder for UE2, \(\mathbf{n}_{2}\sim\mathcal{CN}\left(\mathbf{0},\sigma_{2}^{2}\mathbf{I}_{P}\right)\) is the circularly symmetric additive white Gaussian noise vector with variance \(\sigma_{2}^{2}\), and finally, \(\mathbf{x}_{2}\in\mathbb{C}^{N_{s}^{(2)}\times 1}\) is the data vector for UE2. Lastly, \(\mathbf{I}_{a}\) denotes the \(a\times a\) identity matrix and \(\mathbf{0}\) is a vector of zeros of proper size.

### _Signal Model_

Given the propagation scenario depicted in Fig. 1, the channel for UE1 can be defined as \(\bar{\mathbf{H}}_{1}=\mathbf{G}_{1}\mathbf{\Omega}\mathbf{J}\in\mathbb{C}^{Q\times M}\); hence, the complete equation of the received signal for UE1 is written as \[\mathbf{y}_{1}= \mathbf{W}_{1}^{\text{H}}\mathbf{G}_{1}\mathbf{\Omega}\mathbf{J}\mathbf{F}_{1}\mathbf{x}_{1}+\underbrace{\mathbf{W}_{1}^{\text{H}}\mathbf{G}_{1}\mathbf{\Omega}\mathbf{J}\mathbf{F}_{2}\mathbf{x}_{2}}_{\text{interference}}+\underbrace{\mathbf{W}_{1}^{\text{H}}\mathbf{n}_{1}}_{\bar{\mathbf{n}}_{1}}\in\mathbb{C}^{N_{s}^{(1)}\times 1}, \tag{2}\] where \(\mathbf{J}\in\mathbb{C}^{N\times M}\) is the channel between the BS and the IRS, \(\mathbf{G}_{1}\in\mathbb{C}^{Q\times N}\) is the channel between the IRS and UE1, and \(\mathbf{\Omega}=\text{diag}\left\{\boldsymbol{\omega}\right\}\), \(\boldsymbol{\omega}\in\mathbb{C}^{N\times 1}\), is the IRS phase-shift matrix. Based on (2), the signal-to-interference-plus-noise-ratio (SINR) \(\gamma_{1}\) for UE1 is given by \[\gamma_{1}=\operatorname{tr}\left[\mathbf{W}_{1}^{\text{H}}\mathbf{G}_{1}\mathbf{\Omega}\mathbf{J}\mathbf{F}_{1}\mathbf{F}_{1}^{\text{H}}\mathbf{J}^{\text{H}}\mathbf{\Omega}^{\text{H}}\mathbf{G}_{1}^{\text{H}}\mathbf{W}_{1}\mathbf{R}_{1}^{-1}\right]\,, \tag{3}\] where \(\mathbf{R}_{1}=\sigma_{1}^{2}\mathbf{I}_{N_{s}^{(1)}}+\mathbf{W}_{1}^{\text{H}}\bar{\mathbf{H}}_{1}\mathbf{F}_{2}\mathbf{F}_{2}^{\text{H}}\bar{\mathbf{H}}_{1}^{\text{H}}\mathbf{W}_{1}\), and \(\operatorname{tr}[\cdot]\) denotes the trace operator. Likewise, the channel for UE2 can be defined as \(\bar{\mathbf{H}}_{2}=\mathbf{H}_{2}+\mathbf{G}_{2}\mathbf{\Omega}\mathbf{J}\in\mathbb{C}^{P\times M}\).
Therefore, the received signal for UE2 from (1) is given by \[\mathbf{y}_{2}=\mathbf{W}_{2}^{\text{H}}\mathbf{H}_{2}\mathbf{F}_ {2}\mathbf{x}_{2}+\mathbf{W}_{2}^{\text{H}}\mathbf{G}_{2}\mathbf{\Omega J} \mathbf{F}_{2}\mathbf{x}_{2}+\] \[\underbrace{\mathbf{W}_{2}^{\text{H}}\mathbf{H}_{2}\mathbf{F}_{1} \mathbf{x}_{1}+\mathbf{W}_{2}^{\text{H}}\mathbf{G}_{2}\mathbf{\Omega J}\mathbf{F }_{1}\mathbf{x}_{1}}_{\text{interference}}+\underbrace{\mathbf{W}_{2}^{\text{H}} \mathbf{n}_{2}}_{\bar{\mathbf{n}}_{2}}\in\mathbb{C}^{N_{s}^{(2)}\times 1}, \tag{4}\] where \(\mathbf{H}_{2}\in\mathbb{C}^{P\times M}\) is the direct channel between the BS and the UE2, \(\mathbf{G}_{2}\in\mathbb{C}^{P\times N}\) is the leakage channel between the IRS and UE2. Now let \(\mathbf{R}_{2}=\sigma_{2}^{2}\mathbf{I}_{N_{s}^{(2)}}+\mathbf{W}_{2}^{\text{H} }\bar{\mathbf{H}}_{2}\mathbf{F}_{1}\mathbf{F}_{1}^{\text{H}}\bar{\mathbf{H}}_{2 }^{\text{H}}\mathbf{W}_{2}\). Then, based on (4), the SINR for UE2 is calculated by (5), given on top of the next page. The key performance indicators used for comparing the techniques are the SE and the sum SE. The SE can be calculated as \(\epsilon_{j}=\log_{2}\det[\mathbf{I}_{N_{s}}+\gamma_{j}],j\in\{1,2\}\), and the sum SE is defined as \(\epsilon_{\text{sum}}=\sum_{j\in\{1,2\}}\epsilon_{j}\). It is also important to notice that mmWave and Terahertz systems tend to have fewer RF chains in their configurations, due to their massive number of antennas. In this paper, all hybrid beamforming is done considering that the number of RF chains is smaller than the number of antennas [12]. ### _Propagation Model_ Considering the scenario illustrated in Fig. 1, our adopted channel model [13] is given by \[\mathbf{H}_{r,t}=\sqrt{\frac{K}{K+1}}\,A_{0}\mathbf{a}_{r}\left( \theta_{r,0},\phi_{r,0}\right)\mathbf{a}_{t}^{\text{T}}\left(\theta_{t,0},\phi_{t,0}\right)+\] \[\sqrt{\frac{1}{K+1}}\Bigg{(}\frac{1}{\sqrt{S}}\sum_{s=1}^{S}A_{s} \mathbf{a}_{r}\left(\theta_{r,s},\phi_{r,s}\right)\mathbf{a}_{t}^{\text{T}} \left(\theta_{t,s},\phi_{t,s}\right)\Bigg{)}\,, \tag{6}\] in which \(K\) is the Rician K-factor; \(\mathbf{a}_{r}\left(\theta_{r,s},\phi_{r,s}\right)\) and \(\mathbf{a}_{t}\left(\theta_{t,s},\phi_{t,s}\right)\) represent the steering vectors at the receiver \(r\) and the transmitter \(t\), respectively, for the \(s\)-th ray, with \(s=0,\ldots,S\). The index \(s=0\) indicates the LOS component of the channel. The angles \(\theta\) and \(\phi\) correspond to the horizontal and vertical directions, respectively. The term \(A_{s}\) is the channel coefficient that contains the pathloss, shadowing and fast-fading. The channels between the receiver \(r\) and transmitter \(t\), \(\mathbf{H}_{r,t}\), are defined as \(\mathbf{H}_{\text{IRS,BS}}\) = \(\mathbf{J}\), \(\mathbf{H}_{\text{UE2,BS}}\) = \(\mathbf{H}_{2}\), \(\mathbf{H}_{\text{UE1,IRS}}\) = \(\mathbf{G}_{1}\) and \(\mathbf{H}_{\text{UE2,IRS}}\) = \(\mathbf{G}_{2}\). Fig. 1: Multi-user MIMO-IRS assisted systems For the modeling of the steering vectors, the IRS is designed as a uniform rectangular array (URA). The UEs and the BS are considered to be equipped with horizontal uniform linear arrays (ULAs), for which the angle \(\phi\) is disregarded. 
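For illustration, a simplified numpy sketch of the channel model in (6) for ULA-equipped nodes (azimuth only) is given below; the LOS gain is set to one and the parameter values are placeholders, not the simulation settings of Section IV.

```python
# Simplified Rician channel per eq. (6) with half-wavelength ULA steering vectors.
import numpy as np

def ula_steering(n_ant, theta):
    """Half-wavelength ULA steering vector for azimuth angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))

def rician_channel(n_rx, n_tx, K_dB=9.0, S=10, rng=np.random.default_rng(0)):
    K = 10 ** (K_dB / 10)                      # Rician K-factor (9 dB mean, as in the text)
    los = ula_steering(n_rx, rng.uniform(-np.pi/2, np.pi/2))[:, None] @ \
          ula_steering(n_tx, rng.uniform(-np.pi/2, np.pi/2))[None, :]
    nlos = np.zeros((n_rx, n_tx), complex)
    for _ in range(S):                         # S NLOS rays with random gains A_s
        a_s = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        nlos += a_s * ula_steering(n_rx, rng.uniform(-np.pi/2, np.pi/2))[:, None] @ \
                       ula_steering(n_tx, rng.uniform(-np.pi/2, np.pi/2))[None, :]
    return np.sqrt(K / (K + 1)) * los + np.sqrt(1 / (K + 1)) * nlos / np.sqrt(S)

H2 = rician_channel(8, 32)   # e.g., the direct BS -> UE2 link with P = 8, M = 32
print(H2.shape)
```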
Due to the use of mmWave bands, the pathloss, shadowing and Rician K-factor for the considered system are modeled according to [14] considering the urban macro (UMa) scenario: \[\text{PL}=28+22\log_{10}\left(d_{3D}\right)+20\log_{10}\left(f_{c}\right), \tag{7}\] in which \(d_{3D}\) is the absolute distance between the transmitter and the receiver and \(f_{c}\) is the carrier frequency. The shadow fading is modeled according to a log-normal distribution with standard deviation \(\sigma=4\) dB. The Rician K-factor also follows a log-normal distribution, with \(K\sim\mathcal{N}\left(9,3.5\right)\) dB [14], for all the links in Fig. 1. The scattering in a UMa scenario is considered to be rich, i.e., the channel has a large number of multi-paths. Therefore, all channels in the studied scenario are considered to have full rank.

### _IRS Phase Shift Setting_

Seeing that the main use of the IRS in this work is to provide an alternative path for users under blockage, for the design of the IRS phase-shift, a singular value decomposition (SVD) is performed on the channels \(\mathbf{J}\) and \(\mathbf{G}_{1}\). The phase-shift vector \(\boldsymbol{\omega}\) is generated through the Hadamard product of the left singular vector of \(\mathbf{J}\) associated with its highest singular value, \(\mathbf{u}_{\mathbf{J}}\), and the right singular vector of \(\mathbf{G}_{1}\) associated with its highest singular value, \(\mathbf{v}_{\mathbf{G}_{1}}^{*}\): \[\boldsymbol{\omega}=-\angle(\mathbf{u}_{\mathbf{J}}\odot\mathbf{v}_{\mathbf{G}_{1}}^{*})\in\mathbb{C}^{N\times 1}\,, \tag{8}\] where \(\odot\) denotes the Hadamard product, and \(*\) is the conjugate of a vector.

## III Proposed Precoding Design

Our proposed IRS-aided network operates in two stages: i) the IRS phase-shift design, also known as IRS passive beamforming, and then ii) the computation of the digital precoder and combiner. To support multi-user communication, we propose two BD-based precoding schemes implemented at the BS. On the UEs side, a traditional ZF combiner matched to the intended channel is employed. In this study, all channel responses are assumed known at the BS, relying on the fact that, in practice, they can be estimated, as, e.g., demonstrated in [15, 16, 17].

### _General BD Framework_

When it comes to MU-MIMO interference mitigation techniques, BD is well-known for its efficiency in maximizing either the throughput or the fairness of the system [5]. BD-based precoders have the potential to cancel interference toward non-intended users by introducing the following constraint: \[\tilde{\mathbf{H}}_{u}\mathbf{F}_{k}=\mathbf{0}_{N_{M,u}\times N_{M,u}},\quad\forall u\neq k, \tag{9}\] where \(u,k=1,\ldots,L\) represent UE indexes, \(L\) is the total number of UEs in the system, and \(\tilde{\mathbf{H}}_{l}\) is the complementary channel of the \(l\)-th UE, defined as: \[\tilde{\mathbf{H}}_{l}=\begin{bmatrix}\mathbf{H}_{1}^{\mathsf{T}}&\ldots&\mathbf{H}_{l-1}^{\mathsf{T}}&\mathbf{H}_{l+1}^{\mathsf{T}}&\ldots&\mathbf{H}_{L}^{\mathsf{T}}\end{bmatrix}^{\mathsf{T}}. \tag{10}\] In order to achieve the result in (9) and combine the signal for the intended user coherently, it is necessary for the precoder \(\mathbf{F}_{k}\) to lie in the null-space of \(\tilde{\mathbf{H}}_{l}\) and on the signal-space of \(\mathbf{H}_{k}\), both of which can be obtained from the SVDs of \(\tilde{\mathbf{H}}_{l}\) and \(\mathbf{H}_{k}\), respectively.
This technique must obey the restriction that the number of transmitting antennas should be greater than or equal to the total number of receiving antennas. The precoder \(\mathbf{F}_{k}\) is constructed as follows: \[\tilde{\mathbf{H}}_{l}=\tilde{\mathbf{U}}_{l}\tilde{\mathbf{\Lambda}}_{l}\begin{bmatrix}\tilde{\mathbf{V}}_{l}^{\text{non-zero}}&\tilde{\mathbf{V}}_{l}^{\text{zero}}\end{bmatrix}^{\mathsf{H}}, \tag{11}\] \[\mathbf{H}_{k}\tilde{\mathbf{V}}_{l}^{\text{zero}}=\mathbf{U}_{k}\mathbf{\Lambda}_{k}\begin{bmatrix}\mathbf{V}_{k}^{\text{non-zero}}&\mathbf{V}_{k}^{\text{zero}}\end{bmatrix}^{\mathsf{H}},\] \[\mathbf{F}_{k}=\tilde{\mathbf{V}}_{l}^{\text{zero}}\mathbf{V}_{k}^{\text{non-zero}}.\] As shown in [10], the classic BD technique is not feasible for IRS-aided scenarios, given that the UEs' channels and the IRS phase-shift vectors are coupled, which leads to the right singular vectors of \(\tilde{\mathbf{H}}_{l}\) and \(\mathbf{H}_{k}\) not being disjoint. In this context, two techniques are proposed in the sequel to overcome this issue.

### _Partial IRS BD (PIB) precoder design_

In this first approach, the precise beamforming of the IRS [18] is taken into account and it is assumed that the beam leakage is minimal. Therefore, for the system presented in Fig. 1, the IRS leakage channel \(\mathbf{G}_{2}\mathbf{\Omega}\mathbf{J}\) is neglected in the precoder design, and \(\bar{\mathbf{H}}_{1}=\mathbf{G}_{1}\mathbf{\Omega}\mathbf{J}\) and \(\mathbf{H}_{2}\) are the only contemplated channels, which can be classically block diagonalized by considering the complementary channels as follows: \[\tilde{\mathbf{H}}_{1}=\mathbf{H}_{2}, \tag{12}\] \[\tilde{\mathbf{H}}_{2}=\mathbf{G}_{1}\mathbf{\Omega}\mathbf{J}.\] Given the complementary channels in (12), the precoder can be designed using (11). The assumption that \(\mathbf{G}_{2}\mathbf{\Omega}\mathbf{J}\) is negligible compared to \(\mathbf{H}_{2}\) makes this method suitable for interference mitigation in the considered scenario. The validity of this assumption will be further analyzed in Section IV.

### _Full IRS BD (FIB) precoder design_

Unlike the previous method, this second technique considers in its design the contributions of \(\mathbf{G}_{2}\mathbf{\Omega}\mathbf{J}\) on both the useful signal and the interference components of (4). To do that, instead of considering the channel \(\bar{\mathbf{H}}_{2}=\mathbf{H}_{2}+\mathbf{G}_{2}\mathbf{\Omega}\mathbf{J}\) of (1), these components are treated as two separate independent channels of UE2. Based on this consideration, the complementary channels become \[\tilde{\mathbf{H}}_{1} =\left[\mathbf{H}_{2}^{\mathrm{T}}\quad\left(\mathbf{G}_{2}\mathbf{\Omega J}\right)^{\mathrm{T}}\right]^{\mathrm{T}}, \tag{13}\] \[\tilde{\mathbf{H}}_{2} =\mathbf{G}_{1}\mathbf{\Omega J}.\] It is important to notice that, because the IRS channels are coupled, i.e., they depend on the IRS phase-shift vector, the BD technique is not able to fully cancel all the interfering signals [10]. Thus, in the configuration of (13), the signals traversing these coupled channels can be completely canceled after applying the precoder designed with (11), at the cost of receiving some residual interference from the direct channel \(\mathbf{H}_{2}\).
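The following numpy sketch (an illustration under the equations above, not the authors' simulator) puts the pieces together: the IRS phases follow (8), the complementary channels of (12) and (13) are formed, and the BD construction of (11) is applied. The choice of UE2's own channel for the FIB precoder \(\mathbf{F}_{2}\) is an assumption of ours. The final norms check that the targeted interference terms are indeed cancelled numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
M, N, P, Q, Ns = 32, 64, 8, 8, 2                     # BS / IRS / UE2 / UE1 sizes, streams

J, G1, G2, H2 = crandn(N, M), crandn(Q, N), crandn(P, N), crandn(P, M)

# IRS phase shifts, eq. (8): dominant singular vectors of J and G1.
u_J = np.linalg.svd(J)[0][:, 0]
v_G1_conj = np.linalg.svd(G1)[2][0, :]               # conjugate of the dominant right singular vector
Omega = np.diag(np.exp(-1j * np.angle(u_J * v_G1_conj)))

def bd_precoder(H_own, H_comp, n_streams):
    """Eq. (11): project onto null(H_comp), then keep the strongest modes of H_own."""
    _, s, Vh = np.linalg.svd(H_comp)
    V_zero = Vh.conj().T[:, np.sum(s > 1e-10):]      # null-space basis of the complementary channel
    Vh_eff = np.linalg.svd(H_own @ V_zero)[2]
    return V_zero @ Vh_eff.conj().T[:, :n_streams]

H1_eff = G1 @ Omega @ J                              # BS-IRS-UE1 effective channel
H2_leak = G2 @ Omega @ J                             # IRS leakage channel towards UE2

# PIB, eq. (12): only H2 and G1*Omega*J are considered.
F1_pib = bd_precoder(H1_eff, H2, Ns)
F2_pib = bd_precoder(H2, H1_eff, Ns)

# FIB, eq. (13): the leakage channel is stacked into UE1's complementary channel.
F1_fib = bd_precoder(H1_eff, np.vstack([H2, H2_leak]), Ns)
F2_fib = bd_precoder(H2 + H2_leak, H1_eff, Ns)       # UE2 channel taken as H2 + G2*Omega*J (assumption)

print(np.linalg.norm(H2 @ F1_pib),                       # ~0: PIB removes multi-user interference
      np.linalg.norm(H2_leak @ F1_pib))                  # not zero: leakage is ignored by PIB
print(np.linalg.norm(np.vstack([H2, H2_leak]) @ F1_fib)) # ~0: FIB also cancels the leakage path
```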
### _Combiner design_

For both precoding techniques, it is considered that the UEs are employing traditional ZF combining on the strongest channel, which can be expressed as: \[\mathbf{W}_{1} =\left[\mathbf{G}_{1}\mathbf{\Omega J}\mathbf{F}_{1}\right]^{\dagger}, \tag{14}\] \[\mathbf{W}_{2} =\left[\mathbf{H}_{2}\mathbf{F}_{2}\right]^{\dagger},\] where \((\cdot)^{\dagger}\) represents the Moore-Penrose pseudo-inverse.

## IV Simulation Results

In this section, the performance of each of the proposed methods will be evaluated in terms of SE and compared with i) the solution presented in [10], and with ii) a block-diagonalized MU-MIMO system without IRS that considers a (hypothetical) strong direct path between BS and UE1, acting as a benchmark. The scenario illustrated in Fig. 1 was simulated considering \(M=32\) antennas at the BS, \(N=64\) reflecting elements at the IRS, \(P=Q=8\) antennas at the UEs, and the number of RF chains in both UEs was set to 2, which leads to \(N_{s}=N_{s}^{(1)}=N_{s}^{(2)}=2\) streams. The other parameters considered to simulate the studied scenario are listed in Table I, where \(d_{2D}\) represents the horizontal distance. Fig. 2 presents the SE for each UE as a function of the transmit power employed at the BS when the PIB method is applied in the system. It can be noticed that the SE for UE1 grows linearly, which is expected given that both UEs are in the high signal-to-noise-ratio (SNR) regime and because this technique is able to fully cancel signals flowing through \(\mathbf{G}_{1}\mathbf{\Omega J}\mathbf{F}_{2}\). In contrast, UE2 presents two behaviors: until 15 dBm, the SE of UE2 grows linearly, similarly to UE1, and after that it still increases but with a lower slope. This is also expected since this technique does not cancel the interference arriving from channel \(\mathbf{G}_{2}\mathbf{\Omega J}\), and as the power of the BS increases so does the interfering signal coming through \(\mathbf{G}_{2}\mathbf{\Omega J}\). Considering the assumptions made in Section III-B and based on this result, we can conclude that if the interfering channel has low gain, the beam leakage can be considered minimal and be neglected in the precoder design. Fig. 3 exhibits the SE for each UE as a function of the transmit power employed at the BS when the FIB method is applied to the system. As expected, the FIB technique outperforms the PIB, since it is more capable of canceling interference arriving through \(\mathbf{G}_{2}\mathbf{\Omega J}\), and even though UE2 has some residual interference arriving through \(\mathbf{H}_{2}\), this does not significantly impact its performance. In the following, a comparison between the no-IRS MU-MIMO and the IRS-aided MU-MIMO will be provided. In the no-IRS MU-MIMO, all system parameters are kept the same and the only modification occurs on UE1, whose IRS is replaced by a direct channel with strong LOS similarly to UE2. In this scenario the BS precoder is designed following the classical BD method. Fig. 4 shows the sum SE as a function of the transmit power at the BS for each method presented, as well as for an adapted version of the precoder from [10]. Therein, only one user performs interference cancellation, since its proposal needs one IRS per user.

Fig. 2: SE vs BS total transmit power for PIB precoder design.

Fig. 3: SE vs BS total transmit power for FIB precoder design.
A MIMO system without IRS is also provided, in which the IRS link is replaced by a direct link with the same propagation properties of \(\mathbf{H}_{2}\), for comparison purposes. It can be observed that the no-IRS setup presents the highest SE. This is due to the considered direct link for UE1. However, in the cases where that link is not available, as defined in Section II, the IRS MU-MIMO with the FIB method presents the best results, as it is the most robust method. Nevertheless, the overall performance of the three scenarios does not significantly differ from each other. In contrast, the method in [10] has the worst performance, since its solution needs one IRS per user. Hence, one user can fully cancel the interference and the other has unmitigated interference arriving from the IRS. As for computational complexity, the precoder design method of [10] presents the highest complexity, since it cancels the interference on the BS-IRS link instead of the BS-IRS-UE link, requiring an SVD of higher dimension for the precoder design. It is followed by the FIB method, which uses the direct link and the BS-IRS-UE link. The PIB is the least complex method herein, since it uses only the BS-IRS-UE link in its formulations. It is worth mentioning that, in an IRS-aided network, the PIB method performs similarly to the FIB one and outperforms the method of [10], but with reduced complexity. The PIB may be useful in scenarios in which high spatial multiplexing gains are achievable, directional antennas are employed at the receiver, the interfering channel suffers from blockage, and in many other cases where the interfering channel is negligible compared to the direct one.

## V Conclusion

In this paper, the impact of multi-user interference in IRS-aided networks was studied. Two precoding methods based on BD were proposed to spatially orthogonalize users' signals. It was observed that both methods perform well when the interfering channel created by the IRS has low gain, especially PIB, which has the same limitations as the original BD technique. The FIB technique, even though being more demanding in computational and antenna resources, grants better interference cancellation and performs close to the no-IRS MU-MIMO case, since it not only better orthogonalizes the users but also takes advantage of the additional path provided by the IRS for the non-intended users. Future works include the study of multi-cell and multi-IRS scenarios, other interference mitigation methods, joint IRS phase shift and precoder optimization, and IRS splitting for serving multiple users.
2305.09469
Logarithm of multivector in real 3D Clifford algebras
Closed form expressions for a logarithm of a general multivector (MV) in base-free form in real geometric algebras (GAs) Cl(p,q) are presented for all n=p+q=3. In contrast to the logarithm of complex numbers (isomorphic to Cl(0,1)), 3D logarithmic functions, due to the appearance of two double argument arc tangent functions, allow the inclusion of two sets of sheets characterized by discrete coefficients. Formulas for generic and special cases of individual blades and their combinations are provided.
A. Acus, A. Dargys
2023-04-01T17:37:23Z
http://arxiv.org/abs/2305.09469v1
###### Abstract

Closed form expressions for a logarithm of a general multivector (MV) in base-free form in real geometric algebras (GAs) \(\mathit{Cl}_{p,q}\) are presented for all \(n=p+q=3\). In contrast to the logarithm of complex numbers (isomorphic to \(\mathit{Cl}_{0,1}\)), 3D logarithmic functions, due to the appearance of two double argument arc tangent functions, allow the inclusion of _two sets of sheets_ characterized by discrete coefficients. Formulas for generic and special cases of individual blades and their combinations are provided.

Clifford (geometric) algebra, logarithms of Clifford numbers, computer-aided theory

Logarithm of multivector in real 3D Clifford algebras

A. Acus and A. Dargys

Primary 15A18; Secondary 15A66

## 1 Introduction

Logarithm properties are well-known for real and complex numbers. Except for the Hamilton quaternions, which are isomorphic to \(\mathit{Cl}_{0,2}\), the properties of the logarithm in other 2D algebras (some partial formulas for 2D GAs are provided in [1, 2, 3, 4]) and higher dimensional Clifford algebras remain uninvestigated as yet. In general, GA logarithm properties are simplest for anti-Euclidean algebras \(\mathit{Cl}_{0,n}\) [5, 6]. As in the complex algebra case we expect at least to have a principal logarithm and a part that makes the GA logarithm a multivalued function. Recently in papers [7, 8], which will be the starting point for the present article, we have performed a detailed investigation of 3D exponential functions in real GAs. However, the GA logarithm is more difficult to analyze since one must take into account the multi-valuedness and the fact that in 3D algebras (except \(\mathit{Cl}_{0,3}\)) the logarithm may not exist for all MVs. Here, we have treated the logarithm as an inverse problem, using for this purpose the _Mathematica_ symbolic package, more precisely as an inverse GA function to the exponential in the separate 3D algebras.

The GA logarithm is a multivalued function. In [1] it was suggested that "The principal value of the logarithm can be defined as the MV \(\mathsf{M}=\log(\mathsf{Y})\) with the smallest norm", where \(Y\in\mathit{Cl}_{p,q}\). The natural norm for a MV is the determinant norm defined in subsection 2.2.
The following properties hold for the MV logarithm: \[\log(\mathsf{AB})=\log(\mathsf{A})+\log(\mathsf{B})\quad\text{ if } \mathsf{AB}=\mathsf{BA}, \tag{2.3}\] \[\mathsf{e}^{\log(\mathsf{A})}=\mathsf{A},\quad\mathsf{e}^{-\log(\mathsf{A})}=\mathsf{A}^{-1},\] \[\widetilde{\log(\mathsf{A})}=\log(\widetilde{\mathsf{A}}),\quad\widehat{\log(\mathsf{A})}=\log(\widehat{\mathsf{A}}),\quad\widehat{\widetilde{\log(\mathsf{A})}}=\log(\widehat{\widetilde{\mathsf{A}}}),\] \[\mathsf{V}\,\log(\mathsf{A})\mathsf{V}^{-1}=\log(\mathsf{V}\mathsf{A}\mathsf{V}^{-1}).\] In the last expression the transformation \(\mathsf{V}\), for example the rotor, is pushed inside the logarithm.

### GA logarithm series

In analogy with the definition of the logarithm in the complex plane, for the GA logarithm we can write \[\log\mathsf{A}=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}(\mathsf{A}-1)^{k}}{k}, \quad\text{if}\quad|\mathsf{A}-1|<1, \tag{2.4}\] where \(|\mathsf{A}-1|\) denotes the determinant norm. For an arbitrary MV the determinant norm is defined as the absolute value of the determinant \(\mathrm{Det}(\mathsf{B})\) of MV \(\mathsf{B}\) raised to the fractional power \(1/k\), where \(k=2^{\lceil n/2\rceil}\), i.e., \(|\mathsf{B}|=\bigl{(}\mathrm{Det}(\mathsf{B})\bigr{)}^{1/k}>0\). For algebras having negative determinant, the semi-norm (aka pseudonorm) is introduced instead, \(|\mathsf{B}|=\bigl{(}\mathrm{abs}(\mathrm{Det}(\mathsf{B}))\bigr{)}^{1/k}\geq 0\). The equality sign means that in case of the semi-norm the determinant may be zero although \(\mathsf{B}\neq 0\). In the following the same symbol will be used for both the norm and the semi-norm. The (semi-)norm can be interpreted via the number of multipliers needed to define \(\mathrm{Det}(\mathsf{B})\). In 3D algebras (\(n=3\)) we have \(k=2^{\lceil 3/2\rceil}=2^{2}=4\), which is the degree of the characteristic [11] polynomial of \(\mathrm{Det}(\mathsf{B})\). The integer \(k\) found in this way coincides with the number of multipliers in the 3D determinant: \(\mathrm{Det}(\mathsf{B})=\mathsf{B}\widehat{\mathsf{B}}\widetilde{\mathsf{B}}\widehat{\widetilde{\mathsf{B}}}\). The determinant norm for a MV \(\mathsf{B}\) in 3D algebras, therefore, is \(|\mathsf{B}|=\sqrt[4]{\mathrm{abs}(\mathrm{Det}(\mathsf{B}))}\,\). It can be shown that for any GA that has a basis element with the property \(\mathbf{e}_{i}^{2}=-1\), by adding a scalar one can construct a MV the norm of which may be identified with the modulus of a complex number. For example, in \(\mathit{Cl}_{3,0}\) the norm of \(\mathsf{B}=1+\mathbf{e}_{12}\) is \(\sqrt{(1+\mathbf{e}_{12})(1-\mathbf{e}_{12})}=\sqrt{2}\), which coincides with \(|\mathsf{B}|=\sqrt[4]{\mathrm{abs}(\mathrm{Det}(\mathsf{B}))}=\sqrt{2}\) (also refer to Example 1 below). If the MV has a numerical form, to minimize the number of multiplications it is convenient to represent the logarithm in a nested form (aka Horner's rule). The logarithmic series [12] (also called Mercator series), if rewritten according to Horner's rule, assumes the following form, \[\log\mathsf{A}=\mathsf{B}(1+\mathsf{B}(-\tfrac{1}{2}+\mathsf{B}(\tfrac{1}{3}+\mathsf{B}(-\tfrac{1}{4}+\mathsf{B}(\tfrac{1}{5}+\cdots))))),\quad\text{where }\mathsf{B}=\mathsf{A}-1. \tag{2.5}\] **Example 1**.: _MV equivalent to a complex number._ Let's take the MV \(\mathsf{A}=\frac{9}{10}-\frac{1}{3}\mathbf{e}_{3}\), the determinant norm of which in \(\mathit{Cl}_{0,3}\) is \(|\mathsf{B}|=|\mathsf{A}-1|=\frac{\sqrt{109}}{30}\approx 0.34801<1\). Therefore, the standard series, Eq.
(2.4), may be applied to find an approximate value (the result found by the exact formula in Example 2 is \(\log\frac{\sqrt{829}}{30}-\mathbf{e}_{3}\arctan\frac{10}{27}\approx-0.0410873-\mathbf{e}_{3}0.354706\)). Since \(\mathbf{e}_{3}^{2}=-1\) and it is the only basis vector in the considered MV, one may replace the MV by the complex number \(z=\frac{9}{10}-\mathrm{i}\frac{1}{3}\). The modulus is \(|z-1|=\frac{\sqrt{109}}{30}\), which coincides with the MV determinant norm. Then, \(\log z\approx-0.0410873-\mathrm{i}\,0.354706\). Now let's calculate the logarithm of \(\mathsf{A}^{\prime}=-\frac{9}{10}-\frac{1}{3}\mathbf{e}_{3}\) by the Horner series (2.5). Since \(|\mathsf{A}^{\prime}-1|=\frac{\sqrt{3349}}{30}\approx 1.92902>1\) the series diverges. As shown in Example 2, the logarithm can be easily computed if the exact GA logarithm formula obtained in the present paper is used. After replacement of the MV by a complex number we obtain that \(|z^{\prime}-1|=1.92902\), which again coincides with the modulus of \(z^{\prime}=-\frac{9}{10}-\frac{1}{3}\mathrm{i}\). Computing the value of the logarithm by the _Mathematica_ command FunctionExpand[Log[\(z^{\prime}\)]] we obtain \(\log z^{\prime}=\mathrm{i}(-\pi+\arctan(10/27))-\log(30)+\log(829)/2\), which has the same numerical value as shown in Example 2.

### Double-argument arc tangent function

The GA logarithm, as we shall see, is by its nature a multi-valued function with period \(2\pi\). To account properly for the quadrant sign in the complex plane we shall need the double argument arc tangent function as given in _Mathematica_, the properties of which are briefly mentioned below. Figure 1 shows the single and double argument arc tangent functions. The former has period \(\pi\) and its principal values lie in the interval \(\theta=[-\pi/2,\pi/2)\), while the double argument arc tangent has, respectively, \(2\pi\) period and principal values in \(\theta=(-\pi,\pi]\).

Figure 1: Graphical representation of single \(\arctan(y/x)=\arctan(\sin\theta/\cos\theta)\) (dashed line) and double \(\arctan(x,y)=\arctan(\cos\theta,\sin\theta)\) (solid line) argument tangent functions used by _Mathematica_. \(\theta\) is an angle between \(x\) axis and vector (not shown) attached to the center of complex \(x-y\) plane. The vector may be rotated from \(x\)-axis anticlockwise, \(\theta=(0...\pi]\), or clockwise, \(\theta=[0...-\pi)\) as shown by arrows in the inset. In the inset also the numbering of the quadrants _1-4_ and the branching, represented by thick line on the negative part of \(x\) axis, are shown.

The inset on the right side of Fig. 1 shows the quadrants _1-4_ in the \(x-y\) plane. Note that the anticlockwise rotation is done from quadrant \(1\) to quadrant \(2\), while the clockwise rotation is done in the order _3\(\rightarrow\)4_, so that a jump in the double arc tangent value and the associated branching occur on the negative side of the \(x\)-axis rather than on the \(y\) axis as in the standard single-argument case. Also, in Fig. 1 note the small points on the vertical branching steps that indicate that the respective arc tangent value on the periodic line belongs to the upper rather than the lower branch, i.e., at \(\theta=\pi\) we have \(\arctan(\cos(\pi),\sin(\pi))=\arctan(-1,0)=\pi\), however, after addition of an infinitesimal angle \(\arctan(\cos(\pi+0_{+}),\sin(\pi+0_{+}))=-\pi\). Similarly, at \(\theta=-\pi\) we have \(\arctan(\cos(-\pi),\sin(-\pi))=\pi\), and \(\arctan(\cos(-\pi+0_{+}),\sin(-\pi+0_{+}))=-\pi\).
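As a quick numerical cross-check of Example 1 and of the double-argument arc tangent (in Python, `math.atan2(y, x)` plays the role of _Mathematica_'s `ArcTan[x, y]`, with the arguments swapped), one may run the following sketch; it is an aside of ours, not part of the original text.

```python
# Horner form (2.5) checked on the complex-number case of Example 1.
import cmath, math

def log_series_horner(A, n_terms=60):
    """Mercator series for log(A) in Horner form; converges only if |A - 1| < 1."""
    B, acc = A - 1, 0.0
    for k in range(n_terms, 0, -1):
        acc = (-1) ** (k - 1) / k + B * acc
    return B * acc

z = 0.9 - 1j / 3                          # stands for A = 9/10 - (1/3) e_3
print(abs(z - 1), log_series_horner(z))   # 0.348... < 1 ;  ~ -0.0410873 - 0.354706j

zp = -0.9 - 1j / 3                        # A' = -9/10 - (1/3) e_3: |A' - 1| > 1, series diverges
principal = math.log(abs(zp)) + 1j * math.atan2(zp.imag, zp.real)   # -pi + arctan(10/27) in the angle
print(principal, cmath.log(zp))           # both match the value quoted from Example 2
```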
If \(x,y\) are replaced by real numbers, _Mathematica_ automatically switches to a single argument arc tangent of the first quadrant and principal values, for example, \(\arctan(17,10)=\arctan(10/17)\), \(\arctan(-17,10)=\pi-\arctan(10/17)\), \(\arctan(17,-10)=-\arctan(10/17)\), \(\arctan(-17,-10)=-\pi+\arctan(10/17)\). In terms of the standard arc tangent function, whose values are in the range \((-\pi/2,\pi/2)\), the double argument arc tangent principal values, now in the range \((-\pi,\pi]\), can be expressed as follows: \[\arctan(x,y)=\begin{cases}\arctan(\frac{y}{x})&\text{if $x>0$,}\\ \arctan(\frac{y}{x})+\pi&\text{if $x<0$ and $y\geq 0$,}\\ \arctan(\frac{y}{x})-\pi&\text{if $x<0$ and $y<0$,}\\ +\frac{\pi}{2}&\text{if $x=0$ and $y>0$,}\\ -\frac{\pi}{2}&\text{if $x=0$ and $y<0$,}\\ \text{undefined}&\text{if $x=0$ and $y=0$}\,.\end{cases} \tag{6}\] We will start from \(\mathit{Cl}_{0,3}\), where the expanded exponential in a coordinate form has the simplest MV coefficients and the logarithm exists for all MVs.

## 3 MV logarithms in _Cl\({}_{0,3}\)_

### Logarithm formula for generic MV

The term "generic" here will be understood as "not creating problems". If for a given set of MV coefficients the generic formula is not applicable, for example, due to nullification of a denominator, or due to the appearance of an undefined subexpression like \(\arctan(0,0)\), we will refer to it as a "special case". Special cases will be covered by a more elaborate formula later.

**Theorem 3.1** (Logarithm of multivector in _Cl\({}_{0,3}\)_).: _The generic logarithm of the MV \(\mathsf{A}=a_{0}+\mathbf{a}+\mathcal{A}+a_{123}I\) is the MV given by_ \[\log(\mathsf{A})=\frac{1}{2}\big{(}\mathsf{A}_{0_{+}}+\mathsf{A}_{0_{-}}+\mathsf{A}_{1,2_{+}}+\mathsf{A}_{1,2_{-}}+(\mathsf{A}_{0_{+}}-\mathsf{A}_{0_{-}})I\big{)}, \tag{3.1}\] _with_
The Theorem 3.1 ensures the existence of GA logarithm for all MVs with real coefficients in \(\mathit{Cl}_{0,3}\), because in the mentioned algebra the zero determinant of MV (\(\mathrm{Det}\,\mathsf{A}=0\)) occurs only if \(\mathsf{A}=0\). As we shall see this property does not hold for remaining algebras. ### Special cases In Theorem 3.1 it was presumed that the both scalars \(a_{-}\) and \(a_{+}\) do not vanish. This assumption is equivalent to the condition that the sum of vector and bivector must have non-zero-determinant1, \(\mathrm{Det}(\mathbf{a}+\mathcal{A})=a_{+}^{2}a_{-}^{2}\neq 0\). If either of scalars is zero then we have a special case. This situation is met in rare cases, for instance2, when \(a_{1}=a_{23}\), \(a_{2}=-a_{13}\), \(a_{3}=a_{12}\). In such and similar cases the MVs and \(\mathsf{A}_{0_{\pm}}I\) in the Theorem 3.1 must be supplemented by conditions: \[\mathsf{A}_{0_{+}}=\begin{cases}\log\bigl{(}a_{0}+a_{123}\bigr{)}+2\pi c_{2_{+} }\hat{\mathcal{U}},&a_{+}=0\quad\text{and}\quad(a_{0}+a_{123})>0\\ \log(0_{+}),&a_{+}=0\quad\text{and}\quad(a_{0}+a_{123})=0\\ \log\bigl{(}-(a_{0}+a_{123})\bigr{)}&\\ \qquad+(\pi+2\pi c_{2_{+}})\hat{\mathbf{u}},&a_{+}=0\quad\text{and}\quad(a_{0} +a_{123})<0,\end{cases} \tag{3.8}\] \[\mathsf{A}_{0_{-}}=\begin{cases}\log\bigl{(}a_{0}-a_{123}\bigr{)}+2\pi c_{2_{- }}\hat{\mathcal{U}},&a_{-}=0\quad\text{and}\quad(a_{0}-a_{123})>0\\ \log(0_{+}),&a_{-}=0\quad\text{and}\quad(a_{0}-a_{123})=0\\ \log\bigl{(}-(a_{0}-a_{123})\bigr{)}&\\ \qquad+(\pi+2\pi c_{2_{-}})\hat{\mathbf{u}},&a_{-}=0\quad\text{and}\quad(a_{0} -a_{123})<0,\end{cases} \tag{3.9}\] \[\mathsf{A}_{1,2_{+}}=\begin{cases}\bigl{(}\frac{1}{a_{0}+a_{123}}+2\pi c_{1_{ +}}\bigr{)}&\\ \qquad\times(1+I)\bigl{(}\mathbf{a}+\mathcal{A}\bigr{)},&a_{+}=0\quad\text{and} \quad(a_{0}+a_{123})>0\\ 0,&a_{+}=0\quad\text{and}\quad(a_{0}+a_{123})=0,\\ (\pi+2\pi c_{1_{+}})(1+I)\bigl{(}\mathbf{a}+\mathcal{A}\bigr{)},&a_{+}=0\quad \text{and}\quad(a_{0}+a_{123})<0,\end{cases} \tag{3.10}\] \[\mathsf{A}_{1,2_{-}}=\begin{cases}\bigl{(}\frac{1}{a_{0}-a_{123}}+2\pi c_{1_{ -}}\bigr{)}&\\ \qquad\times(1-I)\bigl{(}\mathbf{a}+\mathcal{A}\bigr{)},&a_{-}=0\quad\text{and }\quad(a_{0}-a_{123})>0\\ 0,&a_{-}=0\quad\text{and}\quad(a_{0}-a_{123})=0\\ (\pi+2\pi c_{1_{-}})(1-I)\bigl{(}\mathbf{a}+\mathcal{A}\bigr{)},&a_{-}=0\quad \text{and}\quad(a_{0}-a_{123})<0,\end{cases} \tag{3.11}\] Here \(c_{1_{\pm}},c_{2_{\pm}}\in\mathbb{Z}\) are the arbitrary integers. The conditions for \((a_{0}\pm a_{123})\) on the right-hand side take into account the case \(\mathrm{Det}(\mathbf{a}+\mathcal{A})=0\). In scalars3\(\mathsf{A}_{0_{+}}\) and \(\mathsf{A}_{0_{-}}\), the symbols \(\hat{\mathbf{u}}\) and \(\hat{\mathcal{U}}\) represent any free unit vector or bivector, respectively, \(\hat{\mathbf{u}}^{2}=\hat{\mathcal{U}}^{2}=-1\). For example, the unit vector can be parametrized as \(\hat{\mathbf{u}}=(u_{1}\mathbf{e}_{1}+u_{2}\mathbf{e}_{2}+u_{3}\mathbf{e}_{3 })/\sqrt{u_{1}^{2}+u_{2}^{2}+u_{2}^{3}}\). It should be noted that the term \(1/(a_{0}\pm a_{123})\) in Eqs (3.10) and (3.11) represents the limit \(\lim_{a_{\pm}\to 0}\arctan(a_{0}\pm a_{123},a_{\pm})/a_{\pm}=1/(a_{0}\pm a_{123})\) which is valid only when \(a_{0}\pm a_{123}>0\). The notation of \(\log(0_{+})\) in expressions for \(\mathsf{A}_{0_{+}}\) and \(\mathsf{A}_{0_{-}}\) is explained in Example 6. Footnote 3: The appearance of a free vector/bivector breaks the grade arrangement in the generic terms (3.2) and (3.3). 
The choice, however, results in a more simple final expression, since now it is enough to write only a single free \(\mathbf{u}\) or \(\mathcal{U}\) term (see Eq. (3.1)) instead of a pair \(\mathbf{u}\) and \(\mathbf{u}I\), or \(\mathcal{U}\) and \(\mathcal{U}\), if we would have chosen to move these terms to vector+bivector part by following up a strict grade notation convention. Interpretation of special conditions (3.8)-(3.11) in terms of the MV determinant [13, 14, 15] becomes more evident if one remembers that the determinant of MV \(\mathsf{A}\) in \(\mathit{Cl}_{0,3}\) can be expressed in a form \(\mathrm{Det}(\mathsf{A})=\bigl{(}a_{-}^{2}+(a_{0}-a_{123})^{2}\bigr{)}\bigl{(} a_{+}^{2}+(a_{0}+a_{123})^{2}\bigr{)}\), whereas the condition \(a_{\pm}=0\) is equivalent to \(\mathrm{Det}(\mathsf{A}_{12_{\pm}})=\mathrm{Det}(\mathbf{a}+\mathcal{A})=a_{+} ^{2}a_{-}^{2}\). All special cases therefore occur if \(\mathrm{Det}(\mathbf{a}+\mathcal{A})=0\) and the condition are described by \(a_{0}\pm a_{123}\underset{\geq}{\leqq}0\). In conclusion, the symbolic expression for logarithm, has three special pieces (branches) \(a_{0}\pm a_{123}\lessneqq 0\) provided the condition \(a_{\pm}=0\) is satisfied and the generic piece is characterized by \(a_{\pm}\neq 0\). ### Multivaluedness and free multivector To include multivaluedness in GA logarithm we introduce a free multivector \(\mathsf{F}\) by the following defining equation [4] \[\mathrm{e}^{\log(\mathsf{A})+\mathsf{F}}=\mathrm{e}^{\log(\mathsf{A})}\mathrm{ e}^{\mathsf{F}}=\mathrm{e}^{\log(\mathsf{A})}, \tag{3.12}\] which implies two conditions the MV \(\mathsf{F}\) must satisfy: the commutator \([\log(\mathsf{A}),\mathsf{F}]=0\) and \(\mathrm{e}^{\mathsf{F}}=1\). As we shall see, for remaining \(n=3\) algebras the free MV \(\mathsf{F}\) will play a similar role. One can check that the expression \[\mathsf{F}= \frac{\pi c_{1_{+}}}{a_{+}}(1+I)\big{(}\mathbf{a}+\mathcal{A} \big{)}+\frac{\pi c_{1_{-}}}{a_{-}}(1-I)\big{(}\mathbf{a}+\mathcal{A}\big{)} \tag{3.13}\] satisfies \(\mathrm{e}^{\mathsf{F}}=1\), and that for a generic MV \(\mathsf{A}\), Eq. 3.1, the free term (3.13) commutes with \(\log(\mathsf{A})\), i.e. \([\log(\mathsf{A}),\mathsf{F}]=0\). The integer constants \(c_{1_{+}},c_{1_{-}}\in\mathbb{Z}\) in Eq. (3.13) add two free (discrete) parameters that may be used to shift the coefficients of vector and bivector in \(\log(\mathsf{A})\) by some multiple of \(\pi\). The sum \((\mathbf{a}+\mathcal{A})\) in (3.13) constitute vector+bivector part4 of the original MV \(\mathsf{A}\), therefore \((\mathbf{a}+\mathcal{A})\) automatically commutes with \(\mathsf{A}\). As a result only discrete free coefficients are possible in the logarithm generic formula. In special cases (see Eqs (A.5a) and (A.5b) in the Appendix A) the free MV \(\mathsf{F}\) may also contain arbitrary unit vector \(\hat{\mathbf{u}}\) and/or unit bivector \(\hat{\mathcal{U}}\). In such cases one can include two additional continuous parameters interpreted as directions of \(\hat{\mathbf{u}}\) or \(\hat{\mathcal{U}}\). Footnote 4: In 3D algebras, the scalar and pseudoscalar belong to algebra center and as a result they commute with all elements. Since \(\arctan(x,y)\) has been defined in the range \((-\pi,\pi]\) (usually called the principal value or the main branch, Fig. 1), we can add to it any multiple of \(2\pi\). 
Therefore, the plus/minus instances of \(\arctan(a_{0}\pm a_{123},a_{\pm})\) (see Eqs (A.5a) and (A.5b)) were replaced by more general expressions \(\arctan(a_{0}+a_{123},a_{+})+2\pi c_{1_{+}}\) and \(\arctan(a_{0}-a_{123},a_{-})+2\pi c_{1_{-}}\) in Eqs (3.4) and (3.5), respectively, which takes into account the multivaluedness of the argument. This explains the rationale behind the construction of the free MVs for GA logarithm. In [16] the notion of principal logarithm (also called the principal value of logarithm) in case of matrices was introduced. In [1] it was suggested that the "logarithm principal value in GA can be defined as the MV \(\mathsf{M}=\log\mathsf{Y}\) with the smallest norm". Formulas (3.2)-(3.5) and (3.8)-(3.11) might suggest that we could obtain the principal logarithm values after equating discrete free constants \(c_{1_{\pm}},c_{2_{\pm}}\) to zero. Unfortunately, extensive numerical checks revealed that this is not always the case. **Example 2.**_Logarithm of simple MV in \(\mathit{Cl}_{0,3}\)_. For MV \(\mathsf{A}=\frac{9}{10}-\frac{1}{3}\mathbf{e}_{3}\) in the Example 1, Eqs (3.6) and (3.7) give \(a_{+}=a_{-}=1/3\). The MVs in (3.1) then are \(\mathsf{A}_{0+}=\mathsf{A}_{0-}=-\log(10/9)+\pi\mathbf{e}_{3}\), \(\mathsf{A}_{1,2\pm}=\frac{\pi}{3}(-\mathbf{e}_{3}\pm\mathbf{e}_{12})\). The logarithm calculated by exact formula (3.1) is \(\log(\frac{9}{10}-\frac{1}{3}\mathbf{e}_{3})=-\frac{1}{2}\log\frac{900}{829}- \arctan\left(\frac{10}{27}\right)\mathbf{e}_{3}\approx-0.0410873-0.354706 \mathbf{e}_{3}\) which coincides with result of Example 1. Now, let's calculate GA logarithm of \(\mathsf{A}^{\prime}=-\frac{9}{10}-\frac{1}{3}\mathbf{e}_{3}\) that diverges when the series (2.5) is used. With exact formulas (3.2)-(3.7) we find: \(a_{+}=a_{-}=\frac{1}{3}\), \(\mathsf{A}_{0+}=\mathsf{A}_{0-}=-\frac{1}{2}\log(900/829),\mathsf{A}_{1,2+}= \bigl{(}\pi-\arctan(10/27)\bigr{)}(\mathbf{e}_{12}-\mathbf{e}_{3}),\mathsf{A}_ {1,2-}=\bigl{(}\arctan(10/27)-\pi\bigr{)}(\mathbf{e}_{12}+\mathbf{e}_{3})\). Then Eq. (3.1) gives \(\log(\mathsf{A}^{\prime})=-\frac{1}{2}\log(\frac{900}{829})+(\arctan(\frac{10 }{27})-\pi)\mathbf{e}_{3}\approx-0.0410873-2.78689\mathbf{e}_{3}\). Exponentiation of the obtained logarithm gives initial MV, \(\exp\big{(}\log(\mathsf{A}^{\prime})\big{)}=\mathsf{A}^{\prime}\). The result also can be checked by complex logarithm, because the initial MV consist of scalar and basis vector \(\mathbf{e}_{3}^{2}=-1\) only. **Example 3.**_Logarithm of generic MV in Cl\({}_{0,3}\)_. Let's compute the logarithm of \(\mathsf{A}=-8-6\mathbf{e}_{2}-9\mathbf{e}_{3}+5\mathbf{e}_{12}-5\mathbf{e}_{13 }+6\mathbf{e}_{23}-4\mathbf{e}_{123}\). Then, \(a_{-}^{2}=53\) and \(a_{+}^{2}=353\). The Eqs (3.2)-(3.5) give \(\mathsf{A}_{0_{+}}=\frac{1}{2}\log(497),\mathsf{A}_{0_{-}}=\frac{1}{2}\log(69 ),\mathsf{A}_{1,2_{+}}=(353)^{-1/2}\bigl{(}\pi-\arctan\bigl{(}\frac{\sqrt{35 3}}{12}\bigr{)}+2\pi c_{1_{+}}\bigr{)}(1+I)\bigl{(}-6\mathbf{e}_{2}-9\mathbf{ e}_{3}+5\mathbf{e}_{12}-5\mathbf{e}_{13}+6\mathbf{e}_{23}\bigr{)}\) and \(\mathsf{A}_{1,2_{-}}=(53)^{-1/2}\bigl{(}\pi-\arctan\bigl{(}\frac{\sqrt{53}}{4 }\bigr{)}+2\pi c_{1_{-}}\bigr{)}(1-I)(-6\mathbf{e}_{2}-9\mathbf{e}_{3}+5 \mathbf{e}_{12}-5\mathbf{e}_{13}+6\mathbf{e}_{23})\), where the free term \(\mathsf{F}\), Eq. (3.13), has been included via \(c_{1_{\pm}}\). 
The logarithm is the sum of all above listed MVs: \(\log(\mathsf{A})=\frac{1}{2}\bigl{(}\mathsf{A}_{0_{+}}+\mathsf{A}_{0_{-}}+ \mathsf{A}_{1,2_{+}}+\mathsf{A}_{1,2_{-}}+(\mathsf{A}_{0_{+}}-\mathsf{A}_{0_{ -}})I\bigr{)}\). Using the exponential [7] one can check that the numerical logarithm \(\log(\mathsf{A})\) indeed yields the initial MV \(\mathsf{A}\) for arbitrary integer constants \(c_{1_{\pm}}\). **Example 4.**_Logarithm of MV when \(a_{+}=0\) and \(a_{0}+a_{123}>0\)_. The MV that satisfies these conditions is \(\mathsf{A}=1+(3\mathbf{e}_{1}-2\mathbf{e}_{2}+\mathbf{e}_{3})+(\mathbf{e}_{12 }+2\mathbf{e}_{13}+3\mathbf{e}_{23})+7\mathbf{e}_{123}=1+\mathbf{a}+\mathcal{ A}+7\mathbf{e}_{123}\). Equation (3.6) gives \(a_{-}=\sqrt{56}=2\sqrt{14}\) and \(a_{0}-a_{123}=-6\). Eqs (3.8), (3.10) give \(\mathsf{A}_{0_{+}}=\log 8+2\pi c_{2_{+}}\hat{\mathcal{U}}\), \(\mathsf{A}_{1,2_{+}}=\bigl{(}\frac{1}{8}+2\pi c_{1_{+}}\bigr{)}(1+I)(\mathbf{a} +\mathbf{e}_{3}+\mathcal{A})=0\). Then from (3.3) and (3.5) we have \(\mathsf{A}_{0_{-}}=\frac{1}{2}\log 92\) and \(\mathsf{A}_{1,2_{-}}=\frac{\pi-\arctan\bigl{(}\frac{\sqrt{14}}{3}\bigr{)}+2\pi c _{1_{-}}}{2\sqrt{14}}(1-I)(\mathbf{a}+\mathcal{A})\). Finally, from Eq. (3.1) \(\log(\mathsf{A})=\frac{1}{28}\Bigl{(}7\bigl{(}\log 5888-\log\frac{23}{16}\mathbf{e}_{123} \bigr{)}+\sqrt{14}\bigl{(}(2c_{1_{-}}+1)\pi-\arctan\frac{\sqrt{14}}{3}\bigr{)} (\mathbf{a}+\mathcal{A})\Bigr{)}+(1+I)\pi c_{2_{+}}\hat{\mathcal{U}}\). After exponentiation of \(\mathsf{A}\) the constants \(c_{1_{-}}\) and \(c_{2_{+}}\) and bivector \(\hat{\mathcal{U}}\) simplify out. **Example 5.**_Logarithm of MV when \(a_{-}=0\) and \(a_{0}-a_{123}<0\)_. These conditions are satisfied by \(\mathsf{A}=1+(-3\mathbf{e}_{1}+2\mathbf{e}_{2}-\mathbf{e}_{3})+(\mathbf{e}_{12 }+2\mathbf{e}_{13}+3\mathbf{e}_{23})+7\mathbf{e}_{123}=1+\mathbf{a}+\mathcal{ A}+7\mathbf{e}_{123}\). We have \(a_{+}^{2}=56\), \(a_{0}+a_{123}=9\) and \(a_{0}-a_{123}=-6\). Then, Eq. (3.9) gives \(\mathsf{A}_{0_{-}}=\log 6+(\pi+2\pi c_{2_{-}})\hat{\mathsf{u}}\). The Eqs (3.2) and (3.4) give \(\mathsf{A}_{0_{+}}=\frac{1}{2}\log 120\), \(\mathsf{A}_{1,2_{+}}=\frac{1}{2\sqrt{14}}\bigl{(}\arctan\bigl{(}\frac{1}{2} \sqrt{7/2}\bigr{)}+2\pi c_{1_{+}}\bigr{)}(1+I)(\mathbf{a}+\mathcal{A})\), and Eq. (3.11) \(\mathsf{A}_{1,2_{-}}=(\pi+2\pi c_{1_{-}})(1-I)(\mathbf{a}+\mathcal{A})=0\). Finally, \(\log(\mathsf{A})=\frac{1}{2}\bigl{(}\mathsf{A}_{0_{+}}+\mathsf{A}_{0_{-}}+ \mathsf{A}_{1,2_{+}}+(\mathsf{A}_{0_{+}}-\mathsf{A}_{0_{-}})I\bigr{)}\). **Example 6.**_Logarithm with infinite subparts: The case \(a_{+}=0\) and \(a_{0}+a_{123}=0\)_. The example exhibits unusual and the most interesting case. In _Cl\({}_{0,3}\)_, let's compute GA logarithm of \(\mathsf{A}=1+(-2\mathbf{e}_{1}-3\mathbf{e}_{2}+5\mathbf{e}_{3})+(5\mathbf{e}_{12 }+3\mathbf{e}_{13}-2\mathbf{e}_{23})-\mathbf{e}_{123}=1+\mathbf{a}+\mathcal{A}- \mathbf{e}_{123}\). The remaining scalar is \(a_{-}=2\sqrt{38}\), \((a_{0}-a_{123})=2\). Then, Eq. (3.8) gives \(\mathsf{A}_{0_{+}}=\log(0_{+})\); Eq. (3.3) gives \(\mathsf{A}_{0_{-}}=\frac{1}{2}\log 156\); Eq. (3.10) gives \(\mathsf{A}_{1,2_{+}}=0\); Eq. (3.5) gives \(\mathsf{A}_{1,2_{-}}=\frac{\arctan\bigl{(}\sqrt{38}\bigr{)}+2\pi c_{1_{-}}}{ \sqrt{38}}(\mathbf{a}+\mathcal{A})\). 
Finally, the logarithm of \(\mathsf{A}\) is \[\log(\mathsf{A})= \frac{\arctan\left(\sqrt{38}\right)+2\pi c_{1_{-}}}{2\sqrt{38}}\left( \mathbf{a}+\mathcal{A}\right) \tag{3.14}\] \[+\tfrac{1}{2}\big{(}\log(0_{+})\left(1+\mathbf{e}_{123}\right)+ \tfrac{1}{2}\log(156)\left(1-\mathbf{e}_{123}\right)\big{)}\,.\] Note the factor \(\log(0_{+})\) in front of \((1+\mathbf{e}_{123})\). If logarithm in this form is inserted into coordinate-free exponential [8] we will get \[\left(\tfrac{1}{2}\mathrm{e}^{\log(0_{+})}+1\right)+\mathbf{a}+\mathcal{A}+ \left(\tfrac{1}{2}\mathrm{e}^{\log(0_{+})}-1\right)\mathbf{e}_{123}\,, \tag{3.15}\] which coincides with the initial MV if we assume that5\(\log(0_{+})=-\infty\). Footnote 5: The statement can be made strict by considering the limit \(\lim_{x\to 0_{+}}\exp(\log(x))=0\), where \(x\to 0_{+}\) indicates that the limit is taken keeping \(x\) positive, i.e.”from above”. ### GA Logarithm of blades and their combinations in \(\mathit{Cl}_{0,3}\) In this subsection, the logarithms for individual blades and their combinations that follow from generic logarithm (Theorem 3.1), and may be useful in practice are collected. The norms listed below are positive scalars. _Vector norm_: \(|\mathbf{a}|=\sqrt{\mathbf{a}\mathbf{\widehat{a}}}=\sqrt{a_{1}^{2}+a_{2}^{2}+ a_{3}^{2}}\). _Paravector norm_: \(|a_{0}+\mathbf{a}|=|\mathsf{A}_{0,1}|=\big{(}\mathsf{A}_{0,1}\mathsf{A}_{0,1} \big{)}^{\frac{1}{2}}=\sqrt{a_{0}^{2}+a_{1}^{2}+a_{2}^{2}+a_{3}^{2}}\,.\) _Bivector norm_: \(|\mathcal{A}|=\big{(}\mathcal{A}\widetilde{\mathcal{A}}\big{)}^{\frac{1}{2}}= \sqrt{a_{12}^{2}+a_{13}^{2}+a_{23}^{2}}\,.\) _Rotor norm_: \(|a_{0}+\mathcal{A}|=|\mathsf{A}_{0,2}|=\big{(}\mathsf{A}_{0,2}\widetilde{ \mathsf{A}}_{0,2}\big{)}^{\frac{1}{2}}=\sqrt{a_{0}^{2}+a_{12}^{2}+a_{13}^{2} +a_{23}^{2}}\,.\) Logarithms of blades and their combinations. _Logarithm of vector_\(\mathbf{a}=a_{1}\mathbf{e}_{1}+a_{2}\mathbf{e}_{2}+a_{3}\mathbf{e}_{3}\), \(c_{i}\in\mathbb{Z}\), \[\log(\mathbf{a})= \frac{1}{2}\log(|\mathbf{a}|^{2})+\pi\frac{\mathbf{a}}{|\mathbf{ a}|}\big{(}\frac{1}{2}+c_{1}(1+I)+c_{2}(1-I)\big{)},\qquad|\mathbf{a}|^{2}\neq 0.\] (3.16) _Logarithm of paravector_\(\mathsf{A}_{0,1}=a_{0}+\mathbf{a}\); \(c_{i}\in\mathbb{Z}\) and \(\hat{\mathbf{u}}^{2}=-1\), \(\hat{\mathcal{U}}^{2}=-1\). \[\log\mathsf{A}_{0,1}= \frac{1}{2}\log(|\mathsf{A}_{0,1}|^{2})+\frac{\mathbf{a}}{| \mathbf{a}|}\big{(}\arctan(a_{0},|\mathbf{a}|)\qquad\qquad\qquad|\mathbf{a}| \neq 0.\] (3.17) \[\qquad\qquad+\pi(c_{1}(1+I)+c_{2}(1-I))\big{)},\] _Logarithm of bivector_\(\mathcal{A}=a_{12}\mathbf{e}_{12}+a_{13}\mathbf{e}_{13}+a_{23}\mathbf{e}_{23}\), \(c_{i}\in\mathbb{Z}\), \[\log(\mathcal{A})= \frac{1}{2}\log(|\mathcal{A}|^{2})+\pi\frac{\mathcal{A}}{| \mathcal{A}|}\big{(}\frac{1}{2}+c_{1}(1+I)+c_{2}(1-I)\big{)},\quad\ |\mathcal{A}|^{2}\neq 0.\] (3.18) _Logarithm of parabivector and rotor_\(\mathsf{A}_{0,2}=a_{0}+\mathcal{A}\), \[\log\mathsf{A}_{0,2}= \frac{1}{2}\log(|\mathsf{A}_{0,2}|^{2})+\frac{\mathcal{A}}{| \mathcal{A}|}\big{(}\arctan(a_{0},|\mathcal{A}|)\qquad\qquad\qquad|\mathcal{A}| \neq 0. 
\tag{3.19}\] \[\qquad\qquad+\pi(c_{1}(1+I)+c_{2}(1-I))\big{)},\] _Logarithm of center \(\mathsf{A}_{0,3}=a_{0}+a_{123}I\),_ \[\log\mathsf{A}_{0,3}=\begin{cases}\big{(}\frac{1}{2}\log(a_{0}-a_{123})+\pi c_{1 }\hat{\mathcal{U}}_{1}\big{)}\big{(}1-I\big{)}&(a_{0}-a_{123})>0\text{ and}\\ +\big{(}\frac{1}{2}\log(a_{0}+a_{123})+\pi c_{2}\hat{\mathcal{U}}_{2}\big{)} \big{(}1+I\big{)},&(a_{0}+a_{123})>0\\ \big{(}\frac{1}{2}\log(a_{0}-a_{123})+\pi c_{1}\hat{\mathcal{U}}_{1}\big{)} \big{(}1-I\big{)}&(a_{0}-a_{123})>0\text{ and}\\ +\big{(}\frac{1}{2}\log(-a_{0}-a_{123})+\pi(c_{2}+\frac{1}{2})\hat{\mathbf{u}} _{2}\big{)}\big{(}1+I\big{)},&(a_{0}+a_{123})<0\\ \big{(}\frac{1}{2}\log(-a_{0}+a_{123})+\pi(c_{1}+\frac{1}{2})\hat{\mathbf{u}} _{1}\big{)}\big{(}1-I\big{)}&(a_{0}-a_{123})<0\text{ and}\\ +\big{(}\frac{1}{2}\log(a_{0}+a_{123})+\pi c_{2}\hat{\mathcal{U}}_{2}\big{)} \big{(}1+I\big{)},&(a_{0}+a_{123})>0\\ \big{(}\frac{1}{2}\log(-a_{0}+a_{123})+\pi(c_{1}+\frac{1}{2})\hat{\mathbf{u}} _{1}\big{)}\big{(}1-I\big{)}&(a_{0}-a_{123})<0\text{ and}\\ +\big{(}\frac{1}{2}\log(-a_{0}-a_{123})+\pi(c_{2}+\frac{1}{2})\hat{\mathbf{u}} _{2}\big{)}\big{(}1+I\big{)},&(a_{0}+a_{123})<0\end{cases} \tag{3.20}\] where \(\hat{\mathbf{u}}_{i}\) and \(\hat{\mathcal{U}}_{j}\) are arbitrary non-commuting unit vector and bivector, respectively. If \((a_{0}-a_{123})=0\) or \((a_{0}+a_{123})=0\) some of subparts give \(\log(0_{+})\). ## 4 MV logarithms in \(\mathit{Cl}_{3,0}\) and \(\mathit{Cl}_{1,2}\) \(\mathit{Cl}_{3,0}\) and \(\mathit{Cl}_{1,2}\) algebras are isomorphic. Their multiplication tables coincide, for example, after the following exchange of basis elements: \[\mathit{Cl}_{3,0} \{1,\quad\mathbf{e}_{1},\quad\mathbf{e}_{2},\quad\mathbf{e}_{3},\quad\mathbf{e}_{12},\ \ \mathbf{e}_{13},\ \mathbf{e}_{23},\ \mathbf{e}_{123}\}\downarrow\] \[\mathit{Cl}_{1,2} \{1,\ -\mathbf{e}_{1},-\mathbf{e}_{12},-\mathbf{e}_{13},-\mathbf{e}_{2},-\mathbf{e}_{3},\ \mathbf{e}_{23},-\mathbf{e}_{123}\}.\] To find formulas for logarithm in coordinate form the same inverse solution method was used as described in the Appendix A for \(\mathit{Cl}_{0,3}\) algebra. The logarithm in \(\mathit{Cl}_{3,0}\) and \(\mathit{Cl}_{1,2}\) exists for all MVs except of nonzero MVs of the form \(\mathsf{A}_{1,2}=\mathbf{a}+\mathcal{A}\) that satisfy the condition \(\operatorname{Det}(\mathsf{A}_{1,2})=(a_{+}^{2}+a_{-}^{2})^{2}=0\), i.e., for MVs that are the sums of vector and bivector and the determinant are equal to zero. These restrictions are the same as those for GA square root to exist (see [10] and Example 3 herein in case \(s=S=0\)). 
### Logarithm formula for generic MV **Theorem 4.1** (Logarithm of multivector in \(\mathit{Cl}_{3,0}\) and \(\mathit{Cl}_{1,2}\)).: _The logarithm of generic MV \(\mathsf{A}=a_{0}+\mathbf{a}+\mathcal{A}+a_{123}I\) is another MV_ \[\log(\mathsf{A})=\mathsf{A}_{0}+\mathsf{A}_{1,2_{\log}}+\mathsf{A}_{1,2_{ \arctan}}+\mathsf{A}_{I}, \tag{4.1}\] _where_ \[\mathsf{A}_{0} =\frac{1}{2}\big{(}\log k_{+}+\log k_{-}\big{)}, a_{+}^{2}+a_{-}^{2}\neq 0 \tag{4.2}\] \[\mathsf{A}_{1,2_{\log}} =\frac{1}{2}\frac{a_{+}-a_{-}I}{a_{-}^{2}+a_{+}^{2}}\big{(}\log k_ {+}-\log k_{-}\big{)}\big{(}\mathbf{a}+\mathcal{A}\big{)}, a_{+}^{2}+a_{-}^{2}\neq 0 \tag{4.3}\] \[\mathsf{A}_{1,2_{\rm arctan}}=I\frac{a_{+}-a_{-}I}{a_{-}^{2}+a_{+}^{2 }}\big{(}\mathbf{a}+\mathcal{A}\big{)}\Big{(}\frac{1}{2}\arctan\!\big{(}\!-(a_{+ }^{2}-a_{0}^{2})-(a_{-}^{2}-a_{123}^{2}),\] \[(a_{+}-a_{0})(a_{-}+a_{123})-(a_{+}+a_{0})(a_{-}-a_{123})\big{)}+2 \pi c_{1}\Big{)},\] \[\text{when}\quad(a_{+}^{2}+a_{-}^{2}\neq 0)\quad\text{and}\quad(k_{- }k_{+}\neq 0), \tag{4.4}\] \[\mathsf{A}_{I}=I\arctan\!\big{(}(a_{+}+a_{0})k_{-}-(a_{+}-a_{0})k _{+},(a_{-}+a_{123})k_{-}-(a_{-}-a_{123})k_{+}\big{)}\] \[+2\pi c_{2}I,\qquad\qquad\qquad\text{when}\quad(a_{+}^{2}+a_{-}^{ 2}\neq 0)\quad\text{and either}\] \[(a_{+}+a_{0})k_{-}-(a_{+}-a_{0})k_{+}\neq 0\quad\text{or} \quad(a_{-}+a_{123})k_{-}-(a_{-}-a_{123})k_{+}\neq 0 \tag{4.5}\] _where scalar coefficients are_ \[k_{-}^{2}=(a_{+}-a_{0})^{2}+(a_{-}-a_{123})^{2},\qquad k_{+}^{2}=(a_{+}+a_{0}) ^{2}+(a_{-}+a_{123})^{2}, \tag{4.6}\] _and_ \[a_{-} =\frac{-2I\mathbf{a}\wedge\mathcal{A}}{\sqrt{2}\sqrt{\mathbf{a} \mathbf{a}+\mathcal{A}\mathcal{A}}+\sqrt{(\mathbf{a}\mathbf{a}+\mathcal{A} \mathcal{A})^{2}-4(\mathbf{a}\wedge\mathcal{A})^{2}}},\] \[a_{+} =\frac{\sqrt{\mathbf{a}\mathbf{a}+\mathcal{A}\mathcal{A}}+\sqrt{ (\mathbf{a}\mathbf{a}+\mathcal{A}\mathcal{A})^{2}-4(\mathbf{a}\wedge\mathcal{ A})^{2}}}{\sqrt{2}} \tag{4.7}\] \[\text{for }\mathbf{a}\wedge\mathcal{A}\neq 0,\quad\text{and}\] \[\begin{cases}a_{+}=\sqrt{\mathbf{a}\mathbf{a}+\mathcal{A} \mathcal{A}},\quad a_{-}=0,&\text{if}\quad\mathbf{a}\mathbf{a}+\mathcal{A} \mathcal{A}\geq 0\\ a_{+}=0,\quad a_{-}=\sqrt{-(\mathbf{a}\mathbf{a}+\mathcal{A}\mathcal{A})},& \text{if}\quad\mathbf{a}\mathbf{a}+\mathcal{A}\mathcal{A}<0,\\ \text{when}\quad\mathbf{a}\wedge\mathcal{A}=0.\end{cases} \tag{4.8}\] _The constants \(c_{1},c_{2}\) are arbitrary integers._ Proof: It is enough to check that after substitution of \(\log\mathsf{A}\) expressions into exponential formula presented in [7] one gets the initial MV \(\mathsf{A}\). The factor \(\frac{a_{+}-a_{-}I}{a_{-}^{2}+a_{+}^{2}}\) in the above formulas alternatively may be written as \((a_{+}+a_{-}I)^{-1}\). ### Special cases When the conditions listed in Eqs (4.2)-(4.5) are not satisfied, we have special cases. In particular, the condition \(k_{\pm}=0\) means that the MV determinant is zero, \(\text{Det}(\mathsf{A})=k_{-}^{2}k_{+}^{2}=0\). Similarly, the condition \(a_{+}^{2}+a_{-}^{2}=0\) implies that determinant of vector+bivector part vanishes, \(\text{Det}(\mathsf{A}_{1,2})=(a_{+}^{2}+a_{-}^{2})^{2}=0\). The specific relations \((a_{+}+a_{0})k_{-}-(a_{+}-a_{0})k_{+}\neq 0\) and \((a_{-}+a_{123})k_{-}-(a_{-}-a_{123})k_{+}\neq 0\) in Eq. (4.5) as well as the relation \(k_{-}k_{+}\neq 0\) in Eq. (4.4) ensure that both arguments of \(\arctan(x,y)\) do not nullify simultaneously. 
When the generic formula is not applicable the expressions for \(\mathsf{A}_{0},\mathsf{A}_{1,2_{\rm log}},\mathsf{A}_{1,2_{\rm arctan}}\) and \(\mathsf{A}_{I}\) in Theorem 4.1 must be supplemented by following formulas \[\mathsf{A}_{0} =\begin{cases}\frac{1}{2}\log\bigl{(}a_{0}^{2}+a_{123}^{2}\bigr{)},&( a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{2}\neq 0),\\ \varnothing,&(a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{2}=0)\end{cases} \tag{4.8}\] \[\mathsf{A}_{1,2_{\log}} =\begin{cases}0,&(a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{ 2}\neq 0),\\ \varnothing,&(a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{2}=0)\end{cases}\] (4.9) \[\mathsf{A}_{1,2_{\arctan}} =\begin{cases}\pi(\frac{1}{2}+2c_{1})I\frac{a_{+}-a_{-}I}{a_{-}^{ 2}+a_{+}^{2}}\bigl{(}\mathbf{a}+\mathcal{A}\bigr{)},&(a_{+}^{2}+a_{-}^{2}\neq 0 )\wedge(k_{-}k_{+}=0),\\ \frac{a_{0}-a_{123}I}{a_{0}^{2}+a_{123}^{2}}\bigl{(}\mathbf{a}+\mathcal{A} \bigr{)}+\hat{\mathcal{F}},&(a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{2 }\neq 0),\\ \varnothing,&(a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{2}=0)\end{cases}\] (4.10) \[\mathsf{A}_{I} =\begin{cases}I\bigl{(}\arctan(-a_{-},a_{+})+2\pi c_{2}\bigr{)},&(a_{+}^{2}+a_{-}^{2}\neq 0)\\ &\wedge((a_{+}+a_{0})k_{-}-(a_{+}-a_{0})k_{+}=0)\\ &\wedge((a_{-}+a_{123})k_{-}-(a_{-}-a_{123})k_{+}=0),\\ I\bigl{(}\arctan(a_{0},a_{123})+2\pi c_{2}\bigr{)},&(a_{+}^{2}+a_{-}^{2}=0) \wedge(a_{0}^{2}+a_{123}^{2}\neq 0),\\ \varnothing,&(a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{2}=0)\end{cases} \tag{4.11}\] Here the symbols \(\wedge\) and \(\vee\) in the conditions represent logical conjunction and disjunction, respectively. \(\hat{\mathcal{F}}=\begin{cases}2\pi c_{1}\hat{\mathcal{U}},&\text{if}\quad \mathbf{a}+\mathcal{A}=0\\ 0,&\text{if}\quad\mathbf{a}+\mathcal{A}\neq 0\end{cases}\), where the free unit bivector must satisfy \(\hat{\mathcal{U}}^{2}=-1\). After exponentiation it gives \(\exp(\hat{\mathcal{U}})=1\) and represents continuous degree of freedom (a direction) in (4.10) and (4.11), and can be parameterized as \[\hat{\mathcal{U}}=\begin{cases}\frac{d_{12}\mathbf{e}_{12}+d_{13}\mathbf{e}_{ 13}+d_{23}\mathbf{e}_{23}}{\sqrt{d_{12}^{2}+d_{13}^{2}+d_{23}^{2}}},&\text{for} \quad\mathit{Cl}_{3,0}\\ \frac{d_{12}\mathbf{e}_{12}+d_{13}\mathbf{e}_{13}+d_{23}\mathbf{e}_{23}}{ \sqrt{-d_{12}^{2}-d_{13}^{2}+d_{23}^{2}}},&\text{for}\quad\mathit{Cl}_{1,2}, \text{ when}\;-d_{12}^{2}-d_{13}^{2}+d_{23}^{2}>0\end{cases} \tag{4.12}\] The cases \(k_{\pm}=0\) that represent MV with a vanishing determinant, \(\operatorname{Det}(\mathsf{A})=k_{-}^{2}k_{+}^{2}=0\), yield MVs with infinite coefficients (see Example 9 for details). ### Multiveledness and free multivector In Eqs (4.4) and (4.5) we may add any multiple of \(2\pi\) to both arc tangent functions, i.e. \(\arctan(y_{1},y_{2})\to\arctan(y_{1},y_{2})+2\pi c_{i}\). After collecting terms in front of free coefficients \(c_{1},c_{2}\in\mathbb{Z}\), we obtain a free MV \(\mathsf{F}\) that satisfies \(\exp(\mathsf{F})=1\), \[\mathsf{F}= \frac{2\pi c_{1}}{\bigl{(}a_{-}^{2}+a_{+}^{2}\bigr{)}}\bigl{(}a_{-} (\mathbf{a}+\mathcal{A})+a_{+}(\mathbf{a}+\mathcal{A})I\bigr{)}+2\pi c_{2}I\,, \tag{4.13}\] where \(a_{\pm}\) are given by Eq. (4.7). **Example 7**.: _Logarithm of generic MV in \(\mathit{Cl}_{3,0}\)._ Let us take simple but representative MV: \(\mathsf{A}=-2+\mathbf{e}_{1}+\mathbf{e}_{23}-3\mathbf{e}_{123}\). From Eqs (4.6) and (4.7) we have \(k_{+}^{2}=5\), \(k_{-}^{2}=25\) and \(a_{+}=a_{-}=1\). 
Then (4.2) and (4.3) yield \(\mathsf{A}_{0}=\frac{3\log 5}{4}\) and \(\mathsf{A}_{1,2_{\log}}=-\frac{\log 5}{8}\bigl{(}\mathbf{e}_{1}+\mathbf{e}_{23} \bigr{)}(1-I)\). Next, the Eqs (4.4) and (4.5) give \(-\frac{1}{4}\big{(}-\arctan\frac{2}{11}\,+\,4\pi c_{2}\big{)}\big{(}\mathbf{e}_{1} +\mathbf{e}_{23}\big{)}(1+I)\) and \(\mathsf{A}_{I}=\Big{(}-\pi+\arctan\frac{-10-4\sqrt{5}}{-5-3\sqrt{5}}+2\pi c_{1} \Big{)}\mathbf{e}_{123}\). Finally, after summation of all terms in (4.1) we obtain \(\log(\mathsf{A})=\frac{\log 5}{4}\big{(}3-\mathbf{e}_{1}\big{)}+\frac{1}{2}\arctan \frac{2}{11}\mathbf{e}_{23}+\big{(}-\pi+\arctan(\frac{1}{2}(1+\sqrt{5}) \big{)}\mathbf{e}_{123}+\mathsf{F}\), where the free MV \(\mathsf{F}=2\pi\big{(}c_{1}\mathbf{e}_{123}-c_{2}\mathbf{e}_{23}\big{)}\). The coefficients \(c_{1},c_{2}\in\mathbb{Z}\) come from \(\mathsf{A}_{1,2_{\rm arctan}}\) and \(\mathsf{A}_{I}\) terms, respectively. Substitution of this result into exponential \(\exp(\log(\mathsf{A}))\) returns the initial MV. **Example 8.**_Logarithm of center of Cl\({}_{3,0}\)._\(\mathsf{A}=1-2\mathbf{e}_{123}\). Since \(\mathbf{e}_{123}^{2}=-1\) the MV is a counterpart of complex number logarithm. Eqs (4.6) and (4.7) give \(a_{+}=a_{-}=0\) and \(k_{+}^{2}=k_{-}^{2}=5\). Then, Eq. (4.8) gives \(\mathsf{A}_{0}=\frac{\log 5}{2}\); Eq. (4.9) gives \(\mathsf{A}_{1,2_{\rm log}}=0\); Eq. (4.10) gives \(\mathsf{A}_{1}=\big{(}-\arctan 2+2\pi c_{2}\big{)}\mathbf{e}_{123}\). Note that \(\hat{\mathcal{U}}\) is the same free MV for both \(\mathsf{A}_{1,2_{\rm arctan}}\). After summation of terms in (4.1) the final answer is \(\log(\mathsf{A})=\frac{\log 5}{2}+\big{(}-\arctan 2+2\pi c_{2}\big{)}\mathbf{e}_{123 }+2\pi c_{1}\hat{\mathcal{U}}\). On the other hand the complex number \(1-2\,\mathrm{i}\) gives \(\log(1-2\,\mathrm{i})=\frac{1}{2}\log 5-\arctan 2\), which coincides with \(\mathit{Cl}_{3,0}\) algebra result if \(c_{1}=c_{2}=0\). **Example 9.**_Logarithm of singular MV when_ Det(\(\mathsf{A})=0\). This is the most intriguing and complicated case in \(\mathit{Cl}_{3,0}\). Since Det(\(\mathsf{A}\)) \(=k_{-}^{2}k_{+}^{2}\) we may have either \(k_{-}^{2}=0\) or \(k_{+}^{2}=0\). The case when \(k_{-}^{2}=k_{+}^{2}=0\) is trivial since it requires all MV components to vanish. Let's analyze the case when \(k_{+}^{2}\neq 0\) and \(k_{-}^{2}=0\). It is represented, for example, by \(\mathsf{A}=6+(-8\mathbf{e}_{1}-2\mathbf{e}_{3})+(-\mathbf{e}_{12}+10\mathbf{ e}_{13}+10\mathbf{e}_{23})-13\mathbf{e}_{123}=6+\mathbf{a}+\mathcal{A}-13\mathbf{e}_{123}\). From Eq. (4.7) we find \(a_{+}=6\), \(a_{-}=-13\) and from (4.6) \(k_{+}^{2}=820\), \(k_{-}^{2}=0\). Then, Eq. (4.2) gives \(\mathsf{A}_{0}=\frac{1}{2}\big{(}\log(2\sqrt{205})+\log(0_{+})\big{)}\); Eq. (4.3) gives \(\mathsf{A}_{1,2_{\rm log}}=\frac{1}{410}\big{(}\log(2\sqrt{205})-\log(0_{+}) \big{)}\big{(}6+13\mathbf{e}_{123}\big{)}\big{(}\mathbf{a}+\mathcal{A}\big{)}+6 (\mathbf{a}+\mathcal{A})\big{)}\); Eq. (4.10) gives \(\mathsf{A}_{1,2_{\rm arctan}}=\frac{\pi}{205}(\frac{1}{2}+2c_{1})\big{(}-6+13 \mathbf{e}_{123}\big{)}\big{(}\mathbf{a}+\mathcal{A}\big{)}+6(\mathbf{a}+ \mathcal{A})\big{)}\); finally, Eq. (4.11) gives \(\mathsf{A}_{I}=\big{(}\arctan(\frac{6}{13})+2\pi c_{2}\big{)}\mathbf{e}_{123}\). 
Summing up all terms we obtain the answer \[\log\mathsf{A}= \frac{1}{2}\big{(}\log(2\sqrt{205})+\log(0_{+})\big{)}+\Big{(} \frac{1}{410}\big{(}\log(2\sqrt{205})-\log(0_{+})\big{)}\big{(}6+13\mathbf{e} _{123}\big{)}\] \[\quad+\frac{\pi}{205}(\frac{1}{2}+2c_{1})\big{(}-6+13\mathbf{e}_{ 123}\big{)}\Big{)}(\mathbf{a}+\mathcal{A})+\big{(}\arctan(\frac{6}{13})+2\pi c _{2}\big{)}\mathbf{e}_{123}.\] The result can be checked after replacement of \(\log(0_{+})\) by \(\log(x)\) and substitution into exponential formula (4.1) of paper [7]. After simplification one can take the limit \(\lim_{x\to 0_{+}}\exp\bigl{(}\log\mathsf{A}\big{)}\), which returns the initial MV. This example demonstrates that the logarithm of MV with specific finite coefficients may yield MV with some of coefficients in the answer being infinite and which have to be understood as the limit \(\lim_{x\to 0_{+}}\log(x)\). The answer, nevertheless, is meaningful since the substitution of the answer into exponential formula and computation of the limit reproduces the initial MV. ### Logarithms of individual blades and their combinations Below we use different norms for individual blades of \(\mathit{Cl}_{3,0}\), since a positive scalar for vectors and bivectors is calculated differently. In particular, for a vector we will use \(|\mathbf{a}|=\sqrt{\mathbf{a}\mathbf{a}}=\sqrt{a_{1}^{2}+a_{2}^{2}+a_{3}^{2}}\), whereas for a bivector \(\sqrt{a_{12}^{2}+a_{13}^{2}+a_{23}^{2}}\). For a rotor \(|a_{0}+\mathcal{A}|=|\mathsf{A}_{0,2}|=\big{(}\mathsf{A}_{0,2}\widetilde{\mathsf{ A}}_{0,2}\big{)}^{1/2}=\sqrt{a_{0}^{2}+a_{12}^{2}+a_{13}^{2}+a_{23}^{2}}\), and for an element of center \(|a_{0}+a_{123}I|=|\mathsf{A}_{0,3}|=\big{(}\mathsf{A}_{0,3}\widehat{\mathsf{ A}}_{0,3}\big{)}^{1/2}=\sqrt{a_{0}^{2}+a_{123}^{2}}\). Logarithm of _vector_: \(\mathbf{a}=a_{1}\mathbf{e}_{1}+a_{2}\mathbf{e}_{2}+a_{3}\mathbf{e}_{3}\), \[\log(\mathbf{a})= \frac{1}{2}\log(|\mathbf{a}|^{2})-\pi\big{(}\tfrac{1}{2}+2c_{2} \big{)}\frac{\mathbf{a}}{|\mathbf{a}|}I+\pi\big{(}\tfrac{1}{2}+2c_{1}\big{)}I, \qquad|\mathbf{a}|^{2}\neq 0\,.\] (4.14) Logarithm of _bivector_: \[\mathcal{A}=a_{12}\mathbf{e}_{12}+a_{13}\mathbf{e}_{13}+a_{23}\mathbf{e}_{23},\] \[\log(\mathcal{A})= \frac{1}{2}\log(|\mathcal{A}|^{2})-\pi\big{(}\tfrac{1}{2}+2c_{2} \big{)}\frac{\mathcal{A}}{|\mathcal{A}|}+\pi(1+2c_{1})I,\qquad|\mathcal{A}|^{2 }\neq 0\,.\] (4.15) Logarithm of _rotor_: \[\mathsf{A}_{0,2}=a_{0}+\mathcal{A},\] \[\log\mathsf{A}_{0,2}= \begin{cases}\frac{1}{2}\log(|\mathsf{A}_{0,2}|^{2})+\big{(} \arctan\big{(}a_{0},0\big{)}+2\pi c_{1}\big{)}I\\ \quad\quad+\frac{\mathcal{A}}{|\mathcal{A}|}\Big{(}2\pi c_{2}-\frac{1}{2} \arctan\big{(}a_{0}^{2}-|\mathcal{A}|^{2},-2a_{0}|\mathcal{A}|\big{)}\Big{)} \end{cases}, |\mathsf{A}_{0,2}|\neq 0 \tag{4.16}\] \[\log(\mathsf{A}_{0,3})=\begin{cases}\frac{1}{2}\log(|\mathsf{A}_ {0,3}|^{2})+2\pi c_{2}\hat{\mathcal{U}}+\big{(}\arctan\big{(}a_{0},a_{123} \big{)}+4\pi c_{1}\big{)}I,&|\mathsf{A}_{0,3}|^{2}\neq 0,\\ \log(0_{+})+2\pi c_{2}\hat{\mathcal{U}},&|\mathsf{A}_{0,3}|^{2}=0\,.\end{cases} \tag{4.17}\] The paravector \(\mathsf{A}_{0,1}=a_{0}+\mathbf{a}\) norm \(|a_{0}+\mathbf{a}|^{2}\equiv|\mathsf{A}_{0,1}|^{2}=\mathsf{A}_{0,1}\widehat{ \mathsf{A}}_{0,1}=a_{0}^{2}-a_{1}^{2}-a_{2}^{2}-a_{3}^{2}\), contains coefficients with opposite signs. The logarithm formula, therefore, splits into many subcases and is impractical. ## 5 MV logarithms in \(\mathit{Cl}_{2,1}\) Of all three algebras, the logarithm of \(\mathit{Cl}_{2,1}\) appeared the most hard to recover. 
The logarithms in \(\mathit{Cl}_{3,0}\) and \(\mathit{Cl}_{1,2}\) algebras exist for almost all MVs except very small specific class of vectors and bivectors, \(\mathbf{a}+\mathcal{A}\neq 0\), with the vanishing determinant \(\mathrm{Det}\,(\mathbf{a}+\mathcal{A})=0\). In \(\mathit{Cl}_{2,1}\) algebra the logarithm does not exist for a large class of MVs. In contrast, in \(\mathit{Cl}_{0,3}\) algebra the logarithm exists for all MVs. **Theorem 5.1**.: _[Logarithm of multivector in \(\mathit{Cl}_{2,1}\)] The logarithm of multivector \(\mathsf{A}=a_{0}+(a_{1}\mathbf{e}_{1}+a_{2}\mathbf{e}_{2}+a_{3}\mathbf{e}_{3 })+(a_{12}\mathbf{e}_{12}+a_{13}\mathbf{e}_{13}+a_{23}\mathbf{e}_{23})+a_{123 }I=a_{0}+\mathbf{a}+\mathcal{A}+a_{123}I\) is the MV_ \[\log(\mathsf{A})=\begin{cases}\frac{1}{2}\big{(}\mathsf{A}_{0_{+}}+\mathsf{A}_ {0_{-}}+\mathsf{A}_{1,2_{+}}+\mathsf{A}_{1,2_{-}}+(\mathsf{A}_{0_{+}}-\mathsf{ A}_{0_{-}})I\big{)},&f_{\pm}\geq 0\\ \varnothing,&f_{\pm}<0\end{cases} \tag{5.1}\] _where_ \[f_{\pm}=(a_{0}\pm a_{123})^{2}+a_{\pm}^{2}, f_{\pm}\lessneqq 0,\] \[a_{-}^{(2)}=-(\mathbf{a}\mathbf{a}+\mathcal{A}\mathcal{A})+2I \mathbf{a}\wedge\mathcal{A}, a_{-}^{(2)}\lessneqq 0, \tag{5.2}\] \[a_{+}^{(2)}=-(\mathbf{a}\mathbf{a}+\mathcal{A}\mathcal{A})-2I \mathbf{a}\wedge\mathcal{A}, a_{+}^{(2)}\lessneqq 0,\] _and_ \[\mathsf{A}_{0_{\pm}}=\begin{cases}\frac{1}{2}\log(f_{\pm}),&(a_{\pm}^{(2)}>0) \\ \frac{1}{2}\log\Bigl{(}a_{0}\pm a_{123}+\sqrt{-a_{\pm}^{(2)}}\Bigr{)}&(a_{\pm}^ {(2)}<0)\wedge(a_{0}\pm a_{123}>0)\\ +\frac{1}{2}\log\Bigl{(}a_{0}\pm a_{123}-\sqrt{-a_{\pm}^{(2)}}\Bigr{)},&\\ \log(a_{0}\pm a_{123})+2\pi c_{2\pm}\hat{\mathcal{F}},&(a_{\pm}^{(2)}=0) \wedge(a_{0}\pm a_{123}>0)\\ \log\bigl{(}-(a_{0}\pm a_{123})\bigr{)}+(\pi+2\pi c_{2\pm})\hat{\mathcal{U}}, &(a_{\pm}^{(2)}=0)\wedge(a_{0}\pm a_{123}\leq 0)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ \(\mp a_{12})\equiv\big{(}(a_{1}=a_{23})\wedge(a_{2}=-a_{13})\wedge(a_{3}=-a_{12}) \big{)}\vee\big{(}(a_{1}=-a_{23})\wedge(a_{2}=a_{13})\wedge(a_{3}=a_{12})\big{)}\) that should be applied to \(\mathsf{A}_{0_{\pm}}\) and \(\mathsf{A}_{1,2_{\pm}}\) terms without paying attention to \(\pm\) signs in their subscripts. Unit bivector in \(\mathsf{A}_{0_{\pm}}\) may be parameterized as \(\hat{\mathcal{U}}=\frac{d_{12}\mathbf{e}_{12}+d_{13}\mathbf{e}_{13}+d_{23} \mathbf{e}_{23}}{\sqrt{d_{12}^{2}-d_{13}^{2}-d_{23}^{2}}}\). The symbol \(\varnothing\) means that the solution set is empty. 
In all formulas the indices and conditions (except \(\mathfrak{D}\) as stated explicitly) must be included with either all upper or with all lower signs._ The case with \(f_{\pm}\neq 0\) and \(a_{\pm}^{(2)}>0\) represents a generic instance. When either \(f_{\pm}=0\) or \(a_{\pm}^{(2)}\leq 0\) we have the special case. Note that in Eq. (5.2) the condition \(f_{\pm}=0\) implies \(a_{\pm}^{(2)}\leq 0\). Also, observe that the condition \(f_{\pm}\geq 0\) ensures automatically that a less restrictive requirement \(\text{Det}(\mathsf{A})=f_{-}f_{+}\geq 0\) is fulfilled automatically. The equations (5.3)-(5.4) are similar to Eqs (3.8)-(3.11) in \(\mathit{Cl}_{0,3}\) (see Sec. 3.2). Also, in (5.2) the expressions for scalar coefficients \(a_{\pm}=\begin{cases}\sqrt{a_{\pm}^{(2)}},&a_{\pm}^{(2)}\geq 0\\ \sqrt{-a_{\pm}^{(2)}},&a_{\pm}^{(2)}<0\end{cases}\) are similar to Eqs (3.6) and (3.7). The differences mainly arise at the parameter boundaries that define the existence of MV logarithm for \(\mathit{Cl}_{2,1}\). From our earlier calculations [10] we know the conditions that ensure an existence of MV square roots in \(\mathit{Cl}_{2,1}\) algebra. Thus, we can rewrite and use here these conditions that limit the extent of the logarithm in Theorem 5.1. It appears that the quantities \(b_{S}\) and \(b_{I}\) in [10] may be expressed in terms of multipliers \(f_{+}\) and \(f_{-}\) in the determinant \(D=\text{Det}\,\mathsf{A}=f_{-}f_{+}\), where \(f_{\pm}=(a_{0}\pm a_{123})^{2}+a_{\pm}^{(2)}\), in a form \(b_{I}=\frac{1}{2}\big{(}f_{+}-f_{-}\big{)}\) and \(b_{S}=\frac{1}{2}\big{(}f_{+}+f_{-}\big{)}\). Now, note that \(f_{\pm}\) enter as arguments in log-functions of Theorem 5.1, Eq. (5.4). Therefore, the square root existence condition \(b_{S}-\sqrt{D}\geq 0\) in [10], in terms of the logarithm problem can be rewritten as a difference of the determinant factors, namely, \(b_{S}-\sqrt{D}\Leftrightarrow\frac{1}{2}\big{(}\sqrt{f_{-}}-\sqrt{f_{+}} \big{)}^{2}\). Now it becomes clear that this condition is always satisfied and therefore can be ignored, once we assume that the both factors satisfy \(f_{-}\geq 0\) and \(f_{+}\geq 0\). From all this we conclude that the requirement \(f_{\pm}>0\) constitutes one of the existence conditions of logarithm in Theorem 5.1. Also, \(b_{S}-\sqrt{D}=0\) is equivalent to \(f_{+}=f_{-}\). This restricts the maximal possible value of \(a_{\pm}^{(2)}\). In particular, \(|a_{\pm}^{(2)}|\leq(a_{0}\pm a_{123})^{2}\). Remember, that notation \(a_{\pm}^{(2)}\) (instead of \(a_{\pm}\)) was introduced to keep an analogy with \(\mathit{Cl}_{0,3}\) case. It may be negative \(a_{\pm}^{(2)}<0\) (see definition (5.2)) and therefore the notation, in general, can't be interpreted as a square of scalar unless \(a_{\pm}^{(2)}\geq 0\). When \(a_{\pm}^{(2)}=0\), an additional condition \(a_{0}\pm a_{123}\geq 0\) is required for logarithm to exist. Since \(\mathit{Cl}_{2,1}\) algebra is rarely used we will not provide explicit formulas for pure blades (they can be found in the notebook ElementaryFunctions.nb in [17]). Also, because generic formulas are similar to those in \(\mathit{Cl}_{0,3}\) the examples below are restricted to special cases only. **Example 10**.: _Logarithm in Cl\({}_{2,1}\) when \(a_{\pm}^{(2)}=0\) and \(a_{0}\pm a_{123}>0\)._ Let the MV be \(\mathsf{A}=7+(2\mathbf{e}_{1}+\mathbf{e}_{2}+3\mathbf{e}_{3})+(2\mathbf{e}_{12}+ 2\mathbf{e}_{13}-2\mathbf{e}_{23})+5I=7+\mathbf{a}+\mathcal{A}+5I\). From (5.2) we find \(a_{+}^{(2)}=0,f_{+}=144\) and \(a_{-}^{(2)}=0,f_{-}=144\). 
Since \(a_{0}\pm a_{123}=7\pm 5>0\) from (5.3) we have \(\mathsf{A}_{0_{-}}=\log 2\), \(\mathsf{A}_{0_{+}}=\log 12\) and from (5.4) \(\mathsf{A}_{1,2_{-}}=\frac{1}{2}(1-I)\big{(}\mathbf{a}+\mathcal{A}\big{)}\), \(\mathsf{A}_{1,2_{+}}=\frac{1}{12}(1+I)\big{(}\mathbf{a}+\mathcal{A}\big{)}\). Finally, \(\log\mathsf{A}=\frac{1}{24}\big{(}12\log(24)+24\mathbf{e}_{1}+17\mathbf{e}_{2 }+31\mathbf{e}_{3}+29\mathbf{e}_{12}+19\mathbf{e}_{13}-24\mathbf{e}_{23}+12 \log(6)I\big{)}\). Note, because MV coefficients \(a_{3}\neq\pm a_{12}\) the condition \(\mathfrak{D}\) is False, therefore the free MV in (5.3) is absent, \(\hat{\mathcal{F}}=0\). **Example 11**.: _Logarithm when \(a_{-}^{(2)}=0\), \(a_{0}-a_{123}=0\) and \(a_{+}^{(2)}>0\), \(a_{0}+a_{123}<0\)._ In \(\mathit{Cl}_{2,1}\) these properties are satisfied by \(\mathrm{MV}\,\mathsf{A}=-2+(7\mathbf{e}_{1}+4\mathbf{e}_{2}+10\mathbf{e}_{3})+ (-10\mathbf{e}_{12}-4\mathbf{e}_{13}+7\mathbf{e}_{23})-2I=-2+\mathbf{a}+ \mathcal{A}-2I\)._ From (5.2) we find \(a_{+}^{(2)}=140,f_{+}=156\) and \(a_{-}^{(2)}=0,f_{-}=0\). Then, because \(a_{0}-a_{123}=-2-(-2)=0\) and \(a_{0}+a_{123}=-2-2<0\), from (5.3) we have \(\mathsf{A}_{0_{-}}=\log(0_{+})+(\pi+2\pi c_{2_{-}})\hat{\mathcal{U}},\mathsf{ A}_{0_{+}}=\frac{1}{2}\log(156)\). From (5.4) \(\mathsf{A}_{1,2_{-}}=0,\mathsf{A}_{1,2_{+}}=\frac{2}{\sqrt{35}}\big{(}\pi+2 \pi c_{1_{+}}-\arctan(\sqrt{35}/2)\big{)}(1+I)\big{(}\mathbf{a}+\mathcal{A} \big{)}\). Then using (5.1) we obtain the final answer \(\log\mathsf{A}=\alpha_{0}+\alpha_{1}\mathbf{e}_{1}+\alpha_{2}\mathbf{e}_{2}+ \alpha_{3}\mathbf{e}_{3}+\alpha_{12}\mathbf{e}_{12}+\alpha_{13}\mathbf{e}_{13 }+\alpha_{23}\mathbf{e}_{23}+\alpha_{123}I\), where \(\beta=\arctan(\sqrt{35}/2)\) and, \[\alpha_{0}=\tfrac{1}{2}(\log 0_{+}+\log\sqrt{156}),\quad\alpha_{123}=- \tfrac{1}{4}(2\log 0_{+}-\log 156),\] \[\alpha_{1}=-\tfrac{1}{20}\big{(}5\sqrt{3}\,\pi-3\sqrt{35}\,\pi+2 \sqrt{35}\,\beta\big{)},\quad\alpha_{2}=\big{(}\tfrac{\pi}{2\sqrt{3}}+\tfrac{2 }{\sqrt{35}}\,\pi-\tfrac{2}{\sqrt{35}}\beta\big{)},\] \[\alpha_{3}=\big{(}\sqrt{\tfrac{5}{7}}\,\pi+\tfrac{5\pi}{4\sqrt{3 }}-\sqrt{\tfrac{5}{7}}\,\beta\big{)},\quad\alpha_{12}=\big{(}-\sqrt{\tfrac{5}{7 }}\,\pi+\tfrac{5\pi}{4\sqrt{3}}+\sqrt{\tfrac{5}{7}}\,\beta\big{)},\] \[\alpha_{13}=\big{(}\tfrac{\pi}{2\sqrt{3}}-\tfrac{2}{\sqrt{35}}\, \pi+\tfrac{2}{\sqrt{35}}\beta\big{)},\quad\alpha_{23}=\tfrac{1}{20}\big{(}5 \sqrt{3}\,\pi+2\sqrt{35}\,\pi-2\sqrt{35}\,\beta\big{)}.\] For simplicity the constants \(c_{i\pm}\) and \(\hat{\mathcal{U}}\) were equated to zero. One can check that after replacement of \(\log(0_{+})\) by \(\log(x)\) and substituting the final result into exponenial formula (23) in [8] and then computing the limit \(x\to 0\) we recover the initial MV. To make the verification simple when \(c_{i\pm}\) and \(\hat{\mathcal{U}}\) are included, one may choose concrete values for arbitrary free constants \(c_{i\pm}\) and arbitrary unit bivector \(\hat{\mathcal{U}}^{2}=-1\). ## 6 Roots and arbitrary powers of MV If GA logarithm is known, the powers of a MV may be computed with \(\mathsf{A}^{r}=\exp\bigl{(}r\log\mathsf{A}\bigr{)}\), i.e., by multiplying logarithm by power value \(r\),which may be either an integer or a rational number, and then computing the exponential. In the preprint [10] we provided the algorithm how to obtain all possible square roots (\(r=1/2\)) of MV for all \(n=3\) Clifford algebras. 
Here we want to show that the roots presented in [10] as numerical examples of algorithm are consistent with the above exp-log formula, thus actually we perform a cross check of 3D GA logarithm formulas by different methods. It should be stressed that the logarithm formula allows to find only a single6 square root, although there may exist, as shown in [10], many (up to 16 in case of \(\mathit{Cl}_{2,1}\) algebra) roots. Thus, the GA logarithm function is not universal enough, although it may be sometimes useful if only a single fractional root, \(r=1/n\) and \(n\in\mathbb{N}\), is needed. **Example 12**.: _Cl\({}_{3,0}\). Example 1 from [10]._ Theorem 4.1 is used to calculate the root of MV \(\mathsf{A}=\mathbf{e}_{1}-2\mathbf{e}_{12}\). In _Cl\({}_{3,0}\)_ algebra the logarithm is \(\log\mathsf{A}=\frac{\log 5}{2}-\frac{1}{2}\pi\mathbf{e}_{23}+\arctan(\frac{1}{2})I\). Then the square root \(\sqrt{\mathsf{A}}\) is \[\exp\bigl{(}\tfrac{1}{2}\log\mathsf{A}\bigr{)}= \frac{\sqrt[4]{5}}{\sqrt{2}}\bigl{(}\cos(\tfrac{1}{2}\arctan \tfrac{1}{2})\bigl{(}1-\mathbf{e}_{23}\bigr{)}+\sin(\tfrac{1}{2}\arctan \tfrac{1}{2})\bigl{(}\mathbf{e}_{1}+I\bigr{)}\bigr{)},\] which after simplification coincides with root \(A_{3}\) in Example 1 in [10]. **Example 13**.: _Cl\({}_{3,0}\). Example 2 from [10]._ Logarithm of MV \(\mathsf{A}=-1+\mathbf{e}_{3}-\mathbf{e}_{12}+\frac{1}{2}I\) in _Cl\({}_{3,0}\)_ is \[\log\mathsf{A}= \log\bigl{(}\tfrac{\sqrt{5}}{2}\bigr{)}-\tfrac{\log 5}{2} \mathbf{e}_{3}+\tfrac{1}{2}\bigl{(}\pi-\arctan\tfrac{4}{3}\bigr{)}\mathbf{e}_{ 12}+\bigl{(}-\pi+\arctan\tfrac{1}{2}\bigr{)}I,\] Multiplication by \(\frac{1}{2}\) and exponentiation gives the root \(A_{3}=\sqrt{\mathsf{A}}=\frac{1}{2}\bigl{(}\mathbf{e}_{3}+\mathbf{e}_{12} \bigr{)}-I\) which coincides with [10]. **Example 14**.: _Cl\({}_{3,0}\). Example 3 from [10]._ Similarly, the logarithm of MV \(\mathsf{A}=-1+\mathbf{e}_{123}\) in _Cl\({}_{3,0}\)_ is found to be \(\log\mathsf{A}=\frac{\log 2}{2}+\frac{3}{4}\pi I\). Multiplication by \(\frac{1}{2}\) and exponentiation gives the root \(A_{1}\) of Example 3 [10], \(\sqrt{\mathsf{A}}=2^{1/4}\bigl{(}\cos\frac{3\pi}{8}+I\sin\frac{3\pi}{8}\bigr{)} =\sqrt{-\frac{1}{2}+\frac{1}{\sqrt{2}}}+I\sqrt{\frac{1}{2}+\frac{1}{\sqrt{2}}}\). Likewise, an attempt to compute the logarithm of \(\mathsf{A}=\mathbf{e}_{1}+\mathbf{e}_{12}\) yields empty set, i.e., the logarithm and as a result the square root do not exist. **Example 15**.: _Cl\({}_{3,0}\). Quaternion. Example 4 from [10]._ In _Cl\({}_{3,0}\)_ algebra the even MV \(\mathsf{A}=1+\mathbf{e}_{12}-\mathbf{e}_{13}+\mathbf{e}_{23}\) is equivalent to Hamilton quaternion. The logarithm is \(\log\mathsf{A}=\log 2+\frac{\pi}{3\sqrt{3}}\bigl{(}\mathbf{e}_{12}-\mathbf{e}_{13}+ \mathbf{e}_{23}\bigr{)}\). Multiplication by \(\frac{1}{2}\) and exponentiation give the root \(A_{3}\) in Example 4 [10], \(\sqrt{\mathsf{A}}=\frac{1}{\sqrt{6}}(3+\mathbf{e}_{12}-\mathbf{e}_{13}+ \mathbf{e}_{23})\). **Example 16**.: _Cl\({}_{0,3}\). Example 6 from [10]._ To compute the logarithm of MV \(\mathsf{A}=\mathbf{e}_{1}-2\mathbf{e}_{23}\) in _Cl\({}_{0,3}\)_ the Theorem 3.1 was applied which gives \(\log\mathsf{A}=\frac{\log 3}{2}-\frac{\pi}{2}\mathbf{e}_{23}+\frac{\log 3}{2}I\). Multiplication by \(\frac{1}{2}\) and exponentiation gives the root \(A_{3}\) of Example 6 in [10], \(\sqrt{\mathsf{A}}=\frac{1}{2}(d_{2}+d_{1}\mathbf{e}_{1}-d_{2}\mathbf{e}_{23}+I /d_{2})\), where \(d_{1}=\sqrt{2-\sqrt{3}}\) and \(d_{2}=\sqrt{2+\sqrt{3}}\). **Example 17**.: _Cl\({}_{0,3}\). 
Example 7 from [10]._ The logarithm of MV \(\mathsf{A}=-\mathbf{e}_{3}+\mathbf{e}_{12}+4I\) in _Cl\({}_{0,3}\)_ algebra is computed by Theorem 3.1. The result is \(\log\mathsf{A}=\frac{\log 320}{4}-\frac{1}{2}\arctan(\frac{1}{2})\mathbf{e}_{3}+ \frac{1}{2}\arctan(\frac{1}{2})\mathbf{e}_{12}+\frac{\pi}{2}\bigl{(}1-I\bigr{)} \hat{\mathbf{u}}+\frac{1}{4}\log\frac{5}{4}I\). We have assumed that the discrete free constants are equal to zero and retained only free unit vector \(\hat{\mathbf{u}}\) that satisfies \(\hat{\mathbf{u}}^{2}=-1\). Multiplication by \(\frac{1}{2}\) and exponentiation then gives \(\sqrt{\mathsf{A}}=\frac{1}{2}c_{2}+\frac{1}{2}c_{1}\bigl{(}\mathbf{e}_{12}- \mathbf{e}_{3}\bigr{)}+\frac{1}{2}c_{2}I+\bigl{(}1-I\bigr{)}\hat{\mathbf{u}}\), where \(c_{1}=\sqrt{\sqrt{5}-2}\) and \(c_{2}=\sqrt{2+\sqrt{5}}\), and corresponds to \(A_{3}\) root in Example 7 in [10]. In particular, in order to obtain the numerical value corresponding to \(V_{2}=\frac{1}{2},V_{3}=0\) of [10] we have to take \(\hat{\mathbf{u}}=-\frac{1}{2}\sqrt{5-\sqrt{5}-2c_{1}}\mathbf{e}_{1}+\frac{1} {2}(-1-c_{1})\mathbf{e}_{3}\). **Example 18**.: \(\mathit{Cl}_{2,1}\)_. Example 8 from [10]. With the Theorem 5.1 one may ascertain that the logarithm of \(\mathrm{MV}\ \mathsf{A}=\mathbf{e}_{1}-2\mathbf{e}_{23}\) in \(\mathit{Cl}_{2,1}\) algebra does not exist what is in agreement with square root absence of this \(\mathrm{MV}\) in \(\mathit{Cl}_{2,1}\). On the other hand, the logarithm of \(\mathrm{MV}\ \mathsf{A}=2+\mathbf{e}_{1}+\mathbf{e}_{13}\) is \(\log\mathsf{A}=\frac{\log 2}{2}-\frac{1}{\sqrt{2}}\operatorname{artanh}\bigl{(} \frac{1}{\sqrt{2}}\bigr{)}\bigl{(}\mathbf{e}_{1}+\mathbf{e}_{13}\bigr{)}\). Multiplication by \(\frac{1}{2}\) and exponentiation gives the root \(A_{5}\) of example 8 in [10], \(\sqrt{\mathsf{A}}=\frac{1}{2}\left(\sqrt{2-\sqrt{2}}\left(\mathbf{e}_{1}+ \mathbf{e}_{13}\right)+\sqrt{2\left(2+\sqrt{2}\right)}\right)\)._ Of course, after multiplication of logarithm by any integer or rational number and subsequent exponentiation we can obtain a corresponding power of the \(\mathrm{MV}\). For example, in \(\mathit{Cl}_{3,0}\) the logarithm of \(\mathsf{A}=\mathbf{e}_{1}\) is \(\log\mathsf{A}=\frac{\pi}{2}\mathbf{e}_{1}\). Then, it is easy to check that after multiplication by \(\frac{1}{3}\) and exponentiation we obtain the cubic root \(\sqrt[3]{\mathbf{e}_{1}}=\frac{1}{2}\bigl{(}\sqrt{3}+\mathbf{e}_{1}\bigr{)}\). ## 7 Relations of the logarithm to GA inverse trigonometric and hyperbolic functions Just like trigonometric and hyperbolic functions can be expressed by exponentials (Euler and de Moivre formulas), the inverse hyperbolic functions may be defined in terms of logarithms. Therefore, in GA we can use the following definitions to compute inverse hyperbolic and trigonometric functions of \(\mathrm{MV}\) argument \(\mathsf{A}\). For hyperbolic inverse functions: \[\operatorname{artanh}\mathsf{A}= \frac{1}{2}\bigl{(}\log(1+\mathsf{A})-\log(1-\mathsf{A})\bigr{)}, \tag{7.1}\] \[\operatorname{arcoth}\mathsf{A}= \begin{cases}\frac{1}{2}\bigl{(}\log(1+\mathsf{A}^{-1})-\log(1- \mathsf{A}^{-1})\bigr{)},&\mathsf{A}\neq 0,\\ \frac{\pi}{2}I,&\mathsf{A}=0,\end{cases}\] (7.2) \[\operatorname{arcosh}\mathsf{A}= \log\bigl{(}\mathsf{A}+\sqrt{\mathsf{A}-1}\,\sqrt{\mathsf{A}+1 }\bigr{)},\] (7.3) \[\operatorname{arsinh}\mathsf{A}= \log\bigl{(}\mathsf{A}+\sqrt{\mathsf{A}^{2}+1}\bigr{)}. 
\tag{7.4}\] For inverse trigonometric functions: \[\operatorname{arcsin}\mathsf{A}= -I\log\bigl{(}\mathsf{A}I+\sqrt{1-\mathsf{A}^{2}}\bigr{)}, \tag{7.5}\] \[\operatorname{arccos}\mathsf{A}= \frac{\pi}{2}+I\log\bigl{(}\mathsf{A}I+\sqrt{1-\mathsf{A}^{2}} \bigr{)},\] (7.6) \[\operatorname{arctan}\mathsf{A}= \frac{I}{2}\bigl{(}\log(1-I\mathsf{A})-\log(1+I\mathsf{A}) \bigr{)},\] (7.7) \[\operatorname{arccot}\mathsf{A}= \begin{cases}\frac{1}{2}I\bigl{(}\log(1-I\mathsf{A}^{-1})-\log(1+ I\mathsf{A}^{-1})\bigr{)},&\mathsf{A}\neq 0,\\ \frac{\pi}{2},&\mathsf{A}=0.\end{cases} \tag{7.8}\] These formulas are similar to those in the theory of real and complex functions except that instead of the imaginary unit the pseudoscalar appears in trigonometric functions. However, earlier we have found [10] that in GA the functions with the square root, in general, are multi-valued. Thus at a first sight it may appear that the listed above equations with square root are not valid in all circumstances. Nonetheless, our preliminary numerical experiments show that they, in fact, are satisfied for all possible individual plus/minus pairs of square roots7 (see Example 8 in [8] and Example 19 below in this section). Footnote 7: This property does not allow us to write the equality sign between GA general expression \(\log\sqrt{\mathsf{B}}\) and \(\frac{1}{2}\log\mathsf{B}\). With the above formulas for hyperbolic and trigonometric functions one can construct the following identities for generic MVs:8 Footnote 8: Trigonometric functions are defined only for algebras where the pseudoscalar 1) belongs to a center of an algebra, i.e. commutes commutative with remaining elements and 2) satisfy \(I^{2}=-1\). In the considered 3D algebras only for \(\mathit{Cl}_{3,0}\) and \(\mathit{Cl}_{1,2}\). \[\sinh\mathsf{A}= \tfrac{1}{2}\bigl{(}\exp(\mathsf{A})-\exp(-\mathsf{A})\bigr{)}, \tag{7.9}\] \[\cosh\mathsf{A}= \tfrac{1}{2}\bigl{(}\exp(\mathsf{A})+\exp(-\mathsf{A})\bigr{)},\] (7.10) \[\tanh\mathsf{A}= \sinh\mathsf{A}(\cosh\mathsf{A})^{-1}=\bigl{(}\exp(\mathsf{A})- \exp(-\mathsf{A})\bigr{)}\bigl{(}\exp(\mathsf{A})+\exp(-\mathsf{A})\bigr{)}^{ -1},\] (7.11) \[\coth\mathsf{A}= \cosh\mathsf{A}(\sinh\mathsf{A})^{-1}=\bigl{(}\exp(\mathsf{A})+ \exp(-\mathsf{A})\bigr{)}\bigl{(}\exp(\mathsf{A})-\exp(-\mathsf{A})\bigr{)}^{ -1}.\] (7.12) \[\sin\mathsf{A}= \tfrac{1}{2}I^{-1}\bigl{(}\exp(I\mathsf{A})-\exp(-I\mathsf{A}) \bigr{)},\] (7.13) \[\cos\mathsf{A}= \tfrac{1}{2}\bigl{(}\exp(I\mathsf{A})+\exp(-I\mathsf{A})\bigr{)},\] (7.14) \[\tan\mathsf{A}= \sin\mathsf{A}(\cos\mathsf{A})^{-1}=-I\bigl{(}\exp(I\mathsf{A})- \exp(-I\mathsf{A})\bigr{)}\bigl{(}\exp(I\mathsf{A})+\exp(-I\mathsf{A})\bigr{)} ^{-1},\] (7.15) \[\cot\mathsf{A}= \cos\mathsf{A}(\sin\mathsf{A})^{-1}=I\bigl{(}\exp(I\mathsf{A})+ \exp(-I\mathsf{A})\bigr{)}\bigl{(}\exp(I\mathsf{A})-\exp(-I\mathsf{A})\bigr{)} ^{-1}. \tag{7.16}\] We have not investigated how the presented formulas work in case when the MV square root or logarithm allows answer that depends on non-discrete free parameters and when the inverse MVs can't be be computed. Also, we have not considered MV logarithms that allow infinite coefficients at some of basis MVs. **Example 19**.: _Inverse MV hyperbolic functions._ To save space we will restrict ourselves to numerical examples only for \(\mathit{Cl}_{3,0}\) generic MV \(\mathsf{A}=-1-5\mathbf{e}_{1}+7\mathbf{e}_{2}-9\mathbf{e}_{3}+7\mathbf{e}_{1 2}-5\mathbf{e}_{13}+9\mathbf{e}_{23}+9I\). 
Then we find the following inverse hyperbolic functions, \[\operatorname{artanh}\mathsf{A}= \phantom{-}0.0544776\phantom{-}-0.0683983\mathbf{e}_{1}-0.003417 \mathbf{9e}_{2}+0.0712752\mathbf{e}_{3}\] \[-0.0259578\mathbf{e}_{12}-0.0571283\mathbf{e}_{13}+0.0036554 \mathbf{e}_{23}+1.5447402I,\] \[\operatorname{arcoth}\mathsf{A}= \phantom{-}0.0544776\phantom{-}-0.0683983\mathbf{e}_{1}-0.003417 \mathbf{9e}_{2}-0.0712752\mathbf{e}_{3}\] \[-0.0259578\mathbf{e}_{12}-0.0571283\mathbf{e}_{13}+0.0036555 \mathbf{e}_{23}-0.0260523I,\] \[\operatorname{arcosh}\mathsf{A}= \phantom{-}3.1995844\phantom{-}+0.6349751\mathbf{e}_{1}+0.647769 \mathbf{e}_{2}+0.3396621\mathbf{e}_{3}\] \[+0.9970274\mathbf{e}_{12}+0.4603461\mathbf{e}_{13}+0.7081115 \mathbf{e}_{23}+1.0647020I.\] For identities that contain square roots \(\sqrt{\mathsf{A}\pm 1}\), for example \(\operatorname{arcosh}\mathsf{A}\) or \(\operatorname{arsinh}\mathsf{A}\), all four roots are valid. Below they have been calculated by algorithm described in [10], _Root 1 and 2_ : \[\sqrt{\mathsf{A}-1}=\pm(-2.3936546\quad-0.3144420\mathbf{e}_{1}\ -1.3708134 \mathbf{e}_{2}\ +0.3806804\mathbf{e}_{3}\] \[\qquad\qquad-1.7824116\mathbf{e}_{12}-0.1086429\mathbf{e}_{13}-1.615 4750\mathbf{e}_{23}-2.0134421I),\] _Root 3 and 4_ : \[\sqrt{\mathsf{A}-1}=\pm(-0.1660207\quad+2.4324037\mathbf{e}_{1}\ +1.1337774 \mathbf{e}_{2}\ +2.0055931\mathbf{e}_{3}\] \[\qquad\qquad+2.1654007\mathbf{e}_{12}+1.9165921\mathbf{e}_{13}+1. 0892769\mathbf{e}_{23}+1.9243691I).\] And similarly for \[\sqrt{\mathsf{A}+1}=\pm\{-2.6330243\quad-0.1908183\mathbf{e}_{1} \ -1.3218252\mathbf{e}_{2}\ +0.4871255\mathbf{e}_{3}\] \[\qquad\qquad-1.6829550\mathbf{e}_{12}-0.0102534\mathbf{e}_{13}-1. 5705147\mathbf{e}_{23}-1.9117486I\},\] \[\sqrt{\mathsf{A}+1}=\pm\{-0.2910283\quad+2.6047118\mathbf{e}_{1} \ +1.0343473\mathbf{e}_{2}\ +2.2416242\mathbf{e}_{3}\] \[\qquad\qquad+2.0981982\mathbf{e}_{12}+2.0727864\mathbf{e}_{13}+0. 9499284\mathbf{e}_{23}+1.8337753I\}.\] It is important to stress that, in general, the individual formulas (\(\arccos\mathsf{A}\), \(\arcsin\mathsf{A}\) and their hyperbolic analogues) that contain the sets of roots yield different function values for four different roots in the above listed sets. \[\arcsinh\mathsf{A}=\pm( 3.2035891\quad+0.6313828\mathbf{e}_{1}\ +0.6490577\mathbf{e}_{2}\ +0.3351515\mathbf{e}_{3}\] \[+0.9974654\mathbf{e}_{12}+0.4571790\mathbf{e}_{13}+0.7100715 \mathbf{e}_{23}+1.0647010I).\] \[\arcsinh\mathsf{A}=\pm( 0.4835482\quad+2.5061943\mathbf{e}_{1}\ -0.7303556\mathbf{e}_{2}\ +3.0588480 \mathbf{e}_{3}\] \[-0.0989201\mathbf{e}_{12}+2.1904765\mathbf{e}_{13}-1.1645414 \mathbf{e}_{23}+2.5756463I).\] **Example 20.**_Inverse trigonometric functions of MV_. 
Numerical answers for \(\mathit{Cl}_{3,0}\) generic MV \(\mathsf{A}=-1-5\mathbf{e}_{1}+7\mathbf{e}_{2}-9\mathbf{e}_{3}+7\mathbf{e}_{12} -5\mathbf{e}_{13}+9\mathbf{e}_{23}+9I\) in a form of list for roots 1-4, \[\arcsin\mathsf{A}=\{ 2.5745928\quad+0.1233316\mathbf{e}_{1}\ -2.3715122\mathbf{e}_{2}\ +1.3713947\mathbf{e}_{3}\] \[-2.8712504\mathbf{e}_{12}+0.3732007\mathbf{e}_{13}-2.8706092 \mathbf{e}_{23}-0.4882339I,\] \[2.6354984\quad+0.7081116\mathbf{e}_{1}\ -0.4603462\mathbf{e}_{2}\ +0.9970274 \mathbf{e}_{3}\] \[-0.3396621\mathbf{e}_{12}+0.6477695\mathbf{e}_{13}-0.6349751 \mathbf{e}_{23}-3.1995845I,\] \[+0.5669998\quad-0.1233316\mathbf{e}_{1}\ +2.3715122\mathbf{e}_{2}\ -1.3713947 \mathbf{e}_{3}\] \[+2.8712504\mathbf{e}_{12}-0.3732007\mathbf{e}_{13}+2.8706092 \mathbf{e}_{23}+0.4882339I,\] \[0.5060943\quad-0.7081116\mathbf{e}_{1}\ +0.4603462\mathbf{e}_{2}\ -0.9970274 \mathbf{e}_{3}\] \[+0.3396621\mathbf{e}_{12}-0.6477695\mathbf{e}_{13}+0.6349751 \mathbf{e}_{23}+3.1995845I\}.\] Since formulas for arc sine and cosine also include square roots we obtain four different values for these functions too, \[\begin{array}{ll}\arccos{\sf A}=\pm\{-1.0037965&-0.1233316{\bf e}_{1}&+2.371512 2{\bf e}_{2}&-1.3713947{\bf e}_{3}\\ &+2.8712504{\bf e}_{12}&-0.3732007{\bf e}_{13}&+2.8706092{\bf e}_{23}&+0.4882339I,\\ &-1.0647021&-0.7081116{\bf e}_{1}&+0.4603462{\bf e}_{2}&-0.9970274{\bf e}_{3}\\ &+0.3396621{\bf e}_{12}&-0.6477695{\bf e}_{13}&+0.6349751{\bf e}_{23}&+3.1995845I \}.\end{array}\] On the other hand the trigonometric tangent and cotangent have a single value since the square root here is absent, Eqs (7.7) and (7.8), \[\begin{array}{ll}\arctan{\sf A}=&1.5171201&+0.0678435{\bf e}_{1}&+0.0036019{ \bf e}_{2}&+0.0705863{\bf e}_{3}\\ &+0.0260071{\bf e}_{12}&+0.0566409{\bf e}_{13}&-0.0033708{\bf e}_{23}&+0.025916 4I,\\ \arccos{\sf A}=&0.0536762&-0.0678435{\bf e}_{1}&-0.0036019{\bf e}_{2}&-0.070586 3{\bf e}_{3}\\ &-0.0260071{\bf e}_{12}&-0.0566409{\bf e}_{13}&+0.0033708{\bf e}_{23}&-0.025916 4I.\end{array}\] ## 8 Discussion and conclusions The logarithm together with the exponential [8, 7] and square root [10] are the most important functions in Clifford geometric algebra (GA). Starting from the respective exponential functions we presented here, as far as we know, for the first time the basis-free formulas for logarithms in all 3D GAs. The formulas for both the generic and special cases may be directly applied in GA programming. They were cross-checked using the basis-free GA exponential functions found in [8]. The derived formulas were implemented in _Mathematica_ and tested with thousands of randomly generated multivectors [17]. In all cases the exponentiation of the logarithm was found to simplify to the initial MV. Using numerical experiments [17] we observed that, in accord with the suggestion in [1], the principal value of the logarithm can be defined as a GA logarithm having the smallest determinant norm. In almost all cases the principal MV logarithm is attained by setting arbitrary integer parameters \(c_{i}\) in generic logarithms (Theorems 3.1, 4.1, 5.1) to zero. Exceptions from this rule, however, may occur in the case of simple specific MVs, for which commuting MVs may exist ( Secs. 3.3 and 4.3), and therefore not restricted by free MVs (Eqs (3.13) and (4.13)). Apart from discrete parameters \(c_{i}\), we have also found that continuous parameters represented by free unit vectors \(\hat{\bf u}\) or bivectors \(\hat{\cal U}\) may be included in special cases as well. 
The parameters vanish after exponentiation of the logarithm and do not contribute to the MV norm. Recently we have found that such free parameters may also be introduced into lower-dimensional, quaternionic-type Clifford algebras [4]. However, more investigation is needed in this direction. The relation between the GA logarithm and the square root of an MV was also investigated. The known formula \(\sqrt{\mathsf{A}}=\exp\bigl(\frac{1}{2}\log(\mathsf{A})\bigr)\) served as an additional check of the correctness of the GA logarithms. Unfortunately, the formula allows one to compute only a single square root out of the many possible roots that may exist in GA [10]. Nevertheless, such a comparison was found to be very useful for testing purposes. Indeed, a test of the square root of an MV is an algebraic problem, since it reduces to the solution of a system of algebraic equations. On the other hand, the inversion of the exponential used in finding the GA logarithm in the present paper requires solving a system of transcendental equations (Appendix A), which is a much more difficult (but at the same time more general) task. The mentioned exp-log relation also allows one to check whether the MV logarithm exists at all. Indeed, since we know how to calculate the GA exponential [10] of an arbitrary MV multiplied by the factor \(\frac{1}{2}\), it follows that it is the \(\log(\mathsf{A})\) function which determines the condition for \(\sqrt{\mathsf{A}}\) to exist. As a test, we have checked using our algorithm [10] that for each MV there indeed exists a single square root that is in agreement with the identity \(\exp\bigl(\frac{1}{2}\log(\mathsf{A})\bigr)=\sqrt{\mathsf{A}}\). In conclusion, in the present paper basis-free expressions have been found for GA logarithms in all 3D real algebras. The logarithm was found to exist for all MVs in the case of the real \(\mathit{Cl}_{0,3}\) algebra. In the Clifford algebra \(\mathit{Cl}_{3,0}\) (and \(\mathit{Cl}_{1,2}\)) the logarithm exists for almost all MVs, except for a very small class of MVs which satisfies the condition \((a_{+}^{2}+a_{-}^{2}=0)\wedge(a_{0}^{2}+a_{123}^{2}=0)\). For example, the logarithm of the MV \(\mathbf{e}_{1}\pm\mathbf{e}_{12}\) cannot be computed in the Euclidean \(\mathit{Cl}_{3,0}\) algebra. On the other hand, in the \(\mathit{Cl}_{2,1}\) algebra the GA logarithm is absent in large sectors of the real coefficient space.
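For readers who wish to reproduce the kind of numerical cross-check described above, the sketch below verifies \(\exp(\log\mathsf{A})=\mathsf{A}\) and \(\exp(\frac{1}{2}\log\mathsf{A})^{2}=\mathsf{A}\) for the multivector of Example 20. It is only an illustration: instead of the basis-free formulas derived in the paper, it relies on the standard \(2\times 2\) Pauli-matrix representation of \(\mathit{Cl}_{3,0}\) and on SciPy's principal matrix logarithm, which returns a single branch.

```python
import numpy as np
from scipy.linalg import expm, logm

# Pauli matrices represent e1, e2, e3 of Cl(3,0); their products represent e12, e13, e23 and I.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def mv_to_matrix(a0, a1, a2, a3, a12, a13, a23, a123):
    """Map a Cl(3,0) multivector to its 2x2 complex matrix representative."""
    return (a0 * id2 + a1 * s1 + a2 * s2 + a3 * s3
            + a12 * s1 @ s2 + a13 * s1 @ s3 + a23 * s2 @ s3
            + a123 * s1 @ s2 @ s3)

# The generic multivector of Example 20.
A = mv_to_matrix(-1, -5, 7, -9, 7, -5, 9, 9)

logA = logm(A)                         # one branch of the matrix logarithm
print(np.allclose(expm(logA), A))      # exp(log A) == A
R = expm(0.5 * logA)                   # exp(log(A)/2) gives one of the square roots
print(np.allclose(R @ R, A))           # R * R == A
```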
2307.10741
Aggressive saliency-aware point cloud compression
The increasing demand for accurate representations of 3D scenes, combined with immersive technologies has led point clouds to extensive popularity. However, quality point clouds require a large amount of data and therefore the need for compression methods is imperative. In this paper, we present a novel, geometry-based, end-to-end compression scheme, that combines information on the geometrical features of the point cloud and the user's position, achieving remarkable results for aggressive compression schemes demanding very small bit rates. After separating visible and non-visible points, four saliency maps are calculated, utilizing the point cloud's geometry and distance from the user, the visibility information, and the user's focus point. A combination of these maps results in a final saliency map, indicating the overall significance of each point and therefore quantizing different regions with a different number of bits during the encoding process. The decoder reconstructs the point cloud making use of delta coordinates and solving a sparse linear system. Evaluation studies and comparisons with the geometry-based point cloud compression (G-PCC) algorithm by the Moving Picture Experts Group (MPEG), carried out for a variety of point clouds, demonstrate that the proposed method achieves significantly better results for small bit rates.
Eleftheria Psatha, Dimitrios Laskos, Gerasimos Arvanitis, Konstantinos Moustakas
2023-07-20T10:12:44Z
http://arxiv.org/abs/2307.10741v1
# Aggressive saliency-aware point cloud compression ###### Abstract The increasing demand for accurate representations of 3D scenes, combined with immersive technologies has led point clouds to extensive popularity. However, quality point clouds require a large amount of data and therefore the need for compression methods is imperative. In this paper, we present a novel, geometry-based, end-to-end compression scheme, that combines information on the geometrical features of the point cloud and the user's position, achieving remarkable results for aggressive compression schemes demanding very small bit rates. After separating visible and non-visible points, four saliency maps are calculated, utilizing the point cloud's geometry and distance from the user, the visibility information, and the user's focus point. A combination of these maps results in a final saliency map, indicating the overall significance of each point and therefore quantizing different regions with a different number of bits during the encoding process. The decoder reconstructs the point cloud making use of delta coordinates and solving a sparse linear system. Evaluation studies and comparisons with the geometry-based point cloud compression (G-PCC) algorithm by the Moving Picture Experts Group (MPEG), carried out for a variety of point clouds, demonstrate that the proposed method achieves significantly better results for small bit rates. Point clouds compression, Saliency-aware quantization, Multi-saliency mapping ## I Introduction In recent years, 3D point clouds (PCs) have become a very popular immersive multimedia representation of static and dynamic 3D objects and scenes. They are widely used in various fields such as 3D scanning and modeling [1], industrial applications [2], bio-medical imagery [3], surveillance systems [4], autonomous navigation [5], and Virtual/Augmented Reality (VR/AR) [6]. These types of representations can enable users to freely navigate in a fully immersive 3D environment interacting with different 3D objects. Unfortunately, such dense representations require a large amount of data, which are not feasible for transmission on today's networks [7]. PCs, in comparison with 3D polygon meshes, provide a simpler, denser and more compact form that does not require topology information, and therefore, they are suitable for storage/transmission, as well as in low-level processing such as registration [8], segmentation [9], object detection [10]. However, realistic PCs should be dense and so, they require a huge amount of memory or bandwidth for transmission. As a result, efficient compression methods for 3D PCs are mandatory in order to achieve high performance and visual accuracy in such cases. Although many 3D PC compression methods have been proposed over the years, there is still a lot of effort to be made in order to achieve sufficient compression ratios, especially in applications which focus on user interactive and realistic viewing experiences and thus demand relatively high resolution of PCs. The main idea behind this research is to compress a simplified version of the original 3D PC, that consists of different levels of resolution, based on the viewpoint and the geometric characteristics of the PC. We propose a method that highlights the most visually significant parts of the PC and compresses the position of each point based on its "extended saliency" that combines the viewer's relative position and geometric saliency. 
The main contributions and the innovative parts of our approach are summarized below: * We propose a novel end-to-end geometry compression scheme that consists of a visibility-aware simplification module, a multi saliency estimation module, a compression module and a post-processing module for PC reconstruction. * We use an extended saliency metric of visual importance based on PC's geometry features and user's position. * The visible points are compressed with a bit rate proportional to their saliency, while non-visible points are not considered during encoding and are decoded solely based on their connectivity information. * Extensive evaluation studies to examine important aspects of our approach in order to highlight its applicability for practical scenarios. Experimental evaluation, carried out using a collection of different datasets, shows that the proposed saliency-aware quantization approach achieves significantly high compression ratios, preserving at the same time geometrically meaningful perceptible areas. These special characteristics make the method ideal for using it in applications that require extremely high compression rates and good perceptual accuracy at the same time. The rest of this paper is organized as follows: Section 2 presents previous and related works in the area of PC compression. Section 3 describes in detail the proposed method. Section 4 presents the experimental results of our approach and we compare them with other State-of-the-Art (SoA) methods of the literature. Section 5 draws the conclusions, limitations of the method and future work. ## II Previous work A PC is a set of points in three-dimensional space. Apart from its positional data, each point can also be associated with extra information including colors, normals, etc. Compression of 3D PCs has received significant attention during the past few years. Many approaches have been proposed in the literature to support efficient compression of PCs, including geometry coding [11] (such as octree [12, 13], kd-tree [14], spinning tree [15], quadtree and binary-tree approaches [16] etc.), attribute coding [17] (such as Graph Fourier Transform (GFT) [18], Karhunen-Loeve transform (KLT) [19], Region-Adaptive Haar Transform (RAHT) [20, 21], structured dictionary learning [22], etc.), or a combination of both [23]. The current SoA belongs to the Moving Pictures Expert Group (MPEG). There are 2 MPEG-PCC standards that are used in different scenarios, video-based PCC (V-PCC) for dynamic PCs and geometry-based PCC (G-PCC) for static PCs. The reference software for the aforementioned standards is TMC2 and TMC13 respectively [24]. Both V-PCC and G-PCC were based on traditional models such as octree decomposition, triangulated surface model, region-adaptive hierarchical transform, and 3D-to-2D projection [25]. V-PCC projects 3D points onto 2D planes and then uses existing video codec such as H.265/High-Efficiency Video Coding standard (HEVC) to compress the 2D planes. TMC13 is an octree-based geometry compression codec that can use trisup surface approximations. It is equivalent to the combination of L-PCC (LIDAR PC compression for dynamic PCs) codec and S-PCC (Surface PC compression for static PCs) codec, previously used by MPEG. The compression is applied directly in the original 3D space, leading to acceptable quality results in both lossy and lossless intra-frame modes. Geometry and attribute information are encoded separately. 
There are 2 codecs available for the geometry analysis, an octree decomposition scheme and a trisup ("triangle soup") surface approximation scheme. For attribute encoding, there are 3 codecs available: RAHT, Predicting Transform, and Lifting Transform. More details about V-PCC and G-PCC can be found at [26, 27, 28]. Zhu et al. [29] proposed a view-dependent DPC compression method specialized for networked applications that belong to the category of 3D-to-2D dimensional reduction and HEVC-based 2D video coding. Gu et al. [30] proposed a compression scheme for the attributes of voxelized 3D PCs. This method takes into consideration some special characteristics of the 3D PC, by voxelizing the 3D PC into equal size blocks and looking for irregular structures. Sun et al. [31] proposed a lossless compression scheme based on PC clustering, taking advantage of a prediction technique, which takes into consideration the correlation of the distance information of points in order to remove spatial redundancies. Li et al. [32] take into account only the rate instead of the rate-distortion cost, trying to overcome the problematic situations where some unoccupied pixels between different patches are compressed using almost the same quality as the occupied pixels, leading to waste of lots of bits since the unoccupied pixels are useless for the reconstructed PC. Dricot and Ascenso [33] proposed a hybrid PC compression approach that combines octrees with plane surfaces. More specifically, the octree partitioning is adaptive and also includes plane coding mode for leaf nodes at different layers of the octree. Tang et al. [12] presented an approach of the octree coding algorithm that improves the stop condition so that the segmentation process stops dividing at the right depth, ensuring in this way an appropriate voxel size. Liu et al. [34] proposed a coarse-to-fine rate control algorithm for region-based 3D PC compression. First, allocating the target bitrate between the geometry and color information, and then, optimizing, in turn, the geometry and color quantization steps. Mekuria et al. [35, 36] developed an octree-based intra- and inter-coding system. While Garcia and de Queiroz [37] extended these approaches by designing a lossless intra-frame compression method applied to the PC geometry, where each octant in the octree is entropy-coded according to its father octant. Shao et al. [38] introduced a binary tree-based PC partition in order to achieve better energy compaction and compression efficiency, by using graph signal processing tools, like the graph transform with optimized Laplacian sparsity. PC visibility has gained popularity over the last two decades. Many works distinguish visible and non-visible points from a viewpoint by reconstructing the surface, although they often require dense PCs and information about the normals. Katz et al. [39] estimated the visibility directly from point sets by spherical inverting every point and by calculating the convex hull of the inverted point set. Based on this approach, only the points that are lying on the convex hull are visible. Mehra et al. [40] enhanced this method to function well with noisy PCs as well. Nevertheless, despite the very good reconstruction results that the existing compression methods provide, they focus more on the coding of the whole 3D object and less on areas clearly perceptible by the observer so as to optimize compression and observable reconstruction quality. 
Additionally, for aggressive bit rates, the quality of the reconstructed PCs in these methods is relatively low. Fig. 1: Compression pipeline of the proposed methodology. The Multi-Saliency estimation module assigns a value to each visible point of the 3D scene indicating its perceptual significance. The encoding module compresses each point with a bit rate proportional to these values. ## III Saliency-aware Compression Initially, we introduce the basic definitions and preliminaries related to 3D static PC processing. Then, we explain the proposed saliency-aware compression scheme in detail. A brief representation of the pipeline is illustrated in Figure 1. In a nutshell, the process starts by estimating the non-visible vertices of the PC based on the user's position and viewpoint. Then, the hidden vertices are simplified and four separate saliency maps are estimated, depending on: 1) Geometrical features, 2) Visibility, 3) Proximity between the user and the PC, 4) User's focus point. Based on a combination of these maps, each point of the simplified PC is associated with a value that indicates its significance. Afterward, the point-cloud delta coordinates, which are calculated using an approximate connectivity graph, are scaled, quantized and eventually entropy coded. Each point is encoded with a different number of bits depending on its extended saliency. Finally, on the decoder side, we reconstruct the PC using the decoded delta coordinates. ### _Basic Definitions_ In this work, we focus on PCs \(\mathbf{P}\) consisting of \(n\) vertices \(\mathbf{v}\). A \(\mathbf{P}\) can represent a 3D object or a scanned scene, consisting of different visible 3D objects. The \(i\)-th vertex \(\mathbf{v}_{i}\) is represented by the Cartesian coordinates, denoted \(\mathbf{v}_{i}=[x_{i},\ y_{i},\ z_{i}]^{T},\ \forall\ i=1,\cdots,n\). Thus, all the vertices can be represented by the matrix \(\mathbf{V}=[\mathbf{v}_{1},\ \mathbf{v}_{2},\ \cdots,\mathbf{v}_{n}]\in\mathbb{R}^{3 \times n}\). Each vertex \(\mathbf{v}_{i}\in\ \mathbf{V}\) is also represented by an outward unit normal \(\mathbf{n}_{i}\ i=1,\cdots,n\). Since connectivity information is not available, the computation of normals is achieved by local surface estimators. The \(k\) nearest neighbors of point \(i\) are denoted by \(\Psi_{i}^{k}\). Throughout the paper each neighboring point \(j\) can be indicated through its vertex coordinates (\(\mathbf{v}_{j}\in\Psi_{i}\)) or, for simplicity, only through its index (\(j\in\Psi_{i}\)). The position of the user (or camera) in the 3D space of the input PC is symbolized by the point \(\mathbf{e}=[e_{x}\quad e_{y}\quad e_{z}]^{T},\) while the viewing direction by the vector \(\mathbf{r}=[r_{x}\quad r_{y}\quad r_{z}]^{T}\). ### _Offline Processing_ These paragraphs refer to offline processes that run only once for each PC, since they are independent of the location of the users and the direction of view. #### Iii-B1 Estimation of \(\delta\) Coordinates The Laplacian matrix \(\mathbf{L}\in\mathbb{R}^{n\times n}\) can be defined as: \(\mathbf{L}=\mathbf{D}-\mathbf{C}\) where \(\mathbf{C}\in\mathbb{R}^{n\times n}\) is an approximate adjacency (connectivity) matrix of the PC, with elements: \[\mathbf{C}_{(i,j)}=\left\{\begin{array}{ll}1&\text{if }(i,j)\in\Psi_{i}^{k}\\ 0&\text{otherwise}\end{array}\right. \tag{1}\] and \(\mathbf{D}\) is a diagonal matrix with \(\mathbf{D}_{(i,i)}=\left|\Psi_{i}^{k}\right|\). 
To mention here that we use the k-nearest neighbors (k-NN) algorithm to approximately estimate the connectivity between the neighboring points since we do not have explicit knowledge of the underlying manifold. So, the local neighborhoods around each point act like the PC's estimated faces. For that reason, the selected number of neighboring points \(k_{n}\) is small, typically ranging from 5 to 10. The differential or \(\boldsymbol{\delta}\) coordinates of a PC are calculated as the difference between the coordinates of each vertex \(\mathbf{v}_{i}\) and the barycenter of its \(k\) nearest neighbours, according to the following equation [41]: \[\boldsymbol{\delta}_{i}=[\delta_{x},\ \delta_{y},\ \delta_{z}]^{T}=\mathbf{v}_{i} -\frac{1}{\left|\Psi_{i}^{k}\right|}\sum_{j\in\Psi_{i}^{k}}\mathbf{v}_{j},\ \ \Rightarrow\ \ \boldsymbol{\delta}=\mathbf{L}\mathbf{V} \tag{2}\] #### Iii-B2 Estimation of Point Cloud Normals Typically, the normal at a certain point is calculated as the perpendicular vector to the surface at that point. However, since the input vertices represent a set of point samples on the actual surface, we estimate the surface normals directly from the PC. More precisely, we use a rather classic method [42], which specifies the neighbors of each point within a certain scale, and then applies PCA regression to estimate a tangent plane. For more robust estimations, a larger scale is usually preferred. In our experiments, we use a neighborhood of \(k_{n}\) points for the local plane fitting. ### _Visibility-aware Simplification_ This paragraph briefly presents the visibility estimation method and we propose a visibility aware simplification process of the PC. #### Iii-B1 Projection to Screen Space Given the location \(\mathbf{e}\) of the user and the view direction \(\mathbf{r}\), each \(i\) vertex \(\mathbf{v}_{i}\) of the PC is projected into a two-dimensional pixel \(u_{i}=\left[x_{i}\quad y_{i}\right]^{T}\). To that end, we apply a series of transformations to the PC's vertices, known as the geometric pipeline. These transformations are presented in detail in [43]. All parameters, such as the near and far planes of the view frustum, have been chosen to be compatible with the PC's dimensions in order to ensure that the entire PC is within the viewing frustum. #### Iii-B2 Calculation of Visibility and Point Cloud Simplification The used visibility estimation algorithm [44] does not require dense PCs, uniform sampling or information about the normals and it achieves accurate results without reconstructing the surface. Having the screen-space projection, we calculate the depth of each point. The depth of point \(i\), notated as \(d_{i}\), is the Euclidean distance between point \(i\) and the user's position \(\mathbf{e}\). Using the k-NN algorithm, sets of \(k_{a}\) nearest points with Euclidean distance in screen-space are computed. An operator \(a\in\ [0,1]\) is calculated for each point according to: \[a_{i}=\exp\biggl{(}-\frac{(d_{i}-{d_{i}}^{min})^{2}}{({d_{i}}^{max}-{d_{i}}^{ min})^{2}}\biggr{)},\ \ \forall\ i=1,\cdots,n \tag{3}\] where \({d_{i}}^{min}\) and \({d_{i}}^{max}\) are the minimum and maximum depths in \(i\)'s neighborhood respectively. Finally, the visibility of each point is determined by setting a threshold to the value of \(a_{i}\). The threshold that we used is \(a_{threshold}=a_{mean}\). 
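As a rough illustration of the two quantities defined so far, namely the \(\boldsymbol{\delta}\) coordinates of Eq. (2) and the visibility operator of Eq. (3), the following sketch implements both using scikit-learn nearest-neighbour queries. The function names, the small numerical guard in the denominator, and the toy data are assumptions of this sketch, not part of the original method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def delta_coordinates(V, k=6):
    """Eq. (2): delta_i = v_i minus the barycenter of its k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(V)    # +1 because the query point returns itself
    _, idx = nn.kneighbors(V)
    return V - V[idx[:, 1:]].mean(axis=1)              # drop column 0 (the point itself)

def visibility_operator(U, depth, k_a=125):
    """Eq. (3): a_i from the depth spread inside each screen-space neighbourhood."""
    nn = NearestNeighbors(n_neighbors=k_a).fit(U)      # U: n x 2 projected (screen-space) points
    _, idx = nn.kneighbors(U)
    d = depth[idx]                                     # n x k_a depths per neighbourhood
    d_min, d_max = d.min(axis=1), d.max(axis=1)
    return np.exp(-(depth - d_min) ** 2 / ((d_max - d_min) ** 2 + 1e-12))  # guard added; not in the paper

# Toy usage with random points (illustrative only).
V = np.random.rand(1000, 3)
delta = delta_coordinates(V, k=6)
```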
Points having \(a_{i}\) below the value \(a_{threshold}\) are regarded as non-visible from the point of view of the observer, and thus their simplification does not make a perceptually significant difference for the user. So, only the visible part of the PC will be assigned quantization bits in the next steps. ### _Multi-saliency Estimation_ #### Iii-D1 Geometric Saliency Saliency detection, which aims to detect geometric features in 3D scenes, has been a hot topic in recent years and several feature descriptors have been created [45]. Motivated by the fact that geometric features, like high-curvature regions, corners, and edges, usually convey important visual information, we estimate a saliency map based on the geometric importance of each point. More specifically, we assume that areas with high-frequency spatial information are more perceptually significant and must be preserved, in contrast to flat areas. In our proposed geometric saliency scheme, we combine an eigenvalue-based step, which extracts saliency features by decomposing local covariance matrices defined in small regions around each point of \(\mathbf{P}_{v}\) [46], and a step that uses the so-called Darboux frame as a local curvature metric for each region. Consider that for each point \(i\) of \(\mathbf{P}_{v}\) we can create a matrix \(\mathbf{E}_{i}\in\mathbb{R}^{3\times(k_{g}+1)}\) that consists of the normals of the point and of its \(k_{g}\) nearest neighbors, which were estimated during the offline processing. This matrix is formulated as: \[\mathbf{E}_{i}=\begin{bmatrix}n_{i_{x}}&n_{i_{x_{1}}}&n_{i_{x_{2}}}&\ldots&n_{i_{x_{k_{g}}}}\\ n_{i_{y}}&n_{i_{y_{1}}}&n_{i_{y_{2}}}&\ldots&n_{i_{y_{k_{g}}}}\\ n_{i_{z}}&n_{i_{z_{1}}}&n_{i_{z_{2}}}&\ldots&n_{i_{z_{k_{g}}}}\end{bmatrix},\ \ \forall\ i=1,\cdots,n_{v} \tag{4}\] Then, the matrix \(\mathbf{E}_{i}\) is used for the estimation of the local covariance matrices \(\mathbf{R}_{i}=\mathbf{E}_{i}\mathbf{E}_{i}^{T}\in\mathbb{R}^{3\times 3}\), and next the matrix \(\mathbf{R}_{i}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\) is decomposed into the matrix \(\mathbf{U}\) of eigenvectors and the diagonal matrix \(\mathbf{\Lambda}=\text{diag}(\lambda_{i1},\lambda_{i2},\lambda_{i3})\) of eigenvalues \(\lambda_{ij},\ \forall\ j=\{1,2,3\}\). The saliency \(s_{11i}\) based on the geometry of each vertex is defined as the inverse \(l^{2}\)-norm of the corresponding eigenvalues [47]: \[s_{11i}=\frac{1}{\sqrt{\lambda_{i1}^{2}+\lambda_{i2}^{2}+\lambda_{i3}^{2}}},\ \ \forall\ i=1,\cdots,n_{v} \tag{5}\] From Eq. (5), we can observe that large values of the term \(\sqrt{\lambda_{i1}^{2}+\lambda_{i2}^{2}+\lambda_{i3}^{2}}\) correspond to small saliency values, indicating that the point lies in a flat area, while small values correspond to large saliency values, characterizing the specific point as a feature. In order to further highlight geometrical significance in neighborhoods around the most salient points previously detected, we extract features of high curvature from such neighbourhoods. To be more specific, we exploit the most salient vertices of Eq. (11), which are denoted by \(\mathbf{v}_{i}\in\mathbf{P}_{s}:\overline{s}_{11i}>s_{0}\), where \(\mathbf{P}_{s}\subseteq\mathbf{P}_{v}\). The threshold value \(s_{0}\) is set such that only the largest saliency values are preserved. To efficiently obtain more informative features, we propose the computation of the Darboux frame for local regions defined by each point \(\mathbf{v}_{i}\) of \(\mathbf{P}_{s}\) and its \(k_{g}\) closest neighbors.
The Darboux frame is a canonical moving frame that consists of three orthonormal vectors \(\mathbf{g}_{1}\), \(\mathbf{g}_{2}\), \(\mathbf{g}_{3}\) based at a point \(\mathbf{v}\), and it is considered to be a local representation of the surface. Consider that for every pair formed by a point \(\mathbf{v}_{i}\in\mathbf{P}_{s}\) and a point of its k-neighbourhood \(\Psi_{i}^{k}\), we select a source point \(\mathbf{v}_{si}\) and a target point \(\mathbf{v}_{ti}\). The source is the one having the smaller angle between its associated normal vector and the line connecting the two points [48]. The vectors of the Darboux frame constructed at point \(\mathbf{v}_{si}\) are computed as: \[\mathbf{g}_{1i}=\mathbf{n}_{si},\ \ \mathbf{g}_{2i}=\mathbf{g}_{1i}\times\frac{(\mathbf{v}_{ti}-\mathbf{v}_{si})}{\left\|\mathbf{v}_{ti}-\mathbf{v}_{si}\right\|_{2}},\ \ \mathbf{g}_{3i}=\mathbf{g}_{1i}\times\mathbf{g}_{2i} \tag{6}\] The aforementioned vectors give us additional information about every local surface region, so we create for each one of them a matrix \(\mathbf{G}_{ji}\in\mathbb{R}^{3\times(k_{g}+1)}\) that consists of the Darboux frame vectors of the point \(i\) and of its \(k_{g}\) nearest neighbours, according to: \[\mathbf{G}_{ji}=\begin{bmatrix}g_{i_{x}}&g_{i_{x_{1}}}&g_{i_{x_{2}}}&\ldots&g_{i_{x_{k_{g}}}}\\ g_{i_{y}}&g_{i_{y_{1}}}&g_{i_{y_{2}}}&\ldots&g_{i_{y_{k_{g}}}}\\ g_{i_{z}}&g_{i_{z_{1}}}&g_{i_{z_{2}}}&\ldots&g_{i_{z_{k_{g}}}}\end{bmatrix} \tag{7}\] \(\forall\ j=1,2,3\) and \(i=1,\cdots,n_{s}\). We continue by estimating in a similar way the local covariance matrices and the corresponding eigenvalues for each orthonormal vector of the Darboux frames. For each point \(\mathbf{v}_{i}\in\mathbf{P}_{s}\), we define a local curvature metric \(c_{i}\) over its region using the eigenvalues of the vectors \(\mathbf{g}_{1}\), \(\mathbf{g}_{2}\), \(\mathbf{g}_{3}\): \[c_{i}=\sum_{j=1}^{3}\frac{1}{\sqrt{\lambda_{\mathbf{g}_{ji_{1}}}^{2}+\lambda_{\mathbf{g}_{ji_{2}}}^{2}+\lambda_{\mathbf{g}_{ji_{3}}}^{2}}},\ \ \forall\ i=1,\cdots,n_{s} \tag{8}\] It has been observed that flat regions correspond to small values of \(c_{i}\), while high-curvature regions correspond to larger values. The high-curvature points are considered geometrically significant and must therefore be preserved. The saliency \(s_{12i}\) based on the aforementioned metric is defined as: \[s_{12i}=\text{max}(c_{i})-\frac{1}{1-e^{c_{i}}},\ \ \forall\ i=1,\cdots,n_{s} \tag{9}\] \(s_{12i}\) amplifies the importance of geometric features and also leads to more salient features being detected around each neighbourhood of interest that was extracted in the first step (meaning that they transition into a more salient category). The following equation gives the final saliency map that detects features based on local curvature and geometric saliency: \[s_{1i}=\begin{cases}s_{12i},&\text{if}\ s_{11i}>s_{0},\\ s_{11i},&\text{otherwise},\end{cases}\ \ \forall\ i=1,\cdots,n_{v} \tag{10}\] The final geometry saliency is normalized in the range \([0,1]\): \[\overline{s}_{1i}=\frac{s_{1i}-\text{min}(s_{1i})}{\text{max}(s_{1i})-\text{min}(s_{1i})},\ \ \forall\ i=1,\cdots,n_{v} \tag{11}\] Fig. 2 illustrates an example of a heatmap that visualizes the saliency map of the vertices based on their geometry and curvature. As we can see, high-frequency spatial areas, like sharp corners, are represented by deep red color, while vertices lying in locally flat areas are represented by deep blue color. Fig. 2: Heatmap visualization of the saliency map based on vertices geometry.
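A minimal sketch of the first, eigenvalue-based step of the geometric saliency (Eqs. (4), (5) and the normalization of Eq. (11)) is given below; the Darboux-frame refinement of Eqs. (6)-(10) is omitted, and the neighbourhood search and the precomputed normals are assumptions of the sketch rather than the exact pipeline of the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def geometric_saliency_step1(normals, V, k_g=25):
    """Eqs. (4)-(5): s_11i = 1 / ||eigenvalues of R_i||_2 with R_i = E_i E_i^T,
    followed by the [0, 1] normalization of Eq. (11)."""
    nn = NearestNeighbors(n_neighbors=k_g + 1).fit(V)
    _, idx = nn.kneighbors(V)
    s11 = np.empty(len(V))
    for i, neigh in enumerate(idx):
        E = normals[neigh].T                 # 3 x (k_g + 1): normals of the point and its neighbours
        lam = np.linalg.eigvalsh(E @ E.T)    # eigenvalues of the 3 x 3 local covariance matrix
        s11[i] = 1.0 / np.linalg.norm(lam)
    return (s11 - s11.min()) / (s11.max() - s11.min())
```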
#### Iii-B2 Visibility Saliency The visibility operator \(a\), presented in Eq. (3), is a metric defining the confidence that a point \(i\) is not occluded. More specifically, the closer \(a_{i}\) is to 1, the more certain we are that the point \(i\) is visible. We create a saliency map based on this operator, assuming that points with high \(a\) values are visually more salient. So, the saliency \(s_{2i}\) of the point \(i\in P_{v}\) that is based on the visibility operator is defined as: \[s_{2i}=a_{i},\ \ \forall\ i=1,\cdots,n_{v} \tag{12}\] The geometry of the less salient points of the visible part (where \(a\) is close to \(a_{threshold}\)) will be encoded with fewer bits. Fig. 3 presents a heatmap visualization of the visibility-based saliency map. The visible points are represented by dark red color, while the fully occluded points are represented by dark blue color. #### Iii-B3 Depth from Viewpoint Saliency Due to the restricted depth of field of the human eye, objects that are far from the viewpoint tend to appear blurred. In detail, visual acuity is reduced at the parts of the field of view with higher depth values. Motivated by this fact, we calculate a saliency map based on the depth difference from the point of view, so that the user will not notice the visual difference caused by the quality reduction. The depth \(d_{i}\) of each point \(i\) has already been calculated during the simplification step. We propose using a modified version of a depth map that was introduced in [49]. The saliency \(s_{3i}\) of the point \(\mathbf{v}_{i}\in\mathbf{P}_{v}\) is derived by normalising and transforming the depth values as: \[s_{3i}=1-\frac{d_{i}-z_{near}}{z_{far}-z_{near}},\ \forall\ i=1,\cdots,n_{v} \tag{13}\] where \(z_{near}\) and \(z_{far}\) are the distances of the near and far clip planes of the viewing frustum. This transformation is applied so that the most salient points are those closest to the point of view. In Fig. 4, we present a heatmap visualization of the depth-based saliency map. The points closer to the user's position are presented in dark red color, while the points far from the user are presented in dark blue color. #### Iii-B4 Focus Point Saliency The point that the user's eyes are looking at is called the "focus" point and can be derived from the user's position and viewing direction. It is known that humans perceive more details close to the focus point than at its periphery. In this section, we estimate a saliency map by incorporating a peripheral blur effect that progressively reduces the quality of the scene at points located at a certain distance from the focus point. This effect is based on [50] and is independent of the depth of field. The saliency \(s_{4i}\) of the point \(\mathbf{v}_{i}\in\mathbf{P}_{v}\) is defined as: \[s_{4i}=1-\sqrt{\frac{1}{\mathbf{r}\cdot p_{i}}}^{m},\ \forall\ i=1,\cdots,n_{v} \tag{14}\] where \(\mathbf{r}\) is the user's viewing direction and \(p_{i}\) is the normalised vector from point \(\mathbf{v}_{i}\) towards the user's position. The power \(m\) defines the size of the area around the focus point where the quality is almost unaffected. For our experiments we use \(m=1\).
Finally, \(s_{4i}\) is normalized in the range \([0,1]\) according to: \[\overline{s}_{4i}=\frac{s_{4i}-\text{min}(s_{4i})}{\text{max}(s_{4i})-\text{min}(s_{4i})},\ \ \forall\ i=1,\cdots,n_{v} \tag{15}\] The most salient points, which will be preserved with more geometrical accuracy, are the ones with the smallest distance from the user's focus point. In Fig. 5, we present a heatmap visualization of the focus-point-based saliency map. The focus point is located at the center of the red circles. Fig. 3: Heatmap visualization of the saliency map based on vertices visibility. Fig. 4: Heatmap visualization of the depth saliency map. Fig. 5: Heatmap visualization of the saliency map based on the users' focus. #### Iii-B5 Extended Saliency Metric The aforementioned saliency maps, each based on a different strategy, indicate which points will be encoded with more or less geometric precision. An extended saliency metric that combines the above maps is calculated as the sum of two functions: \[s_{i}=f(\overline{s_{1i}},s_{2i})+g(s_{3i},\overline{s_{4i}}),\ \ \forall\ i=1,\cdots,n_{v} \tag{16}\] where \(f\) is a function that calculates the visibility and geometry contribution to the final saliency map and the function \(g\) calculates the depth and focus contribution. Visibility and geometry are objective measures of saliency, and their contribution to the final map should therefore be greater than that of focus and depth. The last two are subjective, as the amount of visual acuity loss varies from user to user in both situations. Choosing the aforementioned functions carefully is necessary, because too much emphasis on the focus and depth maps could lead to perceptually significant quality reductions. The final saliency map is an indicator of the overall visual significance of each point in \(\mathbf{P}_{v}\). The points with higher saliency values are preserved without any loss of precision, as they are considered visually important, while the detail of the points with lower saliency is decreased without the user perceiving the visual difference. In our experiments, we chose a linear combination of the individual saliency maps. To be more precise, the extended saliency metric is calculated as the weighted average of those maps: \[s_{i}=\frac{w_{1}\overline{s}_{1i}+w_{2}s_{2i}+w_{3}s_{3i}+w_{4}\overline{s}_{4i}}{w_{1}+w_{2}+w_{3}+w_{4}},\ \ \forall\ i=1,\cdots,n_{v} \tag{17}\] Each saliency has been normalized within the range \([0,1]\), except for the visibility-based saliency, which was already within that range. The corresponding weights \(w_{j},\ \forall\ j=1,\cdots,4\), can be tuned to emphasize one approach or the other. Further experiments will be conducted in order to study the contribution of each saliency to the quality of the decompressed PC and to optimally estimate these weights. In Fig. 6, we present a heatmap visualization of the final saliency map; in this particular visualization, each individual map contributes equally to the final saliency. The visually important points are represented with deep red color, while the visually unimportant points are represented by dark blue color. ### _Saliency-aware Point Cloud Encoding_ In the proposed saliency-aware PC compression scheme, we are able to progressively encode geometry information, allowing the generation of a sequence of levels of detail. The main idea is that we allocate a different number of bits for the encoding according to the overall extended saliency of each point in \(\mathbf{P}_{v}\).
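To make the bit-allocation idea concrete, the snippet below combines the four maps as in Eq. (17) and applies the saliency-proportional rounding of the \(\boldsymbol{\delta}\) coordinates that is formalized in Eqs. (18)-(19) below. The default weights and the value of \(s_{thresh}\) shown here are illustrative placeholders, not prescribed values.

```python
import numpy as np

def extended_saliency(s1, s2, s3, s4, w=(1.0, 1.0, 0.1, 0.1)):
    """Eq. (17): weighted average of the four per-point saliency maps (all in [0, 1])."""
    w1, w2, w3, w4 = w
    return (w1 * s1 + w2 * s2 + w3 * s3 + w4 * s4) / (w1 + w2 + w3 + w4)

def quantize_deltas(delta, saliency, visible, s_thresh=0.1):
    """Saliency-proportional rounding of the delta coordinates (Eqs. (18)-(19) below);
    non-visible points are simply zeroed out and receive no bits."""
    q = np.round((s_thresh * saliency)[:, None] * delta)
    q[~visible] = 0.0
    return q
```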
The non-visible points are encoded with zero bits which means that none of their geometrical accuracy is preserved. We choose to apply our compression scheme to the model's \(\boldsymbol{\delta}\) coordinates because it is proven that their quantization yields a small visual error, in contrast to the standard Cartesian coordinate quantization [51]. Based on the previously calculated \(\delta\) coordinates, we set to zero the coordinates of the non visible part and we generate the matrix \(\boldsymbol{\delta}=[\delta_{1},\ldots,\delta_{n}]\): \[\boldsymbol{\delta}_{i}=\left\{\begin{array}{ll}0,&\text{if $\mathbf{v}_{i}$ is non visible}\\ \delta_{i},&\text{otherwise}\end{array}\right.,\;\;\forall\;i=1,\cdots,n \tag{18}\] For the quantization of the above matrix we use a rather simple but effective quantization function. To be more precise, we downscale the \(\delta\)-coordinates by multiplying each \(\delta_{i}\) with a scaling factor that is based on the overall saliency value of each point and then we round the result to the closest integer coordinates, according to: \[\boldsymbol{\tilde{\delta}}_{i}=\left\{\begin{array}{ll}0,&\text{if $ \mathbf{v}_{i}$ is non visible}\\ \text{round}(s_{thresh}s_{i}\delta_{i}),&\text{otherwise}\end{array}\right. \tag{19}\] , \(\forall\;i=1,\cdots,n\) and \(s_{thresh}>0\). The scaling threshold, which is multiplied with each \(\boldsymbol{\delta}_{i}\), is a constant whose value determines the number of bits used to encode each point. Greater values of \(s_{thresh}\) allow higher percentages of geometric accuracy to be preserved, while very small values lead to zero \(\boldsymbol{\delta}_{i}\) coordinates after rounding. For a given \(s_{thresh}\), by multiplying each point with its overall saliency value we allocate more bits for the encoding of the visually important points. In order to reduce the magnitude of the transformed quantization errors, we quantize a set of known anchor points along with the input PC. The anchor points are uniformly distributed on the model surface and denoted by \(\mathbf{v}_{c}=\mathcal{Q}([\mathbf{v}_{i_{1}},\cdots,\mathbf{v}_{i_{k_{c}}}])\) where \(i_{k_{c}}\) is the vertex index and \(k_{c}\) corresponds to the \(1\%\) of the total number of vertices \(n\). After quantization, the matrix \(\widetilde{\boldsymbol{\delta}}=\left[\widetilde{\delta}_{1},\ldots,\widetilde {\delta}_{n+k_{c}}\right]\) is compressed using an entropy encoder. For our experiments, we used an arithmetic encoder. The compression ratio of the anchor encoding is insignificant since they correspond to a very small percentage of the PC. We also compress each additional matrix that is needed on the decoder side for the reconstruction of the PC. The saliency values \(s=[s_{thresh}s_{1},\ldots,s_{thresh}s_{n}]\;\forall\;i=1,\cdots,n\) are arithmetically encoded and the Laplacian matrix is encoded using an efficient connectivity compression method [52]. ``` Input : Unorganized 3D point cloud \(\mathbf{P}\in\mathbb{R}^{n\times 3}\); Output : Reconstructed 3D point cloud \(\mathbf{\bar{P}}\in\mathbb{R}^{n\times 3}\); 1 Find the \(n_{v}\) visible vertices based on \(a\) via Eq. (3); 2for\(i=1,\cdots,n_{v}\)do 3 Estimation of \(s_{1i}\) (geometry-based) via Eqs. (4)-(11); 4 Estimation of \(s_{2i}\) (visibility-based), via Eq. (12); 5 Estimation of \(s_{3i}\) (depth-based), via Eq. (13); 6 Estimation of \(s_{4i}\) (user's focus-based), via Eq. (14); 7 8 end for 9 Estimate an extended saliency metric via Eq. (17); 10 Estimate the delta coordinates via Eq. 
(2); 11 Compression assigning a different number of bits per vertex based on saliency, via Eq. (19); 12 Reconstruction by solving the linear system of Eq. (20); ``` **Algorithm 1** Saliency-aware compression of PCs ### _Decoding and Reconstruction_ The decoder decompresses the geometry, connectivity and saliency data, and the \(\delta\) coordinates are scaled back to their original size by dividing each one by its scaling factor: \(\widetilde{\boldsymbol{\delta}}_{ri}=\frac{\widetilde{\boldsymbol{\delta}}_{i}}{s_{thresh}s_{i}},\ \ \forall\ i=1,\cdots,n\). Finally, reconstruction of the 3D PC is performed by solving the following sparse linear system: \[\left[\begin{array}{c}\mathbf{L}\\ \mathbf{I}_{k_{c}}\end{array}\right]\mathbf{v}=\left[\begin{array}{c}\widetilde{\boldsymbol{\delta}}_{r}\\ \mathbf{v}_{c}\end{array}\right] \tag{20}\] where \(\mathbf{I}_{k_{c}}\in\mathbb{R}^{k_{c}\times n}\) is a sparse matrix with ones at the \(i_{k_{c}}\) indices where the vertices \(\mathbf{v}_{c}\) lie and zeros anywhere else, so that \(\mathbf{v}_{c}=\mathbf{I}_{k_{c}}\mathbf{v}\) (anchor points). Since the non-visible points were assigned zero bits during encoding, their reconstruction is solely based on the PC's Laplacian matrix. Algorithm 1 summarizes the most important steps of our approach. Fig. 6: Heatmap visualization of the final saliency map. ## IV Experimental Results and Analysis In this section, we evaluate the quality of our proposed compression scheme for static PCs and compare its performance with MPEG's G-PCC, a SoA compression standard, as stated above. ### _Datasets_ We used multiple static PCs of different structures, complexities and properties in order to assess the quality of our proposed compression scheme. We used some frames from the dynamically acquired PCs of the 8i Voxelized Full Bodies dataset [53], which have smooth and complete surfaces. These PCs were also chosen by MPEG for the current PCC standardization efforts, according to [54]. We also tested VCL/ITI's datasets ([https://vcl.iti.gr/dataset/reconstruction](https://vcl.iti.gr/dataset/reconstruction)) of multiple Kinect-based 3D reconstructed meshes based on [55]. We only used the 3D coordinates of this dataset and ignored the connectivity information that was also included. These models are affected by noise and contain many holes and irregularities. We also chose 2 inanimate objects with a sparse but more precise voxel distribution from the MPEG database. For testing purposes, we obtained these models by combining the non-overlapping patches of each object that have been generated and made available at [56]. Finally, we constructed static scenes consisting of multiple models by using several PCs from the dataset of [57], which is publicly available online ([https://mmspg.epfl.ch/reconstructed-point-clouds-results](https://mmspg.epfl.ch/reconstructed-point-clouds-results)). The aforementioned PCs of our experimental test are shown in Fig. 7, and detailed information about each one of them is presented in Table I. Most of these PCs were already in a voxelized form, so we voxelized the remaining ones in order to be consistent with the required input form of G-PCC. This is necessary since G-PCC expects geometry to be expressed with integer precision, so all content must be quantized or voxelized before encoding. ### _Evaluation Metrics_ This section presents the SoA PSNR metrics used for the geometry quality evaluation, as defined by MPEG [58]. The original PC \(V_{or}=\{(v_{i}):i=0,..,K-1\}\) is a set of \(K\) points without any particular order.
The decoded PC \(V_{deg}=\{(v_{i}):i=0,..,N-1\) consists of N points, typically \(N<K\). The root mean square of the distances between the points of the two PCs (point-to-point metric [59]) \(d_{rms}\) is defined by the following equation where \(v_{deg}^{nn}\) is the nearest point of \(V_{deg}\) to the points of \(V_{or}\). \[d_{rms}(V_{or},V_{deg})=\sqrt{\frac{1}{K}\sum_{v_{l}\in V_{or}}[v_{l}-v_{deg }^{nn}]^{2}} \tag{21}\] In a similar way the point-to-plane metric is defined by calculating the projection between each point's error vector \(E\) along the normal vector \(N\) of their underlying surface. \[d_{p2plane}(V_{or},V_{deg})=\sqrt{\frac{1}{K}\sum_{v_{l}\in V_{or}}(E(V_{l},V _{or})\cdot N_{or})^{2}} \tag{22}\] For both of the metrics, we calculate their symmetric distances: \[d_{rms}^{sym}(V_{or},V_{deg})=max(d_{rms}(V_{or},V_{deg}),d_{rms}(V_{deg},V_{ or})) \tag{23}\] \[d_{p2plane}^{sym}(V_{or},V_{deg})=max(d_{p2plane}(V_{or},V_{deg}),d_{p2plane }(V_{deg},V_{or})) \tag{24}\] The geometry PSNR ratio can be computed both for the point-to-point and point-to-plane metrics [59] using \(d_{rms}^{sym}\) or \(d_{p2plane}^{sym}\) respectively. \[bandwidth=max((x_{max}-x_{min}),(y_{max}-y_{min}),(z_{max}-z_{ min})) \tag{25}\] \[psnr_{geom}=10\log_{10}\frac{\|bandwidth_{V_{deg}}\|_{2}^{2}}{(d _{rms/p2plane}^{sym}(V_{or},V_{deg}))^{2}} \tag{26}\] Metric (26) is referred to as D1 and D2 when calculated with point-to-point distance (\(d_{rms}^{sym}\)) and point-to-plane distance (\(d_{p2plane}^{sym}\)) respectively. Bjontegaard model [60, 61] was also used to calculate the Bjontegaard delta PSNR (BD-PSNR) which corresponds to the average PSNR difference for the same bit rate. ### _Parameter Adjustment_ In this paragraph, we will present and justify the selection of parameter values that are fixed through the steps of the proposed methodology in order to provide reproducible results. For the local surface fitting that is conducted for both the estimations of the PC's connectivity and normals, \(k_{n}\) is a small number that depends on the density of the PC. For very small values of \(k_{n}\), the estimated surface is affected by data noise and for large values, the k-neighbors become less localized and the resulting surface is not precise. To find the optimal number of \(k_{n}\) for each model, we performed a grid search in the range \([3-15]\) using the aforementioned D1 and D2 metrics for the evaluation of the reconstructed PC. The optimal value is \(k_{n}=6\) for the entire dataset except for the Egyptian mask and the Klimt statue, which are sparser, and the best reconstruction is achieved by \(k_{n}=5\). Although users can be placed anywhere in the 3D scene of points \(v_{i}=[x_{i},\ y_{i},\ z_{i}]^{T}\ \forall\ i=1,\cdots,n\), we have chosen their position and viewing direction so that the whole scene is within the field of view. To be more specific, for each PC centered at (0,0,0) the user was located at \(e=[0\ \ \ 0\ \ \ 2maxz]^{T}\) and the viewing direction was \(r=[0\ \ \ \ 0\ \ -2maxz]^{T}\). The \(k_{a}\) neighboring points in screen space used for visibility estimation in Eq. 
(3) and the \(k_{g}\) neighboring vertices, lying in a patch area, that are used for the estimation of the matrices \(\mathbf{E}\) and \(\mathbf{G}\) in Eqs. (4), (7) during geometry saliency estimation, are 2 parameters that play an important role in the quality of the final reconstructed model. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Point Cloud** & **Voxel Depth** & **Frame** & **Vertices** \\ \hline \hline Long dress & 10 & 1300 & 857,966 \\ \hline Red and black & 10 & 1450 & 729,133 \\ \hline Loot & 10 & 1200 & 805,285 \\ \hline Soldier & 10 & 690 & 1,089,091 \\ \hline Egyptian Mask & 12 & - & 274,432 \\ \hline Statue Klimt & 12 & - & 499,712 \\ \hline Skining3-Zippering & 9 & 111 & 174,144 \\ \hline S17\_5KW\_Xenia-Zippering & 9 & 1580 & 81,028 \\ \hline Static scene & 10 & - & 94,698 \\ \hline \end{tabular} \end{table} TABLE I: Datasets information. Fig. 7: PCs used in experiments. However, the optimal value for each parameter can vary from model to model, as the models have different properties, structure, and density. To avoid exhaustively searching for optimal values for each model separately, we suggest using fixed values that provide good reconstruction results in most cases. For our experiments we have set \(k_{a}=125\) and \(k_{g}=25\). The saliency threshold in Eq. (10) was set to \(s_{0}=2\,\text{mean}(s_{11i})\), since curvature estimation is restricted to the most significant features of the first step. Anchor points are necessary because they minimize quantization errors during the reconstruction of the 3D PC, but their number should be insignificant in comparison to the size of the PC in order not to affect the compression ratio. For that purpose, we set \(k_{c}=0.01n\), which means that the anchor points correspond to 1% of the total number of vertices. We also investigated the individual impact of each map on the final saliency, in order to come up with the best strategy for defining the aforementioned weights of Eq. (17). For that reason, each weight \(w_{i},\ \forall\ i=1,\cdots,4\), was varied in the range \([0,1]\), while the remaining weights were given a fixed value, i.e., \(0.5\). We applied our compression scheme to the "static scene" for each group of weights and assessed the results with the D1 and D2 metrics. The \(s_{thresh}\) was changed in order to achieve the same bits per point (bpp) in each run, while the remaining parameters of our pipeline had their default values. In Fig. 8, we present the contribution of each weight to the reconstruction quality when the "static scene" is compressed with \(0.5\) bpp. As we expected, visibility and geometry saliency increase the overall reconstruction accuracy when multiplied with weights that are equal to 1. On the other hand, due to their subjective nature, focus and depth saliency contribute negatively to the extended saliency metric, as the quality is reduced in large parts of the scene. Their presence on the final map is nevertheless perceptually important, since the quality is better distributed over the scene for small bit rates. For that reason, we assign them small weight values that still increase both metrics, such as \(w_{3}=w_{4}=0.1\). Table II summarizes the default values that we used, with a short description of each. ### _Quality Evaluation and Experimental Results_ #### Iv-D1 Objective Comparison We compared our proposed PC codec with G-PCC using the latest test model category 13 (TMC13 v12.0 by MPEG), suitable for both static (category 1) and dynamically acquired (category 3) PCs.
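For reference, the symmetric point-to-point (D1) geometry PSNR used to score these comparisons (Eqs. (21), (23), (25) and (26)) can be sketched as follows; this follows the paper's equations only loosely, uses a KD-tree for the nearest-neighbour search, and is not the official MPEG evaluation tool.

```python
import numpy as np
from scipy.spatial import cKDTree

def d_rms(A, B):
    """Eq. (21): RMS of nearest-neighbour distances from the points of A to the cloud B."""
    dist, _ = cKDTree(B).query(A)
    return np.sqrt(np.mean(dist ** 2))

def d1_psnr(V_or, V_deg):
    """Eqs. (23), (25), (26): symmetric point-to-point geometry PSNR."""
    d_sym = max(d_rms(V_or, V_deg), d_rms(V_deg, V_or))
    bandwidth = np.max(V_deg.max(axis=0) - V_deg.min(axis=0))   # Eq. (25): largest bounding-box extent
    return 10.0 * np.log10(bandwidth ** 2 / d_sym ** 2)
```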
The TMC13 codec contains 2 geometry encoders (Octree and Trisoup) and 2 color encoders (Predlift and RAHT). Since our method encodes only geometry information, we used the two geometry encoders. Although both compression frameworks reconstruct the whole PC scene, for the comparisons we focused only on the visible points. In order to compare our approach with G-PCC octree and G-PCC trisoup, we adjusted each codec's parameters accordingly so as to compress the aforementioned PCs over the same bit-rate range. For our method, we set \(s_{thresh}\) of Eq. (19) from 0.3 to 0.001, leaving the parameters of Table II at their default values. The values we adopted for G-PCC's coding parameters are those defined by MPEG in the so-called Common Test Conditions (CTC) document [54]. For some parameters we used different values in order to enforce the desired bit rates. To be more specific, for both G-PCC geometry encoders, the _positionQuantizationScale_ parameter was configured to determine the maximum voxel depth of the compressed PC. The _trisoup_node_size_log2_ was additionally modified to define the size of the block to which the triangle soup approximation is applied. So, for G-PCC octree we set _positionQuantizationScale_ from 0.8 to 0.01, and for G-PCC trisoup we set _trisoup_node_size_log2_ to 2, 3, 4, and _positionQuantizationScale_ to 1 for denser PCs, such as the VCL/ITI dataset, and to small values for sparser models, e.g., 0.1 for static scene 1 and 0.08 for Statue Klimt. This downscaling was also applied in [25] for models that are typically sparser but of higher precision. Fig. 8: Weight contribution on the final saliency map. Fig. 9: Geometric point-to-point (D1) and point-to-plane (D2) PSNR comparisons. For our method we set \(s_{thresh}\) in \([0.001,0.3]\), for G-PCC octree _positionQuantizationScale_ in \([0.01,0.8]\), and for G-PCC trisoup _trisoup_node_size_log2_ \(=4,3,2\). Due to the fact that each model has different properties and structure, the models require different ranges of the aforementioned parameters (\(s_{thresh}\), _positionQuantizationScale_, _trisoup_node_size_log2_) in order to achieve a range of \(0.02-1.6\) bits per point (bpp). The adopted objective quality criteria for the rate-distortion performance assessment are those used by MPEG, namely the aforementioned geometric point-to-point (D1) and point-to-plane (D2) PSNR metrics. Rate-distortion (RD) curves are presented in Figs. 9 and 11. Table III presents the BD-PSNR against octree G-PCC for bit rates below 1. Both the figures and the table indicate that the proposed method achieves significantly better quality for small bit rates. For bit rates less than one, the performance of our algorithm is on average better by \(10.1756\) dB and \(7.1748\) dB for the D1 and D2 metrics respectively, compared to octree G-PCC. Notably, for even smaller bit rates the results are even more favorable. Since the quality differences change only minimally for bigger bit rates, our method is ideal for aggressive compression ratios. ## V Conclusion In this work we presented a novel, geometry-based, end-to-end compression scheme for static PCs. Our proposed method highlights the most visually significant parts of the PC and compresses the position of each point based on an extended saliency metric that combines the viewer's relative position and geometric saliency. The quality reduction in perceptually insignificant parts of the scene adds a realistic sensation and is not noticeable by the user, even for aggressive compression rates.
Extensive assessment tests, performed with a dataset of PCs with different characteristics, verify the superiority of our approach for aggressive bit rates, as compared to two benchmarks of MPEG, namely G-PCC octree and G-PCC trisoup. Rate distortion curves and BD-PSNR table prove that our method is better for bit rates less than one. Qualitative tests have shown that for our method, the quality of the reconstructed PCs remains almost unaffected even for very small bit rates. The majority of geometric detail is lost for the baseline benchmark methods for small rates. So, we could extend our studies by integrating the saliency-aware encoding scheme to user-interactive rendering applications in order to increase compression efficiency in such scenes. There is one major limitation of the proposed framework that could be addressed in future research. The reconstruction of the PC, based on solving a sparse linear system on the decoder side, leads to high execution times compared to SoA compression codecs. It is essential to focus future efforts on reducing execution time at the decoder using efficient schemes and parallelizable implementations in order to allow real-time performance on commodity hardware.
2302.03685
Photometric binaries, mass functions, and structural parameters of 78 Galactic open clusters
Binary stars play a crucial role in our understanding of the formation and evolution of star clusters and their stellar populations. We use Gaia Data Release 3 to homogeneously analyze 78 Galactic open clusters and the unresolved binary systems they host, each composed of two main sequence (MS) stars. We first investigated the structural parameters of these clusters, such as the core radius and the central density, and determined the cluster mass function (MF) and total mass by interpolating the density profile of each cluster. We measured the fraction of binaries with a large mass ratio and the fraction of blue straggler stars (BSSs), and finally investigated possible connections between the populations of binary stars and BSSs with the main parameters of the host cluster. {Remarkably, we find that the MFs of 78 analyzed open clusters follow a similar trend and are well reproduced by two single power-law functions, with a change in slope around masses of 1$M_{\odot}$. The fraction of binary stars ranges from $\sim$15\% to more than $\sim$60\% without significant correlation with the mass and the age of the host cluster. Moreover, we detect hints of a correlation between the total fraction of binary stars and the central density of the host cluster. We compared the fraction of binary stars with that of BSSs, finding that clusters with high and low central density exhibit different trends. The fraction of binaries does not significantly change with the mass of the primary star and the mass ratio. The radial distribution of binary stars depends on cluster age. The binaries of clusters younger than $\sim$800\,Myr typically show a flat radial distribution, with some hints of a double peak. In contrast, the binaries of the remaining clusters are more centrally concentrated than the single stars, which is similar to what is observed in globular clusters.
Giacomo Cordoni, Antonino P. Milone, Anna F. Marino, Enrico Vesperini, Emanuele Dondoglio, Maria Vittoria Legnardi, Anjana Mohandasan, Marilia Carlos, Edoardo P. Lagioia, Sohee Jang, Tuila Ziliotto
2023-02-07T18:59:25Z
http://arxiv.org/abs/2302.03685v2
# Photometric binaries, mass functions, and structural parameters of 78 Galactic open clusters+ ###### Abstract Context:Binary stars play a crucial role in our understanding of the formation and evolution of star clusters and their stellar populations Aims:We use Gaia Data Release 3 to homogeneously analyze 78 Galactic open clusters and the unresolved binary systems they host, each composed of two main sequence (MS) stars. Methods:We first investigated the structural parameters of these clusters, such as the core radius and the central density, and determined the cluster mass function (MF) and total mass by interpolating the density profile of each cluster. We measured the fraction of binaries with a large mass ratio and the fraction of blue straggler stars (BSSs), and finally investigated possible connections between the populations of binary stars and BSSs with the main parameters of the host cluster. Results:Remarkably, we find that the MFs of 78 analyzed open clusters follow a similar trend and are well reproduced by two single power-law functions, with a change in slope around masses of \(1M_{\odot}\). The fraction of binary stars ranges from \(\sim\)15% to more than \(\sim\)60% without significant correlation with the mass and the age of the host cluster. Moreover, we detect hints of a correlation between the total fraction of binary stars and the central density of the host cluster. We compared the fraction of binary stars with that of BSSs, finding that clusters with high and low central density exhibit different trends. The fraction of binaries does not significantly change with the mass of the primary star and the mass ratio. The radial distribution of binary stars depends on cluster age. The binaries of clusters younger than \(\sim\)800 Myr typically show a flat radial distribution, with some hints of a double peak. In contrast, the binaries of the remaining clusters are more centrally concentrated than the single stars, which is similar to what is observed in globular clusters. Conclusions: ## 1 Introduction Characterization of binary stellar systems in star clusters is a crucial step in shedding light on various fields of stellar astrophysics, including the dynamic evolution of stellar systems, star formation, and stellar evolution. For example, a robust determination of the physical properties of a star cluster, including the mass function (MF) and the total mass, would require significant knowledge of its populations of binary stars. Furthermore, the stellar evolution of a binary system can strongly differ from that of single stars, and depends on the physical properties of the system, including the mass ratio, binding energy, and orbital period. Various approaches can be used to identify and characterize binary stars in stellar clusters. For example, binaries can be detected from radial-velocity variation or from photometric variability. While these methods have the advantage of constraining each binary system, they are either limited to bright stars or biased toward binaries with short periods. In this work, we follow an alternative approach based on the fact that binary stars formed by couples of main sequence (MS) stars exhibit redder colors with respect to single MS stars. Hence, binary stars can be identified as stars lying on the red side of the MS fiducial line in the color-magnitude diagram (CMD; e.g., Romani & Weinberg 1991; Bolte 1992; Rubenstein & Bailyn 1997; Bellazzini et al. 2002; Clark et al. 2004; Richer et al. 2004; Zhao & Bailyn 2005; Milone et al. 2009). 
The main advantages of this approach are that (_i_) it requires observations in only two different filters, and hence only a small amount of telescope time; (_ii_) it allows us to simultaneously investigate large numbers of stars in the CMD; and (_iii_) the detection efficiency does not depend on binary properties such as period and inclination. Clearly, high-precision photometry is needed to disentangle binaries from single stars. Moreover, the correction for differential reddening and the identification of field stars that contaminate the cluster CMD are crucial ingredients to infer the fraction of binaries from photometry. On the other hand, the use of this method comes with some caveats. This approach has been used to investigate the binaries of a large sample of 67 Galactic globular clusters (GCs; see e.g., Sollima et al. 2007; Milone et al. 2012; Ji & Bregman 2015; Milone et al. 2016) using homogeneous photometry from images collected with the _Hubble Space Telescope_, and for Galactic open clusters with the combination of multiple surveys (Malofeeva et al. 2022, 2023).
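As a schematic illustration of the photometric selection described above (not the pipeline actually used in the paper), one can build a main-sequence fiducial line from the cluster CMD and flag stars that lie significantly redward of it. The column names, magnitude bins, and the 0.05 mag color offset below are illustrative assumptions; the real analysis additionally requires differential-reddening correction, field-star decontamination, and careful treatment of photometric errors and mass ratios.

```python
import numpy as np

def fiducial_color(mag, color, mag_bins):
    """Median color in magnitude bins: a crude main-sequence fiducial line."""
    centers, medians = [], []
    for lo, hi in zip(mag_bins[:-1], mag_bins[1:]):
        sel = (mag >= lo) & (mag < hi)
        if sel.sum() >= 10:                      # require enough stars per bin
            centers.append(0.5 * (lo + hi))
            medians.append(np.median(color[sel]))
    return np.array(centers), np.array(medians)

def flag_photometric_binaries(mag, color, mag_bins, min_offset=0.05):
    """Flag stars redder than the fiducial line by more than `min_offset` mag."""
    centers, medians = fiducial_color(mag, color, mag_bins)
    fid = np.interp(mag, centers, medians)       # fiducial color at each star's magnitude
    return (color - fid) > min_offset

# Toy usage with synthetic Gaia-like photometry (G vs. BP-RP); real data would also need
# differential-reddening correction and removal of field stars before this step.
rng = np.random.default_rng(0)
g = rng.uniform(12, 18, 2000)
bp_rp = 0.2 * (g - 12) + 0.8 + rng.normal(0, 0.01, g.size)
bp_rp[:300] += 0.15                              # a redder, "binary-like" population
is_binary = flag_photometric_binaries(g, bp_rp, np.arange(12, 18.5, 0.5))
print(f"flagged fraction: {is_binary.mean():.2f}")
```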
2302.12950
Two-Disk Compound Symmetry Groups
Symmetry is at the heart of much of mathematics, physics, and art. Traditional geometric symmetry groups are defined in terms of isometries of the ambient space of a shape or pattern. If we slightly generalize this notion to allow the isometries to operate on overlapping but non-identical metric spaces, we obtain what we call compound symmetry groups. A natural example is that of the groups generated by discrete rotations of overlapping disks in the plane. Investigation of these groups reveals a new family of fractals, as well as a rich structure that is intriguing both mathematically and artistically. We report on our initial investigations.
Robert A. Hearn, William Kretschmer, Tomas Rokicki, Benjamin Streeter, Eric Vergo
2023-02-25T01:28:40Z
http://arxiv.org/abs/2302.12950v1
# Two-Disk Compound Symmetry Groups ###### Abstract Symmetry is at the heart of much of mathematics, physics, and art. Traditional geometric symmetry groups are defined in terms of isometries of the ambient space of a shape or pattern. If we slightly generalize this notion to allow the isometries to operate on overlapping but non-identical metric spaces, we obtain what we call _compound symmetry groups_. A natural example is that of the groups generated by discrete rotations of overlapping disks in the plane. Investigation of these groups reveals a new family of fractals, as well as a rich structure that is intriguing both mathematically and artistically. We report on our initial investigations. ## Introduction Symmetry is of fundamental importance in many disciplines of mathematics, from the fields of Galois theory, to the automorphisms of abstract algebra, to the isometries of wallpaper patterns and crystals. In physics, Noether's theorem connects the symmetries of space and time with conservation laws. In art, subtleties of degrees and kinds of symmetry arguably lie at the heart of what constitutes beauty. Here we expand on the traditional notion of geometric symmetry, with mathematical and artistic consequences, at least. The _symmetry group_ of a shape or pattern is defined as the group of all isometries of the ambient space that preserve that shape or pattern. Isometries may only be combined in so many ways in any given metric space: there are (up to isomorphism) only 17 wallpaper groups, 7 frieze groups, 230 3-dimensional space groups, etc. We cannot have, for example, five-fold rotational symmetry in any repeating pattern in the plane--though quasicrystals can approximate this [6]. Figure 1: Images of compound symmetry groups: (a) \(GG_{3,5}(2.42,2.41)\), (b) the \(n=5\) fractal. By considering groups generated by isometries of metric subspaces that are not identical, but instead overlap, we can probe some of these "forbidden" symmetries in new ways. A portion of one of these "compound symmetry groups" is shown in Figure 1a. Locally, this image contains regions of three-fold and five-fold symmetry--necessarily broken on larger scales--as well as combinations such as 15-fold. A new family of fractals lies embedded within these groups at critical parameter values, as shown in Figure 1b. In this paper we explore the characteristics of these new kinds of symmetry group. Of particular importance will be determining the parameter values at which the groups become infinite, exploring the underlying dynamics, and also understanding the characteristic fractals, or sometimes pseudofractals, that appear precisely at these transitions. This work began with the study of the behavior of certain "circle puzzles" [1, 2]--especially _Gizmo Gears_, shown in Figure 2, designed by Doug Engel.1 A two-disk compound symmetry group is the mathematical generalization of a circle puzzle. The study of the group structure of circle puzzles seems to have been initiated in [5]. Footnote 1: The specific question was this: can Gizmo Gears be finitely “unbandaged”, or chopped into pieces so that all natural turns (30\({}^{\circ}\) in this case) are unblocked, regardless of configuration? If not, a puzzle is said to “jumble”. Against all intuition at the time, Gizmo Gears does in fact jumble—it is past the critical radius (barely) for the compound symmetry group family \(GG_{12}\). 
The original discussion can be found here: [https://twistypuzzles.com/forum/viewtopic.php?t=25752](https://twistypuzzles.com/forum/viewtopic.php?t=25752). ### Definitions and Basic Properties of Two-Disk Systems A _compound symmetry group_ is a group generated by a set of isometries of subspaces of a metric space. Here we will primarily be concerned with compound symmetry groups generated by discrete rotations of two overlapping closed disks in the Euclidean plane. We sometimes call these _two-disk systems_. Without loss of generality, let the two disks be centered at \((-1,0)\) and \((1,0)\). Denote the left disk's radius as \(r_{1}\), and the right disk's as \(r_{2}\). The generators \(a,b\) are rotation of the left disk by \(-2\pi/n_{1}\), and of the right disk by \(-2\pi/n_{2}\). The group operation is function composition on the left2: \(ab(x)=b(a(x))\). We denote the group with these properties3 as \(GG_{n_{1},n_{2}}(r_{1},r_{2})\). If \(n_{1}=n_{2}\) we use a single subscript, and similarly for \(r_{1}\) and \(r_{2}\). We can also omit the radius specification to indicate a family of groups with unspecified but equal radii. For example, one very important family is \(GG_{5}\)--the groups generated by five-fold rotation of two equal disks. Footnote 2: We choose this convention so that move sequences can be read left to right. Similarly, the generators are defined to be clockwise rotations for compatibility with normal twisty puzzle notation. Footnote 3: \(GG\) was chosen in honor of Gizmo Gears, and also, conveniently, to indicate the interaction of two groups. To build some intuition about how two-disk systems work, consider Figure 3. This figure shows the action of the group elements on points in the plane, for \(GG_{5}\) at various \(r\). Regions that remain connected under all elements (_pieces_) are colored identically; the color is a function of the size of the orbit. In Figure 3a, \(r<1\), and the two rotations do not interact--the group is isomorphic to \(C_{5}\times C_{5}\). In Figure 3b, the disks overlap, so there is some interaction. Viewed as a circle puzzle, we have added 9 pieces to the puzzle. This group is isomorphic to \(C_{5}\times C_{5}\times A_{9}\) (we have added the even permutations of the wedge pieces). In Figure 3c, Figure 2: The Gizmo Gears puzzle. the overlap has increased, and many more small pieces are created. Observe that in all cases, we do have five-fold rotational symmetry about two different points--but only within a fixed radius. It is important to note that while regular symmetry groups are generated by and consist of isometries, compound symmetry groups are likewise generated by isometries, but the general group element is _not_ an isometry: If we perform \(ab\), different regions have been rotated by different amounts about different centers. This is called a _piecewise isometry_[3]. _Infinite Groups_ A key question about any two-disk group is whether it is finite or infinite. If a family \(GG\) has some infinite member, it will have a _critical radius_, \(r_{c}(GG)\), such that \(GG(r)\) is finite when \(r<r_{c}(GG)\), and infinite when \(r>r_{c}(GG)\).4 We know this because the size of the orbit of any given point cannot decrease as \(r\) increases--all group elements that affect it are still available--so \(GG\) can never go from infinite to finite as \(r\) increases. We can also speak of the critical radius when \(r_{1}\neq r_{2}\) if we fix one radius. 
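A minimal computational sketch of these definitions: each generator is a rotation of one closed disk that leaves the rest of the plane fixed (a piecewise isometry), and the orbit of a point can be grown breadth-first until it either closes up or exceeds a budget. The rounding used for deduplication is only a numerical convenience, and the radii and probe point (the upper intersection of the two disk boundaries, cf. Figure 8a) are chosen for illustration; orbits are expected to stay finite below the critical radius and may exceed any cap above it.

```python
import cmath, math

def disk_rotation(center, n, r, inverse=False):
    """Clockwise rotation by 2*pi/n of the closed disk |z - center| <= r; identity outside."""
    w = cmath.exp((2j if inverse else -2j) * cmath.pi / n)
    return lambda z: center + w * (z - center) if abs(z - center) <= r else z

def gg_generators(n, r):
    """Generators a, b of GG_n(r) (disks centered at -1 and +1) and their inverses."""
    return [disk_rotation(-1.0, n, r), disk_rotation(-1.0, n, r, inverse=True),
            disk_rotation(+1.0, n, r), disk_rotation(+1.0, n, r, inverse=True)]

def orbit_size(z0, gens, cap=50000, digits=9):
    """Breadth-first (frontier-style) enumeration of the orbit of z0, capped at `cap` points."""
    key = lambda z: (round(z.real, digits), round(z.imag, digits))
    seen, frontier = {key(z0)}, [z0]
    while frontier and len(seen) < cap:
        nxt = []
        for z in frontier:
            for g in gens:
                w = g(z)
                k = key(w)
                if k not in seen:
                    seen.add(k)
                    nxt.append(w)
        frontier = nxt
    return len(seen)

# Orbit of the upper intersection point of the two disk boundaries, below and above
# the estimated critical radius r_c(GG_5) ~ 2.149 (sizes are capped, not exact).
for r in (1.5, 2.2):
    z0 = 1j * math.sqrt(r * r - 1)
    print(r, orbit_size(z0, gg_generators(5, r)))
```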
Footnote 4: We believe, but have not proved, that it will also be infinite exactly at the critical radius. We can precisely characterize which \(GG_{n_{1},n_{2}}\) have infinite members: **Theorem 1**.: _There exists some \(r\) for which \(GG_{n_{1},n_{2}}(r)\) is infinite if and only if \(\operatorname{lcm}(n_{1},n_{2})\not\in\{2,3,4,6\}\).5_ Footnote 5: This fact is closely related to the crystallographic restriction theorem. Proof.: We omit the "only if" proof in this paper, and prove the more interesting direction. First, assume that \(n_{1}=n_{2}=n\). Observe that \(a^{-1}b\) is a translation of all points moved by both rotations: The point at for example \((-1,0)\) is moved (if \(r\geq 2\)), but the net rotation is \(0\). In particular, \(a^{-1}b\) represents one side of a regular \(n\)-gon of circumradius \(2\), as shown in Figure 4. \(a^{-2}b^{2}\) is another translation, and so on. We can generate translations from any one vertex of this \(n\)-gon to any other, as long as \(r\) allows the point to remain within both disks, by composing these sequences and their inverses. Figure 4: _Constructible translations can shrink arbitrarily._ Figure 3: _Different cases of \(GG_{5}\). (a) is isomorphic to \(C_{5}\times C_{5}\); (b) is isomorphic to \(C_{5}\times C_{5}\times A_{9}\); (c) is more complicated._ There are two cases. If \(n>5\), we take all the translations from one polygon vertex to an adjacent one, and observe the images we get of the origin under all of them. The resulting \(n\) points form another \(n\)-gon, but smaller than the original. But again, we can form translations between any of these new points by taking differences of the appropriate translations, and then repeat the process, resulting in a yet smaller polygon. Thus, we can generate arbitrarily small translations, and the group must be infinite. If \(n=5\), we start at the origin and apply every other pentagon edge translation, resulting in a pentagram shape whose vertices again form a pentagon smaller than the original. Again, we can iterate. We can do this with bounded \(r\)--inspection shows that \(r\geq 4\) is sufficient; no moves described ever move a relevant point more than a distance of \(4\) from either disk center. If \(n_{1}\neq n_{2}\), then it is easy to show that \((a^{-1}b)^{\alpha}\) and \((ba^{-1})^{\alpha}\), for some \(\alpha\), are rotations by \(2\pi/\mathrm{lcm}(n_{1},n_{2})\) about two different centers. We can use these rotations in place of \(a\) and \(b\) in the proof above (with \(r\geq 8\)). ### Geometric Constructions For some \(n\), we have geometric constructions showing that \(GG_{n}\) is infinite at a value of \(r\) matching our numerical estimates for critical radius. For other \(n\) we have plausible geometric constructions which agree well with our numerical estimates. But for most \(n\), all we have is our numerical estimates. The simplest case is \(n=5\). Figure 4(a) shows the relevant geometry. Using the shown relationships, simple trigonometry yields \(r=\sqrt{3+\varphi}\approx 2.149\), where \(\varphi\) is the golden ratio. A similar analysis of the geometry in Figure 4(b), where \(n=10\), gives \(r=\sqrt{4-\varphi}\approx 1.543\). The dynamical processes behind the transition from finite to infinite at the critical radius remain mysterious in most cases. However, for \(n=5\), we can see what is going on. (We omit due to space a proof that the single generator \(ab^{-1}\) produces the same behavior.) 
**Theorem 2**.: \(GG_{5}\) _is infinite at \(r=\sqrt{3+\varphi}\)._ Proof.: Referring to Figure 4(a), interpreted now as the complex plane, let \(\zeta_{n}=e^{2\pi i/n}\), and the point \(E=\zeta_{5}-\zeta_{5}^{2}\). Note that \(|E+1|=r\). We focus on how the line segment \(E^{\prime}E\) moves under specific sequences. The point \(F=1-\zeta_{5}+\zeta_{5}^{2}-\zeta_{5}^{3}\) lies on \(E^{\prime}E\), as does the point \(G=2F-E\). We have three cases: 1. Line segment \(E^{\prime}F^{\prime}\) is transformed by \(a^{-2}b^{-1}a^{-1}b^{-1}\) to line segment \(GF\). 2. Line segment \(F^{\prime}G^{\prime}\) is transformed by \(abab^{2}\) to line segment \(FE\). 3. Line segment \(G^{\prime}E\) is transformed by \(abab^{-1}a^{-1}b^{-1}\) to line segment \(E^{\prime}G\). Together, these three operations can translate any portion of the line segment \(E^{\prime}E\) piecewise onto itself. At no time does any point leave the intersection of the two disks during these transformations. The first two cases are translations of length \(|F-F^{\prime}|\), and the third case is a translation of length \(|E-G|\). These two values are not rationally related to the total length \(|E-E^{\prime}|\), since \(|E-E^{\prime}|/|F-F^{\prime}|=\varphi\). We can thus map the origin to successive points along \(E^{\prime}E\), by repeatedly choosing the transformation matching the region the point is in, indefinitely; it has an infinite image. For the cases of \(n=8\) (Figure 4(c)) and \(n=12\) (Figure 4(d)), their characteristic fractals (see below) can provide insight. A path of consecutive line segments that follows the fractal structure can be realized, starting from the center of one disk and approaching the disk boundary from the interior. For \(n=8\), consecutive segments scale down by a factor of \(\sqrt{2}-1\) and traverse angles of \(\pi/8\). We can calculate the corresponding limit point from the path to yield \(r=\sqrt{5(2-\sqrt{2})}\approx 1.711\). For \(n=12\), an analogous construction gives a scale factor of \(2-\sqrt{3}\), and \(r=\sqrt{2(20-11\sqrt{3})}\approx 1.377\). These closed-form radii rely on the limit point lying on the disk boundary at the critical value, which has not been proven. ### Critical Transitions and Fractals Precisely at any group's critical radius, we always observe a distinct fractal embedded in the image. It seems remarkable that these fractals have gone unnoticed for so long; they seem as natural as, e.g., the Mandelbrot set. We can define the _characteristic fractal_ for \(GG\) to be the set of points with infinite orbits at \(GG(r_{c}(GG))\). In particular, this associates a unique fractal with every \(n\notin\{2,3,4,6\}\), and similarly when \(n_{1}\neq n_{2}\).6 The canonical example is the fractal for \(n=5\), shown in Figure 0(b). We also include in this paper the fractals for \(n=8\) (Figure 4(c)) and \(n=12\) (Figure 4(d)). In Appendix B, we include higher-resolution images of the characteristic fractals for all \(n\) up to \(20\). Footnote 6: We also have fractals when \(r_{1}\neq r_{2}\), but in that case the critical radius becomes a two-parameter family. The appearance of fractals here seems somewhat mysterious. The hallmark of a fractal is the repeating of a pattern on successively smaller scales. But unlike fractals constructed with an explicit recursive rule, there are no scaling operations in these systems, only rotations. Where do they come from? Why do they look so different for different \(n\)? 
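As a quick numerical sanity check of the Theorem 2 argument, the golden-ratio incommensurability and the value of \(r\) can be verified directly. We assume here, as Figure 5(a) suggests but the extracted text does not state, that \(E^{\prime}\) and \(F^{\prime}\) are the reflections of \(E\) and \(F\) across the real axis.

```python
import cmath

zeta5 = cmath.exp(2j * cmath.pi / 5)
E = zeta5 - zeta5**2
F = 1 - zeta5 + zeta5**2 - zeta5**3

# Assumption: E', F' are the mirror images of E, F across the real axis.
Ep, Fp = E.conjugate(), F.conjugate()

phi = (1 + 5 ** 0.5) / 2
print(abs(E - Ep) / abs(F - Fp), phi)          # incommensurability ratio: the golden ratio
print(abs(E + 1), (3 + phi) ** 0.5)            # |E + 1| = r = sqrt(3 + phi) ~ 2.148961
```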
Some insight may be gained by considering Figure 4, which shows that in fact, rotations can combine to shrink patterns. A natural question for any characteristic fractal is whether it is the closure of the orbit of a single point, or whether it consists of finitely or infinitely many disjoint closures of orbits. In most cases the fractals seem to be closures of the orbit of a single point. For \(n=23\), it appears that the fractal consists of two distinct pieces, symmetric about the \(x\)-axis. However, it is possible that these pieces are connected, and we have not been able to reach this connection numerically. An especially interesting case is \(n=7\). Here, there seems to be a clear fractal structure. We can zoom in several times and see the same structure repeating. But then at a certain point, after zooming in by a factor of 500,000, the pattern suddenly changes. This was not apparent until we imaged the fractal to over 2 trillion Figure 5: Geometric constructions for critical radius: (a) \(n=5\), (b) \(n=10\), (c) \(n=8\), (d) \(n=12\). points. It is possible that this broken scale symmetry is a numerical artifact, but evidence argues against this. If it is real, this is perhaps the most mysterious process related to these systems we have yet observed. A video showing this transition may be found at [https://www.youtube.com/watch?v=FFeSlh0ifYE](https://www.youtube.com/watch?v=FFeSlh0ifYE). A portion of \(GG_{7}\) just short of the critical radius, shown in Figure 5(a), reveals the characteristic \(n=7\) fractal motif. In Figure 5(b), we see that embedded within \(GG_{12}\) lies a Koch-snowflake-like fractal seemingly based on four-fold symmetry, rather than the traditional three-fold. ### Numerical Models and Algorithms All of the images in this paper were generated by programs which simulate compound symmetry groups. Conceptually, for each point we want to image, we consider all possible rotation sequences applied to it, and plot the results--i.e., we plot its orbit. Figure 0(b), for example, was generated this way. When we want to show the action of the group on the entire space, it is more efficient to only image the disk boundaries. We image a boundary by discretizing it into small segments, imaging those segments, drawing the images into a high-resolution bitmap, then filling the resulting spaces within the bounded regions with appropriate colors, according to some coloring rule. This is how Figures 0(a), 5(a), 5(a), and 5(b) were generated. Figure 6: Figure 7: Filled images: (a) a portion of \(GG_{9}(1.408)\), (b) a portion of \(GG_{5}(2.144)\). The use of frontier search [4] when generating orbits helps enormously, allowing us to not keep the entire orbit in memory. It has enabled some points to be imaged to over 10 trillion distinct targets, helping us refine our critical radius estimates, and exposing the curious behaviors of some fractals discussed above. ### Single-Generator Images Additional structure can be revealed by imaging orbits under a single generator, rather than all group elements. In the simplest case, we have \(a^{\alpha}b^{\beta}\). Two examples are shown in Figure 8. This defines a discrete dynamical system which is an iterated piecewise isometry. Some work has been done to characterize these systems, but many questions remain. [7, 8] ### Critical Radii Table 1 summarizes our knowledge of critical radii for \(GG_{n}\) with \(n<20\). A more complete table, up to \(n=100\), may be found in Appendix A. 
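Where closed-form expressions for the critical radius are available (from the geometric constructions above), they can be checked against the quoted decimal estimates and the minimal polynomials listed in Table 1 with a few lines of arithmetic; a sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2

# (n, closed-form critical radius, coefficients (a, c) of the minimal polynomial x^4 + a*x^2 + c)
cases = [
    (5,  math.sqrt(3 + phi),                      (-7, 11)),
    (8,  math.sqrt(5 * (2 - math.sqrt(2))),       (-20, 50)),
    (10, math.sqrt(4 - phi),                      (-7, 11)),
    (12, math.sqrt(2 * (20 - 11 * math.sqrt(3))), (-80, 148)),
]

for n, r, (a, c) in cases:
    residual = r**4 + a * r**2 + c               # should vanish up to rounding error
    print(f"n={n:2d}  r={r:.6f}  |x^4{a:+d}x^2{c:+d}| = {abs(residual):.1e}")
```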
In all cases, points were found with a minimum of 10 billion images, with some up to 10 trillion. These estimates may be too high because points with infinite image were missed, or too low because the points searched had very large but finite images. However, there is good agreement with the geometrically derived values, and we have confidence they are good to about 5 decimal places. \begin{table} \begin{tabular}{|c|c|c|} \hline \(n\) & Numerical estimate & Algebraic expression & Minimum Polynomial \\ \hline 5 & 2.148961 & \(\sqrt{3+\varphi}\approx 2.148961\) & \(x^{4}-7x^{2}+11\) \\ \hline 7 & 1.623574 & & \\ \hline 8 & 1.711411 & \(\sqrt{5(2-\sqrt{2})}\approx 1.711412\) & \(x^{4}-20x^{2}+50\) \\ \hline 9 & 1.408482 & & \\ \hline 10 & 1.543357 & \(\sqrt{4-\varphi}\approx 1.543362\) & \(x^{4}-7x^{2}+11\) \\ \hline 11 & 1.290582 & & \\ \hline 12 & 1.376547 & \(\sqrt{2(20-11\sqrt{3})}\approx 1.376547\) & \(x^{4}-80x^{2}+148\) \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline \(n\) & Numerical estimate \\ \hline 13 & 1.213594 \\ \hline 14 & 1.196554 \\ \hline 15 & 1.163276 \\ \hline 16 & 1.148470 \\ \hline 17 & 1.127509 \\ \hline 18 & 1.121505 \\ \hline 19 & 1.104246 \\ \hline \end{tabular} \end{table} Table 1: Critical radii for \(GG_{n}\). Figure 8: (a) The orbit of the upper intersection point is plotted and colored according to its local density. (b) The full boundary is plotted, with spaces colored according to the order of their orbit. ## 5 Summary and Conclusions The concept of compound symmetry group opens up a new frontier in mathematics. Here we have just begun this exploration, by considering the two-disk compound symmetry groups. This investigation has revealed a new family of fractals, and a rich new source of spaces that combine symmetries in interesting and unexpected ways. These are new "places" to explore, similar for example to "Seahorse Valley" in the Mandelbrot set, that have until now remain unseen. We must omit due to space many additional topics we would like to cover, such as the appearance of quasicrystals when we move beyond the critical radius, and other observations of aperiodic behavior. While we have made significant progress in understanding two-disk compound symmetry groups, yet more work can still be done. In many cases, we lack even a basic theoretical understanding of the behaviors that cause the transition from finite to infinite size in these groups. For example, does every infinite two-disk group contain a point whose image is infinite, or a generator of infinite order? Is the critical radius of a two-disk system always an algebraic number, and is there a general formula for the critical radius? What dynamics are responsible for the creation of the fractals? Similar questions can be raised more generally for multi-disk systems or arbitrary compound symmetry groups, where the behavior governing infinite size groups is more complicated. For example, we observe that Theorem 1 fails to generalize to three-disk systems: consider a set of three disks centered at, say, \((0,0)\), \((1,0)\), and \((\sqrt{2},0)\), and consider the compound symmetry group obtained by taking rotation increments \(n_{1}=n_{2}=n_{3}=2\). Then if we choose disk radii to be sufficiently large, the resulting compound symmetry group will be infinite, because the three corresponding rotations of the plane generate a pair of translations along the \(x\)-axis whose ratio is irrational. 
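The three-disk observation above is easy to verify numerically: the composition of two half-turns about different centers is a translation by twice the vector between the centers, so centers at \(0\), \(1\), and \(\sqrt{2}\) yield translations of lengths \(2\) and \(2\sqrt{2}\), whose ratio is irrational. The sketch below applies the half-turns globally for simplicity, which is harmless as long as the tracked point lies well inside all three (sufficiently large) disks.

```python
import math

def half_turn(center):
    """Rotation by pi about a point on the real axis (applied globally here for simplicity)."""
    return lambda z: 2 * center - z

h0, h1, h2 = half_turn(0.0), half_turn(1.0), half_turn(math.sqrt(2))

z = 0.1 + 0.2j
t01 = h1(h0(z)) - z          # half-turn about 0 then about 1: translation by 2
t02 = h2(h0(z)) - z          # half-turn about 0 then about sqrt(2): translation by 2*sqrt(2)
print(t01, t02, (t02 / t01).real)   # ratio sqrt(2), which is irrational
```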
Another question that presents itself is whether it is even decidable whether a given multi-disk compound symmetry group is finite. Some movies of systems with varying zooms and radii may be found at [https://tinyurl.com/yz9pntwy](https://tinyurl.com/yz9pntwy). ## 6 Acknowledgments We thank Doug Engel for Gizmo Gears, and his two definitive books on circle puzzles. We thank Brandon Enright for his significant contributions, insights, and ideas. We thank Oskar van Deventer and Carl Hoff, who first introduced Hearn to this problem, and contributed insights. We also thank Bram Cohen, Scott Elliott, Landon Kryger, Andreas Nortmann, Jason Smith, Nathaniel Virgo, and all others who shared interest and insights in the twistypuzzles.com forum.
2304.10211
Spiking-Fer: Spiking Neural Network for Facial Expression Recognition With Event Cameras
Facial Expression Recognition (FER) is an active research domain that has shown great progress recently, notably thanks to the use of large deep learning models. However, such approaches are particularly energy intensive, which makes their deployment difficult for edge devices. To address this issue, Spiking Neural Networks (SNNs) coupled with event cameras are a promising alternative, capable of processing sparse and asynchronous events with lower energy consumption. In this paper, we establish the first use of event cameras for FER, named "Event-based FER", and propose the first related benchmarks by converting popular video FER datasets to event streams. To deal with this new task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it against a similar Artificial Neural Network (ANN). Experiments show that the proposed approach achieves comparable performance to the ANN architecture, while consuming less energy by orders of magnitude (up to 65.39x). In addition, an experimental study of various event-based data augmentation techniques is performed to provide insights into the efficient transformations specific to event-based FER.
Sami Barchid, Benjamin Allaert, Amel Aissaoui, José Mennesson, Chaabane Djéraba
2023-04-20T10:59:56Z
http://arxiv.org/abs/2304.10211v1
# Spiking-Fer: Spiking Neural Network for Facial Expression Recognition With Event Cameras ###### Abstract. Facial Expression Recognition (FER) is an active research domain that has shown great progress recently, notably thanks to the use of large deep learning models. However, such approaches are particularly energy intensive, which makes their deployment difficult for edge devices. To address this issue, Spiking Neural Networks (SNNs) coupled with event cameras are a promising alternative, capable of processing sparse and asynchronous events with lower energy consumption. In this paper, we establish the first use of event cameras for FER, named "Event-based FER", and propose the first related benchmarks by converting popular video FER datasets to event streams. To deal with this new task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it against a similar Artificial Neural Network (ANN). Experiments show that the proposed approach achieves comparable performance to the ANN architecture, while consuming less energy by orders of magnitude (up to 65.39x). In addition, an experimental study of various event-based data augmentation techniques is performed to provide insights into the efficient transformations specific to event-based FER. ## 1. Introduction Facial Expression Recognition (FER) has received particular interest in recent years given its diverse and practical applications in computer vision, e.g., security, health, and communication. So far, efficient learning-based models have been proposed for FER (Krizhevsky et al., 2014). However, these methods often overcome the energy consumption constraint (Krizhevsky et al., 2014). The convergence between the need to reinforce the complexity of learning models and the requirement of a physical platform at the cutting edge of technology induces heavy consequences in terms of energy consumption and has repercussions on the environment of the planet. Spiking Neural Networks (SNNs) have become popular due to practical interest in terms of addressable complexity and energy efficiency than artificial neural networks (ANN). The neurons in SNNs asynchronously transmit information through sparse and binary spikes, enabling event-driven computing. Unlike ANN requiring dense matrix operations, energy consumption occurs only when a spike is generated on a neuromorphic chip. Hence, SNNs can significantly improve the energy efficiency of artificial intelligence systems. Although SNNs are less energy-consuming, they do not match the performance of ANNs in computer vision. Inspired by the advances in ANNs, many recent methods are proposed to improve the performance of SNNs, e.g., ANN to SNN conversion (Shi et al., 2017), innovative architectures (Shi et al., 2017), new features encoding methods (Beng et al., 2017), and data augmentation (Shi et al., 2017)). However, very few studies have focused on FER, mainly due to the lack of training data. In this paper, we propose an innovative framework for FER using SNNs, as illustrated in Fig. 1. First, a conversion framework using the V2E converter (Vaswani et al., 2017) is proposed in order to preprocess the well-known FER video datasets and generate event-based streams which are a suitable format for SNN architecture. Then, the Spiking-FER model is trained using Surrogate Gradient Learning (Srivastava et al., 2014), which enables the applicability of the backpropagation algorithm for deep SNN architectures. 
Finally, several experiments are conducted in order to evaluate the proposed model and compare it to a conventional ANN in terms of recognition accuracy and energy consumption. Our proposal brings three novelties: * **Event-based FER Benchmark**: we provide a reproducible protocol to generate event-based FER benchmarks from the most popular video FER datasets which is, to the best of our knowledge, the first event-based benchmark proposed for FER. * **New SNN Model Architecture**: we propose a new end-to-end deep convolutional SNN method, called "Spiking-FER" that encodes the event streams into spiking features that are then used by the output accumulator in order to predict the facial expression. * **Event Data Augmentation for FER**: we analyze the performance of Spiking-FER and its corresponding ANN architecture depending on popular Event Data Augmentations (EDAs) to investigate their impacts on event-based FER. The paper is structured as follows. In Section 2, we review relevant recent methods for FER and on the evolution of the SNN for event-based vision. Section 3 presents our SNN model architecture and the training process on event-based FER data. Section 4 introduces the experimental setup including datasets and evaluation protocols. The experimental results are provided in Section 5. Finally, we discuss the results and future work in Section 6. ## 2. Related Works ### Facial Expression Recognition FER methods could be classified into two categories: static (frame-based) and dynamic (sequence-based) methods. Frame-based methods extract spatial features from still images. They rely on hand-crafted features (Shi et al., 2017) or learned features (Shi et al., 2017) using mainly CNNs (Shi et al., 2017), but recently transformer architecture has also come into play (Krizhevsky et al., 2014). Sequence-based methods are performed in different ways: either by aggregating frames, e.g., onset-apex-offset (Shi et al., 2017) or onset-apex (Shi et al., 2017), or by using consecutive frames in order to encode the temporal information. These methods use mainly deep architectures such as 3D CNN (Shi et al., 2017), Recurrent Neural Networks (RNN) (Shi et al., 2017), and Transformers (Shi et al., 2017). Recently, motion has come into play, and has proven to be effective in sequence-based FER (Shi et al., 2017). Moreover, it has been proposed to address different challenges for FER (occlusions (Shi et al., 2017), intensity (Shi et al., 2017)), taking advantage of the fact that inter-individual variation in motion is better suited for FER than appearance features (Beng et al., 2017). Performance improvement is achieved by increasing the complexity of learning approaches, especially by taking into account spatio-temporal encoding. However, this improvement comes usually with the expense of energy consumption. ### Event-based Vision paired with Spiking Neural Networks Recently, learning algorithms adapted from backpropagation such as surrogate gradient learning (Srivastava et al., 2014) enable the training of deep SNN architecture by solving the non-differentiability issue of spiking neurons (Goodfellow et al., 2016). Such directly trained deep architectures are the first of several attempts capable of tackling event-based vision problems of similar complexity to those addressed by ANNs currently, such as object detection (Goodfellow et al., 2016), semantic segmentation (Goodfellow et al., 2016), and object tracking (Wang et al., 2017). 
Furthermore, these architectures start to adapt state-of-the-art techniques from ANNs (Vision Transformer (Vaswani et al., 2017), spatio-temporal attention (Srivastava et al., 2014),...) to operate with spiking neurons thanks to this new ability of gradient-based optimization. This recent direction of directly trained SNNs coupled with event cameras demonstrates impressive efficiency by being able to outperform ANNs (Goodfellow et al., 2016) and showing reduced energy consumption by orders of magnitude (Goodfellow et al., 2016). ## 3. Methodology ### Problem Formulation During a time interval \(\Delta_{\mathcal{T}}\), an event camera with a \(H\times W\) resolution produces a set of \(N\) asynchronous events \(\mathcal{E}=\{e_{i}\}_{i=1}^{N}\). Each event \(e_{i}\) of the sequence can be formulated as a tuple of 4 values: \(e_{i}=\{x_{i},y_{i},t_{i},p_{i}\}\), where \((x_{i},y_{i})\) correspond to the pixel coordinates, \(t_{i}\) is the timestamp, and \(p_{i}\in\{1,-1\}\) is the sign of the polarity change. As the asynchronous nature of events is not appropriate for many computer vision approaches (Garfani et al., 2017), a popular event representation method (Garfani et al., 2017) is to discretize a stream of events \(\mathcal{E}\) into a sequence of \(T\) binary event frames \(\mathbf{X}_{T}\in\mathbb{B}^{T\times 2\times H\times W}=\{X_{t}\}_{t=1}^{T}\). In this work, it is done by accumulating events during \(T\) subsequent time intervals \(\frac{\Delta_{\mathcal{T}}}{T}\) to create the sequence of binary frames and thus the final spike tensor \(\mathbf{X}_{T}\). Event-based FER can be defined as follows: given an event sequence \(\mathcal{E}\) obtained from capturing a subject that performs a facial expression, the objective is to recognize this expression as the appropriate label \(c\) among \(\mathcal{C}\) different classes. To do so, a model \(f_{\alpha}(\cdot)\) with a set of learnable parameters \(\alpha\) is trained such that: \(c=f_{\alpha}(\mathbf{X}_{T})\). The top of Fig. 2 illustrates the formulation of event-based FER with the related notations. ### Spiking-FER Spiking-FER is represented by the model \(f_{\alpha}(\cdot)\), where \(\alpha\) denotes its synaptic weights. The bottom of Fig. 2 illustrates an overview of the proposed Spiking-FER architecture. #### 3.2.1. Spiking Neuron Model The proposed convolutional SNN architecture uses the Integrate-and-Fire (IF) neuron (Srivastava et al., 2014) as the spiking neuron model. It accumulates input spikes weighted by the synaptic weights into a'membrane potential'. When this membrane potential exceeds a certain threshold value, the neuron emits an output spike and is reset to zero. The discretized dynamics of a layer \(l\) of IF neurons from Spiking-FER at a certain time-step \(1\leq t\leq T\) is described as follows: \[U_{t}^{l}=U_{t-1}^{l}+\mathcal{W}^{J}X_{t-1}^{l-1}-\theta X_{t}^{l} \tag{2}\] \[X_{t}^{l}=\Theta(U_{t}^{l}-\theta) \tag{1}\] where \(U_{t}^{l}\) denotes the membrane potentials of the IF neurons, \(\mathcal{W}^{l}\) is the set of synaptic weights, \(X_{t}^{l}\in\mathbb{B}\) denotes the output spike tensor. \(X_{t}^{l}\) consists of 1's when the related element of \(U_{t}^{l}\) exceeds the threshold value \(\theta\), and 0's otherwise. For simplicity, the threshold is set to 1 for all layers (i.e., \(\theta=1\)). This mechanism, formulated in Eq. 2 is known as the Heaviside step function (\(\Theta(\cdot)\)). #### 3.2.2. 
Direct Training via Surrogate Gradient Spiking-FER is trained using Surrogate Gradient Learning (Goodfellow et al., 2016; Srivastava et al., 2014), a popular and effective training approach for deep SNN models. An SNN can be expressed as a Recurrent Neural Network where the membrane potentials are internal states. Consequently, the synaptic weights can be trained using Backpropagation Through Time (Srivastava et al., 2014). The main issue is related to the backward pass, where \(\Theta(\cdot)\) is not differentiable - i.e., its derivative is 0 almost everywhere, and +\(\infty\) at 0 - causing the gradient chain to break ("dead neuron problem" (Goodfellow et al., 2016)). Therefore, surrogate gradient learning solves this problem by employing the derivative of a continuous surrogate function \(\sigma(\cdot)\) on the backward pass as an approximation of the derivative of \(\Theta(\cdot)\). In Spiking-FER, we define \(\sigma(x)=\frac{1}{\pi}\arctan(\pi x)+\frac{1}{2}\). #### 3.2.3. Model Architecture Strongly related to (Goodfellow et al., 2016), Spiking-FER consists of two modules: **(1)** a deep convolutional SNN encoder that encodes the event streams into spiking features; and **(2)** an output accumulator module (Goodfellow et al., 2016) that predicts the emotion of the sample from the encoded spiking features. The encoder is a SEW-ResNet-18 (Goodfellow et al., 2016) architecture that outputs spiking feature vectors \(F_{t}\in\mathbb{B}^{d}\), where \(d\) is the number of output channels (in SEW-ResNet-18, \(d=512\)). At each time-step, these extracted spiking features are fed into the output accumulator module responsible for making the final prediction. As shown in the rightmost part of Fig. 2, the output accumulator module is composed of one fully connected layer of artificial neurons and one linear classifier. Firstly, it accumulates the spiking features from all time-steps to obtain a single feature vector \(\mathcal{F}\in\mathbb{R}^{d}\) such that: \[\mathcal{F}=\sum_{t=1}^{T}\mathcal{W}\times F_{t} \tag{3}\] , where \(\mathcal{W}\in\mathbb{R}^{d\times d}\) is the set of trainable weights in the fully connected layer. Then, the features \(\mathcal{F}\) are fed into the linear classifier to obtain the final classification prediction. The whole network is trained end-to-end using the cross-entropy loss. ## 4. Experimental Setup In this section, we present the experimental setup including datasets, evaluation protocols and the models configurations. ### Video-to-Events Conversion To validate the applicability of event-based data and SNNs to FER applications, while being comparable to standard FER baselines (Kal we convert some of the most popular video FER datasets: ADFES (Srivastava et al., 2017), CASIA (Srivastava et al., 2017), CK+ (Kang et al., 2017), and MMI (Mori et al., 2017) to an event-based format. Each video of a given FER dataset is processed by two successive steps. The first step is a standardization of all frames (Bahdan et al., 2016): the face of the represented subject is cropped and rotated based on 68 facial landmarks and converted to grayscale. Then, the resulting frame is resized to a resolution of \((200\times 200)\). The second step corresponds to the conversion of the standardized video into events using v2e (Srivastava et al., 2017), a video-to-event converter for realistic events simulation, as illustrated in Fig. 3. The code and parameters to reproduce the benchmark are available1. 
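Pending the authors' released code (footnote 1), the data representation and forward dynamics of Sections 3.1-3.2.3 can be sketched in a few lines of NumPy. This is only an illustration: the real encoder is a SEW-ResNet-18 trained with surrogate gradients in a PyTorch/SpikingJelly pipeline, whereas the stand-in below uses a single fully connected IF layer with random weights. The reset is written as reset-to-zero, following the prose of Section 3.2.1 (the extracted equations, whose tags appear out of order, read more like reset-by-subtraction), and the surrogate derivative is shown only as a definition since this forward-only sketch does not train.

```python
import numpy as np

def events_to_frames(events, T, H, W, duration):
    """Bin events (x, y, t, p) into T binary frames of shape (T, 2, H, W); p in {+1, -1}."""
    frames = np.zeros((T, 2, H, W), dtype=bool)
    for x, y, t, p in events:
        k = min(int(T * t / duration), T - 1)        # time bin
        frames[k, 0 if p > 0 else 1, y, x] = True
    return frames

def surrogate_grad(x):
    """Derivative of sigma(x) = arctan(pi*x)/pi + 1/2; replaces Theta'(x) on the backward pass."""
    return 1.0 / (1.0 + (np.pi * x) ** 2)

def if_layer(x_seq, weights, theta=1.0):
    """Integrate-and-Fire dynamics for a fully connected stand-in layer over the time-steps."""
    steps = x_seq.shape[0]
    u = np.zeros(weights.shape[1])
    out = np.zeros((steps, weights.shape[1]), dtype=bool)
    for t in range(steps):
        u = u + x_seq[t].astype(float) @ weights     # integrate weighted input spikes
        out[t] = u >= theta                          # emit a spike where the threshold is reached
        u = np.where(out[t], 0.0, u)                 # reset fired neurons to zero (Section 3.2.1)
    return out

def output_accumulator(f_seq, w_acc, w_cls):
    """Eq. (3): accumulate W @ F_t over all time-steps, then apply a linear classifier."""
    feat = sum(f_seq[t].astype(float) @ w_acc for t in range(f_seq.shape[0]))
    return feat @ w_cls                              # class scores; argmax gives the prediction

# Toy end-to-end pass on random data (the real features come from a SEW-ResNet-18 encoder).
rng = np.random.default_rng(0)
T, H, W, d, C = 6, 8, 8, 16, 7
events = [(rng.integers(W), rng.integers(H), rng.uniform(0, 1.0), rng.choice([-1, 1]))
          for _ in range(200)]
frames = events_to_frames(events, T, H, W, duration=1.0)
spikes = if_layer(frames.reshape(T, -1), 0.05 * rng.standard_normal((2 * H * W, d)))
scores = output_accumulator(spikes, rng.standard_normal((d, d)), rng.standard_normal((d, C)))
print(scores.shape, int(scores.argmax()))
```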
Footnote 1: The code will be released upon acceptance ### Evaluation Protocol Models that are evaluated on an event-based FER dataset follow a 10-fold cross-validation configuration: the given dataset is randomly split into 10 folds of equal size. The employed metric for every iteration is the top-1 classification accuracy. Finally, we report the mean accuracy score of the 10 folds. ### Implementation Details The experiments are implemented in PyTorch, Tonic (Tonic, 2017) and Spikingfelly (Spikingfelly, 2017) as our SNN simulator, and run on one NVIDIA A40 GPU. We train our models (Spiking-FER and ResNet-18) during 500 epochs, using an SGD optimizer, with a learning rate of 0.01 and a cosine annealing scheduler (Toledo et al., 2017), and keep the best performance on the validation set. A low-latency regime is adopted for Spiking-FER, with \(T=6\). ### Comparison with ANN Since the convolutional SNN encoder of Spiking-FER is a SEW-ResNet-18, we choose a ResNet-18 (He et al., 2018) model as the corresponding ANN. Similarly to the ANN model defined in (Bahdan et al., 2016), the spike tensor \(\mathbf{X}_{T}\) is fed into the 2D-CNN by concatenating all binary event frames together along the time axis. ## 5. Experiments ### Study on Event Data Augmentation To investigate the impacts of popular event data augmentations (EDAs)(Kang et al., 2017), given in Table 1, on the model performance, the experiments are conducted in 2 successive parts: **(1)** an analysis of common EDAs; **(2)** an analysis on specific EDAs for either regularization of training with scarce datasets, or FER-specific transformation. #### 5.1.1. Common EDAs Since EDAs can be applied in combination, the main objective of this part is to assess which EDA has the best impact when they are combined with each other. Therefore, we run all possible combinations of common EDAs, which gives a total of 32 experiments for a given dataset, as illustrated in Fig. 4. Figure 2. Overview of the proposed framework. _Top_) Formulation of Event-based Facial Expression Recognition. _Bottom_) The Spiking-FER architecture where the convolutional SNN encoder is expressed as a recurrent neural network. The baseline results show that the SNN model performs better than the ANN model without augmentation - i.e., using only the original data from the event stream. Often observed in neural network training, data augmentation tends to significantly improve performance. This is especially true for FER, where databases are scarce. On ANNs, we observe that all the EDAs combinations have a positive or null impact, unlike the SNNs, where some EDAs combinations tend to decrease the performances. Among the EDA methods, the combination {_Crop_, _H Flip_ and _Noise_} significantly improves the performance on both ANNs and SNNs, except for the MMI dataset, where the improvement is less significant. This can be explained by the greater complexity of the data, where greater head pose variations and variety of facial movement patterns appear. Then, we evaluate the accuracy scores of all folds for all experiments, which gives 320 scores. We perform a multivariate regression analysis on this population of 320 scores by considering the applied EDAs as categorical independent variables. For a given EDA, the regression analysis gives an approximation of the expected benefit in performance. Fig. 5 shows the results of the regression analysis for each dataset. 
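A sketch of that regression analysis is given below. The per-fold accuracies are not reproduced here, so the response values are synthetic stand-ins; the point is only to show how the five common EDAs enter as binary regressors and how the fitted coefficients estimate the expected benefit of each augmentation (the paper additionally reports significance at p < 0.05).

```python
import numpy as np

EDAS = ["HFlip", "Noise", "Reverse", "PolFlip", "Crop"]

# One row per (EDA combination, fold): 32 combinations x 10 folds = 320 scores per dataset.
rng = np.random.default_rng(0)
combos = [[(m >> i) & 1 for i in range(5)] for m in range(32)]
X = np.array([c for c in combos for _ in range(10)], dtype=float)
true_effect = np.array([0.03, 0.01, -0.02, 0.00, 0.04])        # illustrative effects only
y = 0.55 + X @ true_effect + rng.normal(0, 0.02, len(X))       # synthetic fold accuracies

# Ordinary least squares with an intercept; each coefficient approximates the expected
# benefit of switching the corresponding EDA on, all else being equal.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, c in zip(EDAS, coef[1:]):
    print(f"{name:8s} expected benefit ~ {c:+.3f}")
```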
According to the regression coefficients, \(Crop\) and \(HFlip\) have generally a positive impact, which suggest that they are well adapted for event-based FER. These methods cover well the small variations, e.g., face translation or image resolution changes, on the data observed in the different databases that compose the benchmark. However, _Reverse_ that reports either non-significant results or negative impacts in all cases. This can be explained by the fact that the activation of a facial expression follows a temporal sequence induced by the facial muscles. In this case, the reversal of the event flow is not consistent, especially since in this benchmark, where the sequences only go from the neutral state to the apex. _PolFlip_ highlights the differences between Spiking-FER and the ANN: while Spiking-FER constantly reports negative effects, the ANN model obtains a positive impact. This suggests that SNNs do not benefit from _PolFlip_ for event-based FER. #### 5.1.2. Specific EDAs We keep the best-performing combinations of common EDAs and evaluate the specific ones. For a given dataset, the best combination of common EDAs is defined as the highest mean accuracy score obtained on the 10-fold cross-validation. Fig. 6 reports the results obtained with and without these specific EDAs, adapted to the event flows for FER. Considering the performances, we note that the combination of _EventDrop_, which regularizes the training of neural networks on limited datasets, and _Mirror_, which transforms the visual aspect of a subject's face, is perfectly adapted to augment facial expressions for both ANNs and SNNs. In addition, to improve the performance of the models, the performance gap between the ANNs and SNNs models is significantly reduced, especially for the ADFES dataset. Both EDAs have been designed to adapt to inter-individual variation, e.g., face symmetry and expression activation time. ### Estimation of Energy Consumption Similarly to (Beng et al., 2017; Wang et al., 2018), we compare the energy efficiency of Spiking-FER and a similar ANN when simulated on a 45nm CMOS chip (Wang et al., 2018). The estimation methodology is described as follows: firstly, we quantify the spiking rate of each layer, as spiking neurons consume energy only when generating a spike. The spiking rate of a given layer \(l\) is calculated as follows: \[Rs(l)=\frac{\text{\# spikes of $l$ over all time-steps}}{\text{\# neurons of $l$}} \tag{4}\] Secondly, we compute the total floating-point operations (FLOPs) of a layer of spiking neurons (\(FLOPs_{\text{\small SNN}}\)) by using the FLOPs of the same layer in a non-spiking neural network (\(FLOPs_{\text{\small ANN}}\)) and the spike rate of the spiking neuron layer: \begin{table} \begin{tabular}{l l} \hline \hline **EDA** & **Description** \\ \hline \(Crop\) & Spatial crop of the whole sequence with a random scale \\ \(HFlip\) & Horizontal flip of the whole sequence \\ \(Noise\) (\(BA\)) & Noisy events due to corrupted pixels in event cameras (Garfani et al., 2016). \\ \(PolFlip\) & Flip of polarity (i.e., \(p_{i}=-p_{i}\) for all events) \\ \(Reverse\) & Reverse the orders of events. \\ **EventDrop**(Wang et al., 2018) & Randomly drops events spatially, temporally or globally \\ **Mirror** & Mirrors the left or right half of the sequence \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of EDAs. Common EDAs and Specific EDAs are respectively in italic and in bold. Figure 3. Illustration of the proposed event-based FER benchmark. 
The video sequences are converted into events corresponding to the output of event cameras. \[FLOPS_{SSNN}(I) =FLOPS_{SANN}(I)\times Rs(I) \tag{5}\] \[FLOPS_{ANN}(I) =\begin{cases}k^{2}\times O^{2}\times C_{in}\times C_{out}&\text{if $l$ is Conv.}\\ C_{in}\times C_{out}&\text{if $l$ is Linear.}\end{cases} \tag{6}\] In Equation 6, \(k\) represents the kernel size, \(O\) represents the size of output feature maps, \(C_{in}\) represents the number of input channels, and \(C_{out}\) represents the number of output channels. Finally, the total energy consumption of a model can be estimated on CMOS technology (Kang et al., 2017) by using the total FLOPs across all layers. Table 2 presents the energy cost of relevant operations in a 45nm CMOS process. MAC operation in ANNs requires one addition (32bit FP ADD) and one FP multiplication (32bit FP MULT) (Shen et al., 2016), whereas SNNs require only one FP addition per MAC operation due to binary spike processing. The total energy consumption of ANNs and SNNs are represented by \(E_{ANN}\) and \(E_{SSNN}\), respectively. \[E_{ANN} =\sum_{I}FLOPS_{SANN}(I)\times E_{MAC} \tag{7}\] \[E_{SSNN} =\sum_{I}FLOPS_{SSNN}(I)\times E_{AC} \tag{8}\] Table 3 reports the mean inference energy estimation for each dataset. Similarly to previous works (Chen et al., 2017), Spiking-FER shows better energy efficiency by orders of magnitude (from 47.42\(\times\) to 65.39\(\times\) more efficient), which proves the applicability of SNNs for low-power FER application on edge devices. ## 6. Conclusion In this work, we introduced _event-based benchmarks for Facial Expression Recognition_ (FER) and proposed a new SNN architecture named _Spiking-FER_. We applied traditional augmentation techniques adapted to event streams, along with two specific techniques - _EventDrop_(Kang et al., 2017) and _Mirror_ - that led to significant improvements in our model's performance. Our proposed approach achieved similar performance to a traditional Artificial Neural Network (ANN) while consuming much less energy (up to 65.39\(\times\)). Our future work will extend this study to other applications such as gesture or action analysis. \begin{table} \begin{tabular}{|c|c|c|} \multicolumn{2}{c|}{**Operation**} & \multicolumn{1}{c|}{**Energy (p)**} \\ \hline 32bit FP MULT (\(E_{MULT}\)) & 3.7 \\ 32bit FP ADD (\(E_{ADD}\)) & 0.9 \\ 32bit FP MAC (\(E_{MAC}\)) & 4.6 (\(=E_{MULT}+E_{ADD}\)) \\ 32bit FP AC (\(E_{AC}\)) & 0.9 \\ \end{tabular} \end{table} Table 2. Energy table for a 45nm CMOS process (from (Kang et al., 2017)). Figure 4. Acc. obtained according to combinations of common EDA; (A) H Flip; (B) Noise; (C) Reverse; (D) Pol Flip; (E) Crop. Figure 5. Significance regression coefficients (p-value \(<0.05\)) calculated on the 320 scores, corresponding to each common EDA for the different datasets (higher is better). \begin{table} \begin{tabular}{|c|
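Once the per-layer ANN FLOPs and spiking rates are known, the estimate of Eqs. (5)-(8) reduces to a few lines of arithmetic with the 45 nm figures of Table 2 (E_MAC = 4.6 pJ, E_AC = 0.9 pJ). The layer shapes and spiking rates below are illustrative placeholders rather than measured values from the paper; the reported 47.42x-65.39x ratios come from the actual measured rates.

```python
E_MAC, E_AC = 4.6e-12, 0.9e-12       # Joules per operation (Table 2, 45 nm CMOS)

def conv_flops(k, out_size, c_in, c_out):
    """Eq. (6) for a convolutional layer: k^2 * O^2 * C_in * C_out."""
    return k ** 2 * out_size ** 2 * c_in * c_out

# Illustrative per-layer (ANN FLOPs, spiking rate Rs) pairs; Rs already sums spikes
# over all time-steps (Eq. 4), so no extra factor of T is needed here.
layers = [
    (conv_flops(7, 100, 2, 64),   0.08),
    (conv_flops(3, 50, 64, 128),  0.06),
    (conv_flops(3, 25, 128, 256), 0.05),
    (512 * 7, 0.10),                  # linear classifier: C_in * C_out
]

e_ann = sum(f * E_MAC for f, _ in layers)              # Eq. (7)
e_snn = sum(f * rs * E_AC for f, rs in layers)         # Eqs. (5) and (8)
print(f"ANN ~ {e_ann * 1e6:.1f} uJ, SNN ~ {e_snn * 1e6:.1f} uJ, ratio ~ {e_ann / e_snn:.0f}x")
```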
2307.09210
Nested stochastic block model for simultaneously clustering networks and nodes
We introduce the nested stochastic block model (NSBM) to cluster a collection of networks while simultaneously detecting communities within each network. NSBM has several appealing features including the ability to work on unlabeled networks with potentially different node sets, the flexibility to model heterogeneous communities, and the means to automatically select the number of classes for the networks and the number of communities within each network. This is accomplished via a Bayesian model, with a novel application of the nested Dirichlet process (NDP) as a prior to jointly model the between-network and within-network clusters. The dependency introduced by the network data creates nontrivial challenges for the NDP, especially in the development of efficient samplers. For posterior inference, we propose several Markov chain Monte Carlo algorithms including a standard Gibbs sampler, a collapsed Gibbs sampler, and two blocked Gibbs samplers that ultimately return two levels of clustering labels from both within and across the networks. Extensive simulation studies are carried out which demonstrate that the model provides very accurate estimates of both levels of the clustering structure. We also apply our model to two social network datasets that cannot be analyzed using any previous method in the literature due to the anonymity of the nodes and the varying number of nodes in each network.
Nathaniel Josephs, Arash A. Amini, Marina Paez, Lizhen Lin
2023-07-18T12:46:34Z
http://arxiv.org/abs/2307.09210v1
# Nested Stochastic Block Model for Simultaneously Clustering Networks and Nodes ###### Abstract. We introduce the nested stochastic block model (NSBM) to cluster a collection of networks while simultaneously detecting communities within each network. NSBM has several appealing features including the ability to work on unlabeled networks with potentially different node sets, the flexibility to model heterogeneous communities, and the means to automatically select the number of classes for the networks and the number of communities within each network. This is accomplished via a Bayesian model, with a novel application of the nested Dirichlet process (NDP) as a prior to jointly model the between-network and within-network clusters. The dependency introduced by the network data creates nontrivial challenges for the NDP, especially in the development of efficient samplers. For posterior inference, we propose several Markov chain Monte Carlo algorithms including a standard Gibbs sampler, a collapsed Gibbs sampler, and two blocked Gibbs samplers that ultimately return two levels of clustering labels from both within and across the networks. Extensive simulation studies are carried out which demonstrate that the model provides very accurate estimates of both levels of the clustering structure. We also apply our model to two social network datasets that cannot be analyzed using any previous method in the literature due to the anonymity of the nodes and the varying number of nodes in each network. **Keywords:** Multiple networks, clustering network objects, community detection, nested Dirichlet process, stochastic block model, Gibbs sampler (1) Department of Biostatistics, Yale University (2) Department of Statistics, UCLA (3) Department of Statistical Methods, Federal University of Rio de Janeiro (4) Department of Mathematics, The University of Maryland ## 1. Introduction Suppose we have a collection of networks represented by a sequence of adjacency matrices. How do we simultaneously cluster these networks and the nodes within? This question, in its most difficult form, is what we address in this paper. Clustering nodes within a single network, also known as community detection, has been studied extensively in the past. Clustering nodes within multiple (related) networks, so-called multilayer or multiplex community detection, has also been studied. Much less has been done, however, on trying to find structure among networks at the same time as performing community detection on them. There are two fundamental network settings in terms of difficulty: 1. There is _node correspondence_ among the networks. That is, the networks describe the relationship among the "same" set of nodes and we know the 1-1 correspondences that map the nodes from network to network. We refer to this case as the _labeled_ case. 2. The _unlabeled_ case, where there is no node correspondence among the networks. This includes the case where the nodes are the same among networks, but we do not know the correspondence. It also includes the much more interesting case where the nodes in each network are genuinely different; in this case, the networks can even be of different orders. We are interested in the unlabeled case. In this case, even if one performs individual clustering on nodes in each network, the clustering of the networks is still challenging, since there is the _hard problem of matching_ the estimated communities among different networks. 
This is in addition to the information loss incurred by doing individual community detection. Can we do simultaneous community detection in each network, borrowing information across them in doing so, and at the same time cluster the networks into groups themselves, all in the unlabeled setting? The nested stochastic block model (NSBM) we propose in this paper is an affirmative answer to this question. NSBM is a hierarchical nonparametric Bayesian model, extending the very well-known stochastic block model (SBM) for individual community detection to the setting of simultaneous network and node clustering. In NSBM, the individual networks follow an SBM, but the node label (community assignment) priors and the connectivity matrices of these SBMs are connected through a hierarchical model inspired by the nested Dirichlet process (NDP) (Rodriguez et al., 2008). In extending NDP to the network setting, we encountered difficulties and interesting phenomena when performing Gibbs sampling. These issues exist in the original NDP, but are exacerbated, hence easier to observe, once we extend to the network setting. To investigate these issues, we propose four different Gibbs samplers and provide an extensive numerical comparison among them. A clear empirical picture arises through our simulations and real-world experiments as to the relative standing of these approaches. Although past work has noted difficulties in sampling from DP mixtures, those involved in sampling NDP-based models seem to be of a different nature. As far as we know, our work is the first to report on these issues and provide a comprehensive exploration. We provide a summary of our findings in Section 4, suggest some plausible explanations, and leave the door open to future exploration of these phenomena, from both theoretical and practical perspectives. ### Related work There is a large literature on community detection for _a single network_. Approaches include graph partitioning, hierarchical clustering, spectral clustering, semidefinite programming, likelihood and modularity-based approaches, and testing algorithms (Abbe, 2017; Fortunato, 2010). There are also several Bayesian models that use the Dirichlet process for community detection in a single network (Kemp et al., 2006; Kim et al., 2012; Zhou, 2015; Newman and Clauset, 2016; Zhao et al., 2017; Shen et al., 2022). Recently, there has been an emerging line of work on multiple-network data in both supervised and unsupervised settings including averaging (Ginestet et al., 2017), hypothesis testing (Chen, Zhou, et al., 2019; Chen, Josephs, et al., 2023), classification (Arroyo Relion et al., 2019; Josephs, Lin, et al., 2023), shared community detection (Lei et al., 2020), supervised shared community detection (Arroyo and Levina, 2020), and shared latent space modeling (Arroyo, Athreya, et al., 2021). Each of these methods is for the labeled case, but unlabeled networks arise in many applications involving anonymized networks, misalignment from experimental mismeasurements, and active learning for sampling nodes. More generally, they arise as a population of networks sampled from a general _exchangeable random array_ model, including but not limited to graphon models (Orbanz and Roy, 2014). While there is a large literature on graph matching in which the alignment between two networks is unknown, there are only a few methods for multiple networks in the unlabeled case (Kolaczyk et al., 2020; Josephs, Li, et al., 2021; Amini et al., 2022).
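Before surveying specific methods, it may help to sketch the kind of data that such simultaneous clustering targets: \(J\) unlabeled networks, possibly with different node sets, drawn from a handful of network classes, each class carrying its own block structure and community proportions. The finite, non-Bayesian generator below is only an illustrative stand-in (NSBM itself places NDP-based priors over these quantities and infers both levels of labels); all sizes and probabilities are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(n, B, pi_comm):
    """One undirected network: labels z ~ Cat(pi_comm), edges A_uv ~ Bernoulli(B[z_u, z_v])."""
    z = rng.choice(len(pi_comm), size=n, p=pi_comm)
    P = B[np.ix_(z, z)]
    A = np.triu((rng.random((n, n)) < P).astype(int), 1)
    return A + A.T, z

# Two network classes with different block structure and community proportions.
classes = [
    {"B": np.array([[0.30, 0.05], [0.05, 0.30]]), "pi": [0.5, 0.5]},   # assortative
    {"B": np.array([[0.05, 0.25], [0.25, 0.05]]), "pi": [0.7, 0.3]},   # disassortative
]
pi_class = [0.6, 0.4]

# J unlabeled networks with differing node sets, each drawn from one of the classes.
networks = []
for _ in range(5):
    c = int(rng.choice(len(classes), p=pi_class))
    n = int(rng.integers(40, 80))
    A, z = sample_sbm(n, classes[c]["B"], classes[c]["pi"])
    networks.append({"A": A, "class": c, "labels": z})

print([(net["A"].shape[0], net["class"]) for net in networks])
```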
There are several methods within the multiple-network inference literature for clustering a population of networks. Reyes and Rodriguez (2016) introduce a Bayesian nonparametric model for clustering networks with similar community structure over a fixed vertex set. The authors modify the infinite relational model (IRM) from Kemp et al. (2006), which is closely related to the Dirichlet process, in order to capture (dis)assortative mixing and to introduce transitivity. Motivated by soccer team playing styles, where each team is represented by a network, Diquigiovanni and Scarpa (2019) introduce agglomerative hierarchical clustering of networks based on the similarity of their community structures, as measured by the Rand index. The community structures are obtained by the Louvain method. The approach clearly requires a shared vertex set. Mukherjee et al. (2017) provide two graph clustering algorithms: Network Clustering based on (a) Graphon Estimation (NCGE), and (b) Log Moments (NCLM). Both methods perform spectral clustering on a matrix of pairwise distances among the networks. In NCGE, one estimates a graphon for each network using, for example, neighborhood smoothing (Zhang et al., 2015) or universal singular value thresholding (Chatterjee, 2015) - more accurately, they estimate \(P=\mathbb{E}[A]\) where \(A\) is the \(n\times n\) adjacency matrix. The pairwise distances are the Frobenius distances of the estimated graphons. In NCLM, one computes a vector of log moments for each network, \(\big{(}\log\text{tr}((A/n)^{k}),k=1,\ldots,K\big{)}\), and then forms pairwise distances among these feature vectors. NCGE only works in the labeled case, while NCLM can handle unlabeled case. To further classify this literature, let us refine our earlier classification of network problems into the "labeled" and "unlabeled" cases. For a community detection problem, the labeled case can be refined based on whether communities of the same (labeled) nodes are allowed to change across networks. If a method allows such \begin{table} \begin{tabular}{l c c c c c} \hline \hline & **Network clustering** & **Node clustering** & **Unlabeled networks** & **Community heterogeneity** & **Learns** \\ \hline NBM (this paper) & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Reyes and Rodriguez, 2016 & ✓ & ✓ & & (✓) & ✓ \\ \hline MLSBM (Stanley et al., 2016) & & & & & \\ MMLSBM (Jing et al., 2021) & & & & & \\ ALMA (Fan et al., 2022) & ✓ & ✓ & & (✓) & \\ Signorelli and Wit, 2020 & & & & & \\ \hline NCLM (Mukherjee et al., 2017) & ✓ & & ✓ & & \\ \hline NCGE (Mukherjee et al., 2017) & ✓ & & & & \\ \hline Diquigiovanni and Scarpa (2019) & ✓ & & & & \\ \hline RESBM (Paul and Chen, 2020) & & ✓ & & ✓ & \\ \hline Mantziou et al., 2021 & ✓ & & & (✓) & \\ \hline Young et al., 2022 & ✓ & & & & \\ \hline HSBM (Amini et al., 2022) & & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of network clustering methods. (✓) in the penultimate column means the method allows community heterogeneity only at the level of network clusters. variation, we say that it can handle _community heterogeneity_. An example is the Random Effect SBM (RE-SBM) of Paul and Chen (2020), where the communities across networks are Markov perturbations of a representative mean; we consider a very similar model in our simulations (Section 4). Their work also allows variation in the off-diagonal entries of the SBM connectivity matrices. A very similar setting is considered in Chen, Liu, et al. 
(2022), where minimax rates for recovering the global and individualized communities are established. An alternative approach to modeling a population of networks is to assume that they are perturbations of some latent (true) network. Letting \(A^{*}\) denote the adjacency matrix of the true network, the simplest way to generate perturbed networks is to flip each entry \(A^{*}_{ij}\) independently with some probability. Among the first to consider this model are Pedarsani and Grossglauser (2011) who propose a simple version for studying the privacy of anonymized networks. More recently, Le et al. (2018) generate perturbed \(A=(A_{ij})\) via \[A_{ij}\,|\,A^{*}_{ij}\;\sim\;\begin{cases}\mathsf{Ber}(1-Q_{ij})&A^{*}_{ij}=1 \\ \mathsf{Ber}(P_{ij})&A^{*}_{ij}=0\end{cases}. \tag{1.1}\] Let us refer to the above as the measurement error (ME) process. Le et al. (2018) assume that there is single latent \(A^{*}\) generated from an SBM and one observes multiple noisy measurements of it from the ME process above. The authors also assume that \(P\) and \(Q\) matrices share the same block structure with the underlying SBM, and propose and analyze an EM type recovery algorithm. We refer to the model from Le et al. (2018) as ME-SBM. Mantziou et al. (2021) extend this model to allow multiple true networks \(A^{*}_{c},c=1,\ldots,C\), each observed network being a perturbed version of one of them. In other words, they consider a mixture of ME-SBMs and devise an MCMC sampler for the posterior, similar to the scheme in Lunagomez et al. (2021). A similar mixture of (simplified) ME processes is considered in Young et al. (2022) for network clustering in which \(1-Q_{ij}=\alpha\) and \(P_{ij}=\beta\). The underlying latent networks \(A^{*}_{c}\) are not assumed to have specific structures (such as SBM) and are inferred by Gibbs sampling. Though the above works all assume labeled networks, Josephs, Li, et al. (2021) recently extended the mixture of ME processes to the unlabeled setting. To summarize the previous two general approaches, in the RE-SBM, the "communities" undergo Markov perturbations to account for the variation in the observed networks, while in the ME-SBM type models, the "adjacency matrices" undergo a Markov perturbation. A third line of work treats the multiple networks as layers of a single multilayer network. Two notable example are the strata multilayer stochastic block model (sMLSBM) of Stanley et al. (2016) and the mixture multilayer stochastic block model (MMLSBM) of Jing et al. (2021), which are essentially the same model, independently proposed in different communities. In both cases, the multiple networks, \(A^{j},j=1,\ldots,J\) - viewed as layers - are assumed to be independently drawn from a mixture of SBMs, that is, \[A^{j}\sim\sum_{k=1}^{K}\pi_{k}\,\mathsf{SBM}(B_{k},\mathbf{\zeta}_{k})\;\;, \tag{1.2}\] where \(\mathsf{SBM}(B_{k},\mathbf{\zeta}_{k})\) represent the distribution of the adjacency matrix of an SBM with connectivity matrix \(B_{k}\) and label vector \(\mathbf{\zeta}_{k}\). This model allows community heterogeneity, but only at the level of network clusters, not at the level of individual networks. To fit the model, Stanley et al. (2016) use a variational EM approach, while Jing et al. (2021) use tensor factorization ideas - they compute an approximate Tucker decomposition via alternating power iterations, which they call TWIST - and provide a theoretical analysis of their approach. Fan et al. 
(2022) introduce a slightly different "alternating minimization algorithm (ALMA)" to compute the tensor decomposition for MMLSBM, arguing that ALMA has higher accuracy numerically and theoretically compared to TWIST. We compare with ALMA in Section 4. While MMLSBM can be used to simultaneously cluster networks and nodes, it requires the networks to be on a shared vertex set. Along the same lines, Signorelli and Wit (2020) model a population of networks as a general mixture model \(A^{j}\sim\sum_{k=1}^{K}\pi_{k}f(\cdot\,|\,\theta_{k})\), but in the end focus mainly on the case where \(f(\cdot\,|\,\theta_{k})\) represents either an SBM as in Equation (1.2) or a \(p_{1}\) model (Holland and Leinhardt, 1981). They use EM for fitting the model and AIC/BIC to determine the number of components. Besides the method of Reyes and Rodriguez (2016), all of the aforementioned methods assume the number of classes and communities to be known. In contrast, by employing an NDP prior, our model learns the number of network classes and node communities. Furthermore, as we will demonstrate later, our method does not require node correspondence between the observed networks and allows for community heterogeneity between networks even in the same class. No other model in the literature exhibits these flexibilities. A summary of features for various graph clustering methods is given in Table 1 and more details on the advantages of NSBM are given in Section 2.4. ### Organization The remainder of our paper is organized as follows. In Section 2, we provide a review of the nested Dirichlet process before introducing NSBM. We develop four efficient Gibbs algorithms for sampling from our posterior in Section 3, highlighting the nontrivial challenges from introducing dependency through the network data. Section 4 is devoted to an extensive simulation study that compares our four samplers to competing methods on clustering problems of varying hardness. We illustrate an important application of our model to two real datasets in Section 5. Section 6 concludes our paper with a discussion of future work. ## 2. Nested stochastic block model ### Original nested Dirichlet process In a multicenter study, \(y_{ij}\) is the observation on subject \(i\) in center \(j\). For example, \(\mathbf{y}_{j}\) is the vector of outcomes for patients in hospital \(j\). To analyze this data, it is common to either pool the subjects from the different centers or separately analyze the centers. As a middle approach, Rodriguez et al. (2008) introduce the nested Dirichlet process (NDP) mixture model for borrowing information across centers while also clustering similar centers. That is, if \(z_{j}\) is the hospital type for hospital \(j\) and \(\xi_{ij}\) is the patient type for patient \(i\), then the NDP mixture model allows simultaneous inference on \(\mathbf{z}=(z_{1},\ldots,z_{J})\) and \(\mathbf{\xi}_{j}=(\xi_{1j},\ldots,\xi_{n_{j}j}),\ j\in[J]\). The original NDP mixture model can be expressed as follows1 Footnote 1: Here, we correct a minor, but subtle, error in the original paper. There, \(Q\) is stated to be \(\equiv\mathsf{DP}(\alpha\,\mathsf{DP}(\beta H))\), whereas one has to sample from this nested DP to get \(Q\). \[Q\sim\mathsf{DP}(\alpha\,\mathsf{DP}(\beta H)),\] \[G_{j}\,|\,Q \sim Q,\] \[y_{ij}\,|\,G_{j} \sim\int p(\,\cdot\,|\,\theta)\,G_{j}(d\theta)\enspace, \tag{2.1}\] where \(j=1,\ldots,J\) in the second line and \(i=1,\ldots,n_{j}\) in the third line. 
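To make the hierarchy in (2.1) concrete, here is a minimal forward-sampling sketch of a finite truncation of the NDP mixture, assuming a standard normal base measure \(H\), a Gaussian kernel \(p(\cdot\,|\,\theta)\), and illustrative truncation levels; the function names are ours and this is not the authors' code. It relies on the stick-breaking construction of the DP, which is made explicit in the next subsection.

```python
import numpy as np

def gem(concentration, truncation, rng):
    """Truncated stick-breaking (GEM) weights: v_k ~ Beta(1, concentration)."""
    v = rng.beta(1.0, concentration, size=truncation)
    v[-1] = 1.0  # close the last stick so the truncated weights sum to one
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

def sample_truncated_ndp(J, n_j, alpha=1.0, beta=1.0, K=20, L=20, seed=0):
    """Forward-sample centers and observations from a truncated NDP mixture,
    with base measure H = N(0, 1) and kernel p(y | theta) = N(theta, 1)."""
    rng = np.random.default_rng(seed)
    pi = gem(alpha, K, rng)                              # weights over the atoms G_k^*
    w = np.stack([gem(beta, L, rng) for _ in range(K)])  # weights within each G_k^*
    theta = rng.normal(size=(L, K))                      # atoms theta^*_{lk} ~ H
    z = rng.choice(K, size=J, p=pi)                      # class of each center j
    y = [rng.normal(theta[rng.choice(L, size=n_j, p=w[z[j]]), z[j]], 1.0)
         for j in range(J)]
    return z, y

z, y = sample_truncated_ndp(J=5, n_j=30)
```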
It is not immediately clear how this abstract version of the model can be extended to networks. Below, we develop an alternative equivalent representation that is suitable for such extension. Using the stick-breaking representation of the DP, we can explicitly write \(Q\) as \[Q=\sum_{j=1}^{\infty}\pi_{k}\delta_{G_{k}^{*}},\quad G_{k}^{*}\sim\mathsf{DP} (\beta H),\quad\mathbf{\pi}\sim\mathsf{GEM}(\alpha)\enspace,\] where \(\mathbf{\pi}=(\pi_{k},\ k\in\mathbb{N})\). Here, \(\mathsf{GEM}\) stands for Griffiths, Engen, and McCloskey (Pitman et al., 2002) and refers to the distribution of a random measure on \(\mathbb{N}\) stemming from the well-known stick-breaking construction of the DP (Sethuraman, 1994). See Section 2.3 for a more explicit description of \(\mathsf{GEM}\). Another application of the stick-breaking representation, this time for \(G_{k}^{*}\), gives us \[G_{k}^{*}=\sum_{\ell=1}^{\infty}w_{\ell k}\delta_{\theta_{\ell k}^{*}},\quad \theta_{\ell k}^{*}\sim H,\quad\mathbf{w}_{k}\sim\mathsf{GEM}(\beta)\enspace,\] where \(\mathbf{w}_{k}=(w_{\ell k},\ \ell\in\mathbb{N})\). Next, we note that line (2.1) can be made more explicit by sampling \(\theta_{ij}\,|\,G_{j}\sim G_{j}\) i.i.d. over \(i\), and then sampling \(y_{ij}\,|\,\theta_{ij}\sim p(\,\cdot\ |\ \theta_{ij})\). Furthermore, let \(z_{j}\) denote the "class" of the \(j\)th center, that is, which component of \(Q\) gets assigned to \(G_{j}\). More specifically, \(z_{j}=k\) iff \(G_{j}=G_{k}^{*}\). With this notation, \(\theta_{ij}=\theta_{\xi_{ij},z_{j}}^{*}\) and \(G_{j}=G_{z_{j}}^{*}\), and the model reduces to \[\theta_{\ell k}^{*} \sim H y_{ij}\,|\,\mathbf{\theta}^{*},z_{k},\xi_{ij} \sim p(\,\cdot\,|\ \theta_{\xi_{ij},z_{j}}^{*}). \tag{2.2}\] \[\mathbf{w}_{k} \sim\mathsf{GEM}(\beta) \xi_{ij}\,|\,\mathbf{w},z_{j} \sim\mathbf{w}_{z_{j}}\] (2.3) \[\mathbf{\pi} \sim\mathsf{GEM}(\alpha) z_{j}\,|\,\mathbf{\pi} \sim\mathbf{\pi} \tag{2.4}\] We have tried to stay close to the notation in Rodriguez et al. (2008), with minor modifications (including changing \(\zeta_{j}\) to \(z_{j}\)). For future developments, we will rename \(\alpha\) to \(\pi_{0}\), and \(\beta\) to \(w_{0}\). ### Non-identifiability There is an inherent non-identifiability in NDP mixture models that has not been previously discussed in the literature. In the original NDP motivating example, this amounts to not being able to differentiate between a multicenter study with \(K\) hospital types and \(L_{k}\) patient types in the \(k\)th hospital type versus a single hospital with \(L=\sum_{k=1}^{K}L_{k}\) patient types. In other words, since a single hospital can have infinitely many patient types, the likelihood of the NDP cannot distinguish these two cases. One has to rely solely on the strength of the prior to differentiate them. While this issue is less apparent in the original NDP for Euclidean data, we will see that introducing network data exposes this challenge. ### Nsbm Here, we introduce the nested stochastic block model (NBSM), which is a novel employment of the NDP as a prior for modeling the two-level clustering structure on a collection of network objects. Let \(\mathbf{A}=A^{1},\ldots,A^{J}\) be the observed adjacency matrices of \(J\) networks (say from \(J\) subjects) in which \(A^{j}\) has \(n_{j}\) nodes. Our goal is to model both the within and between clustering structures of the networks. That is, we want to group a collection of network objects into classes while simultaneously clustering the nodes within each network into communities. 
Let \(K\) be the number of classes which is not known or specified. We denote \(\mathbf{z}=(z_{1},\ldots,z_{J})\) with \(z_{j}\in\{1,\ldots,K\}\) as the class membership for the \(j\)th network. Given the partition structure \(\mathbf{z}\) of these objects, we assume that each of the networks in each class of networks follows a stochastic block model (SBM) with \(n_{j}\) nodes. Denote by \(\mathbf{\xi}_{j}=(\xi_{1j},\ldots,\xi_{n_{j}j})\) the community membership of the \(n_{j}\) nodes for the \(j\)th network; \(\mathbf{\xi}_{j}\) encodes the clustering structure of the nodes within the \(j\)th network. We assume that we can borrow information and cluster across distributions by imposing that \(\mathbf{z}\) and \(\mathbf{\xi}_{j},\ j\in[J]\) follow an NDP prior. This leads to the following _nested SBM_: \[\eta_{xyk} \sim\mathsf{Beta}(\alpha,\beta) A_{st}^{j}\,|\,\mathbf{\xi}_{j},z_{j},\mathbf{\eta} \sim\mathsf{Ber}(\eta_{\xi_{sj}\xi_{tj}z_{j}}) \tag{5}\] \[\mathbf{w}_{k} \sim\mathsf{GEM}(w_{0}) \xi_{sj}\,|\,\mathbf{w},z_{j} \sim\mathbf{w}_{z_{j}}\] (6) \[\mathbf{\pi} \sim\mathsf{GEM}(\pi_{0}) z_{j}\,|\,\mathbf{\pi} \sim\mathbf{\pi} \tag{7}\] independently over \(1\leq s<t\leq n_{j}\), \(x\leq y\), \(k\in\mathbb{N}\) and \(j\in[J]\). Both \(x,y\) range over \(\mathbb{N}\) subject to \(x\leq y\). We note that (5) specifies the SBM likelihood, (6) models the community structure within each network, and (7) models the class level of the network objects. If \(\mathsf{SBM}(B,\mathbf{\zeta})\) represent the distribution of the adjacency matrix of an SBM with connectivity matrix \(B\) and label vector \(\mathbf{\zeta}\), we can compactly write the RHS of (5) as \[A^{j}\,|\,\mathbf{\xi}_{j},z_{j},\mathbf{\eta}\sim\mathsf{SBM}(\mathbf{\eta}_{z_{j}},\mathbf{ \xi}_{j})\enspace.\] The sampling of \(\mathbf{\pi}\) and \(\mathbf{w}_{k}\) in lines (6)-(7) can be made more explicit as follows: \[u_{xk} \sim\mathsf{Beta}(1,w_{0}),\ \ \mathbf{w}_{k}=F(\mathbf{u}_{k}), \tag{8}\] \[v_{k} \sim\mathsf{Beta}(1,\pi_{0}),\ \ \ \ \mathbf{\pi}=F(\mathbf{v})\enspace, \tag{9}\] independently across \(x,k\in\mathbb{N}\), where \(\mathbf{u}_{k}=(u_{xk})\), and \(\mathbf{v}=(v_{k})\), and \(F(\cdot)\) is the stick-breaking map. More specifically, \(F:[0,1]^{\mathbb{N}}\to[0,1]^{\mathbb{N}}\) is given by \[[F(\mathbf{v})]_{k}:=v_{k}\prod_{\ell=1}^{k-1}(1-v_{\ell})\enspace, \tag{10}\] where, by convention, the product over an empty set is equal to \(1\). We will use the following conventions for indices. We often use \(s,t\) to index nodes, and use \(x,y\) to index communities within networks. We use \(k\) to index the class of networks themselves. Throughout, we maintain the notation that \(j\) is the layer/network index, \(z_{j}\) is the class of network \(j\), \(\xi_{sj}\) is the community label for the \(s^{th}\) node in the \(j^{th}\) network, and \(\mathbf{\eta}_{k}\) is the connectivity matrix for the \(k\)th SBM. A few comments are in order. Our model does not follow from the standard NDP mixture model as specified in (1). First, each network records pairwise interactions between any pairs of nodes. In the i.i.d. settings, the data can be naturally grouped into different clusterings based on the shared parameter value \(\theta\). In this case, the NDP does not induce a natural partition of the parameters due to the within-cluster interactions. Instead, we place a random partition prior on the nodes only, under which we assume an SBM. ### Advantages Here, we remark on some of the advantages of NSBM. 
First, NSBM is the only existing method that currently achieves within and between network clustering _simultaneously_ for networks. By employing an NDP prior, our model allows one to borrow strength from networks that are in the same class in performing clustering. This is in contrast to two-step procedures (such as NCGE and NCLM) that first cluster the network objects into classes and then, assuming the networks share a common vertex set, cluster the nodes of networks in a given class into communities. Moreover, unlike these two-step procedures, our method assigns communities to the nodes for each network within a given class individually. That is, the communities are inferred on a network level rather than a class level, which we refer to as _community heterogeneity_. Community heterogeneity can occur either when nodes change communities between networks, for example if an individual changes friend groups, or if the distribution of nodes in each community differs across networks. In both of these cases, the heterogeneity exists _within_ the same network class. Allowing for community heterogeneity can be easily seen from our construction. Specifically, the community vectors \(\boldsymbol{\xi}_{j}=\left(\boldsymbol{\xi}_{tj}\right)_{t=1}^{n_{j}}\) are inferred at the node level separately for each network. This also underscores the feature that the collection or sample of networks does not have to share a common set of nodes. In case the vertex set is the same, or even if just the number of nodes is the same, our construction does not require a correspondence between the node labels. In the literature, these networks are considered _unlabeled_. While the distinction between labeled and unlabeled networks has received the most attention in the multiple network literature, community heterogeneity is a distinct but important feature. Ultimately, these features make our setup much more general and flexible compared to any of the other existing methods. These features are necessary in the common scenario, for example with social networks, in which the networks lack a node correspondence (anonymity) and/or have a different number of actors (different sample). Finally, our model does not require the prespecification of the number of classes or communities. The value of this feature cannot be overstated, since methods that require \(K\) to be given as input necessarily rely on heuristics such as eigenvalue gaps or elbow plots that are not theoretically justified. ### Joint distribution Recall that we observe multiple networks with adjacency matrices \(A^{j},\ j\in[J]\), from SBMs with connectivity matrices \(\boldsymbol{\eta}_{k}\ =\ (\eta_{xyk}),\ k\in\ \mathbb{N}\). The joint distribution of NSBM factorizes as \[p(\boldsymbol{A},\boldsymbol{\eta},\boldsymbol{\xi},\boldsymbol{ z},\boldsymbol{u},\boldsymbol{v})=\prod_{j}\Big{[}\pi_{z_{j}}\prod_{s<t}p(A^{j}_{st} \,|\,\boldsymbol{\eta},\boldsymbol{\xi}_{j},z_{j})\prod_{s}p(\xi_{sj}\,|\, \boldsymbol{w},z_{j})\Big{]}\cdot\\ \Big{[}p(v_{k})\prod_{x}p(u_{xk})\prod_{x\leq y}p(\eta_{xyk}) \Big{]} \tag{2.11}\] where \(p(\xi_{sj}\,|\,\boldsymbol{w},z_{j})=w_{\xi_{sj},z_{j}}\) and \(p(A^{j}_{st}\,|\,\boldsymbol{\eta},\boldsymbol{\xi}_{j},z_{j})=\eta^{A^{j}_{st }}_{\xi_{sj}\xi_{sj}z_{j}}(1-\eta_{\xi_{sj}\xi_{tj}z_{j}})^{1-A^{j}_{st}}\). 
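To illustrate the generative model (5)-(10), whose joint density is given in (2.11), the following is a small forward-sampling sketch under truncation levels \(K\) and \(L\); the hyperparameter values and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stick_breaking(c, T, rng):
    # Truncated GEM(c) weights via the stick-breaking map F of (2.10).
    v = rng.beta(1.0, c, size=T)
    v[-1] = 1.0
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

def sample_nsbm(n_nodes, pi0=1.0, w0=1.0, a=1.0, b=1.0, K=10, L=10, seed=0):
    """Forward-sample undirected adjacency matrices from a truncated NSBM.
    n_nodes[j] is the number of nodes of network j (node sets may differ)."""
    rng = np.random.default_rng(seed)
    pi = stick_breaking(pi0, K, rng)                              # class weights, eq. (7)
    w = np.stack([stick_breaking(w0, L, rng) for _ in range(K)])  # community weights, eq. (6)
    eta = rng.beta(a, b, size=(K, L, L))                          # connectivities, eq. (5)
    eta = np.triu(eta) + np.triu(eta, 1).transpose(0, 2, 1)       # symmetrize eta_k
    z, xi, A = [], [], []
    for n in n_nodes:
        zj = rng.choice(K, p=pi)                                  # class of network j
        xij = rng.choice(L, size=n, p=w[zj])                      # node communities
        P = eta[zj][np.ix_(xij, xij)]                             # edge probabilities
        Aj = np.triu((rng.random((n, n)) < P).astype(int), 1)
        z.append(zj); xi.append(xij); A.append(Aj + Aj.T)         # symmetric, no self-loops
    return z, xi, A

z, xi, A = sample_nsbm([30, 40, 25])
```

Note that the community labels \(\xi_{sj}\) are drawn separately for each network, which is why networks of different orders and without node correspondence are handled naturally.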
Let us define the index sets \[\Gamma^{j}_{xy}=\begin{cases}\{(s,t):1\leq s<t\leq n_{j}\},&x=y\\ \{(s,t):1\leq s\neq t\leq n_{j}\}&x\neq y\end{cases}\, \tag{2.12}\] the block counts \[m^{j}_{xy}=\sum_{(s,t)\in\Gamma^{j}_{xy}}A^{j}_{st}1_{\{\xi_{sj}=x,\xi_{tj}=y\}}, \quad\quad\quad N^{j}_{xy}=\sum_{(s,t)\in\Gamma^{j}_{xy}}1_{\{\xi_{sj}=x,\xi_{tj}= y\}},\] and the _aggregate_ block sums \[m_{xyk}=\sum_{j}1_{\{z_{j}=k\}}m^{j}_{xy},\quad\ N_{xyk}=\sum_{j}1_{\{z_{j}=k\} }N^{j}_{xy},\quad\bar{m}_{xyk}=N_{xyk}-m_{xyk}.\] Then \[\prod_{j}\prod_{s<t}p(A^{j}_{st}\,|\,\boldsymbol{\eta},\boldsymbol {\xi}_{j},z_{j}) =\prod_{j}\prod_{x\leq y}\eta^{m^{j}_{xy}}_{xyz_{j}}(1-\eta_{xyz_{j }})^{N^{j}_{xy}-m^{j}_{xy}} \tag{2.13}\] \[=\prod_{k}\prod_{x\leq y}\eta^{m_{xyk}}_{xyk}(1-\eta_{xyk})^{N_{ xyk}-m_{xyk}} \tag{2.14}\] and \[p(\xi_{sj}\,|\,\cdots) \propto\,p(\xi_{sj}\,|\,\boldsymbol{w},z_{j})\,\prod_{t:t\neq s}p (A^{j}_{st}\,|\,\boldsymbol{\eta},\boldsymbol{\xi}_{j},z_{j})\] \[\propto\,w_{\xi_{sj},z_{j}}\prod_{y}\,\prod_{t:t\neq s,\,\xi_{tj}= y}\eta^{A^{j}_{st}}_{\xi_{sj}yz_{j}}(1-\eta_{\xi_{sj}yz_{j}})^{1-A^{j}_{ xt}}. \tag{2.15}\] These compact representations allow efficient sampling, which we describe in the next section. ## 3 Posterior inference To sample from the posterior distribution, we propose the use of samplers based on truncation approximations, which is what was used in the original Gibbs sampler for NDP. To do so, we suppose there is a finite number of classes \(K\) and clusters of nodes within class, \(L\), with \(K\) and \(L\) specified as very large numbers. There are many known challenges in the literature on sampling from DP mixture models, in particular with the stick-breaking representation, including mixing issues due to local modes and sensitivity to initialization (Neal, 2000; Hastie et al., 2015). As mentioned earlier, NShM introduces new challenges due to the non-identifiability inherent in the NDP and the complexity of the SBM likelihood. To investigate these issues, we consider and compare the following four samplers: 1. **Gibbs (G)**: A standard Gibbs samplers that updates all the (vector) parameters \(\boldsymbol{\xi},\boldsymbol{z},\boldsymbol{\eta},\boldsymbol{w}\) and \(\boldsymbol{\pi}\), one at a time given all the rest. 2. **Collapsed Gibbs (CG)**: Similar to (G), but we first marginalize out \(\boldsymbol{\eta}\) and only perform Gibbs updates on the remaining parameters. 3. **Blocked Gibbs (BG)**: Same as (G) except that we sample \((\boldsymbol{\xi},\boldsymbol{z})\) jointly given the rest of the variables. This joint sampling is exact. Letting \(E=(\boldsymbol{\eta},\boldsymbol{w},\boldsymbol{\pi})\) be all the parameters except the labels, we perform the joint sampling exactly, by sampling \(\boldsymbol{\xi}\,|\,E\) first, followed by \(\boldsymbol{z}\,|\,\boldsymbol{\xi},E\). 4. **Incompatible Blocked Gibbs (IBG)**: Same as (BG) but with the order of the label updates reversed: First, we sample \(\boldsymbol{z}\,|\,\boldsymbol{\xi},E\) and then \(\boldsymbol{\xi}\,|\,E\). For each of our samplers, we initiate \(\boldsymbol{\xi}\) using separate DPSBM models (Shen et al., 2022) on each network. ### Gibbs (G) Given the joint distribution in (2.11), we can easily compute the conditional distributions for a standard Gibbs sampler (G). 1. **Sampling \(\boldsymbol{\eta}\)** Sample \(\eta_{xyk}\) independently for \((x,y):x\leq y\): \[\eta_{xyk}\,|\,\cdots\,\sim\,\mathsf{Beta}\Big{(}m_{xyk}+\alpha,\bar{m}_{xyk} +\beta\Big{)}\] This follows from representation (2.14). 2. 
**Sampling \(\boldsymbol{\xi}\)** Sample \(\xi_{sj}\) sequentially, given \(\xi_{-sj}\) and other parameters, from the discrete distribution: \[p(\xi_{sj}=x\,|\,\cdots)\propto w_{xz_{j}}\exp\Bigl{(}\sum_{y}\tau_{sy}^{j}u_{ xyz_{j}}+n_{sy}^{j}v_{xyz_{j}}\Bigr{)}\] where \(u_{xyk}=\log[\eta_{xyk}/(1-\eta_{xyk})]\) and \(v_{xyk}=\log(1-\eta_{xyk})\) and \[\tau_{sy}^{j}=\sum_{t:t\neq s}A_{st}^{j}1_{\{\xi_{tj}=y\}},\quad n_{sy}^{j}= \sum_{t:t\neq s}1_{\{\xi_{tj}=y\}}.\] This follows from (2.15). 3. **Sampling \(\boldsymbol{z}\)** Sample \(z_{j}\) independently over \(j\) (i.e., in parallel) given everything else from \[p(z_{j}\,|\,\cdots)\propto\pi_{z_{j}}\prod_{x}w_{xz_{j}}^{n_{x}(\xi_{j})}\prod _{x\leq y}\eta_{xyz_{j}}^{m_{xy}^{j}}(1-\eta_{xyz_{j}})^{N_{xy}^{j}-m_{xy}^{j}},\] that is, \[p(z_{j}=r\,|\,\cdots)\propto\pi_{r}\exp\Bigl{(}\sum_{x}n_{x}(\boldsymbol{\xi} _{j})\log w_{xr}+\sum_{x\leq y}\bigl{(}m_{xy}^{j}u_{rxy}+N_{xy}^{j}v_{rxy} \bigr{)}\Bigr{)}.\] where \(n_{x}(\boldsymbol{\xi}_{j})=\{s:\;\xi_{sj}=x\}\). This follows from representation (2.13) and \[\prod_{s}p(\xi_{sj}\,|\,\boldsymbol{w},z_{j})=\prod_{s}w_{\xi_{sj},z_{j}}= \prod_{x}w_{xz_{j}}^{n_{x}(\boldsymbol{\xi}_{j})}.\] 4. **Sampling \(\boldsymbol{u}\)** Sample \(u_{xk}\) independently across \(x\) and \(k\), as \[u_{xk}\,\,|\,\cdots\sim\mathsf{Beta}\bigl{(}n_{x}(\boldsymbol{\xi}^{(k)})+1, n_{>x}(\boldsymbol{\xi}^{(k)})+w_{0}\bigr{)},\] (3.1) where \(\boldsymbol{\xi}^{(k)}:=\{\xi_{sj}:s\in[n_{j}],\;\;z_{j}=k\}\) and \(n_{x}(\cdot)\) and \(n_{>x}(\cdot)\) are operators counting how many labels are equal to or greater than \(x\), respectively. This follows by noting that \[p(\boldsymbol{u}\,|\,\cdots) \propto\,\prod_{s,j}w_{\xi_{sj},z_{j}}\prod_{k,x}b_{w_{0}}(u_{xk})\] \[=\,\prod_{k}\Bigl{[}\prod_{(s,j):z_{j}=k}[F(\boldsymbol{u}_{k})]_{ \xi_{sj}}\prod_{x}b_{w_{0}}(u_{xk})\Bigr{]},\] where the second line uses \(w_{\xi_{sj},z_{j}}=[w_{z_{j}}]_{\xi_{sj}}=[F(\boldsymbol{u}_{z_{j}})]_{\xi_{sj}}\). This shows that the posterior factorizes over \(k\). 5. **Sampling \(\boldsymbol{v}\)** Sample \(v_{k}\) independently from \[v_{k}\,|\,\cdots\sim\mathsf{Beta}(n_{k}\bigl{(}\boldsymbol{z})+1,n_{>k}( \boldsymbol{z})+\pi_{0}\bigr{)}.\] (3.2) This follows by noting that \[p(\boldsymbol{v}\,|\,\cdots)\propto\prod_{j}\pi_{z_{j}}\prod_{k}b_{\pi_{0}}(v_ {k})=\prod_{j}[F(\boldsymbol{v})]_{z_{j}}\prod_{k}b_{\pi_{0}}(v_{k})\] and using the standard lemma. ### Collapsed Gibbs (CG) Since our goal is simultaneously clustering networks and nodes, the only parameters we need to infer are \(\mathbf{z}\) and \(\mathbf{\xi}\). Hence, the underlying block matrices \(\mathbf{\eta}\) are nuisance parameters. Following the Beta-Binomial conjugacy in our model, we can collapse out the link probabilities of each of the networks for more efficient updating. Note that we retain the ability to estimate these probabilities as the proportion of observed edges between inferred communities in each class. Recall that using the aggregate block sums in Section 2.5, Equation (2.14) allowed us to factor the likelihood as a product over \(k\). Combining this with the prior on \(\mathbf{\eta}\), we have \[\propto\prod_{k}\prod_{x\leq y}\eta_{xyk}^{m_{xyk}+\alpha-1}\big{(}1-\eta_{xyk }\big{)}^{\bar{m}_{xyk}+\beta-1}. 
\tag{3.3}\] Integrating over \(\mathbf{\eta}\), we obtain the collapsed joint distribution which is given by \[p(\mathbf{A},\mathbf{z},\mathbf{\xi}\mid\mathbf{u},\mathbf{v}) \stackrel{{\approx}}{{\propto}} \prod_{k}\prod_{x\leq y}B\big{(}m_{xyk}+\alpha,\;\bar{m}_{xyk}+ \beta\big{)}\prod_{j}\Big{[}\pi_{z_{j}}\prod_{s=1}^{n_{j}}w_{\xi_{sj},z_{j}} \Big{]}, \tag{3.4}\] \[p(\mathbf{u},\mathbf{v}) \stackrel{{\approx}}{{\propto}} \prod_{k=1}^{K}\Big{[}b_{\pi_{0}}(v_{k})\prod_{x=1}^{L}b_{w_{0}}( u_{xk})\Big{]}, \tag{3.5}\] where \(B(\cdot,\cdot)\) is the beta function. This allows us to define a collapsed Gibbs sampler (CG), which is the same as (G) except for the following updates: 1. **Sampling \(\mathbf{\xi}\)** Given \(\mathbf{z}\), only \(m_{xyz_{j}}\) and \(\bar{m}_{xyz_{j}}\) depend on \(\xi_{sj}\). Hence, \[p(\xi_{sj}\mid\cdots)\;\propto\;w_{\xi_{sj},z_{j}}\prod_{x\leq y}B\big{(}m_{ xy}^{z_{j}}+\alpha_{\eta},\;\bar{m}_{xy}^{z_{j}}+\beta_{\eta}\big{)}.\] The complexity of computing the product above can be reduced from \(O(L^{2})\) to \(O(L)\) by a beta ratio idea similar to the one described below for sampling \(z_{j}\). 1. **Sampling \(\mathbf{z}\)** Let us introduce some notation to simplify the updates of \(z_{j}\) given everything else. Assume that the previous value of \(z_{j}=r_{0}\) and the new candidate value is \(r\). Let \(q_{xyk}\) and \(\bar{q}_{xyk}\) denote the previous values of \(m_{xyk}\) and \(\bar{m}_{xyk}\) based on \(z_{j}=r_{0}\), and let \(m_{xyk}(r)\) and \(\bar{m}_{xyk}(r)\) denote the new values based on \(z_{j}=r\), where we are showing the internal dependence on \(r\) explicitly. Changing \(z_{j}\) from \(r_{0}\) to \(r\) only changes two of the matrices \(\{m_{xyk}(r),k\;\in\;[K]\}\), namely \(m_{xyr}(r)\) and \(m_{xyr_{0}}(r)\). When \(r\neq r_{0}\), the effect is to subtract values from \(m_{xyr_{0}}(r)\) and add them to \(m_{xyr}(r)\) such that \(m_{xyr_{0}}(r)+m_{xyr}(r)\) remains the same. The change to \(m_{xyr}\) is the same for all \(r\neq r_{0}\) and is equal to the block sums of \(A^{j}\) with respect to \(\mathbf{\xi}_{j}\). For \(r=r_{0}\), of course the is no change to \(m_{xyk}(r)\). More specifically, fix \(j\) and let \[D^{j}_{xy}:=\sum_{(s,t)\in\Gamma^{j}_{xy}}A^{j}_{st}\;1_{\{\xi_{sj}=x,\;\xi_{ tj}=y\}}\;\;.\] Then, for all \(r\neq r_{0}\), \[\begin{cases}m_{xyk}(r)=q_{xyk},&k\not\in\{r,r_{0}\},\\ m_{xyr}(r)=q_{xyr}+D^{j}_{xy},&\\ m_{xyr_{0}}(r)=q_{xyr_{0}}-D^{j}_{xy},&\end{cases} \tag{3.6}\] and \(m_{xyk}(r_{0})=q_{xyk}\) for all \(k\). Similar relations holds between \(\bar{m}_{xyk}(r)\) and \(\bar{q}_{xyk}\) with \(\bar{D}_{xy}^{j}\) replacing \(D_{xy}^{j}\) and defined by replacing \(A_{st}^{j}\) with \(\bar{A}_{st}^{j}\) in the definition of \(D_{xy}^{j}\). Dividing (3.4) by \(\prod_{k}\prod_{x\leq y}B\big{(}q_{xyk}+\alpha,\ \bar{q}_{xyk}+\beta\big{)}\), which is constant when considering \(p(z_{j}=r\mid\cdots)\), we obtain \[\begin{split} p(z_{j}=r\mid\cdots)\ \propto\\ \pi_{r}\prod_{s=1}^{n_{j}}w_{\xi_{sj},r}\prod_{k\in\{r_{0},\,r\}} \prod_{x\leq y}\frac{B\big{(}m_{xyk}(r)+\alpha,\ \bar{m}_{xyk}(r)+\beta\big{)}}{B\big{(}q_{xyk}+\alpha,\ \bar{q}_{xyk}+\beta\big{)}}.\end{split} \tag{3.7}\] where we have used (3.6) to simplify the product over \(k\). Note from (3.6) that \(m_{xyr_{0}}(r)\) is the same for all \(r\neq r_{0}\), obtained by subtracting counts from \(q_{xyr_{0}}\), while we have \(m_{xyr_{0}}(r_{0})=q_{xyr_{0}}\). 
Let \[\kappa_{r}:=\prod_{x\leq y}\frac{B\big{(}m_{xyr_{0}}(r)+\alpha,\ \bar{m}_{xyr_{0}}(r)+\beta\big{)}}{B\big{(}q_{xyr_{0}}+\alpha,\ \bar{q}_{xyr_{0}}+\beta\big{)}},\quad r\neq r_{0}\] and \(\kappa_{r_{0}}:=1\). Note that \(\kappa_{r}\) is the same for all \(r\neq r_{0}\) and can be computed once for some \(r\neq r_{0}\). Considering the two terms \(k=r_{0}\) and \(k=r\) in (3.7) separately, we have \[p(z_{j}=r\mid\cdots)\ \propto\ \pi_{r}\prod_{x}w_{xr}^{n_{x}(\mathbf{\xi}_{j})} \cdot\kappa_{r}\prod_{x\leq y}\frac{B\big{(}m_{xyr}(r)+\alpha,\ \bar{m}_{xyr}(r)+\beta\big{)}}{B\big{(}q_{xyr}+\alpha,\ \bar{q}_{xyr}+\beta\big{)}}, \tag{3.8}\] using the further simplification \(\prod_{s=1}^{n_{j}}w_{\xi_{sj},r}=\prod_{x}w_{xr}^{n_{x}(\mathbf{\xi}_{j})}\). Here, \(n_{x}(\cdot)\) counts how many labels are equal to \(x\). Note that the beta ratios in (3.8) can be computed efficiently and accurately, which we describe in Appendix A. ### Blocked Gibbs (BG) Rodriguez et al. (2008) proposed a Gibbs sampler when they introduced the NDP. Although not mentioned, they used a block sampler for sampling exactly from the joint distribution of \(\mathbf{z}\) and \(\mathbf{\xi}\). Specifically, letting \(E=(\mathbf{y},\mathbf{\theta},\mathbf{w},\mathbf{\pi})\) collect all the parameters except \(\mathbf{\xi}\) and \(\mathbf{z}\), the authors' approach exactly samples from \(p(\mathbf{z},\mathbf{\xi}\,|\,E)\) as one step in their Gibbs sampler. Note that because \(p(\mathbf{z},\mathbf{\xi}\,|\,E)=\prod_{j}p(z_{j},\mathbf{\xi}_{j}\,|\,E)\) factorizes over \(j\), it is enough to focus on a single \(j\). Rodriguez et al. (2008) suggested marginalizing out \(\xi_{j}\) to sample from \(p(z_{j}\mid E)\) as follows: \[p(z_{j}\,|\,E) =\sum_{\mathbf{\xi}_{j}}p(z_{j},\mathbf{\xi}_{j}\,|\,E)\] \[\propto\pi_{z_{j}}\sum_{\mathbf{\xi}_{j}}\prod_{i}\!\left\{p(y_{ij} \,|\,\theta,\xi_{ij},z_{j})w_{\xi_{ij},z_{j}}\right\}\,\] and then sampling from \(p(\mathbf{\xi}_{j}\,|\,z_{j},E)\). By interchanging the sum and product, we obtain \[p(z_{j}\,|\,E)\propto\pi_{z_{j}}\prod_{i}\sum_{\xi_{ij}}\!\left\{p(y_{ij}\,|\, \theta,\xi_{ij},z_{j})w_{\xi_{ij},z_{j}}\right\}\.\] Importantly, this exchange is justified because of the following identity: \[\sum_{x_{1},\ldots,x_{n}}\prod_{i=1}^{n}f_{i}(x_{i}) =\sum_{x_{1},\ldots,x_{n-1}}\left[f_{1}(x_{1})f_{2}(x_{2})\cdots f_ {n-1}(x_{n-1})\sum_{x_{n}}f_{n}(x_{n})\right]\] \[=\sum_{x_{1},\ldots,x_{n-1}}\left[f_{1}(x_{1})f_{2}(x_{2})\cdots f _{n-1}(x_{n-1})\right]\cdot\sum_{x_{n}}f_{n}(x_{n})\] \[=\prod_{i=1}^{n}\sum_{x_{i}}f_{i}(x_{i}),\] where \(x_{i}\in[L]\) for all \(i\). The last line follows by a recursive application of the argument. In the NDP, \(f\) is the likelihood function and \(\xi_{ij}\) plays the role of \(x_{i}\) here. Note the summation on the LHS is over \(L^{n}\) terms. The overall complexity of calculating the LHS directly is \(O(nL^{n})\), while that of the RHS is \(nL+n=O(nL)\). However, this interchange of summation and product does not work for the NSBM because our likelihood function does not factor as a product. This is directly a result of the dependency introduced by modeling network data. Specifically, our likelihood is a function of _edges_, which encode the information between _pairs_ of nodes, rather than individual data points. Unfortunately, exact sampling from \(p(z_{j}\mid E)\) is intractable without the interchange. In other words, the complexity of exact sampling from \(p(z_{j}\mid E)\) is that of sampling from the posterior of a SBM. 
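The computational gap behind this argument is easy to check numerically. The snippet below is a small illustration, not from the paper, comparing the brute-force sum over all \(L^{n}\) label configurations with the product-of-sums form obtained after the interchange; the point of the discussion above is that for the NSBM likelihood the factors couple pairs \((x_{s},x_{t})\), so this interchange is not available.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, L = 6, 3
f = rng.random((n, L))          # f[i, x] plays the role of f_i(x_i)

# Brute force: sum over all L**n configurations of prod_i f_i(x_i)  -- O(n L**n) work.
brute = sum(np.prod(f[np.arange(n), list(cfg)])
            for cfg in itertools.product(range(L), repeat=n))

# After interchanging sum and product: prod_i sum_x f_i(x)          -- O(n L) work.
fast = np.prod(f.sum(axis=1))

print(brute, fast)              # agree up to floating-point error
```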
See Section 6 for further discussion. However, we can recover a blocked sampler by swapping the order: first sampling \(\boldsymbol{\xi}_{j}\) from \(p(\boldsymbol{\xi}_{j}\,|\,E)\) by marginalizing out \(z_{j}\) and then sampling \(z_{j}\) from \(p(z_{j}\,|\,\boldsymbol{\xi}_{j},E)\). This allows us to utilize the interchange for NSBM. We obtain the blocked Gibbs sampler (BG) by replacing the \(\boldsymbol{\xi}\) update in (G) with the following: * **Sampling \(\boldsymbol{\xi}\)** Sample \(\xi_{sj}\) sequentially, given \(\xi_{-sj}\) and the other parameters besides \(\boldsymbol{z}\), from the discrete distribution: \[p(\xi_{sj}=x\,|\,\cdots)\propto\prod_{k}\sum_{y}\pi_{k}\cdot w_{xk}\cdot\exp \Bigl{(}\tau_{sy}^{j}u_{xyz_{j}}+n_{sy}^{j}v_{xyz_{j}}\Bigr{)}\] A blocked sampler is a particular type of partially collapsed Gibbs (PCG) sampler. Van Dyk and Park (2008) demonstrate the importance of sampling order when implementing PCG samplers. In particular, we need to sample first from \(p(\boldsymbol{\xi}_{j}\mid E)\) and then from \(p(z_{j}\mid\boldsymbol{\xi}_{j},E)\) since sampling in the opposite order may result in a stationary distribution that differs from our target posterior distribution. However, it is often "only" dependency between parameters that is lost, which may not affect model performance such as clustering results. We therefore consider a fourth sampler, which we refer to as an incompatible blocked Gibbs (IBG), that uses the opposite sampling order. ## 4 Simulations In this section, we perform several simulation studies to evaluate the clustering performance of NSBM. We first provide a simulation that analyzes the various Gibbs samplers from Section 3. We then assess how each NSBM sampler scales as the number of nodes grows. Lastly, we compare the performance of NSBM to three competing methods on both assortative and non-assortative networks. The first two comparison methods are two-step procedures based on the graph clustering algorithms from Mukherjee et al. (2017): first cluster the networks using NCGE or NCLM, then cluster the nodes using a community detection algorithm on the average adjacency matrix from the networks in each class. Although NCLM works for unlabeled networks, the second step requires the networks to be labeled so that averaging the adjacency matrices is sensical. We also compare our method to MMLSBM from Jing et al. (2021) using the improved algorithm ALMA from Fan et al. (2022). While this method simultaneously clusters networks and nodes, it also requires the networks to be labeled. We compare our model with these three methods in various simulation settings. In such settings, we sample labeled networks so that the comparison to the other methods is fair. However, we emphasize that NSBM does not utilize the node correspondence, hence, as we show in Section 4.4, the performance would be the same if the networks were not labeled. For NCGE, NCLM, and ALMA, we provide the true number of classes and clusters as input. In contrast, NSBM learns both the number of network classes and the number of node communities automatically. For NSBM, we evaluate the maximum a posteriori (MAP) labels for the network classes and node communities. The MAP label is defined as the most frequently assigned label over the chain, i.e. the mode. We measure the performance of the estimated \(\mathbf{z}\) and \(\mathbf{\xi}\) labels using normalized mutual information (NMI). 
NMI ranges from 0 (random guessing) to 1 (perfect agreement) and is used to compare two partitions with a different number of labels while accounting for the issue of label invariance. For \(\mathbf{\xi}\)-NMI, we report the average NMI across \(\mathbf{\xi}_{j}\). The simulations were performed on a high-computing cluster. The code for these experiments is available at the GitHub repository aaamini/nsbm (Josephs, Amini, et al., 2023). An overview of our findings is as follows:

1. (IBG) performs the best clustering networks, followed by (CG).
2. (CG) performs the best clustering nodes, followed by (G).
3. (IBG) typically outperforms (BG), especially for larger networks.
4. Overall, (CG) performs the best.

### Simulation setting

We begin by defining a general data-generating mechanism that we will use throughout our simulations to sample a collection of networks. For each network \(j=1,\ldots,J\), we begin by sampling its class membership \(z_{j}\sim\{1,\ldots,K\}\). For each network class, we then sample \(\mathbf{\xi}_{k}^{*}\sim\mathsf{Unif}(\{1,\ldots,L(k)\})^{\otimes n}\). To introduce community heterogeneity, we first let \(\mathbf{\xi}_{j}=\mathbf{\xi}_{z_{j}}^{*}\) and then for each coordinate of \(\mathbf{\xi}_{j}\), independently and with probability \(\tau\), resample the coordinate from \(\mathsf{Unif}(\{1,\ldots,L(k)\})\). This creates a Markov perturbation of community labels within a network cluster, where each node retains its class-level community with probability \(1-\tau\) or gets a new label with probability \(\tau\). Consequently, although the nodes are aligned, the community membership of a given node may change within a network class when \(\tau>0\). When \(\tau=0\), the community structure is identical for networks in the same class (the labeled case), but the community heterogeneity increases as \(\tau\) increases. Finally, we sample the networks \[A^{j}\mid\mathbf{\eta},\,\mathbf{\xi}_{j},z_{j}\;\sim\;\mathsf{SBM}\Big{(}\mathbf{\xi}_{j},\alpha_{j}\mathbf{\eta}_{z_{j}}\Big{)}\enspace, \tag{4.1}\] where \(\mathbf{\eta}_{z_{j}}\) is the connectivity matrix for class \(z_{j}\) and \(\alpha_{j}\) is a scaling parameter.

### Convergence analysis

We begin by comparing the Gibbs samplers from Section 3. For \(k=1,\ldots,K\) and \(\gamma\in[0,1]\), we let \[\mathbf{\eta}_{k}=(1-\gamma)\cdot I_{L(k)}+\gamma\cdot U_{L(k)}\enspace, \tag{4.2}\] where \(I_{n}\) is the \(n\times n\) identity matrix and \(U_{n}\) is a random symmetric \(n\times n\) matrix, with entries uniformly distributed in \([0,1]\). For \(\gamma=0\), we have a perfectly assortative stochastic block model, which becomes less assortative as \(\gamma\) increases to \(1\). Therefore, \(\gamma\) controls the difficulty of clustering the networks. We let \[\alpha_{j}=\frac{\lambda}{\text{ead}(n_{j},\mathbf{\eta}_{z_{j}})}\enspace, \tag{4.3}\] where \(\text{ead}(n_{j},\mathbf{\eta}_{z_{j}})\) is the expected average degree (EAD) of an SBM with \(n_{j}\) nodes and connectivity matrix \(\mathbf{\eta}_{z_{j}}\). The scaling of \(\mathbf{\eta}_{z_{j}}\) by \(\alpha_{j}\) is done to obtain an SBM with an EAD of \(\lambda\), a parameter that can then be used to control the sparsity of the network. As \(\lambda\) decreases, the networks become sparser, making community detection more difficult. We take \(J=60\) networks on \(n=200\) nodes each with \(\gamma=0.1\) and \(\lambda=30\). We also let \(\tau=0\) so that we are working in the labeled case, although as we will see later, NSBM is not affected by \(\tau\).
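A compact sketch of this data-generating mechanism is given below. It is a plain reading of Equations (4.1)-(4.3) with illustrative function names (not the authors' code), the class labels are drawn uniformly at random here, the class sizes default to the \(L=2,3,5\) configuration used in the experiments that follow, and the expected average degree is approximated by \((n-1)\,\mathbf{w}^{\top}\mathbf{\eta}\,\mathbf{w}\) with uniform community proportions \(\mathbf{w}\), which is only a rough proxy for \(\text{ead}(n,\mathbf{\eta})\).

```python
import numpy as np

def ead(n, eta):
    # Approximate expected average degree of an SBM with n nodes, connectivity
    # eta, and (roughly) uniform community proportions.
    L = eta.shape[0]
    w = np.full(L, 1.0 / L)
    return (n - 1) * w @ eta @ w

def simulate_networks(J=60, n=200, Ls=(2, 3, 5), gamma=0.1, lam=30.0, tau=0.0, seed=0):
    """Sketch of the Section 4.1 generator: classes z_j, perturbed communities
    xi_j, and adjacency matrices drawn as in eqs. (4.1)-(4.3)."""
    rng = np.random.default_rng(seed)
    K = len(Ls)
    etas = []
    for L in Ls:                                            # eq. (4.2)
        U = np.triu(rng.random((L, L)))
        U = U + np.triu(U, 1).T                             # symmetric, Unif(0,1) entries
        etas.append((1 - gamma) * np.eye(L) + gamma * U)
    xi_star = [rng.integers(L, size=n) for L in Ls]         # class-level communities
    z = rng.integers(K, size=J)                             # classes drawn uniformly here
    xi, A = [], []
    for j in range(J):
        k = z[j]
        labels = xi_star[k].copy()
        flip = rng.random(n) < tau                          # Markov perturbation
        labels[flip] = rng.integers(Ls[k], size=flip.sum())
        alpha_j = lam / ead(n, etas[k])                     # eq. (4.3)
        P = np.clip(alpha_j * etas[k], 0.0, 1.0)[np.ix_(labels, labels)]
        Aj = np.triu((rng.random((n, n)) < P).astype(int), 1)
        xi.append(labels); A.append(Aj + Aj.T)              # eq. (4.1), undirected
    return z, xi, A

z, xi, A = simulate_networks()
```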
We sample evenly from \(K=3\) classes with \(L=2\), \(3\), and \(5\) communities. We repeat each experiment \(100\) times and plot the mean \(\mathbf{z}\)-NMI and \(\mathbf{\xi}\)-NMI along with the interquartile range in Figure 4.1. The first thing we see is that each of the Gibbs samplers "converges" quickly. This highlights how our samplers are able to efficiently find good clusters after just a few iterations. Next, we see that (IBG) outperforms (BG) despite only differing in the order of their exact joint sampling of \(\mathbf{z}\) and \(\mathbf{\xi}\). Van Dyk and Park (2008) suggest that incompatibility can actually improve mixing in partially collapsed Gibbs samplers at the risk of sacrificing the correct correlation structure between parameters. In terms of clustering performance, our results indicate that this sacrifice may be worthwhile. Finally, we see that although (IBG) and (CG) perform the best, on average, on \(\mathbf{z}\) and \(\mathbf{\xi}\), respectively, there is considerable variability in the results. This is especially prominent with (BG), as the interquartile band demonstrates. One interesting finding that relates to this variability is that we observed \(\mathbf{z}\) collapse to a single class, which results in \(\mathbf{z}\)-NMI of \(0\). This may be due to the previously unmentioned non-identifiability in NDP mixture models that we introduced in Section 2. Specifically, since the number of classes and clusters can take any value, a single class can fully capture the data with a growing number of clusters. In this example, that means we cannot distinguish between \(K=3\) classes with \(L=2,3\), and \(5\) communities compared to a single class with \(L=10\) communities. ### Scaling analysis In this simulation, we keep the same setup from Section 4.2, but we vary \(n\) to see how the samplers scale with the network order. However, for the scaling parameter in Equation (4.3), we let \(\lambda=n/15\) so that the average degree grows with the network. Throughout, we take \(J=30\) and \(\gamma=0.4\), and we vary \(n\) from \(20\) to \(500\). Figure 4.2 shows the results. We see that \(\mathbf{\xi}\)-NMI increases as the networks grow for all of the methods, which is what we expect for community detection. As before, we see that (G) and (CG) outperform (BG) and (IGB). We also see a sharp increase in \(\mathbf{z}\)-NMI for (G), (CG), and (IBG), with (IBG) performing the best for large networks. Interestingly, \(\mathbf{z}\)-NMI is stable for (BG), which can be explained by noting that blocking requires marginalization over \(nL+n\) terms, hence its chain might be slower to converge. In contrast, the improvement of (IBG) provides further evidence that incompatibility can improve mixing. Similarly, by marginalizing out \(\mathbf{\eta}\), the number of parameters that (CG) samples grows linearly with \(n\) unlike with (G). Another interesting finding is that \(\mathbf{z}\)-NMI eventually stabilizes for (G) and (CG). There may be an underlying trade-off between the number of nodes \(n\) and the number of networks \(J\) but this relationship needs to be examined further. ### Community heterogeneity analysis In this simulation, we vary \(\tau\in[0,1]\). Throughout, we take \(J=20\) and \(n=200\). We also fix \(\gamma=0.2\) and \(\lambda=25\) from Equations (4.2) and (4.3), respectively. We omit BG to declutter the results due to its poor performance in the previous simulations. Figure 4.3 shows the results. 
We see that the NSBM samplers are robust to community heterogeneity as \(\mathbf{z}\)-NMI and \(\mathbf{\xi}\)-NMI are stable for all values of \(\tau\). On the other hand, NCGE and ALMA exhibit a sharp phase transition for \(\tau\geq 0.5\): with low community heterogeneity, NCGE and ALMA perform the best, whereas with high community heterogeneity, they perform uniformly worse than NSBM even though we provide them the true number of communities. It is not surprising that NCGE and ALMA outperform NSBM for low \(\tau\), because they use the additional information encoded in the alignment of the nodes, whereas NSBM does not utilize the alignment at all. Lastly, NCLM has low \(\mathbf{z}\)-NMI for all values of \(\tau\). Importantly, we see that only the NSBM samplers exhibit good community detection performance, while NCGE, NCLM, and ALMA show decreasing \(\mathbf{\xi}\)-NMI as \(\tau\) increases.

Figure 4.1: Cluster performance of our four Gibbs algorithms. \(\mathbf{z}\)-NMI (left) measures how well we are clustering the network objects and \(\mathbf{\xi}\)-NMI (right) measures how well we are performing community detection on the nodes. The bands are 50% quantile regions based on 100 experiments.

Figure 4.3: Varying \(\tau\) for fixed \(\gamma=0.2\) and \(\lambda=25\).

### Non-assortative networks

Here, we test our model on non-assortative networks based on multilayer personality-friendship networks from Amini et al. (2022). In this setting, we suppose there are three types of schools that have different interaction frequencies between students. In each school, students belong to one of three personality types: extrovert, ambivert, or introvert. The probabilities of interaction between students of various personality types are given as \[\mathbf{\eta}_{1}=\begin{pmatrix}.9&.75&.5\\.75&.6&.25\\.5&.25&.1\end{pmatrix},\quad\mathbf{\eta}_{2}=\begin{pmatrix}.8&.1&.3\\.1&.9&.2\\.3&.2&.7\end{pmatrix},\quad\mathbf{\eta}_{3}=\begin{pmatrix}.1&.4&.6\\.4&.3&.1\\.6&.1&.5\end{pmatrix},\] where \(\mathbf{\eta}_{k}\) represents the probabilities for school type \(k\). We see that for school type \(1\), extroverts interact most often with other extroverts, but are still marginally more likely to interact with any student, whereas for school type \(2\), the students mix assortatively within their personality type. The third school type is a mix of the first two schools: ambiverts and introverts mix assortatively, whereas extroverts prefer to mix with non-extroverts. The proportion of students belonging to these personality types is given in Table 2. We sample \(40\) social networks for each school (\(J=120\)) and each school has \(n\sim\mathsf{Unif}(20,100)\) students. In this setting, we take the networks to be unlabeled (\(\tau=1\)), which reflects realistic anonymity and differing node sets in real social networks. Therefore, we only include NCLM as a comparison and only in its performance clustering the networks. The results for \(100\) replicates are given in Figure 4.4. As in the previous simulations, (CG) and (G) perform the best on \(\mathbf{\xi}\). (BG) and (IBG) both perform very well on \(\mathbf{z}\), dominating NCLM. In this case, (BG) actually outperforms its incompatible version, providing evidence that (BG) performs well on small networks.

## 5 Real data analysis

To showcase the flexibility of NSBM, we consider two real datasets. First, we apply it to an aggregated dataset consisting of five different types of social network from Oettershagen et al.
(2020): high school, Facebook, Tumblr, MIT, DBLP. In total, there are over \(2300\) networks from these \(5\) datasets ranging from \(n=20\) to \(100\). This dataset accurately reflects the difficulty in comparing social networks since they are anonymous and also have a different number of nodes per network even within network type. In order to resemble the more common setting in which only a few social networks are observed, we perform an _in silico_ study by sampling \(J\sim\mathsf{Unif}(20,100)\) per experiment. We compare the results of NSBM to NCLM, the only other method that works for unlabeled networks, on \(100\) different experiments. For the comparison, the true classes are the social network type (high school, Facebook, Tumblr, MIT, DBLP). However, as the networks are anonymous, we do not have ground-truth node labels. Therefore, we only compare NSBM and NCLM on \(\mathbf{z}\)-NMI and not on \(\mathbf{\xi}\)-NMI. The results in Table 3 match what we found in the simulation on the multilayer personality-friendship networks. That is, (IBG) and (BG) perform the \begin{table} \begin{tabular}{c c c c} \hline \hline **School** & **Extrovert** & **Ambivert** & **Introvert** \\ \hline **1** & 40 & 35 & 25 \\ **2** & 70 & 15 & 15 \\ **3** & 20 & 40 & 40 \\ \hline \hline \end{tabular} \end{table} Table 2: Percentage of students belonging to the three personality types for each of the school types. best. IBG outperforms NCLM even though NCLM is given the true number of classes. We also apply NSBM to character-interaction networks from popular films and television series (David, 2020). We consider \(J=15\) networks: 6 from the original and sequel Star Wars trilogies, 7 from the Game of Thrones television series, and 2 Lord of the Rings films. For each network, the nodes represent characters and an edge exists between two characters if they share a scene together. The number of nodes ranges from \(n=20\) to \(172\). In this case, we can assess the clustering performance of NSBM on both network and node labels. For the network labels, we let each network belong to its franchise (Star Wars, Game of Thrones, and Lord of the Rings). Since we have character names, we can also assign node labels. For the characters in Star Wars, we assign them to an "affiliation" based on their Wookiepedia entry ([https://starwars.fandom.com/](https://starwars.fandom.com/)). The labels are: Rebel Alliance, Galactic Republic, Galactic Empire, Confederacy of Independent Systems, Jedi Order, and Sith Order. Of the 91 unique characters included in the six films, 25 do not belong to one of these affiliations. Therefore, we discard them from our cluster performance evaluation, but note that we leave them in as nodes in the networks. We do not assign labels to the Game of Thrones networks because there are 400 unique characters, which we leave for future researchers. The results are shown in Table 3. Again, we see that (BG) and (IBG) performs the best on \(\mathbf{z}\)-NMI. We also see that (G) and (IBG) perform the best on \(\mathbf{\xi}\)-NMI. We note that the low \(\mathbf{\xi}\)-NMI performance can, at least partially, be explained by the arbitrary affiliations we assigned the characters. In particular, we used the last reported affiliation for convenience, but an affiliation that varies with time or film may be more appropriate. 
Alternatively, since many of the characters belong to several affiliations, a more appropriate model might be a mixed-membership stochastic block model. Finally, some of the affiliations are more similar than others, for example Galactic Empire and Galactic Republic, but NMI does not weigh these mismatches differently than any random pair. A better understanding of the affiliation hierarchy could yield a better performance metric.

Figure 4.4: Simulated multilayer personality-friendship networks.

\begin{table} \begin{tabular}{l c c c c c} \hline **Dataset** & **G** & **CG** & **BG** & **IBG** & **NCLM** \\ \hline Social (\(\mathbf{z}\)) & 0.21 & 0.33 & 0.46 & **0.57** & 0.47 \\ & (0.17-0.23) & (0.28-0.37) & (0.38-0.56) & (0.50-0.62) & (0.44-0.51) \\ \hline Characters (\(\mathbf{z}\)) & 0.30 & 0.22 & **0.70** & 0.33 & 0.36 \\ \hline Characters (\(\mathbf{\xi}\)) & **0.24** & 0.18 & 0.15 & 0.22 & NA \\ \hline \end{tabular} \end{table} Table 3: NMI (and interquartile range) comparison of NSBM MAP labels and NCLM on two real datasets: social networks (Oettershagen et al., 2020) and character-interaction networks (David, 2020).

## 6 Conclusion and discussions

In this work, we have introduced the nested stochastic block model (NSBM). By novelly employing the nested Dirichlet process (NDP) prior (Rodriguez et al., 2008), NSBM has the flexibility to cluster unlabeled networks while simultaneously performing community detection. Importantly, NSBM also learns the number of classes at the network and node levels. NSBM is not a straightforward application of an NDP mixture model because network edges represent pairwise interactions between nodes and thus violate the original i.i.d. setting. This further manifests as the inability to use the Gibbs sampler originally proposed in Rodriguez et al. (2008). As a consequence, we proposed four different Gibbs samplers that are shown to have various strengths in our numerical results. Importantly, although three of the samplers have the same (correct) posterior, we observe that they seem to converge to different clustering solutions. In particular, (IBG) tends to posterior modes that correspond to accurate network clustering, while (G) converges to accurate node clustering. Since (CG) performs well in both tasks, we conclude that it is the best sampler, but it remains open why these three samplers, with identical stationary distributions, perform differently. One hypothesis is that the non-identifiability of the NDP likelihood coupled with a weak label prior leads to a truly multimodal posterior with regions of very low probability in between, trapping each sampler in the vicinity of a different mode. We also observe that although (IBG) is incompatible, it outperforms (BG) on the clustering tasks. Van Dyk and Park (2008) suggest that incompatibility, while breaking the parameter dependency and creating a stationary distribution that differs from the true posterior, can improve mixing. Since mixing is a known challenge for DP mixture models (Neal, 2000; Hastie et al., 2015), this may be worthwhile. However, more theory is needed to understand this trade-off. In particular, the relationship between the marginal posterior distributions and the global posterior distribution, and how they relate to clustering performance, needs to be studied. One can look at other samplers besides the four we considered. As mentioned in the discussion of (BG), computing \(p(z_{j}=k\,|\,E)\) is difficult. It boils down to
computing sums of the form \[\sum_{x_{1},x_{2},\ldots,x_{n_{j}}}\prod_{s<t}f_{st}(x_{s},x_{t})\enspace, \tag{6.1}\] where \(f_{st}(x_{s},x_{t})=\eta_{x_{s}x_{t}k}^{A_{st}^{j}}(1-\eta_{x_{s}x_{t}k})^{1-A_{st}^{j}}\). Loopy belief propagation (BP), a message-passing algorithm, can be used to approximately compute the sum. Still, this requires passing messages over the complete graph. Ideas from Decelle et al. (2011) can be used to further approximate by passing messages only over edges of \(A^{j}\), leading to a speed up for sparse networks. We can also use a Gibbs sampler to compute (6.1), leading to Gibbs-within-Gibbs samplers. These samplers are interesting to implement and study; however, for practical use, they are mostly infeasible due to the computational cost of calculating \(p(z_{j}=k\,|\,E)\) in each step. One important discovery that we made in our simulations is the phenomenon that chains occasionally collapse into a single class for \(\mathbf{z}\). We experienced this for all of the samplers, though most notably with (BG), and it only occurs some of the time for some of the settings. We believe this is due to an inherent non-identifiability in NDP mixture models, which has not been previously discussed. This issue is of independent interest and should be investigated further. It is also unclear why this issue did not occur in the original NDP simulations. Finally, there are several natural modifications that could be made to NSBM including extensions to directed networks, as well as to degree-corrected and mixed-membership stochastic block models. We leave these extensions to future researchers.

## Acknowledgments

NJ was partially supported by NIH/NICHD grant 1DP2HD091799-01. AA was supported by NSF grant DMS-1945667. LL was supported by NSF grants DMS 2113642 and DMS 1654579.
2305.06982
How Much Partiality Is Needed for a Theory of Computability?
Partiality is a natural phenomenon in computability that we cannot get around. So, the question is whether we can give the areas where partiality occurs, that is, where non-termination happens, more structure. In this paper we consider function classes which besides the total functions only contain finite functions whose domain of definition is an initial segment of the natural numbers. Such functions appear naturally in computation. We show that a rich computability theory can be developed for these function classes which embraces the central results of classical computability theory, in which all partial (computable) functions are considered. To do so, the concept of a G\"odel number is generalised, resulting in a broader class of numberings. The central algorithmic idea in this approach is to search in enumerated lists. In this way, function computability is reduced to set listability. Besides the development of a computability theory for the function classes, the new numberings -- called quasi-G\"odel numberings -- are studied from a numbering-theoretic perspective: they are complete, and each of the function classes numbered in this way is a retract of the G\"odel numbered set of all partial computable functions. Moreover, the Rogers semi-lattice of all computable numberings of the considered function classes is studied and results as in the case of the computable numberings of the partial computable functions are obtained. The function classes are shown to be effectively given algebraic domains in the sense of Scott-Ershov. The quasi-G\"odel numberings are exactly the admissible numberings of the computable elements of the domain. Moreover, the domain can be computably mapped onto every other effectively given one so that every admissible numbering of the computable domain elements is generated by a quasi-G\"odel numbering via this mapping.
Dieter Spreen
2023-05-11T17:10:05Z
http://arxiv.org/abs/2305.06982v2
# How Much Partiality Is Needed for a Theory of Computability? ###### Abstract Partiality is a natural phenomenon in computability that we cannot get around. So, the question is whether we can give the areas where partiality occurs, that is, where non-termination happens, more structure. In this paper we consider function classes which besides the total functions only contain finite functions whose domain of definition is an initial segment of the natural numbers. Such functions appear naturally in computation. We show that a rich computability theory can be developed for these function classes which embraces the central results of classical computability theory, in which all partial (computable) functions are considered. To do so, the concept of a Godel number is generalised, resulting in a broader class of numberings. The central algorithmic idea in this approach is to search in enumerated lists. In this way, the notion of computation is reduced to that of enumeration. Besides the development of a computability theory for the function classes, the new numberings--called quasi-Godel numberings--are studied from a numbering-theoretic perspective: they are complete, and each of the function classes numbered in this way is a retract of the Godel numbered set of all partial computable functions. Moreover, the Rogers semi-lattice of all computable numberings of the considered function classes is studied and results as in the case of the computable numberings of the partial computable functions are obtained. The function classes are shown to be effectively given algebraic domains in the sense of Scott-Ershov. The quasi-Godel numberings are exactly the admissible numberings of the domain. Moreover, the domain can be computably mapped onto every other effectively given one so that every admissible numbering of the computable domain elements is generated by a quasi-Godel numbering via this mapping.
###### Contents

* 1 Introduction
* 2 The function classes \(\mathcal{S}_{A}^{(n)}\)
* 3 A machine model
* 4 Computability theory for \(\mathcal{S}_{A}\)
* 5 Computably enumerable sets
* 6 More results in the computability theory with quasi-Godel numberings
* 7 Quasi-Godel numberings and the Rogers semi-lattice of computable numberings
* 8
## 1 Introduction

When viewing a domain as a neighbourhood or information system [38, 39], the
basic elements correspond to set systems that define an object only vaguely. The advantage of this approach is that names for the objects as well as for the "approximating" neighbourhoods are available in one namespace. First investigations on the problem whether all partial computable functions have to be considered in order to develop a satisfying theory of computability were carried out together with W. H. Kersjes [21]. We started with a modified Turing machine model which, for a given computably enumerable set \(A\), computed all total computable functions and exactly the initial segment functions with a domain of length in \(A\). Based on this machine model, a numbering of this function class was introduced. Yet, in this investigation we did not have at hand any characterisation of this numbering as known for Godel numberings. As a result, in many cases the computability of index functions had to be proven by constructing suitable machines and could not be derived from properties of the numbering. Such a characterisation was only given in the habilitation thesis [40] of the current author, of which the present paper is an update. The problem was that although the numbering has a computable universal function, it is not in the corresponding function class. However, the graph of the universal function can be enumerated by a total computable function. This property corresponds to the Enumeration Theorem for Godel numberings [36, Theorem IV]. In addition, Godel numberings are maximal with respect to reducibility among all numberings whose universal function is computable. Similarly, it now turns out that for every computable enumeration of the graphs of a family \((r_{i})_{i\in\omega}\) of functions in the class under consideration, there is a total computable function computing an index of \(r_{i}\). We call numberings that satisfy these two conditions Quasi-Godel Numberings. As will be shown, any two such numberings are recursively isomorphic. Furthermore, all important theorems known for Godel numberings can be derived for this new type of numberings, without using any results of classical computability theory. As every Godel numbering is also a quasi-Godel numbering, this shows that the new notion is a reasonable generalisation of the concept of Godel numbering. In classical computability theory, the concept of computable enumerability has proven to be fundamental. For example, in [36], the concept of a partial recursive operator is traced back to that of a computably enumerable set. As follows already from the definition of quasi-Godel numberings, the concept of computable enumeration is also of central importance for the approach to computability theory that we want to present here. Many of the proofs consist of constructing enumerations for function graphs. In the subsequent sections we start with developing computability theory on the basis of quasi-Godel numberings. In Section 2 we introduce the function classes which, in addition to the total computable functions, only contain those initial segment functions whose domains have certain given lengths. We then show that these classes have standard numberings with respect to a given Godel numbering of all partial computable functions. The standard numberings obtained in this way are in particular quasi-Godel numberings. In addition, we will discuss some applications of these function classes. Section 3 shows that a quasi-Godel numbering of these function classes can also be obtained without using a Godel numbering.
We present a machine model which computes exactly the functions in such a class and use this to define a numbering that turns out to be quasi-Godel numbering. This shows that a computability theory can be founded on the function classes considered here without prior knowledge of the theory of all partial computable functions. In the next sections we derive some central results of computability theory from the properties of quasi-Godel numberings. In Section 4 we show that the _smn_-theorem holds, discuss the effectiveness of substitution, give a normal form theorem and prove the recursion theorem, Rice's theorem and some consequences of these results. In contrast to the theory of all partial computable functions, the partial functions of the kind considered here can be extended to total computable functions, but the extension cannot be effectively computed from an index of the partial function. Also, the length of its domain cannot be computed from the index of an initial segment function. As we shall see, the _smn_-theorem does not have the same importance in this theory as it does in computability theory with Godel numbering. There it is mainly used to construct index functions. Here these functions have to be constructed in a different way. In Section 5 computably enumerable sets are introduced in the usual way as sets that are either empty or the range of a total computable function. They can also be characterised as ranges of the functions considered here, not necessarily total ones. With the help of a quasi-Godel numbering of this function class we then simultaneously get an enumeration of these sets. On the basis of a few selected theorems it will become clear that in this approach to the theory of computably enumerable sets essentially all classical results apply except for those in whose formulation the domain characterisation of computably enumerable sets is included. Because of the normalisation of the domains of the considered functions, this characterisation is no longer meaningful here. As will be seen however, in the usual proofs the constructions using this characterisation can be replaced by others. In addition, we show that the numbering of computably enumerable sets introduced in the manner described is computably isomorphic to the numbering of these sets defined via a Godel numbering. This shows that the approach to computability theory presented here can be used to study computably enumerable sets in the same way as the classical one. In Section 6 we continue our development of a computability theory for the function classes described above that we started in Section 4: the theorems of Rice/Shapiro and Myhill/Shepherdson will be derived. Section 7 is reserved for a numbering-theoretical investigation of quasi-Godel numberings. Here we address the question of the existence of minimal supersets of the class of total computable functions for which still quasi-Godel numberings exist. Furthermore, it is shown that all quasi-Godel numberings of a function class are computably isomorphic. From this it follows in particular that in Section 2 there is no restriction to construct a quasi-Godel numbering as a standard numbering for a Godel numbering. Finally, the Rogers semi-lattices of computable numberings of the function classes under consideration are examined and results known for the case of all partial computable functions are transferred, such as Goetze's theorem [9, 10] that every countable partially ordered set can be embedded isomorphically in this semi-lattice. 
It also follows from results of Mal'cev [27] and Khutoretskii [24] that the considered function classes have infinitely many incomparable Friedberg numberings and that there are positive numberings of these classes to which no Friedberg numbering can be reduced. In Section 8 we investigate the connection with domain theory. This theory was established independently by Ershov [6] and Scott [38] in their aim to develop a mathematical pleasing way to study the computable functionals of higher type and to construct a model of the untyped lambda calculus, respectively. As will be shown, the functions considered in the present work are just the computable elements of an effectively given algebraic domain, which contains all total number-theoretic functions in addition to the initial segment functions. The quasi-Godel numberings studied in the previous sections are precisely the admissible numberings in the sense of domain theory. We show that the domain of number-theoretic functions just described can be computably mapped onto any other effectively given domain. As a consequence, we obtain that every admissible numbering of the computable elements of an effectively given domain can be generated from a quasi-Godel numbering. The paper finishes with a conclusion. ## 2 The function classes \(\mathcal{S}_{A}^{(n)}\) In what follows let \(\langle\cdot,\cdot\rangle\colon\omega^{2}\to\omega\) be a one-to-one and onto computable pair encoding so that \(\langle x,y\rangle\geqslant x,y\). Extend the pairing function as usual to an \(n\)-tuple encoding (\(n>0\)) by setting \(\langle x\rangle\stackrel{{\rm Def}}{{=}}x\) and \(\langle x_{1},\ldots,x_{n+1}\rangle\stackrel{{\rm Def}}{{=}} \langle x_{1},\langle x_{2},\ldots,x_{n+1}\rangle\rangle\). Let \(\pi_{i}^{(n)}\) (\(i=1,\ldots,n\)) be the associated decodings such that \(\pi_{i}^{(n)}(\langle x_{1},\ldots,x_{n}\rangle)=x_{i}\). The sets of all n-ary partial, total, partial computable, and total computable functions, respectively, will be denoted by \(\mathcal{PF}^{(n)}\), \(\mathcal{F}^{(n)}\), \(\mathcal{P}^{(n)}\), and \(\mathcal{R}^{(n)}\). The arity \(n\) of these functions and the dimension of the Cartesian products of \(\omega\) that will be considered in the sequel is always assumed to at least 1. In some cases the case \(n=0\) could be included. But we will not go into this. Let \(\varphi^{(n)}\) be a Godel numbering of \(\mathcal{P}^{(n)}\) and \(W_{i}\) be the domain of \(\varphi_{i}^{(1)}\). Should the arity of the considered functions be clear from the context or its knowledge be not important, we write \(\varphi\) instead of \(\varphi^{(n)}\). We proceed accordingly with all other numberings of function classes considered here. The value of a numbering \(\zeta\) at argument \(i\) is denoted by \(\zeta_{i}\), but sometimes also by \(\zeta(i)\). Instead of \((x_{1},\ldots,x_{n})\) we also write \(\vec{x}\). Moreover, we write \(\varphi(\vec{x})\!\!\downarrow\) if the computation of \(\varphi(\vec{x})\) converges and \(\varphi(\vec{x})\!\!\downarrow_{m}\) if it converges in \(m\) steps. Otherwise we write \(\varphi(\vec{x})\!\!\uparrow\) or \(\varphi(\vec{x})\!\!\uparrow_{m}\). If \(C\) is a non-empty computably enumerable (c.e.) subset of \(\omega\), then we denote by \(C_{t}\) the subset of elements of \(C\) enumerated in \(t\) steps with respect to a fixed enumeration. A subset \(B\) of \(\omega\) is called _initial segment_ of \(\omega\) if for every \(m\in B\) we have that also \(m^{\prime}\in B\), for all \(m^{\prime}<m\). 
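The paper fixes no particular pairing; any computable bijection \(\langle\cdot,\cdot\rangle\) with \(\langle x,y\rangle\geqslant x,y\) will do. As an illustration, the sketch below uses the Cantor pairing (an assumed concrete choice, not prescribed by the paper) together with the right-nested tuple encoding just described.

```python
def pair(x, y):
    """Cantor pairing <x, y>: a computable bijection omega^2 -> omega with pair(x, y) >= x, y."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of pair, i.e. the decodings pi_1^(2) and pi_2^(2)."""
    w = int(((8 * z + 1) ** 0.5 - 1) / 2)
    while (w + 1) * (w + 2) // 2 <= z:      # guard against floating-point error
        w += 1
    while w * (w + 1) // 2 > z:
        w -= 1
    y = z - w * (w + 1) // 2
    return w - y, y

def encode(*xs):
    """Right-nested n-tuple code <x_1, ..., x_n> = <x_1, <x_2, ..., x_n>>."""
    return xs[0] if len(xs) == 1 else pair(xs[0], encode(*xs[1:]))

assert unpair(pair(7, 11)) == (7, 11)
assert encode(2, 3, 5) == pair(2, pair(3, 5))
```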
The cardinality of \(B\) is said to be the _length_ of \(B\). If \(C=B^{n}\), for some initial segment \(B\) of \(\omega\), then \(C\) is called _initial segment_ of \(\omega^{n}\). The length of \(B\) is then also said to be the _edge length_ of \(C\). As has already been pointed out, we will consider functions the domain of which is either \(\omega\). the empty set, or an initial segment of \(\omega^{n}\) with an edge length in a given subset \(A\) of \(\omega\). Let \(\textsc{AnF}^{(n)}_{A}\) be the set of functions in \(\mathcal{PF}^{(n)}\) the domain of which is either empty or an initial segment of \(\omega^{n}\) with edge length in \(A\). Then we set \[\widehat{\mathcal{S}}^{(n)}_{A}\stackrel{{\rm Def}}{ {=}}\mathcal{F}^{(n)}\cup\textsc{AnF}^{(n)}_{A}, \quad\widehat{\mathcal{S}}_{A}\stackrel{{\rm Def}}{{=}}\bigcup \{\widehat{\mathcal{S}}^{(n)}_{A}\mid n>0\,\},\] \[\mathcal{S}^{(n)}_{A}\stackrel{{\rm Def}}{{=}} \mathcal{R}^{(n)}\cup\textsc{AnF}^{(n)}_{A}, \quad\mathcal{S}_{A}\stackrel{{\rm Def}}{{=}}\bigcup \{\mathcal{S}^{(n)}_{A}\mid n>0\,\}.\] For infinite c.e. sets \(A\), \(\mathcal{S}^{(n)}_{A}\) is an enumerable subset of \(\mathcal{P}^{(n)}\). Let to this end, for \(a\in A\), \(a^{(n)}\) denote the \(n\)-tupel \((a,\ldots,a)\). We extend the usual less-or-equal relation on \(\omega\) coordinatewise to \(\omega^{n}\) and write \(\vec{a}<\vec{b}\) to mean \(\vec{a}\leq\vec{b}\) and \(\vec{a}\neq\vec{b}\). Then there is some \(f\in\mathcal{R}^{(1)}\) so that \[\varphi^{(n)}_{f(i)}(\vec{x})=\begin{cases}\varphi^{(n)}_{i}(\vec{x})&\text{ if for some $a\in A$, $\vec{x}<a^{(n)}$ and for all $\vec{y}<a^{(n)}$, $\varphi^{(n)}_{i}(\vec{y})\!\!\downarrow$},\\ \text{undefined}&\text{otherwise.}\end{cases}\] For \(q\in\mathcal{PF}^{(n)}\) let \(\mathrm{dom}(q)\) and \(\mathrm{range}(q)\), respectively, be the domain and the range of \(q\). Moreover, let \[\mathrm{graph}(q)\stackrel{{\rm Def}}{{=}}\{\, \langle\langle\langle\vec{x}\rangle,z\rangle\mid\vec{x}\in\mathrm{dom}(q)\, \wedge\,q(\vec{x})=z\,\},\] \[\mathrm{graph}_{\mathrm{e}}(q)\stackrel{{\rm Def}}{{= }}\{\,\langle\langle\langle\vec{y}\rangle,0\rangle,\langle\langle\langle\vec {x}\rangle,z+1\rangle\mid\vec{y}\in\omega^{n}\,\wedge\,\vec{x}\in\mathrm{dom}( q)\,\wedge\,q(\vec{x})=z\,\}\] respectively be the _graph_ and the _extended graph_ of \(q\). Then it is readily verified that the enumeration \(\lambda i\). \(\varphi^{(n)}_{f(i)}\) has the following two properties \[\varphi^{(n)}_{i}\in\mathcal{S}^{(n)}_{A}\Rightarrow\varphi^{(n)} _{f(i)}=\varphi^{(n)}_{i}, \tag{1}\] \[\mathrm{graph}(\varphi^{(n)}_{f(i)})\subseteq\mathrm{graph}( \varphi^{(n)}_{i}). \tag{2}\] Enumerations of a subset of \(\mathcal{P}^{(n)}\) that satisfy Condition (1) are called \(\varphi\)_-standard numberings_ and those that additionally meet Condition (2) are _special \(\varphi\)-standard numberings_[28]. Enumerations of this kind have first been considered by Lachlan [26] for the case of classes of c.e. sets. The class \(\{\,\mathrm{graph}_{\mathrm{e}}(\varphi^{(n)}_{f(i)})\mid i\in\omega\,\}\) is an example for the kind of classes he studied, that is, a standard class. Set \[{}^{A}\varphi^{(n)}_{i}\stackrel{{\rm Def}}{{=}}\varphi^{(n)}_{f( i)},\] then we have that \({}^{A}\varphi^{(n)}_{i}\) is a special \(\varphi\)-standard numbering of \(\mathcal{S}^{(n)}_{A}\), for every infinite c.e. \(A\subseteq\omega\), \(\varphi\)-standard numberings of \(\mathcal{S}^{(n)}_{A}\) can be characterised by conditions similar to those known for Godel numberings. 
Since their universal function is computable, but not contained in \(\mathcal{S}^{(n+1)}_{A}\), they will not satisfy the conditions for Godel numberings. **Theorem 2.1**.: _Let \(\psi\) be a \(\varphi\)-standard numbering of \(\mathcal{S}^{(n)}_{A}\). Then the next two conditions hold:_ 1. _The extended graph of_ \(\lambda i,\vec{x}\)_._ \(\psi_{i}(\vec{x}))\) _is enumerable by some function in_ \(\mathcal{R}^{(n)}\) * _For all_ \(k\in\mathcal{R}^{(2)}\) _there exists_ \(v\in\mathcal{R}^{(1)}\) _such that if, for some_ \(i\in\omega\)_,_ \(\lambda t\)_._ \(k(i,t)\) _enumerates the extended graph of a function_ \(r\in\mathcal{S}^{(n)}_{A}\)_, then_ \(r=\psi_{v(i)}\)_._ Proof.: Because \(\psi\) is a \(\varphi\)-standard numbering there is some \(d\in\mathcal{R}^{(1)}\) with \(\psi_{i}=\varphi_{d(i)}\). Therefore, \(\lambda i,\vec{x}\). \(\psi_{i}(\vec{x})\) is computable and \(\operatorname{graph}_{\mathrm{e}}(\lambda i,\vec{x}\). \(\psi_{i}(\vec{x}))\) c.e. By construction \(\operatorname{graph}_{\mathrm{e}}(\lambda i,\vec{x}\). \(\psi_{i}(\vec{x}))\) is infinite. Hence, it can be enumerated by total computable function. Thus, (QGN I) holds. For the derivation of (QGN II) let \(v\in\mathcal{R}^{(1)}\) so that \[\varphi_{v(i)}(\vec{x})=\pi_{2}^{(2)}(\mu\langle t,z\rangle[\pi_{1}^{(2)}(k(i,t))=\langle\vec{x}\rangle\wedge\pi_{2}^{(2)}(k(i,t))=z\wedge z>0])-1.\] If \(\lambda t\). \(k(i,t)\) enumerates the extended graph of a function \(r\in\mathcal{S}^{(n)}_{A}\) it follows that \(\varphi_{v(i)}=r\). Because \(r\in\mathcal{S}^{(n)}_{A}\) and \(\psi\) is a \(\varphi\)-standard numbering we thus obtain that \(r=\varphi_{d(v(i))}=\psi_{v(i)}\). In the sequel, a numbering of a set of functions \(X\supset\mathcal{R}^{(n)}\) that satisfies Conditions (QGN I) and (QGN II) is called _quasi-Godel numbering_ (replace \(\mathcal{S}^{(n)}_{A}\) by \(X\) in (QGN II)). As we have just seen, every \(\varphi\)-standard numbering of such a function class is a quasi-Godel numbering. In particular. **Corollary 2.2**.: _Every Godel numbering is a quasi-Godel numbering._ As will be shown in Section 7, every quasi-Godel numbering of \(\mathcal{S}^{(n)}_{A}\) is a \(\varphi\)-standard numbering, up to recursive isomorphism. The analogy of Conditions (QGN I) and (QGN II) to the corresponding conditions for Godel enumerations can best be seen by identifying functions with their graphs. (QGN I) then corresponds to the computability of the universal function of the enumeration and (QGN II) says that every effective enumeration of functions of the considered set can be computably reduced to the given enumeration. As mentioned earlier, the set of extended graphs of a \(\varphi\)-standard numberable function class is a special case of the classes of c.e. sets studied by Lachlan [26]. He calls these classes standard classes. If one interprets the numbering of such a set of functions as an enumeration of the class of the associated extended graphs, then the Conditions (QGN I) and (QGN II) correspond precisely to Lachlan's requirements on a standard enumeration of a standard class. As we see, the classes \(\mathcal{S}^{(n)}_{A}\) for infinite c.e. sets \(A\) have very pleasant effectivity properties. This already suggests that we have made the right choice with the initial segment functions in our plan to develop a computability theory for a set of functions which, in addition to the total computable functions, only contains certain partial functions. 
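To see concretely why the extended graph is the right object for Condition (QGN I), the following sketch enumerates \(\operatorname{graph}_{\mathrm{e}}(q)\) for a unary function \(q\). Pairs are kept as Python tuples instead of being coded by \(\langle\cdot,\cdot\rangle\), and definedness of \(q\) is assumed to be decidable here (it is given by a finite table or is total); the proofs above instead dovetail over computation steps. The padding pairs \(\langle\langle y\rangle,0\rangle\) make the set infinite, so a total enumeration exists even for the empty function. All names are illustrative.

```python
from itertools import count, islice

def extended_graph(q):
    """Enumerate graph_e(q) for a unary q given as a dict (finite initial segment
    function) or as a total Python function.  Yields (y, 0) for every y (padding)
    and (x, q(x) + 1) for every x at which q is defined."""
    lookup = q.get if isinstance(q, dict) else (lambda y: q(y))
    for y in count():
        yield (y, 0)                      # padding element <<y>, 0>
        v = lookup(y)
        if v is not None:
            yield (y, v + 1)              # proper element <<y>, q(y) + 1>

# the initial segment function with graph {(0, 5), (1, 7)}:
print(list(islice(extended_graph({0: 5, 1: 7}), 8)))
```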
Since every total function can be approximated by a sequence of such initial segment functions, these functions also occur quite naturally in computability theory. For example, a function defined by primitive recursion is evaluated from the beginning, i.e. starting at \(0\). One proceeds in the same way with the normalised \(\mu\) operator \[\mu x.\,[\![q(x)=0]\!]\stackrel{{\mathrm{Def}}}{{=}}\begin{cases}\min\{\,x\mid q(x)=0\wedge(\forall y<x)\,q(y)\!\!\downarrow\,\}&\text{if $\{\,x\mid q(x)=0\wedge(\forall y<x)\,q(y)\!\!\downarrow\,\}$ is not empty},\\ \text{undefined}&\text{otherwise},\end{cases}\] where the function \(q\) must be defined on at least an initial segment of \(\omega\). Moreover, for every infinite \(A\subseteq\omega\), the collection of sets \(\{\,\{\,g\in\mathcal{R}^{(1)}\mid\operatorname{graph}(p)\subset\operatorname{graph}(g)\,\}\mid p\in\textsc{AnF}^{(1)}_{A}\,\}\) is a basis of a metric topology on \(\mathcal{R}^{(1)}\), the Baire topology. Finally, this approximation property is used in the definition of computable functionals on sets of total functions (cf. [4, 46]). We want to clarify this using the example of a functional \(G\colon\mathcal{F}^{(1)}\times\omega\to\omega\). Let \(\llbracket\cdot\rrbracket\) be a one-to-one computable coding of all finite sequences of natural numbers and for \(p\colon\{0,\ldots,a\}\to\omega\) \[\llbracket p\rrbracket\stackrel{{\mathrm{Def}}}{{=}}\llbracket p(0),\ldots,p(a)\rrbracket.\] Then \(G\) is called _computable_ if there is some \(g\in\mathcal{R}^{(2)}\) such that \(G(d,x)=y\), exactly if there exists \(p\in\textsc{AnF}^{(1)}_{\omega}\) with \(\operatorname{graph}(p)\subseteq\operatorname{graph}(d)\) so that \(g(\llbracket p\rrbracket,x)=y\). The initial segment function \(p\), and in particular the length of its domain, is a measure of the amount of information about \(d\) that the algorithm \(g\) needs to compute \(G(d,x)\). Of course one would like to have such algorithms \(g\) that manage with as little information as possible. Gordon and Shamir [11], e.g., investigated whether such algorithms always exist and how they can be constructed. There are various approaches to define computability for uncountable sets other than \(\mathcal{F}^{(1)}\) or \(\mathcal{PF}^{(1)}\), such as computable analysis [43, 44, 33, 48], domain theory [37, 6, 47, 42], the theory of filter spaces [19], the theory of effectively given spaces [20], and the theory of finitely approximable sets [18]. Most of these approaches have in common that the elements of the sets under consideration can be approximated by sequences of other finite objects; the real numbers, for example, by normed Cauchy sequences of rational numbers or descending sequences of closed intervals with rational endpoints. Encoding the finite objects used for the approximation allows one to describe the approximating sequences by sequences of natural numbers. In [14, 15, 16, 17, 46, 25] it is therefore proposed to use functions in \(\mathcal{F}^{(1)}\) as names for the elements approximated in this way, which led to the now well developed theory of representations. These assignments can be meaningfully extended to \(\widehat{\mathcal{S}}_{\omega}^{(1)}\), or to put it another way, it makes sense to also use initial segment functions as names. If one considers that all information contained in the approximating finite objects is encoded in the elements to be approximated, then these objects conversely contain only a finite part of the information contained in the approximated elements (cf. [39, 18]).
The approximating finite objects, and thus also every finite sequence of these objects, therefore correspond to a certain amount of uncertainty with regard to the element to be approximated: this is not yet uniquely determined by the finite approximation. In many cases these uncertainty sets are open sets in the topological space of the elements to be approximated (cf., however, [19]). This observation suggests taking initial segment functions as names for the uncertainty sets generated by finite sequences of approximating objects, which has the advantage that one has names for the elements of the considered set as well as for the uncertainty sets occurring in the approximation in one namespace. In addition, the described extension of representations to \(\widehat{\mathcal{S}}_{\omega}^{(1)}\) is continuous: if the total function used in the representation is approximated by initial segment functions, then the element named by the total function is approximated by the uncertainty sets corresponding to the initial segment functions. We want to illustrate this with an example below. In Section 8 we will make the statement precise and prove it for the case of effectively given algebraic domains.

**Example 2.3**. _We assume the real numbers \(x\in[0,1]\) to be given in signed digit expansion_ \[x=\sum_{i=0}^{\infty}a_{i}\cdot 2^{-(i+1)}\] _with \(a_{i}\in\{-1,0,1\}\). The information about \(x\) which we can read off from this expansion is the sequence \(a_{0},a_{1},\ldots\) of signed digits. To each finite initial segment of this sequence corresponds a dyadic rational_ \[u_{n}\stackrel{{\mathrm{Def}}}{{=}}\sum_{i=0}^{n}a_{i}\cdot 2^{-(i+1)}\] _approximating \(x\). The uncertainty set coming with this number is the interval \([u_{n}-2^{-(n+1)},u_{n}+2^{-(n+1)}]\) including all real numbers containing the information about \(u_{n}\), that is the sequence \(a_{0},\ldots,a_{n}\), as part of their own information._ _Now, for \(a\in\omega\), let \(\delta(a)\stackrel{{\mathrm{Def}}}{{=}}(a\bmod 3)-1\) and for \(r\in\widehat{\mathcal{S}}_{\omega}^{(1)}\) set_ \[G(r)\stackrel{{\mathrm{Def}}}{{=}}\bigcap\{\,[\delta(r(n))-2^{-(n+1)},\delta(r(n))+2^{-(n+1)}]\mid n\in\mathrm{dom}(r)\,\}.\] _If, in addition, we identify a real number \(z\) with the one-point interval \([z,z]\), we have thus obtained a representation of both the reals in \([-1,1]\) and the corresponding uncertainty sets which is such that for \(g\in\mathcal{F}^{(1)}\) and any sequence \((p_{n})_{n\in\omega}\) of initial segment functions such that \(\mathrm{graph}(p_{n})\subseteq\mathrm{graph}(p_{n+1})\) and \(\mathrm{graph}(g)=\bigcup\{\,\mathrm{graph}(p_{n})\mid n\in\omega\,\}\),_ \[G(g)=\bigcap\{\,G(p_{n})\mid n\in\omega\,\}.\]

## 3 A machine model

In the last section we constructed a special \(\varphi\)-standard numbering of \(\mathcal{S}_{A}^{(n)}\) for infinite c.e. sets \(A\subseteq\omega\). In this section we present a machine model for the computation of these functions. The numbering derived from this characterisation will turn out to be a quasi-Godel numbering. In what follows let \(\Sigma\) be a non-empty finite set and \(\mathrm{TP}_{\Sigma}\) be the set of all Turing programs which use \(\Sigma\) as tape alphabet. Moreover, let \(A\subseteq\omega\) be a non-empty c.e. set and \(\kappa\colon\omega\to\omega^{n}\) be an effective enumeration of \(\omega^{n}\) without repetitions. Let \(P\) be a Turing program and \(M_{A}^{(n)}\) be a Turing program realising the subsequent algorithm: _Input_: \(\vec{x}\) 1. Set \(j:=1\). 2. Compute \(A_{j}\). 3.
For \(i=0,\ldots,j-1\): run program \(P\) on input \(\kappa(i)\) for \(j\) steps and store whether there is a result. 4. If there is some \(a\in A_{j}\) so that * \(\vec{x}<a^{(n)}\) * For all \(\vec{y}<a^{(n)}\colon\vec{y}\in\{\kappa(0),\ldots,\kappa(j-1)\}\) and program \(P\) stops on input \(\vec{y}\) within \(j\) steps, then set \(z:=\) output of \(P\) on input \(\vec{x}\); otherwise increase \(j\) by \(1\) and go to (2). _Output_: \(z\). We write \(M_{A}^{(n)}(P)\) to denote the Turing program consisting of program \(M_{A}^{(n)}\) and the subprogram \(P\). If \(\mathrm{Sem}^{(n)}\colon\mathrm{TP}_{\Sigma}\to\mathcal{P}^{(n)}\) is the semantic map that, with regard to a fixed input/output convention, assigns to every Turing program the function it computes, then \(\mathrm{Sem}^{(n)}(M_{A}^{(n)}(P))\) is precisely the function that agrees with \(\mathrm{Sem}^{(n)}(P)\) on the maximal initial segment of \(\omega^{n}\) with an edge length in \(A\) that is contained in \(\mathrm{dom}(\mathrm{Sem}^{(n)}(P))\); and is undefined, otherwise. **Theorem 3.1**.: _For every infinite c.e. set \(A\subseteq\omega\),_ \[\mathcal{S}_{A}^{(n)}=\{\,\mathrm{Sem}^{(n)}(M_{A}^{(n)}(P))\mid P\in\mathrm{ TP}_{\Sigma}\,\}.\] Let \(\mathrm{MP}_{i}\) be the \(i\)-th program with respect to an effective one-to-one enumeration of the set \(M_{A}^{(n)}[\mathrm{TP}_{\Sigma}]\) of all modified Turing programs \(M_{A}^{(n)}(P)\) with \(P\in\mathrm{TP}_{\Sigma}\) and define \[{}^{A}\psi_{i}^{(n)}\stackrel{{\mathrm{Def}}}{{=}}\mathrm{Sem}^{( n)}(\mathrm{MP}_{i}).\] **Theorem 3.2**.: _For every infinite c.e. set \(A\subseteq\omega\), \({}^{A}\psi^{(n)}\) is a quasi-Godel numbering of \(\mathcal{S}_{A}^{(n)}\)._ If one compares the above definition of the set \(M_{A}^{(n)}[\mathrm{TP}_{\Sigma}]\) with the definition of the numbering \({}^{A}\varphi(n)\) in the last section, one sees that the same rule is behind both, algorithms for the computation of functions in \(\mathcal{P}^{(n)}\) are modified to ones for computing functions in \(\mathcal{S}_{A}^{(n)}\). The only difference is that in the first case this is done by direct manipulation of the algorithms formulated in a given algorithmic language, here the language of Turing programs, and in the second case by effective operations on the indices (names) for these algorithms. Therefore, when defining \({}^{A}\varphi^{(n)}\), a coding of all algorithms had to be used (Godel numbering \(\varphi^{(n)}\)), the properties of which were exploited in the proof of Theorem 2.1. The numbering \({}^{A}\psi^{(n)}\), on the other hand, can be defined directly by listing the modified programs. However, in the proof of the above theorem one can no longer fall back on any known properties of a numbering, but must prove the existence of the computable functions stated in Conditions (QGN I) and (QGN II) by specifying suitable Turing programs. Since we essentially want to derive all results presented in the next sections from the properties of quasi-Godel numberings, this procedure shows that a computability theory for \(\mathcal{S}_{A}\) can be established and developed without prior knowledge of the theory of all partial computable functions. The assumption about \(A\) in this section is to be understood as an abbreviated way of saying that \(A\) is the image of a total, one-to-one function that can be computed by a Turing program. It does not mean that you already have to know what a c.e. set is. Proof of Theorem 3.2.: (QGN I). 
Let \(P\) be a Turing program realising the following algorithm: _Input_: t 1. If \(t\) is odd, set \(z:=\langle(t-1)/2,0\rangle\) and go to (4); otherwise, find \(i\), \(j\) and \(\vec{x}\) so that \(\langle i,j,\vec{x}\rangle=t/2\). 2. Simulate \(j\) steps of the computation of the \(i\)-th modified Turing program \(\text{MP}_{i}\) on input \(\vec{x}\). 3. If the computation of \(\text{MP}_{i}\) on input \(\vec{x}\) stops within \(j\) steps, set \(z:=\langle\langle i,\vec{x}\rangle,1+\text{output of }\text{MP}_{i}\) on input \(\vec{x}\rangle\); otherwise, set \(z:=\langle t/2,0\rangle\). 4. Stop. _Output_: \(z\). Let \(h\stackrel{{\text{Def}}}{{=}}\text{Sem}^{(1)}(P)\). Then \(h\in\mathcal{R}^{(1)}\) with \(\text{range}(h)=\text{graph}_{\text{e}}(\lambda i,\vec{x}\). \({}^{A}\psi_{i}^{(n)}(\vec{x}))\). (QGN II). Let \(\widehat{P}\) be a Turing program computing the function \[\pi_{2}^{(2)}(\mu\langle t,z\rangle\left[\pi_{1}^{(2)}(k(i,t))=\langle\vec{x} \rangle\wedge\pi_{2}^{(2)}(k(i,t))=z\wedge z>0\right])-1.\] Moreover, for \(i\in\omega\), let \(P_{i}\) be a program that on input \(\vec{x}\) outputs \(\langle i,\vec{x}\rangle\) in such a way that it can be read as input by \(\widehat{P}\). For two programs \(\widetilde{P}\) and \(\tilde{P}\) let \(\widetilde{P};\tilde{P}\) be the program that first executes \(\tilde{P}\) and then \(\tilde{P}\). Finally, let \(Q\) be a program that on input of \(i\) computes a \(j\) with \(\text{MP}_{j}=M_{A}^{(n)}(\widehat{P};P_{i})\). Then \(v\stackrel{{\text{Def}}}{{=}}\text{Sem}^{(n)}(Q)\) has the property stated in (QGN II). ## 4 Computability theory for \(\mathcal{S}_{A}\) In this and the following sections let \(A\subseteq\omega\) be an infinite c.e. set. In these sections we will show that a sufficiently rich computability theory can be developed for the set of functions \(S_{A}\). For this purpose we give a selection of important theorems of classical computability theory and show that they also apply to the set of functions considered here. Since this contains in particular all total computable functions, computable sets and relations can be introduced as usual in the theory to be developed here. We do not want to go into this any further. Let \(\theta^{(n)}\) be a numbering of \(\mathcal{S}_{A}^{(n)}\). If \(\theta^{(n)}\) satisfies Condition (QGN I), we obtain an analogue to Kleene's Normal Form Theorem. **Theorem 4.1**.: _Let \(\theta^{(n)}\) satisfy Condition (QGN I). Then there is a function \(q\in\mathcal{R}^{(1)}\) and an \((n+2)\)-ary computable predicate \(T\) so that_ \[\theta_{i}^{(n)}(\vec{x})=q(\mu y.\ T(i,\vec{x},y)).\] Proof.: Let \(h\in\mathcal{R}^{(1)}\) enumerate the extended graph of the universal function of the numbering \(\theta^{(n)}\). Then \[\theta_{i}^{(n)}(\vec{x})=\pi_{2}^{(2)}(\mu\langle t,z\rangle\left[\pi_{1}^{( 2)}(h(t))=\langle i,\vec{x}\rangle\wedge\pi_{2}^{(2)}(h(t))=z\wedge z>0\right] )-1.\] Therefore, defining \(q(a)\stackrel{{\text{Def}}}{{=}}\pi_{2}^{(2)}(a)\stackrel{{ \cdot}}{{\cdot}}1\) and \[T(i,\vec{x},y)\Leftrightarrow\pi_{1}^{(2)}(h(\pi_{1}^{(2)}(y)))=\langle i, \vec{x}\rangle\wedge\pi_{2}^{(2)}(h(\pi_{1}^{(2)}(y)))=\pi_{2}^{(2)}(y) \wedge\pi_{2}^{(2)}(y)>0,\] proves the claim. Since the numberings considered here do not have a universal function in the considered function class, we cannot construct index functions as we do in the case of Godel numberings, by first defining a function that contains the index parameters as arguments and then applying the _smn_-theorem. 
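The \(\mu\)-search in the proof of Theorem 4.1 is simply a search through an enumerated list: given any enumeration of \(\operatorname{graph}_{\mathrm{e}}(r)\), the value \(r(x)\) is recovered by scanning for a pair with positive second component. Below is a minimal sketch for unary functions, with pairs kept as Python tuples rather than coded by \(\langle\cdot,\cdot\rangle\); the helper and function names are illustrative only. The search loops forever when \(x\notin\operatorname{dom}(r)\), which is exactly where partiality enters.

```python
from itertools import count

def graph_e_of_table(table):
    """Enumerate graph_e(r) for a finite initial segment function r given as a dict."""
    for y in count():
        yield (y, 0)                       # padding element <<y>, 0>
        if y in table:
            yield (y, table[y] + 1)        # proper element <<y>, r(y) + 1>

def evaluate(enum, x):
    """Recover r(x) from an enumeration of graph_e(r): return the first z with
    (x, z + 1) occurring in the list.  Does not terminate if x is not in dom(r)."""
    for arg, val in enum:
        if arg == x and val > 0:
            return val - 1

print(evaluate(graph_e_of_table({0: 3, 1: 8}), 1))   # -> 8
```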
We have to obtain such index functions by applying condition (QGN II). Therefore, in the proofs of this and the following sections, we often need to construct functions that enumerate the extended graph of another function. These enumeration functions are essentially defined in two ways. In order not to have to repeat these constructions in the following, we want to present the first of these definition methods in the form of a scheme. Let \(\kappa\) be an effective one-to-one enumeration of \(\omega^{n}\), in which for \(a<b\) (\(a,b\in\omega\)) all elements of the initial segment of \(\omega^{n}\) of edge length \(a\) occur before the remaining elements of the initial segment of edge length \(b\). In the following we say that \(\kappa\) enumerates \(\omega^{n}\)_initial segment by initial segment_. Also let \(\mathrm{nf},\mathrm{arg}\in\mathcal{R}^{(1)}\) and for all \(i,j>0\), \(*\in\mathcal{R}^{(1)}\) with \[\mathrm{nf}(\langle x_{1},\ldots,x_{n}\rangle)\stackrel{{ \mathrm{Def}}}{{=}}\langle\kappa(\kappa^{-1}(x_{1},\ldots,x_{n})+1)\rangle,\] \[\mathrm{arg}(z)\stackrel{{\mathrm{Def}}}{{=}}\begin{cases} \langle\overline{0}\rangle&\text{if }\pi_{2}^{(2)}(z)=0,\\ \mathrm{nf}(\pi_{1}^{(2)}(z))&\text{otherwise},\end{cases}\] \[\langle x_{1},\ldots,x_{i}\rangle*\langle y_{1},\ldots,y_{j} \rangle\stackrel{{\mathrm{Def}}}{{=}}\langle x_{1},\ldots,x_{i}, y_{1},\ldots,y_{j}\rangle.\] (With this notation we suppress the dependence of the functions \(\kappa\), \(\mathrm{nf}\), \(\mathrm{arg}\) and \(*\) on \(n\) and \(i\) and \(j\), respectively.) As follows from the definition, \(\mathrm{nf}(\langle\vec{x}\rangle)\) is the encoded successor of \(\vec{x}\) in the enumeration \(\kappa\). **Lemma 4.2**.: _Let \(f\in\mathcal{R}^{(3)}\) and \(Q\subseteq\omega^{3}\) be a computable relation so that \(f(i,t,z)>0\) if \(Q(i,t,z)\) holds. Moreover, let \(k\in\mathcal{PF}^{(2)}\) be defined by_ \[k(i,2t)=\langle\langle\kappa(t)\rangle,0\rangle\] \[k(i,2t+1)=g(i,t),\quad\text{where}\] \[g(i,0)=\langle\langle\overline{0}\rangle,0\rangle,\] \[g(i,t+1)=\begin{cases}\langle\mathrm{arg}(g(i,t)),f(i,t+1,g(i,t ))\rangle&\text{if }Q((i,t+1,g(i,t)),\\ g(i,t)&\text{otherwise}.\end{cases}\] _Then \(k\in\mathcal{R}^{(2)}\) and \(\lambda t\). \(k(i,t)\) enumerates the extended graph of an \(n\)-ary function._ As already mentioned, the concept of enumeration is of central importance for the computability theory to be developed here. The numberings of \(\mathcal{S}^{(n)}_{A}\) considered here do not have universal functions in \(\mathcal{S}^{(n+1)}_{A}\), but the extended graphs of the functions in \(\mathcal{S}^{(n)}_{A}\) can be enumerated uniformly. We first show a more general result. **Lemma 4.3**.: _Let \(\theta^{(m+n)}\) satisfy Condition (QGN I). Then there is some \(k\in\mathcal{R}^{(2)}\) such that_ \[\mathrm{range}(\lambda t.\ k(\langle i,\vec{y}\rangle,t))=\mathrm{ graph}_{\mathrm{e}}(\lambda\vec{x}.\ \theta_{i}^{(m+n)}(\vec{y},\vec{x})).\] Proof.: By Condition (QGN I), the extended graph of \(\lambda i,\vec{y},\vec{x}.\ \theta_{i}^{(m+n)}\) has an enumeration \(h\in\mathcal{R}^{(1)}\). 
Define \[\widehat{Q}(\langle i,\vec{y}\rangle,a,z)\Leftrightarrow\pi_{2}^ {(2)}(h(a))>0\wedge\pi_{1}^{(2)}(h(a))=\langle i\vec{y}\rangle*\mathrm{arg}(z),\] \[Q(\langle i,\vec{y}\rangle,t,z)\Leftrightarrow(3a\leq t)\, \widehat{Q}(\langle i,\vec{y}\rangle,a,z)\quad\text{and}\] \[f(\langle i,\vec{y}\rangle,t,z)\stackrel{{\mathrm{ Def}}}{{=}}\pi_{2}^{(2)}(h(\mu a\leq t.\ \widehat{Q}(\langle i,\vec{y}\rangle,a,z))).\] Now, by applying Lemma 4.2 we obtain a function \(k\in\mathcal{R}^{(2)}\). As is readily verified, it has the properties stated. The result we are looking for now follows as special case \(m=0\). **Theorem 4.4**.: _Let \(\theta^{(n)}\) satisfy Condition (QGN I). Then there is some \(k\in\mathcal{R}^{(2)}\) so that_ \[\operatorname{range}(\lambda t.\ k(i,t))=\operatorname{graph}_{e}(\theta^{(n)}_ {i}).\] As a further consequence we obtain the _smn_-theorem. **Theorem 4.5** (_smn_-Theorem).: _Let \(m>0\), \(\theta^{(n)}\) satisfy Condition (QGN II) and \(\theta^{(m+n)}\) meet Condition (QGN I). Then there is a function \(s\in\mathcal{R}^{(m+1)}\) so that_ \[\theta^{(n)}_{s(i,\vec{y})}(\vec{x})=\theta^{(m+n)}_{i}(\vec{y},\vec{x}).\] Proof.: By Lemma 4.3 there is some \(k\in\mathcal{R}^{(2)}\) such that \[\operatorname{range}(\lambda t.\ k(\langle i,\vec{y}\rangle,t)=\operatorname{ graph}_{e}(\lambda\vec{x}.\ \theta^{(m+n)}_{i}(\vec{y},\vec{x})).\] Now, let \(v\in\mathcal{R}^{(1)}\) be as in Condition (QGN II). Since \(\lambda\vec{x}.\ \theta^{(m+n)}_{i}(\vec{y},\vec{x})\in\mathcal{S}^{(n)}_{A}\), it follows that \(\theta^{(m+n)}_{i}(\vec{y},\vec{x})=\theta^{(n)}_{v(\langle i,\vec{y}\rangle)}\). Thus, \(s\stackrel{{\mathrm{Def}}}{{=}}\lambda i,\vec{y}.\ v(\langle i, \vec{y}\rangle)\) is as wanted. Just as the _smn_-theorem is the effective version of the reducibility requirement for Godel numberings, there is also an effective version of (QGN II) for quasi-Godel numberings: * There is a function \(d\in\mathcal{R}^{(m+1)}\), for all \(m>0\), such that for all \(i\in\omega\), \(\vec{j}\in\omega^{m}\) and \(r\in\mathcal{S}^{(n)}_{A}\), if \(\lambda t.\ \theta^{(m+1)}_{i}(\vec{j},t)\) enumerates the extended graph of function \(r\), then \(r=\theta^{(n)}_{d(i,\vec{j})}\). **Theorem 4.6**.: _Let \(\theta^{(n)}\) satisfy Condition (QGN I), for every \(n>0\). Then the following equivalences hold:_ \[(1)\Leftrightarrow(3)\quad\text{and}\quad(2)\Leftrightarrow(4),\] _where_ 1. \(\theta^{(n)}\) _meets Condition (QGN II)._ 2. \(\theta^{(n)}\) _meets Condition (QGN II), for all_ \(n>0\)_._ 3. \(\theta^{(n)}\) _meets Condition (QGN E)._ 4. _Requirements (_4_) and (_4_) hold, for all_ \(n>0\) _:_ 1. _There is some function_ \(s\in\mathcal{R}^{(m+1)}\)_, for all_ \(m>0\) _, so that for all_ \(\vec{y}\in\omega^{m}\)_,_ \(\vec{x}\in\omega^{n}\) _and_ \(i\in\omega\)_,_ \(\theta^{(n)}_{s(i,\vec{y})}(\vec{x})=\theta^{(m+n)}_{i}(\vec{y},\vec{x})\)_._ 2. _There exists a function_ \(g\in\mathcal{R}^{(1)}\) _such that for all_ \(i\in\omega\) _and_ \(r\in\mathcal{S}^{(n)}_{A}\)_, if_ \(\theta^{(1)}_{i}\) _enumerates the extended graph of function_ \(r\)_, then_ \(r=\theta^{(n)}_{g(i)}\)_._ Proof.: \((3)\Rightarrow(1)\) holds trivially. We show next that \((1)\Rightarrow(3)\). By Lemma 4.3 there is some \(k\in\mathcal{R}^{(2)}\) such that \(\lambda t.\ k(\langle i\vec{j}\rangle,t)\) enumerates the extended graph of \(\lambda a.\ \theta^{(m+1)}_{i}(\vec{j},a)\). 
Let \(i\in\omega\), \(\vec{j}\in\omega^{m}\) and \(r\in\mathcal{S}^{(n)}_{A}\) such that \(\lambda a.\ \theta^{(m+1)}_{i}(\vec{j},a)\) enumerates the extended graph of function \(r\). According to its definition every element of \(\operatorname{graph}_{e}(\lambda a.\ \theta^{(m+1)}_{i}(\vec{j},a))\) is of the form \(\langle a,0\rangle\), \(\langle a,\langle\langle\vec{y}\rangle,0\rangle+1\rangle\) or \(\langle a,\langle\langle\vec{y}\rangle,r(\vec{y})+1\rangle+1\rangle\). Therefore, if we set \[\hat{k}(\langle i,\vec{j}\rangle,t)\stackrel{{\mathrm{Def}}}{{=}} \begin{cases}\langle\langle 0\rangle,0\rangle&\text{if }\pi^{(2)}_{2}(k(\langle i,\vec{j} \rangle,t))=0,\\ \pi^{(2)}_{2}(k(\langle i,\vec{j}\rangle,t))\mathbin{\hat{\ }}1&\text{otherwise},\end{cases}\] then \(\hat{k}\in\mathcal{R}^{(2)}\). Moreover \(\lambda t.\ \hat{k}(\langle i,\vec{j}\rangle,t)\) enumerates the extended graph of function \(r\). Since \(\theta^{(n)}\) satisfies Condition (QGN II), there is a function \(v\in\mathcal{R}^{(1)}\) with \(r=\theta^{(n)}_{v(\langle i,\vec{j}\rangle)}\). Thus, \(d\stackrel{{\mathrm{Def}}}{{=}}\lambda i,\vec{j}.\ v(\langle i, \vec{j}\rangle)\) has the property stated in Condition (QGN E). In the same way we obtain that (2) implies (4b): choose \(m=0\). Statement (4a) is just the _smn_-theorem that has been derived from (2) in Theorem 4.5. For the remaining implication it suffices to show (4) \(\Rightarrow\) (3). Let to this end \(i\in\omega\), \(\vec{j}\in\omega^{m}\) and \(r\in\mathcal{S}_{A}^{(n)}\) so that \(\lambda t\). \(\theta_{i}^{(m+1)}(\vec{j},t)\) enumerates the extended graph of \(r\). With Condition (4a) we have that \(\theta_{i}^{(m+1)}(\vec{j},t)=\theta_{s(i,\vec{j})}^{(1)}(t)\), from which it follows with (4b) that \(r=\theta_{g(s(i,\vec{j}))}^{(n)}\). Therefore, \(d\stackrel{{\mathrm{Def}}}{{=}}g\circ s\) has the properties required in (3). For what follows, let \(\theta^{(n)}\) be a quasi-Godel numbering, for all \(n>0\). With the _smn_-theorem we have shown for a known result of computability theory that it also holds in the computability theory for \(\mathcal{S}_{A}\). Next we want to examine whether substitution is an effective operation. Here we first have to state that \(\widehat{\mathcal{S}_{A}}\) is not closed under substitution. Because whenever for \(p,r\in\widehat{\mathcal{S}_{A}^{(1)}}\) the set \(\{\,x\mid p(x)\in\mathrm{dom}(r)\,\}\) refrains from being a segment of \(\omega\) then \(r\circ p\notin\widehat{\mathcal{S}_{A}^{(1)}}\) We therefore introduce a modified substitution. Let \(\widehat{M}_{A}^{(n)}\colon\mathcal{PF}^{n}\to\mathcal{PF}^{n}\) be defined by \[\widehat{M}_{A}^{(n)}(q)(\vec{x})\stackrel{{\mathrm{Def}}}{{=}} \begin{cases}q(\vec{x})&\text{if there exists $a\in A$ so that $\vec{x}<a^{(n)}$ and}\\ &\text{for all $\vec{y}<a^{(n)}$, $\vec{y}\in\mathrm{dom}(q)$,}\\ \text{undefined}&\text{otherwise.}\end{cases}\] Then \(\widehat{M}_{A}^{(n)}\) is idempotent, \(\widehat{M}_{A}^{(n)}[\mathcal{PF}^{(n)}]=\widehat{\mathcal{S}_{A}^{(n)}}\) and \(\widehat{M}_{A}^{(n)}(\varphi_{i}^{(n)})={}^{A}\varphi_{i}^{(n)}\). 
If \[\mathrm{Subst}^{(m,n)}\colon\mathcal{PF}^{(m)}\times(\mathcal{PF}^{(n)})^{m} \to\mathcal{PF}^{(n)}\] is the usual substitution operation, the modified operation is defined by \[\mathrm{MSubst}_{A}^{(m,n)}\stackrel{{\mathrm{Def}}}{{=}} \widehat{M}_{A}^{(n)}\circ\mathrm{Subst}^{(m,n)}.\] For \(r\in\widehat{\mathcal{S}_{A}^{(m)}}\) and \(p_{1},\ldots,p_{m}\in\widehat{\mathcal{S}_{A}^{(n)}}\), \(\mathrm{MSubst}_{A}^{(m,n)}(r;p_{1},\ldots,p_{m})\) agrees with \[\mathrm{Subst}^{(m,n)}(r;p_{1},\ldots,p_{m})\] on the maximal initial segment of \(\omega^{n}\) with an edge length in \(A\) that is contained in \[\mathrm{dom}(\mathrm{Subst}_{A}^{(m,n)}(r;p_{1},\ldots,p_{m})),\] and is undefined, otherwise. Therefore, for all functions \(r\in\widehat{\mathcal{S}_{A}^{(m)}}\) and \(p_{1},\ldots,p_{m}\in\widehat{\mathcal{S}_{A}^{(n)}}\) with \(\mathop{\times\!\!\!\times}_{\nu=1}^{m}p_{\nu}(\bigcap_{\sigma=1}^{m}\mathrm{ dom}(p_{\sigma}))\subseteq\mathrm{dom}(r)\), \[\mathrm{MSubst}_{A}^{(m,n)}(r;p_{1},\ldots,p_{m})=\mathrm{Subst}^{(m,n)}(r;p_ {1},\ldots,p_{m}).\] In this case, \(\mathrm{dom}(\mathrm{Subst}^{(m,n)}(r;p_{1},\ldots,p_{m}))=\bigcap_{\sigma=1 }^{m}\mathrm{dom}(p_{\sigma}))\). Since the domains of the functions \(p_{\sigma}\) are comparable with respect to set inclusion, the intersection is again an initial segment of \(\omega^{n}\) with an edge length in \(A\). In particular, the modified substitution agrees with the usual substitution for all total functions. This justifies the introduction of \(\mathrm{MSubst}_{A}^{(m,n)}\), as we are essentially concerned with the total functions. Incidentally, it also happens with normal substitution that information gets lost during the substitution process. If for \(p,q\in\mathcal{PF}^{(1)}\), \(q(x)\notin\mathrm{dom}(p)\), then \((p\circ q)(x)\) is undefined. That is, the information coming with \(q(x)\) is lost. In the case of the modified substitution, in addition all information is lost that comes from points which are not contained in the maximal initial segment of \(\omega\) with length in \(A\) that is included in \(\mathrm{dom}(p\circ q)\). This is in agreement with the algorithms defined in the last section using the Turing program \(M_{A}^{(n)}\). **Theorem 4.7**.: _There is a function \(\mathrm{sub}\in\mathcal{R}^{(m+1)}\) so that_ \[\theta_{\mathrm{sub}(j,j_{1},\ldots,j_{m})}^{(n)}=\mathrm{MSubst}_{A}^{(m,n)}( \theta_{j}^{(m)};\theta_{j_{1}}^{(n)},\ldots,\theta_{j_{m}}^{(n)}).\] Proof.: Let \(h,\hat{h}\in\mathcal{R}^{(1)}\) be enumerations as in Condition (QGN I) for \(\theta^{(n)}\) and \(\theta^{(m)}\), respectively. 
Moreover, let \[\widehat{Q}(\langle j,j_{1},\ldots,j_{m}\rangle,\langle b,b_{1}, \ldots,b_{m}\rangle,\langle\vec{x}\rangle)\Leftrightarrow\] \[\bigwedge_{\nu=1}^{m}[\pi_{1}^{(2)}(h(b_{\nu}))=\langle j_{\nu}, \vec{x}\rangle\wedge\pi_{2}^{(2)}(h(b_{\nu}))>0]\wedge\] \[\pi_{2}^{(2)}(\hat{h}(b))=\langle j,\pi_{2}^{(2)}(h(b_{1})) \dot{-}1,\ldots,\pi_{2}^{(2)}(h(b_{m}))\dot{-}1\rangle\wedge\pi_{2}^{(2)}(\hat {h}(b))>0,\] \[Q(i,t,z)\Leftrightarrow(\exists a\in A_{t})\,\bigwedge_{\nu=1}^{n}\pi_{\nu}^{ (n)}(\arg(z))<a\wedge(\forall\vec{x}<a^{(n)})\,(\exists c\leq t)\,\widehat{Q}( i,c,\langle\vec{x}\rangle),\] \[f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}\pi_{1}^{(2)}(\hat{h}(\pi_ {1}^{(m+1)}(\mu c\leq t.\,\widehat{Q}(i,c,\pi_{1}^{(2)}(z))))).\] By now applying Lemma 4.2 we obtain a function \(k\in R^{(2)}\) such that \(\lambda t.\,\,k(\langle j,j_{1},\ldots,j_{m}\rangle,t)\) enumerates the extended graph of \(\mathrm{MSubst}_{A}^{(m,n)}(\theta_{j}^{(m)};\theta_{j_{1}}^{(n)},\ldots, \theta_{j_{m}}^{(n)})\). Since \(\theta^{(n)}\) meets Condition (QGN II), there is some \(v\in\mathcal{R}^{(1)}\) with \[\mathrm{MSubst}_{A}^{(m,n)}(\theta_{j}^{(m)};\theta_{j_{1}}^{(n)},\ldots, \theta_{j_{m}}^{(n)})=\theta_{v(\langle j,j_{1},\ldots,j_{m}\rangle)}^{(n)}.\] By setting \(\mathrm{sub}\stackrel{{\mathrm{Def}}}{{=}}\lambda j,\vec{j}.\,\,v (\langle j,\vec{j}\rangle)\) we are therefore done. A central result of computability theory is the recursion respectively fixed point theorem. Our next goal is derive this theorem for the function classes and numberings considered in this paper. To this end we have to consider the family \(\theta_{\theta_{i}^{(m+1)}(i,\vec{y})}^{(n)}(\vec{y}))_{(i,\vec{y})\in\omega^ {m+1}}\). Since numbering \(\theta^{(n)}\) is only defined on defined indices in \(\omega\), this family is not well defined. With help of the enumeration theorem is can however be extended to situations in which index functions may be not defined. We then obtain \[\theta_{\theta_{i}^{(m+1)}(i,\vec{y})}^{(n)}(\vec{x})=\begin{cases}\theta_{ \theta_{i}^{(m+1)}(i,\vec{y})}^{(n)}(\vec{x})&\text{if }\theta_{i}^{(m+1)}(i,\vec{y})\downarrow,\\ \text{undefined}&\text{otherwise}.\end{cases}\] Thus, \(\theta_{\theta_{i}^{(m+1)}(i,\vec{y})}^{(n)}\in\mathcal{S}_{A}^{(n)}\), for every choice of \((i,\vec{y})\in\omega^{m+1}\). **Lemma 4.8**.: _There exists \(g\in\mathcal{R}^{(m+1)}\) so that_ \[\theta_{g(i,\vec{y})}^{(n)}=\theta_{\theta_{i}^{(m+1)}(i,\vec{y})}^{(n)}.\] Proof.: Again, let \(h,\hat{h}\in\mathcal{R}^{(1)}\) be enumerations as in Condition (QGN I) for \(\theta^{(n)}\) and \(\theta^{(m+1)}\), respectively. Moreover, define \[\widehat{Q}(\langle i,\vec{y}\rangle,\langle a,b\rangle,z) \Leftrightarrow\pi_{1}^{(2)}(\hat{h}(a))=\langle i,i,\vec{y}\rangle\wedge\pi_ {2}^{(2)}(\hat{h}(a))>0\wedge\] \[\pi_{2}^{2}(h(b))>0\wedge\pi_{1}^{2}(h(b))=\langle\pi_{2}^{2}( \hat{h}(a))\dot{-}1\rangle*\arg(z),\] \[Q(\langle i,\vec{y}\rangle,t,z)\Leftrightarrow(\exists c\leq t)\,\widehat{Q }(\langle i,\vec{y}\rangle,c,z),\] \[f(\langle i,\vec{y}\rangle,t,z)\stackrel{{\mathrm{ Def}}}{{=}}\pi_{2}^{(2)}(h(\pi_{2}^{2}(\mu c\leq t.\,\widehat{Q}(\langle i, \vec{y}\rangle,c,z)))).\] Then by applying Lemma 4.2 we obtain a function \(k\in\mathcal{R}^{(2)}\) so that \(\lambda t.\,\,k(\langle i,\vec{y}\rangle,t)\) enumerates the extended graph of \(\theta_{\theta_{i}^{(m+1)}(i,\vec{y})}^{(n)}\). 
Now, using Condition (QWGN II) for \(\theta^{(n)}\) we have that there is some \(v\in\mathcal{R}^{(1)}\) with \[\theta_{v(i,\vec{y})}^{(n)}=\theta_{\theta_{i}^{(m+1)}(i,\vec{y})}^{(n)}.\] Set \(g(i,\vec{y})\stackrel{{\mathrm{Def}}}{{=}}\lambda i,\vec{y}.\,\,v (\langle i,\vec{y}\rangle)\). This more technical result allows to derive the recursion theorem. **Theorem 4.9** (Recursion Theorem).: _Let \(f\in\mathcal{R}^{(m+1}\). Then there is a function \(e_{f}\in\mathcal{R}_{(m)}\) so that_ \[\theta^{(n)}_{f(e_{f}(\vec{y}),\vec{y})}=\theta^{(n)}_{e_{f}(\vec{y})}.\] Proof.: Let \(g\in\mathcal{R}^{(m+1)}\) be as in Lemma 4.8. Then there is some index \(j\in\omega\) with \(\theta^{(m+1)}_{j}(i,\vec{y})=f(g(i,\vec{y}),\vec{y})\). Set \(e_{f}(\vec{y})\stackrel{{\mathrm{Def}}}{{=}}g(j,\vec{y})\). Then \(e_{f}\in\mathcal{R}^{(m)}\) and \[\theta^{(n)}_{e_{f}(\vec{y})}=\theta^{(n)}_{g(j,\vec{y})}=\theta^{(n)}_{\theta ^{n+1}_{j}(j,\vec{y})}=\theta^{(n)}_{f(g(j),\vec{y},\vec{y})}=\theta^{(n)}_{f( e_{f}(\vec{y}),\vec{y})}.\qed\] As we see, the proof of this theorem follows essentially the same idea as in the case of classical computability theory (cf. e.g. [36]). This will be the case in several of the subsequent results. In each case, however, there are certain auxiliary functions such as function \(g\) above, the existence of which has to be demonstrated in another way as in the known theory. The above result is the fixed point version of the recursion theorem. With help of the _smn_-theorem we now obtain Kleene's version [23]. **Corollary 4.10**.: _Let \(r\in\mathcal{R}^{(n+1)}\). Then there is some index \(c_{r}\in\omega\) with_ \[r(c_{r},\vec{x})=\theta^{(n)}_{c_{r}}.\] Similarly, we obtain an effective version of this corollary which says the index \(c_{r}\) can be computed from an index of \(r\). **Corollary 4.11**.: _There is a function \(q\in\mathcal{R}^{(1)}\) such that_ \[\theta^{(n+1)}_{i}(q(i),\vec{x})=\theta^{(n)}_{q(i)}(\vec{x}).\] Next, we will derive an effective version of the fixed point formulation above. It says that the fixed point \(e_{f}\) in Theorem 4.9 can be computed from an index of \(f\). **Theorem 4.12**.: _There is a function \(e\in\mathcal{R}^{(m+1)}\) such that all \(i\in\omega\) with \(\theta^{(m+1)}\) being total,_ \[\theta^{(m+1)}_{\theta^{(m+1)}_{i}(e(i,\vec{y}),\vec{y})}=\theta^{(n)}_{e(i, \vec{y})}.\] Proof.: Let \(g\in\mathcal{R}^{(m+1)}\) be as in Lemma 4.8, and \(i\in\omega\) such that \(\theta^{(m+1)}_{i}\) is total. Since for total functions the modified substitution defined above agrees with usual substitution, it follows from Theorem 4.7 that there is a function \(q\in R^{(1)}\) so that \[\theta^{(m+1)}_{q(i)}(j,\vec{y})=\theta^{(m+1)}_{i}(g(j,\vec{y}),\vec{y}).\] Set \(e(j,\vec{y})\stackrel{{\mathrm{Def}}}{{=}}g(q(j),\vec{y})\). Then \(e\in\mathcal{R}^{(m+1)}\) and \[\theta^{(n)}_{e(i,\vec{y})} =\theta^{(n)}_{\theta^{(m+1)}_{q(i)}(q(i),\vec{y})}\] \[=\theta^{(m+1)}_{\theta^{(m+1)}_{i}(g(q(i),\vec{y}),\vec{y})}\] \[=\theta^{(n)}_{\theta^{(m+1)}_{i}(e(i,\vec{y}),\vec{y})}.\qed\] As a further consequence of the recursion theorem we obtain that the index functions used in this section can be chosen as one-to-one. The padding lemma we derive next will be needed here. 
**Theorem 4.13** (Padding Lemma).: \(\theta^{(n)}\) _has a one-to-one padding function, that is, a one-to-one function \(p\in\mathcal{R}^{(2)}\) so that_ \[\theta^{(n)}_{p(i,j)}=\theta^{(n)}_{i}.\] Proof.: Let \(s\in\mathcal{R}^{(2)}\) be an _smn_-function for the case \(m=2\) and \(r\in\mathcal{R}^{(4)}\) defined by \[r(c,j,x,b)=\begin{cases}0&\text{if there exists $a<j$ with $s(c,a)=s(c,j)$,}\\ 1&\text{if for all $a<j$, $s(c,a)\neq s(c,j)$, $j<x$, and $s(c,j)=s(c,x)$.}\\ b&\text{otherwise.}\end{cases}\] Since the modified substitution of functions in \(\widehat{\mathcal{S}}_{A}\) into total functions agrees with the usual one, it follows with Theorem 4.7 that there is a function \(g\in\mathcal{R}^{(1)}\) so that \[\theta^{(n+2)}_{g(i)}(c,j,\vec{x})=r(c,j,x_{1},\theta^{(n)}_{i}(\vec{x})).\] It then follows with Corollay 4.11 that there exists \(k\in\mathcal{R}^{(1)}\) with \[\theta^{(n+2)}_{g(i)}(k(i),j,\vec{x})=\theta^{(n+1)}_{k(i)}(j,\vec{x}).\] Now, assume that \(\lambda j.\)\(s(k(i),j)\) is not one-to-one, and let \(a\) be minimal with the property that \[(\exists j>a)\,s(k(i),a)=s(k(i),j).\] Let \(\hat{j}\) such one such \(j\). Then, \[0 =\theta^{(n+2)}_{g(i)}(k(i),\hat{j},\hat{j}^{(n)})\] \[=\theta^{(n+1)}_{k(i)}(\hat{j},\hat{j}^{(n)})\] \[=\theta^{(n)}_{s(k(i),\hat{j})}(\hat{j}^{(n)})\] \[=\theta^{(n)}_{s(k(i),a)}(\hat{j}^{(n)})\] \[=\theta^{(n+1)}_{k(i)}(a,\hat{j}^{(n)})\] \[=\theta^{(n+2)}_{g(i)}(k(i),a,\hat{j}^{(n)})\] \[=1.\] Thus, \(\lambda j.\)\(s(k(i),j)\) is one-to-one. Define \[p(i,j)=\begin{cases}s(k(0),0)&\text{if $(i,j)=(0,0)$,}\\ s(k(i),\mu z.\,\left(\forall(a,b)<(i,j)\right)s(k(i),z){\neg}p(a,b))&\text{ otherwise.}\end{cases}\] Note that because \(\lambda j.\)\(s(k(i),j)\) is one-to-one, if \((i,j){\neg}(0,0)\), then there is always some \(z\) with \(s(k(i),z){\neg}p(a,b)\), for all \(a,b\in\omega\) with \((a,b)<(i,j)\). Therefore \(p\in R^{(2)}\). Moreover, \(p\) is one-to-one and \(\theta^{(n)}_{p(i,j)}=\theta^{(n)}_{i}\). **Corollary 4.14**.: _For every \(f\in\mathcal{R}^{(m)}\) there is a one-to-one function \(\hat{f}\in\mathcal{R}^{(m)}\) so that \(\theta^{(m)}_{\hat{f}(\vec{g})}=\theta^{(m)}_{\hat{f}(\vec{g})}\)._ It follows that that the function \(v\) in Requirement (QGN II), the function \(d\) in Condition (QGN E), the _smn_-function, the function \(g\) in Statement (4) of Theorem 4.6, the function sub in Theorem 4.7, the function \(g\) in Lemma 4.8, and hence the function \(e_{f}\) in the recursion theorem, the function \(q\) in Corollary 4.11 and the function \(e\) in the effective version of the recursion theorem can all be chosen as one-to-one. A consequence of the latter fact is that every recursive definition has infinitely many fixed points. **Theorem 4.15**.: _There is a one-to-one function \(\operatorname{fix}\in\mathcal{R}^{(2)}\) so that for all \(i\in\omega\) for which \(\theta^{(n)}_{i}\) is total, and all \(j\in\omega\),_ \[\theta^{(n)}_{\theta^{(1)}_{i}(\operatorname{fix}(i,j))}=\theta^{(n)}_{ \operatorname{fix}(i,j)}.\] Proof.: Let \(i\in\omega\) be such that \(\theta^{(n)}_{i}\) is total. By Theorem 4.7 there is a one-to-one function \(k\in\mathcal{R}^{(1)}\) with \(\theta^{(2)}_{k(i)}(c,j)=\theta^{(1)}_{i}(c)\). Moreover, as just seen, the function \(e\in\mathcal{R}^{(2)}\) in Theorem 4.12 can be chosen as one-to-one. 
Thus, we have that \[\theta^{(n)}_{\theta^{(1)}_{i}(e(k(i),j))}=\theta^{(n)}_{\theta^{(2)}_{k(i)}(e (k(i),j),j)}=\theta^{(n)}_{e(k(i),j)}.\] It therefore suffices to define \(\operatorname{\mathrm{fix}}(i,j)\stackrel{{\mathrm{Def}}}{{=}} e(k(i),j)\). With help of the recursion theorem we can now show that it is not decidable whether a function is defined on an initial segment only, or is total, though \(\mathcal{S}_{A}\) has a simple structure. This and several similar results will be consequences of Rice's theorem [34]. Let to this end, for \(X\subseteq\mathcal{S}^{(n)}_{A}\), \[I_{\theta^{(n)}}(X)\stackrel{{\mathrm{Def}}}{{=}}\{\,i\in\omega \mid\theta^{(n)}\in X\,\}.\] **Theorem 4.16** (Rice).: _Let \(C\subseteq\mathcal{S}^{(n)}_{A}\). Then \(I_{\theta^{(n)}}(C)\) is computable if, and only if, \(C=\emptyset\) or \(C=\mathcal{S}^{(n)}_{A}\)._ Proof.: If \(C=\emptyset\) or \(C=\mathcal{S}^{(n)}_{A}\), respectively, then \(I_{\theta^{(n)}}(C)=\emptyset\) or \(I_{\theta^{(n)}}(C)=\omega\) and hence computable. For the converse implication assume that \(\emptyset\neq C\neq\mathcal{S}^{(n)}_{A}\), but \(I_{\theta^{(n)}}(C)\) is computable. Then \(\emptyset\neq I_{\theta^{(n)}}(C)\neq\omega\). Let \(i\in I_{\theta^{(n)}}(C)\) and \(j\in\omega\backslash I_{\theta^{(n)}}(C)\). Set \[f(x)=\begin{cases}j&\text{if }x\in I_{\theta^{(n)}}(C).\\ i&\text{otherwise},\end{cases}\] then \(f\in\mathcal{R}^{(1)}\). By the recursion theorem there is hence some index \(a\in\omega\) so that \(\theta^{(n)}_{a}=\theta^{(n)}_{f(a)}\). Then \(\theta^{(n)}_{a}\in C\), exactly if \(\theta^{(n)}_{a}\notin C\), a contradiction. As a consequence of this theorem we now obtain that the sets \(\{\,i\in\omega\mid\theta^{(n)}_{i}\in\operatorname{\mathrm{A}\!\mathrm{n}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Now, consider \(\theta_{i}^{(n)}(\kappa(\hat{i}))\), and suppose that in (3) the first case holds. Then \[\theta_{i}^{(n)}(\kappa(\hat{i}))=\theta_{i}^{(n)}(\kappa(\hat{i}))+1.\] Therefore, the second case must hold. By the properties of \(v\), we have that for all \(\vec{x}\leq\kappa(\hat{i})\), \(\theta_{v(i)}^{(n)}(\vec{x})=\theta_{i}^{(n)}(\vec{x})\). Moreover, \(\theta_{v(i)}^{(n)}\) is an initial segment function and hence \(\theta_{q(v(i))}^{(n)}\) a total extension of \(\theta_{v(i)}^{(n)}\). Because \(\kappa(\hat{i})\in\operatorname{dom}(\theta_{v(i)}^{(n)})\), we have that \(\theta_{q(v(i))}^{(n)}(\kappa(\hat{i}))=\theta_{v(i)}^{(n)}(\kappa(\hat{i}))\). Thus, \[\theta_{i}^{(n)}(\kappa(\hat{i}))=\theta_{q(v(i))}^{(n)}(\kappa(\hat{i}))+1= \theta_{v(\hat{i})}^{(n)}(\kappa(\hat{i}))+1=\theta_{i}^{(n)}(\kappa(\hat{i}) )+1.\] This shows that there cannot exist a function \(q\) with the stated properties. Next, we will consider the construction of the functions \(v\) and \(g\) used in the proof above. We will start with the construction of \(v\). 
Let to this end \(h\in\mathcal{R}^{(1)}\) be an enumeration of the extended graph of the universal function of \(\theta_{A}^{(n)}\) and \(\operatorname{er}(i)\) be the first \(a\in A\) with respect to a fixed enumeration of \(A\) so that \(\kappa(i)<a^{(n)}\). Moreover, let \[\widehat{Q}(i,b,z)\Leftrightarrow\pi_{2}^{(2)}(h(b))>0\wedge \bigwedge_{\nu=1}^{n}\pi_{\nu}^{(n)}(\arg(z))<\operatorname{er}(i)\wedge\pi_{1 }^{(2)}(h(b))=\langle i\rangle*\arg(z),\] \[Q(i,t,z)\Leftrightarrow(\exists b\leq t)\,\widehat{Q}(i,b,z),\] \[f(i,t,z)\stackrel{{\operatorname{Def}}}{{=}}\pi_{2 }^{(2)}(h(\mu b\leq t.\,\widehat{Q}(i,b,z))))\] and \(k\in\mathcal{R}^{(2)}\) be as in Lemma 4.2. Then \(\lambda t.k(i,t)\) enumerates the extended graph of the function \[r(\vec{x})=\begin{cases}\theta_{i}^{(n)}(\vec{x})&\text{if }\vec{x}<( \operatorname{er}(i))^{(n)},\\ \text{undefined}&\text{otherwise}.\end{cases}\] Then \(r\in\textsc{AnF}_{A}^{(n)}\). Hence, by (QGN II), there is a function \(v\in\mathcal{R}^{(1)}\) with \(r=\theta_{v(i)}^{(n)}\). As is easily seen, \(v\) has the properties mentioned above. For the construction of function \(g\) let \(h^{\prime}\in\mathcal{R}^{(1)}\) enumerate the extended graph of the universal function of \(\theta^{(1)}\). Moreover, define \[\widehat{Q}_{1}(i,b,z)\Leftrightarrow\pi_{2}^{(2)}(h(b))>0\wedge \pi_{1}^{(2)}(h(b))=\langle i\rangle*\arg(z),\] \[\widehat{Q}_{2}(i,b)\Leftrightarrow\pi_{1}^{(2)}(h^{\prime}(b))= \langle j,i\rangle\wedge\pi_{2}^{(2)}(h^{\prime}(b))>0,\] \[\widehat{Q}_{3}(i.b,z)\Leftrightarrow\pi_{2}^{(2)}(h(b))>0\wedge \pi_{1}^{(2)}h(b))=\langle q(v(i))\rangle*\arg z,\] \[\widehat{Q}_{4}(i,b,z)\Leftrightarrow\] \[\quad\quad\quad[\widehat{Q}_{1}(i,b,z)\wedge-(\exists c<b)\, \widehat{Q}_{2}(i,c)]\vee[\widehat{Q}_{3}(i,b,z)\wedge(\exists b^{\prime}<b )\,[\widehat{Q}_{2}(i,b^{\prime})\wedge\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad 
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad As consequence of this result it follows that also the edge length of the domain of an initial segment function \(\theta_{i}^{(n)}\) cannot be computed from given index \(i\). **Corollary 4.18**.: _There is no function \(p\in\mathcal{R}^{(1)}\) such that \(p(i)\) is the edge length of \(\mathrm{dom}(\theta_{i}^{(n)})\), if \(\theta_{i}^{(n)}\in\textsc{Anf}_{A}^{(n)}\)._ Proof.: Again we assume that there is such a function \(p\). Let \(j\) be a \(\theta^{(1)}\)-index of \(p\). In what follows we construct a function \(q\in\mathcal{R}^{(1)}\) so that \[\theta_{q(i)}^{(n)}(\vec{x})=\begin{cases}\theta_{i}^{(n)}(\vec{x})&\text{if there is some $t\in\omega$ so that for any $t^{\prime}\in\omega$, if $\langle\langle i,\vec{x}\rangle,\theta_{i}^{(n)}(\vec{x})+1\rangle\in \mathrm{graph}_{\mathrm{e}}(\lambda a,\vec{y}.\ \theta_{a}^{(n)}(\vec{y}))_{t}$ and $\langle\langle j,i\rangle,p(i)+1\rangle\in \mathrm{graph}_{\mathrm{e}}(\lambda a,y.\ \theta_{a}^{(1)}(y))_{t^{\prime}}$ then $t\leq t^{\prime}$, or $\vec{x}<(p(i))^{(n)}$,}\\ 0&\text{if $\vec{x}\prec(p(i))^{(n)}$ and there is some $t^{\prime}\in\omega$ so that for any $t\in\omega$, if}\\ \langle\langle i,\vec{x}\rangle,\theta_{i}^{(n)}(\vec{x})+1\rangle\in\mathrm{ graph}_{\mathrm{e}}(\lambda a,\vec{y}.\ \theta_{a}^{(n)}(\vec{y}))_{t}$ and $\langle\langle j,i\rangle,p(i)+1\rangle\in\mathrm{ graph}_{\mathrm{e}}(\lambda a,y.\ \theta_{a}^{(1)}(y))_{t^{\prime}}$ then $t^{\prime}<t$.}\\ \end{cases}\] Because \(p\) is total, always one of the two cases holds. If in the first case we find \(\langle\langle i,\vec{x}\rangle,\theta_{i}^{(n)}(\vec{x})\rangle\) not later than the other tuple, it follows that \(\theta_{i}^{(n)}(\vec{x})\) is defined. The same is true, once we know that \(\vec{x}<(p(i))^{(n)}\). Since then \(\vec{x}\in\mathrm{dom}(\theta_{i}^{(n)})\), by the properties of \(p\). It follows that \(\theta_{q(i)}^{(n)}(\vec{x})\in\mathcal{R}^{(n)}\). Moreover, \(\theta_{q(i)}^{(n)}\) is an extension of \(\theta_{i}^{(n)}\). This contradicts what we have shown in the previous theorem. Hence, there is no such function \(p\). It remains to show how the function \(q\) can be constructed. Let to this end, \(h,\hat{h}\), respectively, be computable enumerations of the extended graphs of \(\theta^{(n)}\) and \(\theta^{(1)}\) that exist by Condition (QGN I). Moreover, let the relations \(\widehat{Q}_{1}\) and \(\widehat{Q}_{2}\) be defined as in the proof of the previous theorem. 
In addition, let \[\widehat{Q}_{3}(i,b,z)\Leftrightarrow\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\ Since the effective version of the recursion theorem holds for \(\theta^{(n)}\), it follows by a result of Ershov [7, Satz 9] that \(\theta^{(n)}\) is precomplete which means that for every partial computable function \(r\in\mathcal{P}^{(1)}\) there is a total computable function \(g\in\mathcal{R}^{(1)}\) so that for all \(i\in\operatorname{dom}(r)\), \[\theta^{(n)}_{r(i)}=\theta^{(n)}_{g(i)}.\] Consequently, we would not be able to obtain positive results in Theorem 4.17 and Corollary 4.18 by working in classical computability theory. In that case one has more algorithms at hand. However, because of the precompleteness of \(\theta^{(n)}\) there also cannot exist any partial computable functions \(q\) and \(p\) with the properties stated in both results. ## 5 Computably enumerable sets As usual, we call a set \(C\subseteq\omega\)_computably enumerable (c.e.)_ if there is a function \(f\in\mathcal{R}^{(}1)\) with \(C=\operatorname{range}(f)\) or \(C\) is empty. Then, of course, it follows in the usual way that a set of natural numbers is computable if and only if the set and its complement are both c.e. **Theorem 5.1**.: _The following five statements are equivalent:_ 1. \(C\) _is c.e._ 2. \(C=\operatorname{range}(r)\)_, for some_ \(r\in\mathcal{S}^{(1)}_{A}\)_._ 3. _For some_ \(f\in\mathcal{R}^{(1)}\)_,_ \(C=\{\,a\mid a+1\in\operatorname{range}\!f\,\}\)__ 4. _For some computable_ \(B\subseteq\omega\)_,_ \(C=\{\,a\mid(\exists i)\,\langle a,i\rangle\in B\,\}\)_._ 5. _For some c.e._ \(B\subseteq\omega\)_,_ \(C=\{\,a\mid(\exists i)\,\langle a,i\rangle\in B\,\}\)_._ Proof.: The proof of \((1)\Rightarrow(2)\) is obvious: If \(C\) is empty let \(r\) be the nowhere defined function. In the other case there is a function \(f\in\mathcal{R}^{(1)}\) with \(C=\operatorname{range}(f)\). Then choose \(r=f\). Next, we show that \((2)\Rightarrow(3)\). If \(C\) is empty set \(f\stackrel{{\operatorname{Def}}}{{=}}\lambda x.\ 0\). 
In case \(C\) is not empty but finite, say \(C=\{a_{0},\ldots,a_{m}\}\), define \[f(x)\stackrel{{\operatorname{Def}}}{{=}}\begin{cases}a_{x}+1& \text{if }x<m,\\ a_{m}+1&\text{otherwise.}\end{cases}\] If, finally, \(C\) is infinite, then the function \(r\in\mathcal{S}^{(1)}_{A}\) with \(C=\operatorname{range}(r)\) needs be total. So, set \(f\stackrel{{\operatorname{Def}}}{{=}}\lambda x.\ r(x)+1\). To show \((4)\Rightarrow)(4)\), set \(B\stackrel{{\operatorname{Def}}}{{=}}\{\,\langle a,i\rangle\mid f (i)=a+1\,\}\). Since \(f\) is total computable, it follows as ususally that \(B\) is computable. Since computable sets are c.e., we also have that \((4)\Rightarrow(5)\). So, it remains to show that \((5)\Rightarrow(1)\). If \(B\) is empty, so same holds for \(C\). Assume that \(B\) is not empty. Then \(B=\operatorname{range}(g)\), for some \(g\in\mathcal{R}^{(1)}\). Set \(f\stackrel{{\operatorname{Def}}}{{=}}\pi_{1}^{(2)}\circ g\). Then \(f\in\mathcal{R}^{(1)}\) and \(C=\operatorname{range}(f)\). Thus, \(C\) is c.e. The above result contains the well know characterisations of c.e. sets. Only the characterisation via function domains is missing. Here we only have that the domain of every function in \(\mathcal{S}^{(1)}_{A}\) is c.e. Statements \((4)\) and \((5)\) are also know as projection theorems. The next result provides the connection between the function studied in this work and the c.e. sets. **Proposition 5.2**.: 1. _For_ \(f\in\mathcal{F}^{(1)}\)_,_ \[f\in\mathcal{R}^{(1)}\Leftrightarrow\operatorname{graph}(f)\text{ is computable} \Leftrightarrow\operatorname{graph}(f)\text{ is c.e.}\] 2. _For_ \(r\in\widehat{\mathcal{S}}^{(1)}_{A}\)_,_ \[r\in\mathcal{S}^{(1)}_{A}\Leftrightarrow\operatorname{graph}(r)\text{ is c.e.}\] Proof.: We only show (2). The other statement follows as in the classical case. Because of statement (1) we only have to consider the case that \(r\in\textsc{Anf}_{A}^{(1)}\). Then \(r\in\mathcal{S}_{A}^{(1)}\) and \(\operatorname{graph}(r)\) is a finite set. Therefore the equivalence holds. Next, we will derive the well know closure properties of the class of c.e. sets. In the literature some of them are usually shown by using the function domain characterisation. Here, we will have to give other proofs. For completeness reasons, we include proofs of all statements. **Theorem 5.3**.: _For \(r\in\mathcal{S}_{A}^{(1)}\) and \(B,C\subseteq\omega\), if \(B\) and \(C\) are c.e. then so are \(B\cap C\), \(B\cup C\), \(\langle B,C\rangle\)\(r^{-1}[B]\) and \(r[B]\)._ Proof.: By Theorem 5.1 there are functions\(g,k\in\mathcal{R}^{(1)}\) with \(B=\{\,a\mid a+1\in\operatorname{range}(g)\,\}\) and \(C=\{\,a\mid a+1\in\operatorname{range}(k)\,\}\). set \[f_{1}(x)\stackrel{{\mathrm{Def}}}{{=}}\begin{cases}g(x)&\text{if for some $y\leqslant x$, $g(x)=k(y)$},\\ k(x)&\text{if for some $y<x$, $k(x)=g(y)$},\\ 0&\text{otherwise},\end{cases}\] \[f_{2}(x)\stackrel{{\mathrm{Def}}}{{=}}\begin{cases}g(x)&\text{if $x $ is even},\\ k(x)&\text{otherwise},\end{cases}\] \[f_{3}(x)\stackrel{{\mathrm{Def}}}{{=}}\begin{cases}\langle g(x) \mathrel{\raisebox{-1.0pt}{$\rightharpoonup$}}1,k(x)\mathrel{\raisebox{-1. 0pt}{$\rightharpoonup$}}1\rangle+1&\text{if $g(x)>0$ and $k(x)>0$},\\ 0&\text{otherwise}.\end{cases}\] Then \(f_{1},f_{2},f_{3}\in\mathcal{R}^{(1)}\). Moreover, \(B\cap C=\{\,a\mid a\in\operatorname{range}(f_{1})\,\}\), \(B\cup C=\{\,a\mid a+1\in\operatorname{range}(f_{2})\,\}\) and \(\langle B,C\rangle=\{\,a\mid a+1\in\operatorname{range}(f_{3})\,\}\). Hence, these sets are c.e. 
The computable enumerability of \(r^{-1}[B]\) follows with the Projection Theorem 5.1(5), since \[r^{-1}[B] =\{\,x\in\omega\mid(\exists y)\,\langle x,y\rangle\in \operatorname{graph}(r)\wedge y\in B\,\}\] \[=\{\,x\in\omega\mid(\exists y)\,\langle x,y\rangle\in \operatorname{graph}(r)\cap\langle\omega,B\,\rangle\,\}.\] In the same way we obtain that \(r[B]\) is c.e. As we have seen in Theorem 5.1, the c.e. sets are the ranges of functions in \(\mathcal{S}_{A}^{(1)}\). This allows us to introduce a numbering \(W^{A}\) of all c.e. sets. Let \(\theta\) be a quasi-Godel numbering of \(\mathcal{S}_{A}^{(1)}\) and define \[W_{i}^{A}\stackrel{{\mathrm{Def}}}{{=}}\operatorname{range}( \theta_{i}).\] Since \(\theta\) is a quasi-Godel numbering, \(W^{A}\) satisfies the subsequent normal form theorem. **Theorem 5.4** (Enumeration Theorem).: _There is a computable set \(B\subseteq\omega\) such that for all \(i\in\omega\),_ \[W_{i}^{A}=\{\,x\in\omega\mid(\exists t)\,\langle i,x,t\rangle\in B\,\}.\] Proof.: By Condition (QGN I) there is a computable enumeration \(h\in\mathcal{R}^{(1)}\) of the extended graph of the universal function of \(\theta\). The we have that \[x\in W_{i}^{A}\Leftrightarrow(\exists t)\,\pi_{1}^{(2)}(\pi_{1}^{(2)}(h(t)))= i\wedge\pi_{2}^{(2)}(h(t))=x+1.\] Set \(B\stackrel{{\mathrm{Def}}}{{=}}\{\,\langle i,x,t\rangle\mid\, \pi_{1}^{(2)}(\pi_{1}^{(2)}(h(t)))=i\wedge\pi_{2}^{(2)}(h(t))=x+1\,\}\). Then \(B\) has the asserted properties. The result strengthens the Projection Theorem 5.1(4): the c.e. sets can be uniformly obtained from the recursive sets by applying a projection. As follows from the above proof we moreover have **Corollary 5.5**.: _The set \(\{\,\langle i,x\rangle\mid x\in W_{i}^{A}\,\}\) is c.e._ Our next aim is to show that for the closure operations in Theorem 5.3 there are corresponding computable index operations. As in the last section, we have to construct functions that enumerate the extended graph of another function. These other functions are now enumerations of c.e. sets. The definition of the graph enumerations is again based on a scheme. which we indicate below. Let to this end \([\![\cdot]\!]\) be an effective coding of all finite sequences of natural numbers that is one-to-one and onto such that there are total computable functions lth and \((\cdot)\). with \[\operatorname{lth}([\![x_{1},\ldots.x_{m}]\!])=m\quad\text{and}\quad([\![x_{1 },\ldots.x_{m}]\!])_{j}=x_{j}\quad(1\leq j\leq m).\] Then the _course of values_ of function \(g\in\mathcal{R}^{(2)}\) up to \(t\) is defined by \[\overline{g}(i,t)\stackrel{{\text{Def}}}{{=}}[\![g(i,0),\ldots,g(i,t)]\!].\] **Lemma 5.6**.: _Let \(f\in\mathcal{R}^{(3)}\) and \(Q\subseteq\omega^{3}\) be a computable relation with \(f(i,t,z)>0\) if \(Q(i,t,z)\) holds. Moreover, let \(k\in\mathcal{PF}^{(2)}\) be defined by_ \[k(i,2t)=\langle t,0\rangle\] \[k(i,2t+1)=g(i,t),\quad\text{where}\] \[g(i,0)=\langle 0,0\rangle\] \[g(i,t+1)=\begin{cases}\langle 0,0\rangle&\text{if not }Q(i,t+1, \overline{g}(i,t))\text{ but}\\ &\pi_{2}^{2}(g(i,t))=0,\\ \langle 0,f(i,t+1,\overline{g}(i,t))\rangle&\text{if }Q(i,t+1,\overline{g}(i,t)) \text{ and}\\ &\pi_{2}^{(2)}(g(i,t))=0,\\ \langle 1+\pi_{1}^{(2)}(g(i,t)),f(i,t+1,\overline{g}(i,t))\rangle&\text{if }Q(i,t+1, \overline{g}(i,t))\text{ and}\\ &\pi_{2}^{(2)}(g*i,t))>0,\\ \langle 1+\pi_{1}^{(2)}(g(i,t)),\pi_{2}^{(2)}(g(i,t))\rangle&\text{otherwise}. 
\end{cases}\] _Then \(k\in\mathcal{R}^{(2)}\) and \(\lambda t.\)\(k(i,t)\) enumerates the extended graph of a unary computable function which is either nowhere defined or total._ Proof.: As follows from the definition, \(k\in\mathcal{R}^{(2)}\). The function \(\lambda t.\)\(g(i,t)\) first lists the value \(\langle 0,0\rangle\) and repeats this until \(Q(i,t,\overline{g}(i,t-1))\) holds for the first time, and then lists \(\langle 0,f(i,t,\overline{g}(i,t-1))\rangle\). According to the assumption, \(f(i,t,\overline{g}(i,t-1))>0\) in this case. Therefore, we have for all \(t^{\prime}\geq t\) that \(\pi_{2}^{(2)}(g(i,t^{\prime}))>0\). Thus, \(\operatorname{range}(\lambda t.k(i,t))\) is the extended graph of a total function or the nowhere defined function, depending on whether there is some \(t>0\) with \(Q(i,t,\overline{g}(t-1))\), or not. **Theorem 5.7**.: _There are functions \(\operatorname{cut},un,fpair,inv,fim\in\mathcal{R}^{(2)}\) so that_ 1. \(W^{A}_{\operatorname{cut}(i,j)}W^{A}_{i}\cap W^{A}_{j}\)_,_ 2. \(W^{A}_{\operatorname{un}(i,j)}=W^{A}_{i}\cup W^{A}_{j}\)_,_ 3. \(W^{A}_{\operatorname{air}(i,j)}=\langle W^{A}_{i},W^{A}_{j}\rangle\)_,_ 4. \(W^{A}_{\operatorname{inv}(i,j)}=\theta_{i}^{-1}[W^{A}_{j}]\)_,_ 5. \(W^{A}_{\operatorname{im}(i,j)}=\theta_{i}[W^{A}_{j}]\)_._ Proof.: (1) By Condition (QGN I) there exists a computable enumeration of the universal function of numbering \(\theta\), say \(h\in\mathcal{R}^{(2)}\). Moreover define \[\widehat{Q}(\langle i,j\rangle,\langle a,b\rangle.z)\Leftrightarrow\] \[\pi_{1}^{(2)}(\pi_{2}^{(2)}(h(a)))=i\wedge\pi_{1}^{(2)}(\pi_{1}^{ (2)}(h(b)))=j\wedge\pi_{2}^{(2)}(h(a))=\pi_{2}^{(2)}(h(b))\wedge\] \[\pi_{2}^{(2)}((z)_{1})=0\wedge(\forall 1\leq c\leq\operatorname{lth}(z)).\;\pi_{2} ^{(2)}(h(a))\neq\pi_{2}^{(2)}((z)_{c}),\] \[Q(\langle i,j\rangle,t,z)\Leftrightarrow(\exists x\leq t)\,\widehat{Q}(\langle i,j\rangle,x,z),\] \[f(\langle i,j\rangle,t,z)\stackrel{{\mathrm{Def}}}{{=}}\pi_{2}^{ (2)}(h(\pi_{1}^{(2)}(\mu x\leq t.\;\widehat{Q}(\langle i,j\rangle,x,z)))).\] Then it follows for the function \(k\in\mathcal{R}^{(2)}\) constructed according to Lemma 5.6 that \(\lambda t.\;k(\langle i,j\rangle,t))\) enumerates the extended graph of a function \(r\in\mathcal{R}^{(1)}\) that enumerates \(W_{i}^{A}\cap W_{j}^{A}\). By Condition (QGN -II) there is therefore a function \(v\in\mathcal{R}^{(1)}\) with \(r=\theta_{v(\langle i,j\rangle,x,z)}\). Thus, it suffices to set \(\operatorname{cut}(i,j)=v(\langle i,j\rangle)\). The remaining statements follow in the same way. We only indicate how \(Q\) and \(f\) have to be chosen in each case. 
(2) Set \[\widehat{Q}(\langle i,j\rangle,a,z)\Leftrightarrow[\pi_{1}^{(2)} (\pi_{1}^{(2)}(h(a)))=i\vee\pi_{1}^{(2)}(\pi_{1}^{(2)}(h(a))=j]\wedge\] \[\pi_{2}^{(2)}((z)_{1})=0\wedge(\forall 1\leq c\leq \operatorname{lth}(z))\,\pi_{2}^{(2)}(h(a))\neq\pi_{2}^{(2)}((z)_{c}).\] \[Q(\langle i,j\rangle,t,z)\Leftrightarrow(\exists a\leq t)\,\widehat{Q}( \langle i,j\rangle,a,z),\] \[f(\langle i,j\rangle,t,z)\stackrel{{\mathrm{Def}}}{{=}}\pi^{(2)} (h(\mu a\leq t.\;\widehat{Q}(\langle i,j\rangle,a,z))).\] (3) Set \[\widehat{Q}(\langle i,j\rangle,\langle a,b\rangle,z)\Leftrightarrow\] \[\pi_{2}^{(2)}(h(a))>0\wedge\pi_{2}^{(2)}(h(b))>0\wedge\pi_{1}^{(2 )}(\pi_{1}^{(2)}(h(a)))=i\wedge\pi_{1}^{(2)}(\pi_{1}^{(2)}(h(b)))=j\wedge\] \[(\forall 1\leq c\leq\operatorname{lth}(z))\,\langle\pi_{2}^{(2)}(h(a) )\stackrel{{\cdot}}{{-}}1,\pi_{2}^{(2)}(h(b))\stackrel{{ \cdot}}{{-}}1\rangle\neq 1+\pi_{2}^{(2)}((z_{c})),\] \[Q(\langle i,j\rangle,t,z)\Leftrightarrow(\exists x\leq t)\,\widehat{Q}( \langle i,j\rangle,x,z),\] \[f(\langle i,j\rangle,t,z)\stackrel{{\mathrm{Def}}}{{=}}\langle \pi_{2}^{(2)}(h(\pi_{1}^{(2)}(\mu x\leq t.\;\widehat{Q}(\langle i,j\rangle x,z ))\stackrel{{\cdot}}{{-}}1)),\] \[\pi_{2}^{(2)}(h(\pi_{2}^{(2)}(\mu x\leq t.\;\widehat{Q}(\langle i,j\rangle x,z))\stackrel{{\cdot}}{{-}}1))\rangle+1.\] (4) Set \[\widehat{Q}(\langle i,j\rangle,\langle a,b\rangle,z)\Leftrightarrow\] \[\pi_{1}^{(2)}(\pi_{1}^{(2)}(h(a)))=i\wedge pi_{1}^{(2)}(\pi_{1}^{ (2)}(h(b)))=j\wedge\pi_{2}^{(2)}(h(a))=\pi_{2}^{(2)}(h(b))\wedge\] \[\pi_{2}^{(2)}(h(a))>0\wedge(\forall 1\leq c\leq\operatorname{lth}(z)) \,1+\pi_{2}^{(2)}(\pi_{1}^{(2)}(h(a)))\neq\pi_{2}^{(2)}((z)_{c}),\] \[Q(\langle i,j\rangle,t,z)\Leftrightarrow(\exists x\leq t)\,\widehat{Q}( \langle i,j\rangle,x,z),\] \[f(\langle i,j\rangle,t,z)\stackrel{{\mathrm{Def}}}{{=}}1+\pi_{2} ^{(2)}(\pi_{1}^{(2)}(h(\pi_{1}^{(2)}(\mu x\leq t.\;\widehat{Q}(\langle i,j \rangle,x,z))))).\] (5) Setze \[\widehat{Q}(\langle i,j\rangle,\langle a,b\rangle,z)\Leftrightarrow\] \[\pi_{1}^{(2)}(\pi_{1}^{(2)}(h(a)))=i\wedge\pi_{1}^{(2)}(\pi_{1}^ {(2)}(h(a)))=j\wedge\pi_{2}^{(2)}(\pi_{1}^{(2)}(h(a)))+1=\] \[\pi_{2}^{(2)}(h(b))\wedge\pi_{2}^{(2)}((z)_{1})=0\wedge(\forall 1 \leq c\leq\operatorname{lth}(z))\,\pi_{2}^{(2)}(h(a))\neq\pi_{2}^{(2)}((z)_{c}),\] \[Q(\langle i,j\rangle,t,z)\Leftrightarrow(\exists x\leq t)\,\widehat{Q}( \langle i,j\rangle,x,z),\] \[f(\langle i,j\rangle,t,z)\stackrel{{\mathrm{Def}}}{{=}} \pi_{1}^{(2)}(h(\pi_{1}^{(2)}(\mu x\leq t.\;\widehat{Q}(\langle i,j\rangle,x,z)))).\] Let \(K^{A}\stackrel{{\mathrm{Def}}}{{=}}\{\,i\mid i\in W^{A}_{i}\,\}\) be the _self-reproducibility problem_. **Theorem 5.8**.: \(K^{A}\) _is c.e., but not computable._ Proof.: Let \(h\in\mathcal{R}^{2}_{A}\) again enumerate the extended graph of the universal function of \(\theta\). Then, \[i\in K^{A}\Leftrightarrow i\in\mathrm{range}(\theta_{i})\Leftrightarrow( \exists a)(\exists t)h(t)=\langle\langle i,a\rangle,i+1\rangle.\] With the projection theorem one obtains that \(K^{A}\) is c.e. Its non-computability follows as usual. **Corollary 5.9**.: _The set \(\{\,\langle i,x\rangle\mid x\in W^{A}_{i}\,\}\) is c.e., but not computable._ Proof.: As we have seen in Corollary 5.5, this set is c.e. If it would be computable, also \(\{\,\langle i,i\rangle\mid i\in W^{A}_{i}\,\}\) and hence \(K^{A}\) would be computable, which is not the case. As consequence of Theorem 5.8 we next obtain that in terms of the \(W^{A}\) indices given a computable set, one cannot uniformly pass to its complement. 
**Theorem 5.10**.: _There is no function \(\mathrm{comp}\in\mathcal{R}^{(1)}\) so that for all \(i\in\omega\), if \(W^{A}_{i}\) is computable then \(W^{A}_{\mathrm{comp}(i)}\) is its complement._ Proof.: Let \(h\in\mathcal{R}^{(1)}\) enumerate the set \(K^{A}\) and define \[Q(i,t,z)\Leftrightarrow(\exists a\leq t)\,h(a)=i,\] \[f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}1+\pi_{1}^{(2) }((z)_{\mathrm{th}(z)}).\] Then the function \(k\in\mathcal{R}^{(2)}\) constructed as in Lemma 5.6 is such that for \(i\in K^{A}\), \(\lambda t\). \(k(i,t)\) enumerates the extended graph of the identity function on \(\omega\). For all other \(i\), the extended graph of the nowhere defined function is enumerated. Let \(v\in\mathcal{R}^{(1)}\) be as in Condition (QGN -II) so that \(\mathrm{range}(\lambda t\). \(k(i,t))=\mathrm{graph}_{\mathrm{e}}(\theta_{v(i)})\). Then, \[W^{A}_{v(i)}=\begin{cases}\omega&\text{if }i\in K^{A},\\ \emptyset&\text{otherwise}.\end{cases}\] Now, assume that there is such a function \(\mathrm{comp}\in\mathcal{R}^{(1)}\). Then, \[i\in\omega\backslash K^{A}\Leftrightarrow W^{A}_{\mathrm{comp}(v(i))}\neq \emptyset\Leftrightarrow(\exists x)x\in W^{A}_{\mathrm{comp}(v(i))}.\] With Corollary 5.9 and the projection theorem it follows that the complement of \(K^{A}\) is c.e. Since \(K^{A}\) is c.e. as well, \(K^{A}\) is even computable, which is not the case. Next, we will derive the single-valuedness theorem (cf. [36]). A set \(C\subset\omega\) is called _single-valued_ if for every \(x\in\omega\) there is at most one \(y\in\omega\) so that \(\langle x,y\rangle\in C\). Each single-valued set is thus the graph of a partial function. The single-valuedness theorem states the existence of an enumeration of all single-valued c.e. sets, which can also be considered as an enumeration of all partial functions with c.e. graph. We therefore derive a further version in which an enumeration of those single-valued c.e. sets is constructed that are graphs of the functions in \(\mathcal{S}^{(1)}_{A}\). For \(C\subset\omega\) let, \[\mathrm{dom}(C)=\{\,x\mid(\exists y)\,\langle x,y\rangle\in C\,\}.\] **Theorem 5.11** (Single-valuedness Theorem I).: _There is a function \(\mathrm{sv}\in\mathcal{R}^{(1)}\) such that for \(i\in\omega\),_ 1. \(W^{A}_{\mathrm{sv}(i)}\) _is single-valued._ 2. \(W^{A}_{\mathrm{sv}(i)}\subseteq W^{A}_{i}\)_._ 3. \(\mathrm{dom}(W^{A}_{\mathrm{sv}(i)})=\mathrm{dom}(W^{A}_{i})\)_._ 4. _If_ \(W^{A}_{i}\) _is single-valued then_ \(W^{A}_{\text{sv}(i)}=W^{A}_{i}\)_._ Proof.: Let again \(h\in\mathcal{R}^{(1)}\) enumerate the extended graph of the universal function of \(\theta\), and define \[\widehat{Q}(i,a,z)\Leftrightarrow\pi^{(2)}_{1}(\pi^{(2)}_{1}(h(a )))=i\wedge\pi^{(2)}_{2}(h(a))>0\wedge(\forall 1\leqslant c\leqslant \operatorname{lth}(z))\left[\pi^{(2)}_{2}((z)_{c})>0\Rightarrow\right.\] \[\left.\pi^{(2)}_{1}(\pi^{(2)}_{2}(h(a))\overset{\cdot}{\dash With the help of the first single-valuedness theorem we are now able to derive the reduction principle. **Theorem 5.13** (Reduction Principle).: _Let \(B,C\subseteq\omega\). Then there disjoint c.e. subsets \(B^{\prime},C^{\prime}\) of \(B\) and \(C\), respectively, so that \(B^{\prime}\cup C^{\prime}=B\cup C\)._ Proof.: Let \(X=\langle B,\{0\}\rangle\cup\langle C,\{1\}\rangle\). Then \(X\) is c.e., say \(X=W^{A}_{i}\). 
Let \(X^{\prime}\stackrel{{\mathrm{Def}}}{{=}}W^{A}_{\mathrm{sv}(i)}\) and set \(B^{\prime}\stackrel{{\mathrm{Def}}}{{=}}\{\,a\mid\langle a,0 \rangle\in X^{\prime}\,\}\) as well as \(C^{\prime}\stackrel{{\mathrm{Def}}}{{=}}\{\,b\mid\langle b,1 \rangle\in X^{\prime}\,\}\). Then \(B^{\prime}\) and \(C\) are as wanted. In what follows. let \(\leqslant_{m}\), \(\leqslant_{1}\), \(\equiv_{m}\), \(\equiv_{1}\) and \(\equiv\), respectivly, denote \(m\)-reducibility, 1-reducibility, \(m\)-equivalence, 1-equivalence and computable isomorphism of sets and numberings, as usual. Since only total computable functions are involved in the corresponding definitions, these carry over unchanged to the theory under development here. The same holds for their well known properties as well as the definition of \(m\)- and 1-completeness. We don't want to go into detail about this. Let \[K^{A}_{0}\stackrel{{\mathrm{Def}}}{{=}}\{\,\langle i,x\rangle \mid x\in W^{A}_{i}\,\},\] \[K^{A}_{1}\stackrel{{\mathrm{Def}}}{{=}}\{\,\langle i,x\rangle\mid x\in\mathrm{dom}(\theta_{i})\,\},\] \[K^{A}_{2}\stackrel{{\mathrm{Def}}}{{=}}\{\,i\mid i \in dom(\theta_{i})\,\}.\] **Theorem 5.14**.: \(K^{A}\)_, \(K^{A}_{0}\), \(K^{A}_{1}\) and \(K^{A}_{2}\) are 1-complete._ Proof.: The proof of completeness proceeds as usual. By Corollary 5.9, \(K^{A}_{0}\) is c.e. Let \(B\) be a c.e. set, say \(B=W^{A}_{j}\). Then \(\lambda x.\)\(\langle j,x\rangle\) 1-reduces \(B\) to \(K^{A}_{0}\), Since \(K^{A}\) is c.e., it suffices to show that \(K^{A}_{0}\leqslant_{1}K^{A}\). Let \(h\in\mathcal{R}^{(1)}\) enumerate the set \(K^{A}_{0}\) and \[Q(i,t,z)\Leftrightarrow(\exists a\leqslant t)\,h(a)=1,\] \[f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}\pi^{(2)}_{1}( z)+1.\] Then it follows for the function \(k\in\mathcal{R}^{(2)}\) constructed according to Lemma 4.2 that \(\lambda t.\)\(k(i,t)\) enumerates the extended graph of the identity on \(\omega\), in case that \(i\in K^{A}_{0}\). Otherwise, it enumerates the extended graph of the nowhere defined function. By applying Condition (QGN II) we now obtain a function \(v\in\mathcal{R}^{(1)}\) so that \(\mathrm{range}(\lambda t.\)\(k(i,t))=\mathrm{graph}_{\mathrm{e}}(\theta_{v(i)})\). As we have already seen, we can assume \(v\) to be one-to-one. Observe that \[\langle j,x\rangle\in K^{A}_{0}\Leftrightarrow(\forall y)\,\theta_{v(\langle j,x\rangle)}(y)=y\Leftrightarrow v(\langle j,x\rangle)\in W^{A}_{v(j,x)} \Leftrightarrow v(\langle j,x\rangle)\in K^{A}.\] It follows that \(K^{A}_{0}\leqslant_{1}K^{A}\). Because of Condition (QGN I) there is an enumeration \(\hat{h}\in\mathcal{R}^{(1)}\) of the extended graph of the universal function of numbering \(\theta\). Then \[\langle i,x\rangle\in K^{A}_{i}\Leftrightarrow(\exists a)\,\pi^{(2)}_{1}(\pi ^{(2)}_{1}(\hat{h}(a)))=i\land\pi^{(2)}_{2}(\pi^{(2)}_{1}(\hat{h}(a)))=x\land \pi^{(2)}_{2}(\hat{a})>0,\] from which it follows that \(K^{A}_{1}\) is c.e. We show that \(K^{A}_{0}\leqslant_{1}K^{A}_{1}\). Let to this end the relation \(Q\) be as defined above and set \(f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}1\). Moreover, construct the function \(k\in\mathcal{R}^{(2)}\) according Lemma 4.2 on this basis. Then \(\lambda t.\)\(k(i,t)\) enumerates the extended graph of the function \(\lambda x.\)\(0\), in case that \(i\in K^{A}_{0}\). Otherwise, it enumerates the extended graph of the nowhere defined function. Let \(v\in\mathcal{R}^{(1)}\) as in Condition (QGN II). Then \(\mathrm{range}(\lambda t.\)\(k(i,t))=\mathrm{graph}_{\mathrm{e}}(\theta_{v(i)})\). 
Moreover, we have that \[i\in K^{A}_{0}\Leftrightarrow(\forall y)\,\theta_{v(i)}=0\Leftrightarrow v(i) \in\mathrm{dom}(\theta_{v(i)})\Leftrightarrow\langle v(i),v(i)\rangle\in K^{A} _{1}.\] Hence, \(K^{A}_{0}\leqslant_{1}K^{A}_{1}\). Since \(K^{A}_{1}\) is c.e., the same holds for \(K^{A}_{2}\). In addition, we have \[i\in K^{A}\Leftrightarrow\langle i,i\rangle\in K^{A}_{0}\Leftrightarrow\langle v (i),v(i)\rangle\in K^{A}_{1}\Leftrightarrow v(\langle i,i\rangle)\in K^{A}_{2}.\] Thus, \(K^{A}\leqslant_{1}K^{A}_{2}\), which shows that also \(K^{A}_{2}\) is 1-complete. In the classical theory of c.e. sets, the sets mentioned in the theorem above \(K^{A}\), \(K^{A}_{0}\), \(K^{A}_{1}\) and \(K^{A}_{2}\) are also shown to be \(1\)-complete. Since it is the aim of the present paper to show that this theory can as well be developed on the basis of the functions in \(\mathcal{S}^{(1)}_{A}\) and quasi-Godel numberings, the \(1\)-completeness of \(K^{A}\) and \(K^{A}_{0}\) is what was expected. The \(1\)-completeness of \(K^{A}_{1}\) and \(K^{A}_{2}\), however, is less obvious because of the special form of the domains of the functions in \(\mathcal{S}^{(1)}_{A}\). **Corollary 5.15**.: \(K^{A}=_{1}K^{A}_{0}=_{1}K^{A}_{1}=_{1}K^{A}_{2}\)_._ The notion of productive set is now introduced as usual. \(C\subset\omega\) is _\(A\)-productive_, if there is some \(p\in\mathcal{S}^{(1)}_{A}\) such that for all \(i\in\omega\), if \(W^{A}_{i}\subseteq C\) then \(p(i)\downarrow\in C\backslash W^{A}_{i}\). Since \(\emptyset\subseteq C\) and \(\{\,i\mid\,W^{A}_{i}=\emptyset\,\}\) is infinite by the padding lemma, we have that \(\operatorname{dom}(p)\) is infinite as well. Therefore \(p\) cannot be an initial segment function. That is, \(p\in\mathcal{R}^{(1)}\). **Proposition 5.16**.: \(C\subset\omega\) _is \(A\)-productive if, and only if, there is a total function \(p\in\mathcal{R}^{(1)}\) so that \(p(i)\in C\backslash W^{A}_{i}\), for \(i\in\omega\) such that \(W^{A}_{i}\subseteq\omega\)._ As usual it can moreover be shown that \(p\) can even be chosen as one-to-one and onto, and that \(A\)-productiveness is inherited upwards under \(m\)-reducibility. **Theorem 5.17**.: _Every \(A\)-productive set has an infinite c.e. subset._ Proof.: Define \(k\in\mathcal{R}^{(2)}\) by \(\operatorname{k}(i,2t)\stackrel{{\mathrm{Def}}}{{=}}\langle t,0\rangle\) and \(k(i,2t+1)\stackrel{{\mathrm{Def}}}{{=}}\langle t,i+1\rangle\). Then \(k\) enumerates the extended graph of \(\lambda x.\)\(i\). Now, let \(v\in\mathcal{R}^{(1)}\) be as in Condition (QGN II) so that \(\theta_{v(i)}=\lambda x.\)\(i\) and hence \(W^{A}_{v(i)}=\{i\}\). Assume that \(C\subseteq\omega\) is \(A\)-productive with productive function \(p\in\mathcal{R}^{(1)}\) and let \(\operatorname{un}\in\mathcal{R}^{(2)}\) be as in Theorem 5.7 with \(W^{A}_{\operatorname{un}(i,j)}W^{A}_{i}\cup W^{A}_{j}\). Moreover, let \(i_{0}\) be a \(W^{A}\)-index of the empty set. Set \[g(0)=i_{0},\] \[g(a+1)=\operatorname{un}(v(p(g(a))),g(a)).\] Then \(g\in\mathcal{R}^{(1)}\). In addition, \[W^{A}_{g(a+1)}=\{p(g(a))\}\cup W^{A}_{g(a)}=\{p(g(a)),\ldots,p(g(0))\}\] and \(p(g(a))\in C\backslash W_{g(a)}\). Thus, by defining \(g^{\prime}=p\circ g\) we obtain a one-to-one total computable function and consequently, \(\operatorname{range}(g^{\prime})\) is an infinite c.e. subset of \(C\). **Theorem 5.18**.: 1. \(\omega\backslash K^{A}\) _is \(A\)-productive._ 2. 
\(C\)__\(A\)_-productive_ \(\Leftrightarrow(\omega\backslash K)\leq_{1}C\Leftrightarrow(\omega\backslash K^ {A})\leq_{m}C\)_._ Proof.: (1) follows by choosing \(\lambda x.i\) as productive function. (2) Since \(A\)-productiveness is inherited upwards under \(m\)-reducibility, we have that, if \(\omega\backslash K^{A})\leq_{m}C\) the \(C\) is \(A\)-productive. We will now show that for every \(A\)-productive set \(C\), \((\omega\backslash K)\leq_{1}C\). Let \(p\in\mathcal{R}^{(1)}\) be a one-to-one productive function of \(C\) and \(g\in\mathcal{R}(1)\) enumerate set \(K^{A}\). Moreover, let \(f(\langle i,j\rangle,t,z)\stackrel{{\mathrm{Def}}}{{=}}p(j)+1\) and \[Q(\langle i,j\rangle,t,z)\Leftrightarrow(\exists a\leq t)\,g(a)=i.\] For the function \(k\in\mathcal{R}^{(2)}\) constructed according to Lemma 4.2, it then holds that in case \(i\in K^{A}\), \(\lambda t.\)\(k(\langle i,j\rangle,t)\) enumerates the extended graph of the function \(\lambda x.\)\(p(j)\) and otherwise the extended graph of the nowhere defined function. If \(v\) is the function existing for this \(k\) according to (QGN II), then we have for \(i\in K\) that \(\theta_{v(\langle i,j\rangle)}(x)=p(j)\). Otherwise, \(\theta_{v(\langle i,j\rangle)}(x)\) is undefined. By the recursion theorem there is now a function \(g\in\mathcal{R}(1)\) with \(\theta_{g(i)}=\theta_{v(\langle i,g(i)\rangle)}\), and as we have seen there is even a one-to-one function \(g\) with this property. It follows that \[W^{A}_{g(i)}=\begin{cases}\{p(g(i))\}&\text{if }i\in K^{A},\\ \emptyset&\text{otherwise}.\end{cases}\] We therefore obtain \[i\in K^{A} \Rightarrow W^{A}_{g(i)}=\{p(g(i))\}\] \[\Rightarrow W^{A}_{g(i)}\nsubseteq C\quad\text{(as $p$ is a productive function of $C$)}\] \[\Rightarrow p(g(i))\notin C\] and \[i\notin K^{A}\Rightarrow W^{A}_{g(i)}=\emptyset\Rightarrow W^{A}_{g(i)} \subseteq C\Rightarrow p(g(i))\in C.\] Since \(p\circ g\) is one-to-one and total computable, this proves that \((\omega\backslash K)\leq_{1}C\). If \(C\subseteq\omega\) is a c.e. set the complement of which is \(A\)-productive, is called _\(A\)-creative_. From the above results it follows that \(K^{A}\) is \(A\)-creative. Note that Myhill's theorem on the coincidence of the notions of \(1\)-equivalence and computable isomorphism also holds in the approach to the theory of c.e. sets presented here: the proof in [36] uses only arguments which are admissible in our approach as well. Therefore, we obtain the following characterisation of the \(A\)-creative sets. **Theorem 5.19**.: _Let \(C\subseteq\omega\). Then the following four statements are equivalent:_ 1. \(C\) _is 1-complete._ 2. \(C\) _is_ \(m\)_-complete._ 3. \(C\) _is_ \(A\)_-creative._ 4. \(C\equiv K^{A}\)_._ Rogers [35] shows that a set is creative exactly if it is the self-reproducibility problem of a Godel numbering. With Theorem 5.19 we obtain a corresponding result for quasi-Godel numberings. **Theorem 5.20**.: _A set \(C\subseteq\omega\) is \(A\)-creative if, and only if, there is a quasi-Godel numbering \(\chi\) of \(\mathcal{S}^{(1)}_{A}\) such that \(C=\{\,i\mid i\in\operatorname{range}(\chi_{i})\,\}\)._ Proof.: We have already seen that for each quasi-Godel numbering \(\chi\) the set \(\{\,i\mid i\in\operatorname{range}(\chi_{i})\,\}\) is \(A\)-creative. Assume conversely that \(C\) is \(A\)-creative. Then \(C\equiv K^{A}\) by the preceding theorem. 
Therefore, there is a one-to-one and onto function \(f\in\mathcal{R}^{(1)}\) so that \(i\in C\) exactly if \(f(i)\in\operatorname{range}(\theta_{f(i)})\). Since \(f\) is total the modified substitution \(\operatorname{MSubset}^{(1,1)}_{A}(f;\theta_{i})\) coincides with the usual composition \(f\circ\theta_{i}\). By Theorem 4.7 there is thus a function \(g\in\mathcal{R}^{(1)}\) with \(\theta_{g(i)}=f\circ\theta_{i}\). Moreover, there is a function \(r\in\mathcal{R}^{(1)}\) so that \(\theta_{r(i)}=f^{-1}\circ\theta_{i}\). Let \(p=r\circ f\) and \(q=f^{-1}\circ g\). Then the numbering \(\chi\) we looking for is defined by \(\chi=\theta\circ p\). Obviously, we then have that \(\theta=\chi\circ q\). As is readily verified, \(\chi\) is a quasi-Godel numbering. In addition, we have that \[i\in\operatorname{range}(\chi_{i})\Leftrightarrow i\in \operatorname{range}(\theta_{p(i)}) \Leftrightarrow i\in\operatorname{range}(\theta_{r(f(i))})\] \[\Leftrightarrow i\in\operatorname{range}(f^{-1}\circ\theta_{f(i)}) \Leftrightarrow f(i)\in\operatorname{range}(\theta_{f(i)})\Leftrightarrow i \in C.\qed\] We hope that with the results in this section we have presented a convincing selection of theorems showing that the theory of c.e. sets can also be constructed on the basis of the theory of functions from \(\mathcal{S}^{(1)}_{A}\). In particular, all results apply here that are usually derived without referring to the domain characterisation of the c.e. sets such as Myhill's theorem mentioned above. In the other cases, the above proofs show how one can replace constructions in which the domain characterisation is commonly used by other constructions that are admissible in the theory developed here. We now want to show that the numbering \(W^{A}\) defined here via a quasi-Godel numbering, which we have shown has many properties of the commonly used numbering \(W\) defined at the beginning of Section 2, is not essentially different from \(W\). **Theorem 5.21**.: \(W^{A}\equiv W\)_._ Proof.: By the number-theoretic analogue of Myhill's theorem (cf. [7]) it suffices to prove that \(W^{A}\leq_{1}W\) and \(W\leq_{1}W^{A}\). We first show that \(W^{A}\leq_{1}W\). Since \(\theta\) satisfies Condition (QGN I), \(\lambda i,x\). \(\theta_{i}(x)\in\mathcal{P}^{(2)}\). Therefore, there is a function \(g\in\mathcal{R}^{(1)}\) with \(\theta_{i}=\varphi_{g(i)}\). Moreover, there is a function \(f\in\mathcal{R}^{(1)}\) so that \(\mathrm{dom}(\varphi_{f(i)})=\mathrm{range}(\varphi_{i})\). Since \(\varphi\) has a padding function, there also such functions that are one-to-one.. Thus, \(W^{A}_{i}=W_{f(g(i))}\), that is, \(W^{A}\leq_{1}W\). Next, we show that also \(W\leq_{1}W^{A}\). Let to this end \(p\in\mathcal{R}^{(1)}\) such that \(\mathrm{range}(\varphi_{p(i)}=\mathrm{dom}(\varphi_{i})\) and \(\varphi_{p(i)}\) is total, if \(\mathrm{dom}(\varphi_{i})\) is not empty, and \(\varphi_{i}\) is nowhere defined, otherwise. Moreover, let \(k\in\mathcal{R}^{(2)}\) enumerate the extended graph of \(\varphi_{(p(i)}\). Since \(\theta\) satisfies Condition (QGN II) and has a one-to-one padding function, there is a one-to-one function \(v\in\mathcal{R}^{(1)}\) with \(\theta_{v(i)}=\varphi_{p(i)}\). Therefore, \(\mathrm{range}(\theta_{v(i)})=\mathrm{range}(\varphi_{p(i)})=\mathrm{dom}( \varphi_{i})\), that is, \(W\leq_{1}W^{A}\). As follows from this result, the \(A\)-productive and \(A\)-creative sets, respectively, coincide with the productive and the creative ones. 
Note that the above theorem does not obviate the results on \(W^{A}\), such as Theorem 5.7. Theorem 5.21 is a metatheorem derived within classical computability theory, whereas the results in this section are results of the theory presented here, derived within this same theory. In Section 4 we have seen that the sets \(\{\,i\mid\theta_{i}\in\textsc{Anf}^{(1)}_{A}\,\}\) and \(\{\,i\mid\theta_{i}\in\mathcal{R}^{(1)}\,\}\) are not computable. At the end of this section we want to determine the exact position of these sets in the arithmetic hierarchy. **Theorem 5.22**.: 1. \(\{\,i\mid\theta_{i}\in\textsc{Anf}^{(1)}_{A}\,\}\) _is_ \(\Sigma_{2}\)_-complete._ 2. \(\{\,i\mid\theta_{i}\in\mathcal{R}^{(1)}\,\}\) _is_ \(\Pi_{2}\)_-complete._ Proof.: It suffices to prove (1). Let to this end \(h\in\mathcal{R}^{(1)}\) enumerate the extended graph of the universal function of numbering \(\theta\). Then we have that \[\theta_{i}\in\textsc{Anf}^{(1)}_{A}\Leftrightarrow(\exists t)(\exists a)\,[a\in A_{t}\wedge(\forall x<a)(\exists c)\,[\pi_{1}^{(2)}(h(c))=\langle i,x\rangle\wedge\pi_{2}^{(2)}(h(c))>0]\wedge(\forall y\geq a)(\forall b)\,[\pi_{1}^{(2)}(h(b))=\langle i,y\rangle\Rightarrow\pi_{2}^{(2)}(h(b))=0]].\] It follows that \(\{\,i\mid\theta_{i}\in\textsc{Anf}^{(1)}_{A}\,\}\in\Sigma_{2}\). It remains to show that for \(C\in\Sigma_{2}\), \(C\leq_{1}\{\,i\mid\theta_{i}\in\textsc{Anf}^{(1)}_{A}\,\}\). Let \(C\in\Sigma_{2}\). Then there is a ternary computable predicate \(Z\) such that \[i\in C\Leftrightarrow(\exists x)(\forall y)\,Z(i,x,y).\] Set \(f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}1\) and \[Q(i,t,z)\Leftrightarrow(\exists a\in A_{t})\,[\mathrm{arg}(z)<a\wedge(\forall x<a)(\exists y\leq t)\,\neg Z(i,x,y)],\] and let \(k\in\mathcal{R}^{(2)}\) be the function constructed from this according to Lemma 4.2. As can be seen from the construction, for the one-to-one function \(v\in\mathcal{R}^{(1)}\) that exists according to (QGN II) and Corollary 4.14 one then has that \[\theta_{v(i)}(x)=\begin{cases}0&\text{if there is some $a\in A$ so that $x<a$ and for all $x^{\prime}<a$}\\ &\text{there is some $y$ so that $\neg Z(i,x^{\prime},y)$,}\\ \text{undefined}&\text{otherwise.}\end{cases}\] If \(i\in C\) then \(\{\,x\mid(\forall y)\,Z(i,x,y)\,\}\) is not empty. Let \(\hat{x}\) be the smallest element of this set and \(\hat{a}=\max\{\,a\leq\hat{x}\mid a\in A\lor a=0\,\}\). Then \(\theta_{v(i)}(x)\) is undefined, for all \(x\geq\hat{a}\), and \(\theta_{v(i)}(x)=0\), for all \(x<\hat{a}\). Thus, \(\theta_{v(i)}\in\textsc{Anf}^{(1)}_{A}\). If, on the other hand, \(i\notin C\), then for every \(x\in\omega\) there is some \(y\in\omega\) with \(\neg Z(i,x,y)\). It follows in this case that \(\theta_{v(i)}(x)=0\), for all \(x\in\omega\). That is, \(\theta_{v(i)}\in\mathcal{R}^{(1)}\). So, we have \[i\in C\Leftrightarrow\theta_{v(i)}\in\textsc{Anf}^{(1)}_{A}.\] That is, \(C\leq_{1}\{\,i\mid\theta_{i}\in\textsc{Anf}^{(1)}_{A}\,\}\).

## 6 More results in the computability theory with quasi-Godel numberings

In this section we will continue the development of a computability theory for the classes \(\mathcal{S}_{A}^{(n)}\) that we started in Section 4. Let \(\kappa\colon\omega\to\omega^{n}\) again be an effective and one-to-one map that enumerates \(\omega^{n}\) initial segment-wise. Then \(\kappa(c)<a^{(n)}\) exactly if \(c<a^{n}\).
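For instance, for \(n=2\) one possible choice of such a map (given here only by way of illustration) lists the pairs square by square, \[\kappa(0)=(0,0),\ \kappa(1)=(0,1),\ \kappa(2)=(1,0),\ \kappa(3)=(1,1),\ \kappa(4)=(0,2),\ \kappa(5)=(1,2),\ \kappa(6)=(2,0),\ \kappa(7)=(2,1),\ \kappa(8)=(2,2),\ \ldots,\] so that for every \(a\) the first \(a^{2}\) values of \(\kappa\) are exactly the points of the square \([0,a)^{2}\).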
Moreover, let \(f_{A}\in\mathcal{R}^{(1)}\) enumerate \(A\), and define \[\widehat{\alpha}_{\langle a,b\rangle}(\vec{x})\stackrel{{\rm Def }}{{=}}\begin{cases}(a)_{\kappa^{-1}(\vec{x})}&\text{if }\kappa^{-1}(\vec{x})<\min\{\text{ lth}(a)+1,f_{A}(b)^{n}\},\\ 0&\text{if }\text{lth}(a)<\kappa^{-1}(\vec{x})<f_{A}(b)^{n},\\ \text{undefined}&\text{otherwise},\end{cases}\] and \[\alpha_{0}^{(n)}\stackrel{{\rm Def}}{{=}}\lambda\vec{x}. \text{ undefined},\] \[\alpha_{i+1}^{(n)}\stackrel{{\rm Def}}{{=}}\widehat {\alpha}_{\mu j.\ \widehat{\alpha}_{j}^{(n)}\notin\{\alpha_{0}^{(n)},\ldots, \alpha_{i}^{(n)}\}}.\] **Lemma 6.1**.: _Let \(\theta^{(n)}\) be a quasi-Godel numbering of \(\mathcal{S}_{A}^{(n)}\). Then the following five statements hold:_ 1. \(\alpha^{(n)}\) _is a one-to-one numbering of_ \(\textsc{Anf}_{A}^{(n)}\)_._ 2. \(\operatorname{graph_{e}}(\lambda i,\vec{x}.\ \alpha_{i}^{(n)}(\vec{x}))\) _is computable._ 3. _Let_ \(\lg(i)\) _be the edge length of_ \(\operatorname{dom}(\alpha_{i}^{(n)})\)_. Then_ \(\lg\in\mathcal{R}^{(1)}\)_._ 4. \(\alpha^{(n)}\leq_{1}\theta^{(n)}\)_._ 5. \(\{\langle i,j\rangle\mid\operatorname{graph}(\alpha_{i}^{(n)})\subseteq \operatorname{graph}(\theta_{j}^{(n)})\,\}\) _is c.e._ Proof.: (1) follows by the construction. We next show (2) and (3). As is easily seen, the sequence number \[[\![\widehat{\alpha}_{i}^{(n)}]\!]\stackrel{{\rm Def }}{{=}}[\![\langle\kappa(0),\widehat{\alpha}_{i}^{(n)}(\kappa(0))\rangle, \ldots,\langle\kappa(m),\widehat{\alpha}_{i}^{(n)}(\kappa(m))\rangle]\!]\] where \(m\stackrel{{\rm Def}}{{=}}(f_{A}(\pi_{2}^{(2)}(i)))^{n}-1\), is computable from \(i\). Thus, if \(g\in\mathcal{R}^{(1)}\) with \[g(i)=\mu j.\ [\![\widehat{\alpha}_{j}^{(n)}]\!]\notin\{[\![\text{empty sequence}]\!],\ldots,[\![\widehat{\alpha}_{g(\nu)}^{(n)}]\!]\mid 1 \leq\nu\leq i\,\},\] for \(i>0\), then \(\alpha_{i}^{(n)}=\widehat{\alpha}_{g(i)}^{(n)}\), for \(i>0\). As is also readily seen, \(\operatorname{graph_{e}}(\lambda i,\vec{x}.\ \widehat{\alpha}_{i}^{(n)}(\vec{x}))\) is computable. Thus the same holds for \(\operatorname{graph_{e}}(\lambda i,\vec{x}.\ \alpha_{i}^{(n)}(\vec{x}))\). Since \(\lg(0)=0\) and for \(i>0\), \(\lg(i)=f_{A}(\pi_{2}^{(2)}(g(i)))\) we further obtain that \(\lg\in\mathcal{R}^{(1)}\). (4) Let \(h\in\mathcal{R}^{(1)}\) enumerate \(\operatorname{graph_{e}}(\lambda i,\vec{x}.\ \alpha_{i}^{(n)}(\vec{x}))\). Moreover, define \[\widehat{Q}(i,c,z) \Leftrightarrow\pi_{2}^{(2)}(h(c))>0\ \wedge\ \pi_{1}^{(2)}(h(c))=\langle i \rangle*\arg(z),\] \[Q(i,t,z) \Leftrightarrow(3c\leq t)\,\widehat{Q}(i,c,z),\] \[f(i,t,z) \stackrel{{\rm Def}}{{=}}\pi_{2}^{(2)}(h(\mu c.\leq t.\,\widehat{Q}(i,c,z),\] and construct \(k\in\mathcal{R}^{(2)}\) as in Lemma 4.2. Then \(\lambda t,\ k(i,t)\) enumerates the extended graph of \(\alpha_{i}^{(n)}\). According to (QGN II) and Corollary 4.14 there is then a one-to-one function \(v\in\mathcal{R}^{(1)}\) such that \(\alpha_{i}^{(n)}=\theta_{v(i)}^{(n)}\), (5) Let, in addition, \(h^{\prime}\in\mathcal{R}^{(1)}\) be an enumeration of \(\operatorname{graph_{e}}(\lambda i,\vec{x}.\ \theta_{i}^{(n)}(\vec{x}))\). 
Then we have that \[\operatorname{graph}(\alpha_{i}^{(n)})\subseteq\operatorname{graph}(\theta_{j}^{(n)})\] \[\Leftrightarrow\left(\forall\vec{x}<\lg(i)^{(n)}\right)\alpha_{i}^{(n)}(\vec{x})=\theta_{j}^{(n)}(\vec{x})\] \[\Leftrightarrow\left(\forall\vec{x}<\lg(i)^{(n)}\right)\left(\exists t\right)\left(\exists t^{\prime}\right)\pi_{1}^{(2)}(h(t))=\langle i,\vec{x}\rangle\wedge\pi_{1}^{(2)}(h^{\prime}(t^{\prime}))=\langle j,\vec{x}\rangle\wedge\pi_{2}^{(2)}(h(t))>0\wedge\pi_{2}^{(2)}(h(t))=\pi_{2}^{(2)}(h^{\prime}(t^{\prime}))\] \[\Leftrightarrow\left(\exists t\right)\!(\exists t^{\prime})\!(\forall c<\lg(i)^{n})\,\pi_{1}^{(2)}(h((t)_{c}))=\langle i,\kappa(c)\rangle\wedge\pi_{1}^{(2)}(h^{\prime}((t^{\prime})_{c}))=\langle j,\kappa(c)\rangle\wedge\pi_{2}^{(2)}(h((t)_{c}))>0\wedge\pi_{2}^{(2)}(h((t)_{c}))=\pi_{2}^{(2)}(h^{\prime}((t^{\prime})_{c})).\] The statement is now a consequence of the projection lemma. Note that in the proof of Property (5) only Condition (QGN I) was used. For \(X\subseteq\mathcal{S}_{A}^{(n)}\), let \(I_{\theta^{(n)}}(X)\) again be the index set of \(X\) with respect to \(\theta^{(n)}\). **Theorem 6.2** (Rice, Shapiro).: _For \(X\subseteq\mathcal{S}_{A}^{(n)}\), \(I_{\theta^{(n)}}(X)\) is c.e. if, and only if, there is a c.e. set \(C\subseteq\omega\) so that_ \[X=\{\,r\in\mathcal{S}_{A}^{(n)}\mid(\exists i\in C)\,\mathrm{graph}(\alpha_{i}^{(n)})\subseteq\mathrm{graph}(r)\,\}.\] Proof.: Let us assume first that \(X\) has the special form. Then \[j\in I_{\theta^{(n)}}(X)\Leftrightarrow(\exists i\in C)\,\mathrm{graph}(\alpha_{i}^{(n)})\subseteq\mathrm{graph}(\theta_{j}^{(n)}).\] With Lemma 6.1(5) and the projection lemma it follows that \(I_{\theta^{(n)}}(X)\) is c.e. Conversely, suppose that \(I_{\theta^{(n)}}(X)\) is c.e. The proof now proceeds in three steps. _Claim 1._ \((\forall r\in X)(\exists s\in X\cap\textsc{Anf}_{A}^{(n)})\,\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\). Without restriction assume that \(r\) is total. Moreover, suppose that there is no \(s\in X\cap\textsc{Anf}_{A}^{(n)}\) with \(\mathrm{graph}(s)\subseteq\mathrm{graph}(r)\). To derive a contradiction, it suffices to show that \(\omega\backslash K^{A}\leqslant_{m}I_{\theta^{(n)}}(X)\) in this case. By our assumption it would follow that \(\omega\backslash K^{A}\) is c.e., which is not the case as we have already seen. Let \(h\in\mathcal{R}^{(1)}\) enumerate \(K^{A}\). We will show that there is a function \(q\in\mathcal{R}^{(1)}\) with \[\theta_{q(i)}^{(n)}(\vec{x})=\begin{cases}r(\vec{x})&\text{if there are $a\in A$ and $c\in\omega$ so that $\vec{x}<a^{(n)}$, $a\leqslant c$ and $h(c^{\prime})\neq i$,}\\ &\text{for all $c^{\prime}\leqslant c$,}\\ \text{undefined}&\text{otherwise.}\end{cases}\] Then we have that \(i\in\omega\backslash K^{A}\), exactly if \(q(i)\in I_{\theta^{(n)}}(X)\). To see this, note that if \(i\in\omega\backslash K^{A}\) then \(h(c)\neq i\), for all \(c\in\omega\). Hence, \(\theta_{q(i)}^{(n)}=r\) in this case and thus \(q(i)\in I_{\theta^{(n)}}(X)\), as \(r\in X\). If \(i\in K^{A}\), there is some \(c\in\omega\) with \(h(c)=i\). Let \(\hat{c}\) be the smallest such \(c\) and \(\hat{a}\stackrel{{\mathrm{Def}}}{{=}}\max\{\,a\leqslant\hat{c}\mid a\in A\lor a=0\,\}\). Then we have for all \(\vec{x}<\hat{a}^{(n)}\) that \(\theta_{q(i)}^{(n)}(\vec{x})=r(\vec{x})\). For all other \(\vec{x}\in\omega^{n}\), \(\theta_{q(i)}^{(n)}(\vec{x})\) is undefined.
Consequently, \(\theta_{q(i)}^{(n)}\in\textsc{Anf}_{A}^{(n)}\) and \(\mathrm{graph}(\theta_{q(i)}^{(n)})\subseteq\mathrm{graph}(r)\). By our assumption this means that \(\theta_{q(i)}^{(n)}\notin X\), that is, \(q(i)\notin I_{\theta^{(n)}}(X)\). For the construction of \(q\) we again apply Lemma 4.2 and Condition (QGN II), as we did already many times. We only state the predicate \(Q\) and the function \(f\) needed in the construction: \[Q(i,\langle b,c\rangle,z)\Leftrightarrow f_{A}(b)\leqslant c\wedge(\forall c^{\prime}\leqslant c)\,h(c^{\prime})\neq i\wedge\bigwedge_{\nu=1}^{n}\pi_{\nu}^{(n)}(\arg(z))<f_{A}(b),\] \[f(i,\langle b,c\rangle,z)\stackrel{{\mathrm{Def}}}{{=}}r(\arg(z))+1.\] _Claim 2._ \((\forall s,r\in\mathcal{S}_{A}^{(n)})\,[[s\in X\wedge\operatorname{graph}(s)\subseteq\operatorname{graph}(r)]\Rightarrow r\in X]\). Again we assume to the contrary that there are \(s,r\in\mathcal{S}^{(n)}_{A}\) so that \(s\in X\), \(\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\), but \(r\notin X\). Let \(j\) be a \(\theta^{(n)}\)-index of \(s\). Then we construct a function \(p\in\mathcal{R}^{(1)}\) so that \[\theta^{(n)}_{p(i)}(\vec{x})=\begin{cases}s(\vec{x})&\text{if in a simultaneous search in the extended graph of $\theta^{(n)}$ and $K^{A}$,}\\ &\text{respectively, $\langle\!\langle j,\vec{x}\rangle\!,s(\vec{x})+1\rangle$ will not be found later than $i$,}\\ r(\vec{x})&\text{if $i$ will be found earlier,}\\ \text{undefined}&\text{otherwise.}\end{cases}\] Note that since \(\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\), in case that \(i\in K^{A}\) we always have that \(\theta^{(n)}_{p(i)}=r\), independently of when \(\langle\!\langle j,\vec{x}\rangle\!,s(\vec{x})+1\rangle\) will be found. By our assumption \(r\notin X\). Thus it follows for \(i\in K^{A}\) that \(p(i)\notin I_{\theta^{(n)}}(X)\). On the other hand, if \(i\in\omega\backslash K^{A}\), then \(\theta^{(n)}_{p(i)}=s\). Because \(s\in X\), we have in this case that \(p(i)\in I_{\theta^{(n)}}(X)\). This shows that \(\omega\backslash K^{A}\leqslant_{m}I_{\theta^{(n)}}(X)\), which is impossible as we have seen. It remains to construct the function \(p\). Let to this end \(j^{\prime}\) be a \(\theta^{(n)}\)-index of \(r\) and \(h^{\prime}\in\mathcal{R}^{(1)}\) be an enumeration of the extended graph of the universal function of numbering \(\theta^{(n)}\). The existence of \(h^{\prime}\) follows from Condition (QGN I). Define \[\widehat{Q}_{1}(a,b,z)\Leftrightarrow\pi^{(2)}_{2}(h^{\prime}(b))>0\land\pi^{(2)}_{1}(h^{\prime}(b))=\langle a\rangle*\arg(z),\] \[\widehat{Q}_{2}(i,b,z)\Leftrightarrow[\widehat{Q}_{1}(j,b,z)\land(\forall c<b)\,h(c)\neq i]\lor[\widehat{Q}_{1}(j^{\prime},b,z)\land(\exists b^{\prime}\leqslant b)\,[h(b^{\prime})=i\land\neg(\exists c\leqslant b^{\prime})\,\widehat{Q}_{1}(j,c,z)]],\] \[Q(i,t,z)\Leftrightarrow(\exists b\leqslant t)\,\widehat{Q}_{2}(i,b,z),\] \[f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}\pi^{(2)}_{2}(h^{\prime}(\mu b\leqslant t.\,\widehat{Q}_{2}(i,b,z))),\] and construct function \(k\in\mathcal{R}^{(2)}\) as in Lemma 4.2.
Then \(\lambda t.\)\(k(i,t)\) enumerates the extended graph of the function \(d\) with \[d(\vec{x})=\begin{cases}s(\vec{x})&(\exists b)\,[\pi^{(2)}_{2}(h^{\prime}(b))>0\land\pi^{(2)}_{1}(h^{\prime}(b))=\langle j,\vec{x}\rangle\land(\forall c<b)\,h(c)\neq i],\\ r(\vec{x})&(\exists b)\,[h(b)=i\land\neg(\exists c\leqslant b)\,[\pi^{(2)}_{2}(h^{\prime}(c))>0\land\pi^{(2)}_{1}(h^{\prime}(c))=\langle j,\vec{x}\rangle]],\\ \text{undefined}&\text{otherwise.}\end{cases}\] As follows from the definition, \(d\in\mathcal{S}^{(n)}_{A}\). By Condition (QGN II) there is then a \(p\in\mathcal{R}^{(1)}\) with \(\theta^{(n)}_{p(i)}=d\). Consequently, \(p\) has the properties mentioned above. _Claim 3._ \(X=\{\,r\in\mathcal{S}^{(n)}_{A}\mid(\exists i\in C)\operatorname{graph}(\alpha^{(n)}_{i})\subseteq\operatorname{graph}(r)\,\}.\) Let \(g\in\mathcal{R}^{(1)}\) with \(\alpha^{(n)}=\theta^{(n)}\circ g\) and \(C=g^{-1}[I_{\theta^{(n)}}(X)]\). Then \(C\) is c.e. By Claim 1 we have for \(r\in\mathcal{S}^{(n)}_{A}\) that \[r\in X\Rightarrow(\exists s\in X\cap\textsc{Anf}^{(n)}_{A})\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\Rightarrow(\exists i)\,[\alpha^{(n)}_{i}\in X\land\operatorname{graph}(\alpha^{(n)}_{i})\subseteq\operatorname{graph}(r)]\Rightarrow(\exists i\in C)\operatorname{graph}(\alpha^{(n)}_{i})\subseteq\operatorname{graph}(r).\] Conversely, we have with Claim 2 that \[(\exists i\in C)\operatorname{graph}(\alpha^{(n)}_{i})\subseteq\operatorname{graph}(r)\Rightarrow(\exists s\in X)\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\Rightarrow r\in X.\qed\] Next, we will study different kinds of computable operators on \(\widehat{\mathcal{S}}^{(n)}_{A}\) and their relationship. **Definition 6.3** (cf. [36]).: _An operator \(\widehat{G}\colon\widehat{\mathcal{S}}_{A}^{(n)}\to\widehat{\mathcal{S}}_{A}^{(m)}\) is computable, if there is a c.e. set \(C\subseteq\omega\) so that for \(r\in\widehat{\mathcal{S}}_{A}^{(n)}\),_ \[\operatorname{graph}(\widehat{G}(r))=\{\,\langle\langle\vec{y}\rangle,z\rangle\mid(\exists a)\,[\langle\langle\langle\vec{y}\rangle,z\rangle,a\rangle\in C\,\wedge\,(\forall c\leq\operatorname{lth}(a))\,(a)_{c}\in\operatorname{graph}(r)]\,\}.\] _We say that \(C\) defines the operator \(\widehat{G}\)._ As follows from the definition, \(\operatorname{graph}(\widehat{G}(r))\) is c.e., if \(\operatorname{graph}(r)\) is c.e. Thus, we have for computable operators \(\widehat{G}\colon\widehat{\mathcal{S}}_{A}^{(n)}\to\widehat{\mathcal{S}}_{A}^{(m)}\) that \(\widehat{G}[\mathcal{S}_{A}^{(n)}]\subseteq\mathcal{S}_{A}^{(m)}\). If one assigns to the zero-ary tuple a code number \(\langle\ \rangle\), the above definition and the subsequent results for \(m=0\) also include the case of the recursive functionals. However, it should be noted here that \(\widehat{\mathcal{S}}_{A}^{(0)}\) and \(\mathcal{S}_{A}^{(0)}\) contain the zero-ary constant functions, each of which we identify with the respective constant, as well as the zero-ary nowhere defined function. **Theorem 6.4**.: _Let \(\widehat{G}\colon\widehat{\mathcal{S}}_{A}^{(n)}\to\widehat{\mathcal{S}}_{A}^{(m)}\) and \(G\) be its restriction to \(\mathcal{S}_{A}^{(n)}\). Then \(\widehat{G}\) is computable if, and only if, the following three conditions are satisfied:_ 1. _For all_ \(s,s^{\prime}\in\textsc{Anf}_{A}^{(n)}\)_,_ \[\operatorname{graph}(s)\subseteq\operatorname{graph}(s^{\prime})\Rightarrow\operatorname{graph}(G(s))\subseteq\operatorname{graph}(G(s^{\prime})).\] 2.
_For all_ \(r\in\widehat{\mathcal{S}}_{A}^{(n)}\)_,_ \[\operatorname{graph}(\widehat{G}(r))=\bigcup\{\operatorname{graph}(G(s))\mid s \in\textsc{Anf}_{A}^{(n)}\wedge\operatorname{graph}(s)\subseteq \operatorname{graph}(r)\,\}.\] 3. \(\{\,\langle i,j\rangle\mid\operatorname{graph}(\alpha_{j}^{(m)})\subseteq \operatorname{graph}(G(\alpha_{i}^{(n)}))\,\}\) _is c.e._ Proof.: Assume that \(\widehat{G}\) is computable. Then it follows from the definition that for \(s\in\textsc{Anf}_{A}^{(n)}\) and \(r\in\widehat{G}_{A}^{(n)}\) with \(\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\), \(\operatorname{graph}(G(s))\subseteq\operatorname{graph}(\widehat{G}(r))\). Thus, (1) holds. Moreover, \[\bigcup\{\operatorname{graph}(G(s))\mid s\in\textsc{Anf}_{A}^{(n)}\wedge \operatorname{graph}(s)\subseteq\operatorname{graph}(r)\,\}\subseteq \operatorname{graph}(\widehat{G}(r)).\] For the converse inclusion note that \(\operatorname{graph}(\widehat{G}(r))\) is the union of all \(\operatorname{graph}(q)\), where \(q\in\textsc{Anf}_{A}^{(m)}\) with \(\operatorname{graph}(q)\subseteq\operatorname{graph}(\widehat{G}(r))\). Therefore, let \(q\) be such an initial segment function. In the enumeration of \(\operatorname{graph}(\widehat{G}(r))\) each element of \(\operatorname{graph}(q)\) corresponds to a finite number of questions to \(\operatorname{graph}(r)\). Since \(\operatorname{graph}(q)\) is finite, there is thus some \(s\in\textsc{Anf}_{A}^{(n)}\) with \(\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\) and \(\operatorname{graph}(q)\subseteq\operatorname{graph}(G(s))\), which shows that also Condition 2 holds. For Condition (3) observe that \[\operatorname{graph}(\alpha_{j}^{(m)})\subseteq\operatorname{graph }(G(\alpha_{i}^{(n)}))\Leftrightarrow\] \[(\forall\vec{y}<(\lg(j))^{m})(\exists z)(\exists a)\,\langle \langle j,\vec{y}\rangle,z+1\rangle\in\operatorname{graph}_{\text{e}}( \lambda b,\vec{x}.\ \alpha_{b}^{(m)}(\vec{x}))\wedge\langle\langle\vec{y},z\rangle,a \rangle\in C\,\wedge\] \[(\forall c<\operatorname{lth}(a))\,\langle\langle i\rangle*\pi_{ 1}^{(2)}((a)_{c}),1+\pi_{2}^{(2)}((a)_{c})\rangle\in\operatorname{graph}_{ \text{e}}(\lambda b,\vec{x}.\ \alpha_{b}^{(n)}(\vec{x})).\] Recall that the unbounded existential quantifiers in the right-hand side can be brought in front of the expression by using an effective sequence encoding. Moreover, the extended graphs in the expression are c.e. With the projection lemma we hence obtain (3). Now, conversely, suppose that \(\widehat{G}\colon\widehat{\mathcal{S}}_{A}^{(n)}\to\widehat{\mathcal{S}}_{A}^{ (m)}\) satisfies Conditions (1)-(3). With (2) we have for \(r\in\widehat{\mathcal{S}}_{A}^{(n)}\) that \[\operatorname{graph}(\widehat{G}(r))\] \[\qquad=\bigcup\{\operatorname{graph}(G(s))\mid s\in\textsc{Anf} _{A}^{(n)}\wedge\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\,\}\] \[\qquad=\bigcup\{\operatorname{graph}(q)\mid q\in\textsc{Anf}_{A}^{ (m)}\wedge(\exists s\in\textsc{Anf}_{A}^{(n)})\operatorname{graph}(s)\subseteq \operatorname{graph}(r)\,\wedge\] \[\begin{split}&\operatorname{graph}(q)\subseteq\operatorname{graph}(G(s)) \,\}\\ =\bigcup\{\operatorname{graph}(\alpha_{j}^{(m)})\mid(\exists i) \operatorname{graph}(\alpha_{i}^{(n)})\subseteq\operatorname{graph}(r)\ \wedge\\ \operatorname{graph}(\alpha_{j}^{(m)})\subseteq\operatorname{ graph}(G(\alpha_{i}^{(n)}))\,\}\end{split}\] Let the graph encdoding \(\llbracket\alpha_{b}^{(n)}\rrbracket\) be as in the proof of Lemma 6.1. 
Then it follows \[\begin{split}&\langle\langle\vec{y}\rangle,z\rangle\in \operatorname{graph}(\widehat{G}(r))\\ &\Leftrightarrow(\exists j)(\exists i)\,\langle\langle\vec{y} \rangle,z\rangle\in\operatorname{graph}(\alpha_{j}^{(m)})\ \wedge\\ &\operatorname{graph}(\alpha_{j}^{(m)})\subseteq\operatorname{ graph}(G(\alpha_{i}^{(n)}))\wedge\operatorname{graph}(\alpha_{i}^{(n)})\subseteq \operatorname{graph}(r)\\ &\Leftrightarrow(\exists a)\,[(\exists j)(\exists i)\,\langle \langle j,\vec{y}\rangle,z+1\rangle\in\operatorname{graph}_{\mathrm{e}}( \lambda b,\vec{x}.\ \alpha_{b}^{(m)}(\vec{x}))\wedge a=\llbracket\alpha_{i}^{(n)} \rrbracket\wedge\\ &\operatorname{graph}(\alpha_{j}^{(m)})\subseteq\operatorname{ graph}(G(\alpha_{i}^{(n)}))\rrbracket\wedge(\forall c<\operatorname{ lth}(a))\,(a)_{c}\in\operatorname{graph}(r).\end{split}\] Set \[\begin{split} C\stackrel{{\mathrm{Def}}}{{=}}\{\langle \langle\langle\vec{y}\rangle,z\rangle,a\rangle\mid(\exists j)(\exists i)\, \langle\langle j,\vec{y}\rangle,z+1\rangle\in\operatorname{graph}_{\mathrm{e} }(\lambda b,\vec{x}.\ \alpha_{b}^{(m)}(\vec{x}))\ \wedge\\ &\operatorname{graph}(\alpha_{j}^{(m)})\subseteq\operatorname{ graph}(G(\alpha_{i}^{(n)}))\wedge a=\llbracket\alpha_{i}^{(n)}\rrbracket\,\}.\end{split}\] Then \(C\) defines \(\widehat{G}\). Since \(\llbracket\alpha_{i}^{(n)}\rrbracket\) can be computed form \(i\) and because of Lemma 6.1(2) and Condition (3), we further have that \(C\) is c.e. Thus, \(\widehat{G}\) is computable. The second type of operator we are going to consider is at the basis of the Russian school of constructive mathematics. Contrary to computable operators these operators are only defined for functions in \(\mathcal{S}_{A}^{(n)}\). **Definition 6.5**.: _An operator \(G\colon\mathcal{S}_{A}^{(n)}\to\mathcal{S}_{A}^{(m)}\) is called Markov-computable if there is a function \(g\in\mathcal{R}^{(1)}\) so that_ \[G(\theta_{i}^{(n)})=\theta_{g(i)}^{(m)}.\] _We say that \(g\) realises the operator \(G\)._ Our aim is to derive an analogue of the Myhill/Shepherdson theorem [31] for the function classes considered here. We break the proof down in several steps. **Lemma 6.6**.: _The restriction of a computable operator \(\widehat{G}\colon\widehat{\mathcal{S}}_{A}^{(n)}\to\widehat{\mathcal{S}}_{A} ^{(m)}\) to \(\mathcal{S}_{A}^{(n)}\) is Markov-computable._ Proof.: We use Lemma 4.2 to construct a function \(k^{\prime}\in\mathcal{R}^{(2)}\) such that \(\lambda t\). \(k^{\prime}(i,t)\) enumerates the extended graph of the function \(G(\theta_{i}^{(n)})\), where \(G\) is the restriction of \(\widehat{G}\) to \(\mathcal{S}_{A}^{(n)}\). The existence of a function realising \(G\) is then a consequence of Condition (QGN II). Let \(k\in\mathcal{R}^{(2)}\) be as inTheorem 4.4. Then \(\lambda t\). \(k(i,t)\) enumerates the graph of \(\theta_{i}^{(n)}\). Moreover, let the c.e. set \(C\) define \(\widehat{G}\). 
Set \[\begin{split}&\widehat{Q}(i,\langle\langle\langle\vec{y}\rangle,x\rangle,a\rangle,t,z)\Leftrightarrow\\ &\qquad\vec{y}=\arg(z)\wedge(\forall c\leqslant\operatorname{lth}(a))\,(\exists t^{\prime}\leqslant t)\,k(i,t^{\prime})=\langle\pi_{1}^{(2)}((a)_{c}),1+\pi_{2}^{(2)}((a)_{c})\rangle,\\ &Q(i,t,z)\Leftrightarrow(\exists\langle\langle\langle\vec{y}\rangle,x\rangle,a\rangle\in C_{t})\,\widehat{Q}(i,\langle\langle\langle\vec{y}\rangle,x\rangle,a\rangle,t,z),\\ &f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}\pi_{2}^{(2)}(\pi_{1}^{(2)}(\mu\langle\langle\langle\vec{y}\rangle,x\rangle,a\rangle\in C_{t}.\ \widehat{Q}(i,\langle\langle\langle\vec{y}\rangle,x\rangle,a\rangle,t,z))).\end{split}\] By comparing this definition with the condition that \(C\) defines \(\widehat{G}\) one sees that the function \(k^{\prime}\in\mathcal{R}^{(2)}\) constructed as in Lemma 4.2 has the required property. **Lemma 6.7**.: _Every Markov-computable operator \(G\colon\mathcal{S}^{(n)}_{A}\to\mathcal{S}^{(m)}_{A}\) is monotone. That is, for \(r,s\in\mathcal{S}^{(n)}_{A}\),_ \[\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\Rightarrow\operatorname{graph}(G(s))\subseteq\operatorname{graph}(G(r)).\] Proof.: Assume to the contrary that there are \(r,s\in\mathcal{S}^{(n)}_{A}\) so that \(\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\), but \(\operatorname{graph}(G(s))\nsubseteq\operatorname{graph}(G(r))\). Note that for every \(q\in\mathcal{S}^{(n)}_{A}\), \[\operatorname{graph}(q)=\bigcup\{\,\operatorname{graph}(p)\mid p\in\textsc{Anf}^{(n)}_{A}\wedge\operatorname{graph}(p)\subseteq\operatorname{graph}(q)\,\}.\] Therefore, there is some \(p\in\textsc{Anf}^{(n)}_{A}\) with \[\operatorname{graph}(p)\subseteq\operatorname{graph}(G(s))\quad\text{and}\quad\operatorname{graph}(p)\nsubseteq\operatorname{graph}(G(r)).\] Let us suppose for the moment that a function \(\hat{k}\in\mathcal{R}^{(2)}\) can be constructed so that \(\lambda t\). \(\hat{k}(i,t)\) enumerates the extended graph of \(s\), if \(i\in\omega\backslash K^{A}\), and the extended graph of \(r\), otherwise. Then it follows for the function \(v\in\mathcal{R}^{(1)}\) existing for this \(\hat{k}\) by (QGN II) that \[\theta^{(n)}_{v(i)}=\begin{cases}s&\text{if }i\in\omega\backslash K^{A},\\ r&\text{otherwise}.\end{cases}\] If \(G\) is realised by \(g\in\mathcal{R}^{(1)}\), we obtain \[\operatorname{graph}(p)\subseteq\operatorname{graph}(\theta^{(n)}_{g(v(i))})\Leftrightarrow\theta^{(n)}_{v(i)}=s\Leftrightarrow i\in\omega\backslash K^{A}.\] By Lemma 6.1(5) the set \(\{\,i\mid\operatorname{graph}(p)\subseteq\operatorname{graph}(\theta^{(n)}_{g(v(i))})\,\}\) is c.e. and hence \(\omega\backslash K^{A}\) is c.e. as well, which is false. Thus we have a contradiction. It remains to show that a function \(\hat{k}\) as above can indeed be constructed. Let again \(k\in\mathcal{R}^{(2)}\) be as in Lemma 4.4. Then \(\lambda t\). \(k(i,t)\) enumerates the extended graph of \(\theta^{(n)}_{i}\). Moreover, let \(j,j^{\prime}\), respectively, be \(\theta^{(n)}\)-indices of \(s\) and \(r\), and let \(h\in\mathcal{R}^{(1)}\) enumerate \(K^{A}\). Then define \(\hat{k}\) by \[\hat{k}(i,t)\stackrel{{\text{Def}}}{{=}}\begin{cases}k(j,t)&\text{if }h(t^{\prime})\neq i,\,\text{for all }t^{\prime}\leqslant t,\\ k(j^{\prime},t)&\text{otherwise}.\end{cases}\] Then \(\hat{k}\) is as wanted. **Lemma 6.8**.: _Let \(G\colon\mathcal{S}^{(n)}_{A}\to\mathcal{S}^{(m)}_{A}\) be Markov-computable.
Then, for every \(r\in\mathcal{S}^{(n)}_{A}\),_ \[\operatorname{graph}(G(r))=\bigcup\{\,\operatorname{graph}(G(s))\mid s\in \textsc{Anf}^{(n)}_{A}\wedge\operatorname{graph}(s)\subseteq\operatorname{ graph}(r)\,\}.\] Proof.: By the previous lemma the set \(\{\,\operatorname{graph}(G(s))\mid s\in\textsc{Anf}^{(n)}_{A}\wedge \operatorname{graph}(s)\subseteq\operatorname{graph}(r)\,\}\) is a chain with respect to inclusion. For \(r\in\textsc{Anf}^{(n)}_{A}\), \(\operatorname{graph}(r)\) is contained in this set. Hence, the statement holds trivially. It remains to consider the case that \(r\in\mathcal{R}^{(n)}\). Because of Lemma 6.7 it suffices to show that \[\operatorname{graph}(G(r))\subseteq\bigcup\{\,\operatorname{graph}(G(s))\mid s \in\textsc{Anf}^{(n)}_{A}\wedge\operatorname{graph}(s)\subseteq\operatorname{ graph}(r)\,\}.\] Assume to the contrary that this inclusion is wrong for some \(r\in\mathcal{R}^{(n)}\). Then there exists \(q\in\textsc{Anf}^{(m)}_{A}\) so that \(\operatorname{graph}(q)\subseteq\operatorname{graph}(G(r))\), but \[\operatorname{graph}(q)\nsubseteq\bigcup\{\,\operatorname{graph}(G(s))\mid s \in\textsc{Anf}^{(n)}_{A}\wedge\operatorname{graph}(s)\subseteq \operatorname{graph}(r)\,\}. \tag{4}\] Statement (4) holds exactly if for all \(s\in\textsc{Anf}^{(n)}_{A}\) with \(\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\), \(\operatorname{graph}(q)\nsubseteq\operatorname{graph}(G(s))\). Assume for the moment that there is some \(v\in\mathcal{R}^{(1)}\) with \(\theta^{(n)}_{v(i)}\in\textsc{Anf}^{(n)}_{A}\) and \[\operatorname{graph}(\theta^{(n)}_{v(i)})\subseteq\operatorname{graph}(r),\] if \(i\in K^{A}\), and \(\theta^{(n)}_{v(i)}=r\), if \(i\notin K^{A}\). Let \(g\in\mathcal{R}^{(1)}\) realise \(G\). Then we obtain \[\operatorname{graph}(q)\subseteq\operatorname{graph}(\theta^{(m)}_{g(v(i))}) \Leftrightarrow\theta^{(n)}_{v(i)}=r\Leftrightarrow i\in\omega\backslash K^{A}.\] As seen in the last proof, this is impossible. So, the assumption made at the beginning is wrong. We will now consider the construction of function \(v\). Let to this end \(h,f_{A}\in\mathcal{R}^{(1)}\), respectively, be enumerations of \(K^{A}\) and \(A\). Moreover, let \(\hat{k},k\in\mathcal{R}^{(2)}\) be defined by \[\hat{k}(i,t)\stackrel{{\mathrm{Def}}}{{=}}\begin{cases}\hat{k}(i,t-1)&\text{if for $t^{\prime}<t$ and $t^{\prime\prime}\leq t$, $h(t^{\prime})=i$ and $t=f_{A}(t^{\prime\prime})^{n}$},\\ \langle\langle\kappa(t)\rangle,r(\kappa(t))+1\rangle&\text{otherwise,}\end{cases}\] \[k(i,2t)\stackrel{{\mathrm{Def}}}{{=}}\langle t,0\rangle,\] \[k(i,2t+1)\stackrel{{\mathrm{Def}}}{{=}}\hat{k}(i,t).\] If \(i\notin K^{A}\), \(\lambda t\). \(k(i,t)\) enumerates the extended graph of \(r\); otherwise, \(\lambda t\). \(\hat{k}(i,t)\) step by step lists \[\langle\langle\kappa(0)\rangle,r(\kappa(0))+1\rangle,\langle\langle\kappa(1) \rangle,r(\kappa(1))+1\rangle,\ldots\] until \(i\) has been found in \(K^{A}\). If so, it continues this enumeration until an initial segment of \(\omega^{n}\) with an edge length in \(A\) has been enumerated by \(\kappa(0),\ldots,\kappa(t)\). Therefore, in this case, \(\lambda t\). \(k(i,t)\) enumerates the extended graph of a function \(s\in\textsc{Anf}^{(n)}_{A}\) with \(\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\). It follows that the function \(v\in\mathcal{R}^{(1)}\) existing for this \(k\) by Condition (QGN II) has the appropriate properties. 
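For illustration, consider the operator \(G_{0}\colon\mathcal{S}^{(1)}_{A}\to\mathcal{S}^{(1)}_{A}\), used here only as an ad hoc example, which maps the nowhere defined function to the constant function \(\lambda x.\,0\) and every other function to the nowhere defined function. Then \[\operatorname{graph}(G_{0}(\lambda x.\,\text{undefined}))=\operatorname{graph}(\lambda x.\,0)\nsubseteq\emptyset=\operatorname{graph}(G_{0}(\lambda x.\,0)),\] although \(\operatorname{graph}(\lambda x.\,\text{undefined})\subseteq\operatorname{graph}(\lambda x.\,0)\); so \(G_{0}\) is not monotone and hence, by Lemma 6.7, not Markov-computable.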
If \(\alpha^{(n)}=\theta^{(n)}\circ f\), for some \(f\in\mathcal{R}^{(1)}\), and the Markov-computable operator \(G\colon\mathcal{S}^{(n)}_{A}\to\mathcal{S}^{(m)}_{A}\) is realised by \(g\in\mathcal{R}^{(1)}\), then \(G(\alpha^{(n)}_{i})=\theta^{(m)}_{g(f(i))}\). With Lemma 6.1(5) we also obtain that \(\{\,\langle i,j\rangle\mid\operatorname{graph}(\alpha^{(m)}_{i})\subseteq\operatorname{graph}(G(\alpha^{(n)}_{j}))\,\}\) is c.e. We have now completed all the necessary steps to derive the result we were aiming for. **Theorem 6.9** (Myhill, Shepherdson).: 1. _The restriction of every computable operator_ \(\widehat{G}\colon\widehat{\mathcal{S}}^{(n)}_{A}\to\widehat{\mathcal{S}}^{(m)}_{A}\) _to_ \(\mathcal{S}^{(n)}_{A}\) _is Markov-computable._ 2. _Every Markov-computable operator_ \(G\colon\mathcal{S}^{(n)}_{A}\to\mathcal{S}^{(m)}_{A}\) _is the restriction to_ \(\mathcal{S}^{(n)}_{A}\) _of a computable operator_ \(\widehat{G}\colon\widehat{\mathcal{S}}^{(n)}_{A}\to\widehat{\mathcal{S}}^{(m)}_{A}\)_._ Proof.: (1) has been shown in Lemma 6.6. (2) By Lemma 6.7 the set \(\{\,\operatorname{graph}(G(s))\mid s\in\textsc{Anf}^{(n)}_{A}\wedge\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\,\}\) with \(r\in\mathcal{S}^{(n)}_{A}\) is a chain with respect to set inclusion. Thus, the union over this chain is single-valued, that is, the graph of an \(m\)-ary function. Let \(\widehat{G}(r)\) be this function. Then we have \[\operatorname{graph}(\widehat{G}(r))=\bigcup\{\,\operatorname{graph}(G(s))\mid s\in\textsc{Anf}^{(n)}_{A}\wedge\operatorname{graph}(s)\subseteq\operatorname{graph}(r)\,\}.\] The chain is either finite or infinite. In the first case \(\operatorname{graph}(\widehat{G}(r))\) is its maximal element. Since for all \(s\in\textsc{Anf}^{(n)}_{A}\), \(G(s)\in\mathcal{S}^{(m)}_{A}\), it follows that also \(\widehat{G}(r)\in\mathcal{S}^{(m)}_{A}\). In the other case, every element of the chain is the graph of a function in \(\mathcal{S}^{(m)}_{A}\). The domains of the functions in the chain therefore form an increasing chain of subsets of \(\omega^{m}\). It follows that \(\widehat{G}(r)\) is total in this case. Hence, in both cases, \(\widehat{G}(r)\in\widehat{\mathcal{S}}^{(m)}_{A}\). With Lemma 6.7 and the remark preceding this theorem it follows that \(\widehat{G}\) satisfies Conditions (1)-(3) in Theorem 6.4. Hence, \(\widehat{G}\) is computable. In case that \(r\in\mathcal{S}^{(n)}_{A}\) it moreover follows with Lemma 6.8 that \(\widehat{G}(r)=G(r)\). That is, \(G\) is the restriction of \(\widehat{G}\) to \(\mathcal{S}^{(n)}_{A}\).

## 7 Quasi-Godel numberings and the Rogers semi-lattice of computable numberings

In this section we examine quasi-Godel numberings under numbering-theoretical aspects and sort out the relationship of the numbered set \((\mathcal{S}_{A}^{(1)},\theta)\) to the numbered set \((\mathcal{P}^{(1)},\varphi)\). Here, \(\theta\) is again a quasi-Godel and \(\varphi\) a Godel numbering. We limit ourselves to unary functions, since every \(n\)-ary function \(r\in\mathcal{S}_{A}^{(n)}\) converts into the unary function \(r\circ\kappa\in\mathcal{S}_{A^{\prime}}^{(1)}\) with \(A^{\prime}=\{\,a^{n}\mid a\in A\,\}\) by means of a one-to-one and initial segment-wise enumeration \(\kappa\) of \(\omega^{n}\). **Proposition 7.1**.: _Let \(\eta,\gamma\) be numberings of \(\mathcal{S}_{A}^{(1)}\). Then the following three statements hold:_ 1. _If_ \(\gamma\) _is a quasi-Godel numbering and_ \(\eta\leqslant_{m}\gamma\)_, then_ \(\eta\) _satisfies Condition (QGN I)._ 2.
_If_ \(\eta\) _is a quasi-Godel numbering and_ \(\eta\leqslant_{m}\gamma\)_, then_ \(\gamma\) _satisfies Condition (QGN II)._ 3. _If_ \(\eta,\gamma\)_, respectively, satisfy (QGN I) and (QGN II), then_ \(\eta\leqslant_{m}\gamma\)_._ Proof.: (1) Since \(\eta\leqslant_{m}\gamma\), there is some \(g\in\mathcal{R}^{(1)}\) with \(\eta=\gamma\circ g\). By assumption \[\operatorname{graph_{e}}(\lambda i,x.\ \gamma_{i}(x))\] is c.e. Hence this is also true for \(\operatorname{graph_{e}}(\lambda i,x.\ \gamma_{g(i)}(x))\). (2) Let \(g\in\mathcal{R}^{(1)}\) with \(\eta=\gamma\circ g\). Moreover, let \(k\in\mathcal{R}^{(2)}\) and \(r\in\mathcal{S}_{A}^{(1)}\) so that \(\lambda t.\ k(i,t)\) enumerates the extended graph of \(r\). Since \(\eta\) satisfies (QGN II), there is some \(v\in\mathcal{R}^{(1)}\) with \(r=\eta_{v(i)}\). Set \(v^{\prime}\stackrel{{\text{Def}}}{{=}}g\circ v\). Then \(r=\gamma_{v^{\prime}(i)}\). Whence also \(\gamma\) satisfies (QGN II). (3) Let \(k\in\mathcal{R}^{(2)}\) be as in Theorem 4.4. Then \(\lambda t.\ k(i,t)\) enumerates the extended graph of the function \(\eta_{i}\). Since \(\gamma\) satisfies (QGN II) there is some \(v\in\mathcal{R}^{(1)}\) with \(\eta_{i}=\gamma_{v(i)}\). Thus, \(\eta\leqslant_{m}\gamma\). It follows that, if \(\gamma\) is a quasi-Godel numbering of \(\mathcal{S}_{A}^{(1)}\), then \(\eta\) is a quasi-Godel numbering of \(\mathcal{S}_{A}^{(1)}\) exactly if \(\eta\equiv_{m}\gamma\). In particular, all quasi-Godel numberings of \(\mathcal{S}_{A}^{(1)}\) are \(m\)-equivalent. As we will see next, they are maximal among all numberings of \(\mathcal{S}_{A}^{(1)}\) satisfying (QGN I), and minimal among those satisfying (QGN II). **Theorem 7.2**.: _Let \(\eta\) be a numbering of \(\mathcal{S}_{A}^{(1)}\). Then the following three statements are equivalent:_ 1. \(\eta\) _is a quasi-Godel numbering._ 2. \(\eta\) _satisfies (QGN I) and for all numberings_ \(\gamma\) _of_ \(\mathcal{S}_{A}^{(1)}\)_, if_ \(\gamma\) _satisfies (QGN I) then_ \(\gamma\leqslant_{m}\eta\)_._ 3. \(\eta\) _satisfies (QGN II) and for all numberings_ \(\gamma\) _of_ \(\mathcal{S}_{A}^{(1)}\)_, if_ \(\gamma\) _satisfies (QGN II) then_ \(\eta\leqslant_{m}\gamma\)_._ Proof.: Assume (1). Then (2) follows with Proposition 7.1(3). Conversely, suppose (2) and let \({}^{A}\psi\) be a quasi-Godel numbering as in Theorem 3.2. Then \({}^{A}\psi\leqslant_{m}\eta\), by our assumption. With Proposition 7.1(2) it follows that \(\eta\) satisfies (QGN II). Since it also satisfies (QGN I), \(\eta\) is a quasi-Godel numbering. In a similar way it follows that (1) and (3) are equivalent. At the end of Section 4 we pointed out that quasi-Godel numberings are pre-complete. This will be strengthened next. **Theorem 7.3**.: _Let \(\theta\) be a quasi-Godel numbering of \(\mathcal{S}_{A}^{(1)}\). Then the following two statements hold:_ 1. \(\theta\) _is complete with the nowhere defined function as distinguished element. This means that for any_ \(p\in\mathcal{P}^{(1)}\) _there exists_ \(g\in\mathcal{R}^{(1)}\) _such that_ \[\theta_{g(i)}(x)=\begin{cases}\theta_{p(i)}(x)&\text{if }i\in\operatorname{dom}(p),\\ \text{undefined}&\text{otherwise.}\end{cases}\] 2. \(\theta\) _is cylindrical, that is,_ \(\theta\equiv\theta^{c}\)_, where_ \(\theta^{c}_{\langle i,j\rangle}=\theta_{j}\)_._ Proof.: (1) Let \(k\in\mathcal{R}^{(2)}\) be as in Theorem 4.4. Then \(\lambda t.\)\(k(i,t)\) enumerates the extended graph of \(\theta_{i}\).
Moreover, let \(k^{\prime}\in\mathcal{R}^{(2)}\) be defined by \[k^{\prime}(i,2t)\stackrel{{\mathrm{Def}}}{{=}} \langle t,0\rangle,\] \[k^{\prime}(i,2t+1)\stackrel{{\mathrm{Def}}}{{=}} \begin{cases}k(i,t\leftarrow\mu t^{\prime}.\ p(i)\!\downarrow_{t^{\prime}})& \text{if for some }t^{\prime}<t,\,p(i)\!\downarrow_{t^{\prime}},\\ \langle t,0\rangle&\text{otherwise}.\end{cases}\] Then \(\lambda t.\)\(k^{\prime}(i,t)\) enumerates the extended graph of \(\theta_{i}\), if \(i\in\mathrm{dom}(p)\). Otherwise, \(\lambda t.\)\(k^{\prime}(i,t)\) enumerates the extended graph of the nowhere defined function. Let \(g\in\mathcal{R}(1)\) be the function existing by (QGN II). Then we have for \(i\in\mathrm{dom}p\) that \(\theta_{g(i)}=\theta_{p(i)}\). In the other case, \(\theta_{g(i)}\) is nowhere defined. (2) Let \(\theta^{c}\) with \(\theta^{c}_{\langle i,j\rangle}=\theta_{j}\) be the cylinder of \(\theta\). Then \(\theta\leqslant_{1}\theta^{c}\). Since by the Ershov-Myhill isomorphism theorem [7, p. 295] 1-equivalence coincides with computable isomorphism, it remains to show that also \(\theta^{c}\leqslant_{1}\theta\). Let to this end \(p\in\mathcal{R}^{(2)}\) be the padding function existing by Theorem 4.13. Then \[\theta^{c}_{\langle i,j\rangle}=\theta_{j}=\theta_{p(i,j)}.\] Thus, \(\theta^{c}=\theta\circ g\) with \(g(\langle i,j\rangle)=p(j,i)\). That is, \(\theta^{c}\leqslant_{1}\theta\). As we have seen above, all quasi-Godel numberings are \(m\)-equivalent. Since such numberings possess padding functions, they are even 1-equivalent. With the Ershov-Myhill isomorphism theorem we thus obtain an extension of Rogers' isomorphism theorem [35] for Godel numberings to quasi-Godel numberings. **Theorem 7.4** (Isomorphism Theorem).: _All quasi-Godel numberings are computably isomorphic._ It follows that a numbering of \(\mathcal{S}^{(1)}_{A}\) is a quasi-Godel numbering, if it is computably isomorphic to a quasi-Godel numbering. In Section 2 we fixed a Godel numbering \(\varphi\) to construct a special \(\varphi\)-standard numbering of \(\mathcal{S}^{(1)}_{A}\) which then turned out to be a quasi-Godel numbering. Thus, every quasi-Godel numbering of \(\mathcal{S}^{(1)}_{A}\) is computable isomorphic to a special \(\varphi\)-standard numbering of \(\mathcal{S}^{(1)}_{A}\). This result can be strengthened again. **Theorem 7.5**.: _For every quasi-Godel numbering \(\theta\) there is a Godel numbering \(\xi\) so that \(\theta\) is a special \(\xi\)-standard numbering._ Proof.: Let \({}^{A}\varphi\) be the special standard numbering von \(\mathcal{S}^{(1)}_{A}\) used in Section 2. Then \({}^{A}\varphi=\varphi\circ f\), for some \(f\in\mathcal{R}^{(1)}\). Moreover, there is a one-to-one and onto function \(g\in\mathcal{R}^{(1)}\) with \(\theta={}^{A}\varphi\circ g\). Set \(\xi\stackrel{{\mathrm{Def}}}{{=}}\varphi\circ g\). Then \(\xi\) is a Godel numbering, and as is readily verified, \(\xi\circ(g^{-1}\circ f\circ g)\) is a special \(\xi\)-standard numbering of \(\mathcal{S}^{(1)}_{A}\). Moreover, \(\theta=\xi\circ(g^{-1}\circ f\circ g)\). It follows that the construction of a quasi-Godel numbering of \(\mathcal{S}^{(1)}_{A}\) in Section 2 has universal character: every quasi-Godel numbering is obtained in this way. In a next step we will examine which other properties the quasi-Godel numbered set \((\mathcal{S}^{(1)}_{A},\theta)\) has as a subobject of \((\mathcal{P}^{(1)},\varphi)\). 
Here, a subset \(Z\subseteq\mathcal{P}^{(1)}\) with a numbering \(\zeta\) is called _subobject_ of \((\mathcal{P}^{(1)},\varphi)\) if \(\zeta\leqslant_{m}\varphi\) (cf. [7, 8]). \((Z,\zeta)\) is an _sn-subobject_ of \((\mathcal{P}^{(1)},\varphi)\) if, in addition, there is some \(g\in\mathcal{R}^{(1)}\) so that \(\zeta\circ g\) is a special \(\varphi\)-standard n numbering of \(Z\). \((Z,\zeta)\) is called _r-subobject_ or _retract_ of \((\mathcal{P}^{(1)},\varphi)\) if, in addition, there is an idempotent onto function \(H\colon\mathcal{P}^{(1)}\to Z\) and some \(h\in\mathcal{R}^{(1)}\) such that \(H(\varphi_{i})=\zeta_{h(i)}\). \((Z,\zeta)\) is an _n-subobject_ of \((\mathcal{P}^{(1)},\varphi)\) if, in addition, there exist \(f\in\mathcal{R}^{(1)}\) such that for all \(i\in I_{\varphi}(Z)\), \(\varphi_{i}=\zeta_{f(i)}\). In this case the numbering \(zeta\) is called _\(\varphi\)-normal_ (cf. [28]). \((Z,\zeta)\) is called _principal subobject_ of \(({\cal P}^{(1)},\varphi)\) and \(\zeta\)\(\varphi\)_-principal numbering_, if for every numbering \(\eta\) of \(Z\) with \(\eta\leqslant_{m}\varphi\) one has that \(\eta\leqslant_{m}\zeta\). As seen above, the quasi-Godel numbering \(\theta\) is computably isomorphic to a special \(\varphi\)-standard numbering of \({\cal S}^{(1)}_{A}\). This shows **Corollary 7.6**.: \(({\cal S}^{(1)}_{A},\theta)\) _is an sn-subobject of \(({\cal P}^{(1)},\varphi)\),_ In Section 3 we have considered a Turing program \(M^{(1)}_{A}\) that calls arbitrary Turing programs \(P\) so that \(M^{(1)}_{A}(P)\) is a program that computes the functions in \({\cal S}^{(1)}_{A}\). By considering the semantics of the programs we obtain an idempotent onto map \(M_{A}\colon{\cal P}^{(1)}_{A}\to{\cal S}^{(1)}_{A}\) such that \(M_{A}(\varphi_{i})={}^{A}\varphi_{i}\). As we have seen above, \(\theta\) is computably isomorphic to \({}^{A}\varphi_{i}\). **Theorem 7.7**.: \(({\cal S}^{(1)}_{A},\theta)\) _is a retract of \(({\cal P}^{(1)},\varphi)\)._ As is shown by Ershov [7, SS4], every retract of \(({\cal P}^{(1)},\varphi)\) is an n-subobject and each n-subobject is a principal subobject of \(({\cal P}^{(1)},\varphi)\). Thus, every quasi-Godel numbering \(\theta\) of \({\cal S}^{(1)}_{A}\) is \(\varphi\)-normal and a \(\varphi\)-principal numbering. Since every numbering of \({\cal S}^{(1)}_{A}\) that satisfies (QGN I) is \(m\)-reducible to \(\varphi\), it follows with Theorem 7.2 that each \(\varphi\)-principal numbering of \({\cal S}^{(1)}_{A}\) is a quasi-Godel numbering. As a consequence, every \(\varphi\)-normal numbering of \({\cal S}^{(1)}_{A}\) is a quasi-Godel numbering. **Theorem 7.8**.: _Let \({\rm HPT}^{\varphi}_{A}\), \({\rm NOR}^{\varphi}_{A}\) and \({\rm QGN}^{\varphi}_{A}\), respectively, be the set of \(\varphi\)-principal, \(\varphi\)-normal and quasi-Godel numberings of \({\cal S}^{(1)}_{A}\). Then_ \[{\rm HPT}^{\varphi}_{A}={\rm NOR}^{\varphi}_{A}={\rm QGN}^{\varphi}_{A}.\] If \(B\) is a c.e. superset of \(A\) and \(\gamma\) a quasi-Godel numbering of \({\cal S}^{(1)}_{B}\), then the above results remain true if \(\varphi\) is replaced by \(\gamma\) and \(({\cal P}^{(1)},\varphi)\) by \(({\cal S}^{(1)}_{B},\gamma)\). In particular we obtain an analogue of Theorem 7.7. **Theorem 7.9**.: \(({\cal S}^{(1)}_{A},\theta)\) _is a retract of \(({\cal S}^{(1)}_{B},\gamma)\)._ So far in this work it was shown that the function classes \({\cal S}^{(n)}_{A}\) have a quasi-G"odel numbering \(\theta^{(n)}\) for every infinite c.e. 
set \(A\) and that for this a sufficiently rich computability theory can be developed in which the same results apply as in the computability theory for all partial computable functions. If there are classes of functions that include the total computable functions but do not contain all partial computable functions and for which a satisfactory computability theory can be developed, one naturally wonders whether there are \(\subset^{*}\)-minimal classes of this kind. Here, \(X\subset^{*}Y\), if \(X\subset Y\) and \(Y\backslash X\) is infinite. We do not want to treat this problem in full generality, but want to examine whether the family BB of sets contains \(\subset^{*}\)-minimal elements, where \[{\rm BB}\stackrel{{\rm Def}}{{=}}\{\,{\cal R}^{(1)}\cup T\mid T \subseteq{\rm AnF}^{(1)}_{\omega}\mbox{ and }{\cal R}^{(1)}\cup T\mbox{ has a quasi-Godel numbering}\,\}\] Set \(h(0,j)\stackrel{{\rm Def}}{{=}}j\) and \(h(i+1,j)\stackrel{{\rm Def}}{{=}}2^{h(i,j)}\). Then each set \(A_{i}\stackrel{{\rm Def}}{{=}}\{\,h(i,j)\mid j\in\omega\,\}\) with \(i\in\omega\) is computable and infinite. Since \(A_{i+1}\subset^{*}A_{i}\), we obtain that BB contains infinite descending chains with respect to \(\subset^{*}\). If every function class in BB were of the form \({\cal R}^{(1)}\cup{\rm AnF}^{(1)}_{A}\) with an infinite c.e. set \(A\), we would of course know that BB contains no \(\subset^{*}\)-minimal elements, since from any infinite c.e. set one can remove infinitely many elements without destroying the property of being both infinite and c.e. However, we do not know whether there are other sets \(T\subseteq{\rm AnF}^{(1)}_{\omega}\) for which \({\cal R}^{(1)}\cup T\) has a quasi-Godel numbering. Menzel and Sperschneider [29] show that the family of sets \[\{\,{\cal R}^{(1)}\cup T\mid T\subseteq{\rm AnF}^{(1)}_{\omega}\mbox{ and }{\cal R}^{(1)}\cup T\mbox{ enumerable in }\varphi\,\}\] does not contain \(\subset^{*}\)-minimal elements. Here, for a set \(X\) with numbering \(\delta\), \(Y\subseteq X\) is _enumerable in \(\delta\)_ if \(Y=\delta[E]\), for some c.e. set \(E\subseteq\omega\). If \({\cal R}^{(1)}\cup T\) now has a quasi-Godel numbering, then this is \(m\)-reducible to \(\varphi\). So \({\cal R}^{(1)}\cup T\) is also enumerable in \(\varphi\). This leads us to the following result: **Theorem 7.10**.: BB _has no \(\subset\)*-minimal elements._ Let \(\mathrm{BN}_{A}\) be the set of numberings of \(\mathcal{S}_{A}^{(1)}\) that satisfy Condition (QGN I). As was shown at the beginning of this section, the quasi-Godel numberings of \(\mathcal{S}_{A}^{(1)}\) are just those numberings in \(\mathrm{BN}_{A}\) that are maximal with respect to \(\leq_{m}\). The numberings in \(\mathrm{BN}_{A}\) are exactly those numberings of \(\mathcal{S}_{A}^{(1)}\) that have a computable universal function (which is however in \(\mathcal{P}^{(2)}\backslash\mathcal{S}_{A}^{(2)}\)). Therefore, these numberings are called _computable_. For two numberings \(\eta\) and \(\gamma\) of \(\mathcal{S}_{A}^{(1)}\), \(\eta\oplus\gamma\) defined by \[(\eta\oplus\gamma)(2i)\stackrel{{\mathrm{Def}}}{{=}}\eta(i) \quad\text{and}\quad(\eta\oplus\gamma)(2i+1)\stackrel{{\mathrm{ Def}}}{{=}}\gamma(i)\] is the direct sum of \(\eta\) and \(\gamma\). Let \([\eta]\) be the equivalence class of all numberings of \(\mathcal{S}_{A}^{(1)}\) that are \(m\)-equivalent with \(\eta\). As is well known, the \(m\)-reducibility relation can be lifted to the equivalence classes yielding a partial order on the set of these classes denoted by \(\leq\). 
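As is readily verified, \[\eta=(\eta\oplus\gamma)\circ(\lambda i.\,2i)\quad\text{and}\quad\gamma=(\eta\oplus\gamma)\circ(\lambda i.\,2i+1),\] so \(\eta,\gamma\leqslant_{m}\eta\oplus\gamma\); and if \(\eta\leqslant_{m}\delta\) via \(g\) and \(\gamma\leqslant_{m}\delta\) via \(h\), then \(\eta\oplus\gamma\leqslant_{m}\delta\) via the function that maps \(2i\) to \(g(i)\) and \(2i+1\) to \(h(i)\).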
As shown in [7], the collection of all equivalence classes is an upper semi-lattice, usually called _Rogers semi-lattice_, with respect to this order with \([\eta\oplus\gamma]\) as least upper bound of \([\eta]\) and \([\gamma]\). Because the direct sum of two computable numberings is computable again, we obtain for the set \(\left[\mathrm{BN}_{A}\right]\stackrel{{\mathrm{Def}}}{{=}} \mathrm{BN}_{A}/{\leq_{m}}\): **Proposition 7.11**.: 1. \(([\mathrm{BN}_{A}],\leq)\) _is a Rogers sub-semi-lattice of the Rogers semi-lattice of all numberings of_ \(\mathcal{S}_{A}^{(1)}\)_._ 2. \(\mathrm{QGN}_{A}\) _is the greatest element in_ \(([\mathrm{BN}_{A}],\leq)\)_._ As we will see later in this section, \(([\mathrm{BN}_{A}],\leq)\) is not a lattice. The next result is an analogue of a theorem by Goetze [9, 10] for the Rogers semi-lattice of the computable numberings of \(\mathcal{P}^{(1)}\). **Theorem 7.12**.: _Every countable partially ordered set can be isomorphically embedded into the Rogers semi-lattice of computable numberings of \(\mathcal{S}_{A}^{(1)}\)._ We derive this result in several steps. Let to this end \(\theta\) be a quasi-Godel numbering of \(\mathcal{S}_{A}^{(1)}\), \(a\) be the least positive element of \(A\) and CS be the set of all computable subsets of \(\omega\). Then \((\mathrm{CS},\subseteq)\) is a lattice with the empty set as least element. For every \(C\in\mathrm{CS}\) let the numbering \(\theta^{C}\) be defined by \[\theta_{i}^{C}(x)\stackrel{{\mathrm{Def}}}{{=}}\begin{cases}\pi_{ 1}^{(a+1)}(i)\stackrel{{-}}{{\cdot}}1&\text{if $x=0$ and $\pi_{1}^{(a+1)}(i)\in C+1$,}\\ \par\vdots&\\ \pi_{a}^{(a+1)}(i)\stackrel{{-}}{{\cdot}}1&\text{if $x=a-1$ and $\pi_{1}^{(a+1)}(i)\in C+1$,}\\ \theta_{\pi_{a+1}^{(a+1)}(i)}(x)&\text{if $x\geq a$ and $\pi_{1}^{(a+1)}(i)\in C+1$, or $\pi_{1}^{(a+1)}(i)\notin C+1$ and}\\ &\theta_{\pi_{a+1}^{(a+1)}(i)}(0)\notin C$,}\\ \text{undefined}&\text{otherwise.}\end{cases}\] Here, \(C+1\stackrel{{\mathrm{Def}}}{{=}}\{\,b+1\mid b\in C\,\}\). Note that by the choice of \(a\) for all \(j\) with \(0\in\mathrm{dom}(\theta_{j})\), \(\theta_{j}\) is defined at least on an initial segment of \(\omega\) of length \(a\). Therefore, \(\theta_{i}^{C}\in\mathcal{S}_{A}^{(1)}\). **Lemma 7.13**.: _For all \(C\in\mathrm{CS}\), \(\theta^{C}\in\mathrm{BN}_{A}\)._ Proof.: Since \(\theta\) satisfies (QGN I) there is an enumeration \(h\in\mathcal{R}^{(1)}\) of the extended graph of the universal function of \(\theta\). Then we have \[\langle\langle i,x\rangle,z\rangle\in \mathrm{graph_{e}}(\lambda i,x.\ \theta_{i}^{C}(x))\Leftrightarrow\] \[z=0\vee\bigvee_{\nu=1}^{n}[\![x=\nu-1\wedge\pi_{1}^{(a+1)}(i) \in C+1\wedge z=\pi_{\nu}^{(a+1)}(i)]\vee\] \[[(\exists t)\,h(t)=\langle\langle\pi_{a+1}^{(a+1)}(i),x\rangle,z \rangle\wedge[[x\geq a\wedge\pi_{1}^{(a+1)}(i)\in C+1]\vee\] \[\begin{array}{ll}&\left[\pi_{1}^{(a+1)}(i)\notin C+1\wedge(\hat{2}t^{\prime}) \left[\pi_{1}^{(2)}(h(t^{\prime}))=\langle\pi_{a+1}^{(a+1)}(i),0\rangle\wedge \right.\\ &\left.\pi_{2}^{(2)}(h(t^{\prime}))>0\wedge\pi_{2}^{(2)}(h(t^{\prime}))\notin C +1\right]\right]]].\end{array}\] It follows that \(\operatorname{graph}_{\mathrm{e}}(\lambda i,x.\ \theta_{i}^{C}(x))\) is c.e. it remains to show that \(\operatorname{range}(\theta^{C})=\mathcal{S}_{A}^{(1)}\). Let to this end \(r\in\mathcal{S}_{A}^{(1)}\) with \(r=\theta_{j}\). 
Then \[i\stackrel{{\mathrm{Def}}}{{=}}\begin{cases}\langle r(0)+1, \ldots,r(a)+1,j\rangle&\text{if $\mathrm{dom}(r)$ is not empty},\\ \langle 0,\ldots,0,j\rangle&\text{otherwise}\end{cases}\] is a \(\theta^{C}\)-index of \(r\). Note that by Theorem 7.2(3), \(\theta^{\varnothing}\leqslant_{m}\theta\). On the other hand, \(\theta_{i}=\theta^{\varnothing}_{\langle 0,\ldots,0,i\rangle}\). Thus, \(\theta\equiv_{m}\theta^{\varnothing}\). **Lemma 7.14**.: \((\{\,\left[\theta^{C}\right]\mid C\in\mathrm{CS}\,\},\leqslant)\) _is a lattice that is dually isomorphic zu \((\mathrm{CS},\subseteq)\). The mapping \(J(C)\stackrel{{\mathrm{Def}}}{{=}}[\theta^{C}]\) is a dual isomorphism._ Proof.: Let \(C,E\in\mathrm{CS}\). First, we show that \(\theta^{E}\leqslant_{m}\theta^{C}\), if \(E\subseteq C\). Consider the following algorithm: _Input:_\(i\) 1. If \(\pi_{1}^{(a+1)}(i)\in C+1\) then set \(\hat{i}:=i\) and go to Step (4) 2. If \(\pi_{1}^{(a+1)}(i)\in E+1\backslash C+1\) then find \(j\) so that \[\theta_{j}(x)=\begin{cases}\pi_{1}^{(a+1)}(i)\mathbin{\dot{\cdot}}1&\text{if $x =0$},\\ \vdots&\\ \pi_{a}^{(a+1)}(i)\mathbin{\dot{\cdot}}1&\text{if $x=a-1$},\\ \theta_{\pi_{a+1}^{(a+1)}(i)}(x)&\text{if $x\geqslant a$}\end{cases}\] and set \(\hat{i}:=\langle\pi_{1}^{(a+1)}(i),\ldots,\pi_{a}^{(a+1)}(i),j\rangle\) and go to Step (4). 3. (After the procedure has finished with this step, we know that \(\pi_{1}^{(a+1)}(i)\notin E+1\).) Find \(j^{\prime}\) with \[\theta_{j^{\prime}}(x)=\begin{cases}\theta_{\pi_{a+1}^{(a+1)}(i)}(x)&\text{if $ \theta_{\pi_{a+1}^{(a+1)}(i)}(0)\notin E$},\\ \text{undefined}&\text{otherwise}\end{cases}\] and set \(\hat{i}:=\langle\pi_{1}^{(a+1)}(i),\ldots,\pi_{a}^{(a+1)}(i),j^{\prime}\rangle\). 4. STOP _Output:_\(\hat{i}\). As we will see in the next step, the indices \(j\) and \(j^{\prime}\) in Steps (2) and (3), respectively, can be computed from \(i\). Therefore, the above is really an algorithm. Let \(q\in\mathcal{R}^{(1)}\) be the function it computes. Then \(\theta^{E}=\theta^{C}\circ q\), that is \(\theta^{E}\leqslant_{m}\theta^{C}\). By Theorem 4.4 there is function \(k\in\mathcal{R}^{(2)}\) such that for every \(i\in\omega\), \(\lambda i,t.\ k(i,t)\) enumerates the extended graph of \(\theta_{i}\). Let \[k^{\prime}(i,t)\stackrel{{\mathrm{Def}}}{{=}}\begin{cases} \langle t,\pi_{1}^{(a+1)}(i)\rangle&\text{if $t=0$},\\ \vdots&\\ \langle t,\pi_{a}^{(a+1)}(i)\rangle&\text{if $t=a-1$},\\ k(\pi_{a+1}^{(a+1)}(i),t\mathbin{\dot{\cdot}}a)&\text{if $t\geqslant a$, and $\pi_{2}^{(2)}(k(\pi_{a+1}^{(a+1)}(i),t\mathbin{\dot{\cdot}}a))=0$ or}\\ &\pi_{1}^{(2)}(k(\pi_{a+1}^{(a+1)}(i),t\mathbin{\dot{\cdot}}a))\geqslant a,\\ \langle 0,0\rangle&\text{otherwise},\end{cases}\] \[Q(i,t)\Leftrightarrow\pi_{1}^{(2)}(k(\pi_{a+1}^{(a+1)}(i),t))=0 \wedge\pi_{2}^{(2)}(k(\pi_{a+1}^{(a+1)}(i),t))\notin E+1,\] \[k^{\prime\prime}(i,t)\stackrel{{\mathrm{Def}}}{{=}} \begin{cases}k(\pi_{a+1}^{(a+1)}(i),t-\mu t^{\prime}\leqslant t.\;Q(i,t^{ \prime}))&\text{if for some $t^{\prime}\leqslant t$, $Q(i,t^{\prime})$,}\\ \langle t,0\rangle&\text{otherwise.}\end{cases}\] Then \(\lambda t.\)\(k^{\prime}(i,t)\) enumerates the extended graph of the function \(\theta_{j}\) defined in Step (2) of the above algorithm and \(\lambda i,t.\)\(k^{\prime\prime}(i,t)\) the extended graph of the function \(\theta_{j^{\prime}}\) defined in Step (3). Let \(v^{\prime},v^{\prime\prime}\in\mathcal{R}^{(1)}\) be the functions now existing by Condition (QGN II). 
Then \(\theta_{j}=\theta_{v^{\prime}(i)}\) and \(\theta_{j^{\prime}}=\theta_{v^{\prime\prime}(i)}\). Thus, we can effectively find indices \(j\) and \(j^{\prime}\) from given \(i\) with the required properties. Next, we show that also conversely, \(\theta^{E}\leqslant_{m}\theta^{C}\) implies \(C\subseteq E\). To this end we need the following result, which we will prove in a later step: (5) Let \(B\) be a computable set. Then the set \(\{\,i\mid\theta_{i}^{B}(0)=y\,\}\) is computable, exactly if \(y\in B\). Assume that \(\theta^{E}\leqslant_{m}\theta^{C}\) and suppose that \(y\in C\). Then we need to show that \(y\in E\). Since \(y\in C\), it follows with (5) that the set \(\{\,i\mid\theta_{i}^{C}(0)=y\,\}\) is computable. Because \(\theta^{E}\leqslant_{m}\theta^{C}\), it follows that \(\{\,i\mid\theta_{i}^{E}(0)=y\,\}\) is computable as well. Hence \(y\in E\), again by (5). Thus, we have shown that \(J\) is a dual isomorphism. Since \((\mathrm{CS},\subseteq)\) is a lattice, the same holds for \((\{\,[\theta^{C}]\mid C\in\mathrm{CS}\,\},\leqslant)\). Finally, we derive the statement in (5). If \(y\in B\) then \(\theta_{i}^{B}(0)=y\), exactly if \(\pi_{1}^{(a+1)}(i)=y+1\). Therefore, the set \(\{\,i\mid\theta_{i}^{B}(0)=y\,\}\) is computable. If \(y\notin B\), then we have that \[\theta_{i}^{B}(0)=y\Leftrightarrow\pi_{1}^{(a+1)}(i)\notin B+1\wedge\theta_{\pi_{a+1}^{(a+1)}(i)}(0)=y.\] By Rice's theorem the set \(\{\,j\mid\theta_{j}(0)=y\,\}\) is not computable, for every \(y\in\omega\). As \(B\) is computable it follows that the set \(\{\,i\mid\theta_{i}^{B}(0)=y\,\}\) cannot be computable. Goetze [10] has shown that every countable partially ordered set can be isomorphically embedded in the lattice of all computable sets. By Lemma 7.14 it therefore follows that every such set can be isomorphically embedded in the Rogers semi-lattice of computable numberings of \(\mathcal{S}_{A}^{(1)}\). This concludes the proof of Theorem 7.12. In the next section we will show that \(\widehat{\mathcal{S}}_{A}^{(1)}\) is an effectively given domain with properties as considered in [41, Theorem 3.1]. As a consequence we obtain that the Rogers semi-lattice of computable numberings of \(\mathcal{S}_{A}^{(1)}\) contains a Friedberg numbering, that is, a one-to-one computable numbering. As in Khutoretskii [24, Corollary 2] one even obtains that there are infinitely many Friedberg numberings of \(\mathcal{S}_{A}^{(1)}\) which are pairwise incomparable with respect to \(m\)-reducibility. Pour-El [32] shows that the equivalence class \([\gamma]\) generated by a Friedberg numbering \(\gamma\) of \(\mathcal{S}_{A}^{(1)}\) is minimal in \((\mathrm{BN}_{A},\leqslant)\). **Theorem 7.15**.: \((\mathrm{BN}_{A},\leqslant)\) _is not a lattice._ \((\mathrm{BN}_{A},\leqslant)\) also contains minimal elements that are not generated by Friedberg numberings. A numbering \(\eta\) of \(\mathcal{S}_{A}^{(1)}\) is called _positive_, if \(\{\,\langle i,j\rangle\mid\eta_{i}=\eta_{j}\,\}\) is c.e. Ershov [7, p. 303] shows that the equivalence class generated by a positive numbering is minimal in the Rogers semi-lattice of numberings of a given set. **Theorem 7.16**.: \(\mathcal{S}_{A}^{(1)}\) _has a positive computable numbering to which no Friedberg numbering can be reduced._ Proof.: The general construction is given by Khutoretskii [24, Example 1]. Here we verify the assumptions made. Let \(f\in\mathcal{R}^{(1)}\).
Then we have to show that the sets \[\{\,g\in\mathcal{S}_{A}^{(1)}\mid\mathrm{graph}(g)\subseteq\mathrm{graph}(f)\,\}\quad\text{and}\quad\{\,g\in\mathcal{S}_{A}^{(1)}\mid\mathrm{graph}(g)\nsubseteq\mathrm{graph}(f)\,\}\] can be numbered in a one-to-one way so that the universal functions of these numberings have a c.e. graph. This, however, is a consequence of Mal'cev [27, Theorem 5], if the classes can be enumerated in a quasi-Godel numbering \(\theta\) of \(\mathcal{S}_{A}^{(1)}\) and the sets \[\{\,i\mid\operatorname{graph}(\alpha_{i}^{(1)})\subset\operatorname{graph}(f)\,\}\quad\text{and}\quad\{\,i\mid\operatorname{graph}(\alpha_{i}^{(1)})\nsubseteq\operatorname{graph}(f)\,\}\] are c.e. Note that \[\operatorname{graph}(\alpha_{i}^{(1)})\subset\operatorname{graph}(f)\Leftrightarrow(\forall x<\lg(i))\,\alpha_{i}^{(1)}(x)=f(x).\] Since \(f\in\mathcal{R}^{(1)}\) it follows that both sets are even computable. Let \(C=\{\,i\mid\operatorname{graph}(\alpha_{i}^{(1)})\subset\operatorname{graph}(f)\,\}\) and \(g\in\mathcal{R}^{(1)}\) with \(\alpha^{(1)}=\theta\circ g\). Moreover, let \(j\) be a \(\theta\)-index of \(f\). Then \[\{\,g\in\mathcal{S}_{A}^{(1)}\mid\operatorname{graph}(g)\subseteq\operatorname{graph}(f)\,\}=\theta[\{j\}\cup g[C]].\] Thus, \(\{\,g\in\mathcal{S}_{A}^{(1)}\mid\operatorname{graph}(g)\subseteq\operatorname{graph}(f)\,\}\) is enumerable in \(\theta\). Because of Condition (QGN I) there is some \(h\in\mathcal{R}^{(1)}\) that enumerates the extended graph of the universal function of \(\theta\). Then we have that \[\operatorname{graph}(\theta_{i})\nsubseteq\operatorname{graph}(f)\Leftrightarrow(\exists x)(\exists t)\,\pi_{1}^{(2)}(h(t))=\langle i,x\rangle\wedge\pi_{2}^{(2)}(h(t))>0\wedge\pi_{2}^{(2)}(h(t))\neq f(x)+1.\] Thus, \(\{\,i\mid\operatorname{graph}(\theta_{i})\nsubseteq\operatorname{graph}(f)\,\}\) is c.e. and hence \(\{\,g\in\mathcal{S}_{A}^{(1)}\mid\operatorname{graph}(g)\nsubseteq\operatorname{graph}(f)\,\}\) is enumerable in \(\theta\). ## 8 \(\widehat{\mathcal{S}_{A}^{(1)}}\) as effectively given domain Let \((D,\sqsubseteq)\) be a poset. \(D\) is _pointed_ if it contains a least element \(\bot\). A subset \(L\) of \(D\) is _directed_, if it is non-empty and every pair of elements in \(L\) has an upper bound in \(L\). \(D\) is a _directed-complete partial order (dcpo)_, if every directed subset \(L\) of \(D\) has a least upper bound \(\bigsqcup L\) in \(D\). Let \((D,\sqsubseteq)\) and \((D^{\prime},\sqsubseteq^{\prime})\) be posets. Then a map \(G\colon D\to D^{\prime}\) is _Scott-continuous_, if it is monotone and for any directed subset \(L\) of \(D\) with existing least upper bound, \(G(\bigsqcup L)=\bigsqcup^{\prime}G[L]\). Assume that \(x,y\) are elements of a poset \(D\). Then \(x\) is said to _approximate_ \(y\), written \(x\ll y\), if for any directed subset \(L\) of \(D\) whose least upper bound exists in \(D\), the relation \(y\sqsubseteq\bigsqcup L\) always implies the existence of some \(u\in L\) with \(x\sqsubseteq u\). Moreover, \(x\) is _compact_ if \(x\ll x\). A subset \(B\) of \(D\) is a _basis_ of \(D\), if for each \(x\in D\) the set \(B_{x}=\{\,u\in B\mid u\ll x\,\}\) contains a directed subset with least upper bound \(x\). Note that the set of all compact elements of \(D\) is included in every basis of \(D\). A directed-complete pointed partial order \(D\) is said to be _continuous_ (or a _domain_) if it has a basis and it is called _algebraic_ (or an _algebraic domain_) if its compact elements form a basis.
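A standard example illustrating these notions is the powerset \((\mathcal{P}(\omega),\subseteq)\): every directed family of sets has a least upper bound, namely its union, and \(\varnothing\) is the least element; a set \(x\) approximates a set \(y\) exactly if \(x\) is finite and \(x\subseteq y\), so the compact elements are precisely the finite sets; and since every set is the directed union of its finite subsets, the finite sets form a countable basis. Hence \((\mathcal{P}(\omega),\subseteq)\) is an algebraic domain. The partial orders \(\widehat{\mathcal{S}}^{(1)}_{A}\) studied below, ordered by graph inclusion, will turn out to be of the same kind (Theorem 8.7), with the initial segment functions playing the role of the finite sets.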
Note we here assume that a domain is always pointed which is not usually the case in the literature. Standard references for domain theory and its applications are [47, 13, 2, 42, 3, 12]. **Lemma 8.1**.: _Let \(D\) and \(D^{\prime}\) be domains. Then the following statements hold:_ 1. _The approximation relation_ \(\ll\) _is transitive._ 2. \(x\ll y\Rightarrow x\sqsubseteq y\)_._ 3. \(u\sqsubseteq x\ll y\sqsubseteq z\Rightarrow x\ll z\)_._ 4. \(\bot\ll x\)_._ 5. \(B_{x}\) _is directed with respect to_ \(\ll\)_._ 6. \(G\colon D\to D^{\prime}\) _is Scott-continuous, exactly if for all_ \(x\in D\)_,_ \(G(x)=\bigsqcup^{\prime}G[B_{x}]\) Note that by Property (3) we have that \(x\ll y\) exactly if \(x\sqsubseteq y\), in case that \(x\) or \(y\) is compact. **Definition 8.2**.: _Let \(D\) be a continuous domain with countable basis \(B\) and a numbering \(\beta\colon\omega\to B\). Then \(D\) is said to be effectively given if the set \(\{\,\langle i,j\rangle\mid\beta_{i}\ll\beta_{j}\,\}\) is c.e._ If \(D\) is effectively given, then an element \(x\) is _computable_ if \(\beta^{-1}[B_{x}]\) is c.e. Let \(D_{c}\) be the set of all computable elements of \(D\). If \(D^{\prime}\) is another domain, say with basis \(B^{\prime}\) and numbering \(\beta^{\prime}\) so that \(D^{\prime}\) is effectively given, then a map \(G\colon D\to D^{\prime}\) is _computable_, of \(G\) is Scott-continuous and \(\{\,\langle i,j\rangle\mid\beta^{\prime}_{j}\ll G(\beta_{i})\,\}\) is c.e. Note that for computable maps \(G\), \(G[D_{c}]\subseteq D^{\prime}_{c}\). **Definition 8.3**.: _Let \(D\) be an effectively given domain. Then a numbering \(\eta\colon\omega\to D_{c}\) of the computable elements of \(D\) is called admissible if it satisfies the following two requirements:_ 1. \(\{\,\langle i,j\rangle\mid\beta_{i}\ll\eta_{j}\,\}\) _is c.e._ 2. _There is a function_ \(d\in\mathcal{R}^{(1)}\) _such that for all_ \(i\in\omega\)_, if_ \(\beta[W_{i}]\) _is directed then_ \(\eta_{d(i)}=\bigsqcup\beta[W_{i}]\)_._ Weihrauch and Deil [45] have shown that for every effectively given domain an admissible numbering can be constructed. **Proposition 8.4** (Weihrauch, Deil, 1980).: _Let \(D\) be an effectively given domain and let \(\eta\) be an admissible and \(\gamma\) an arbitrary numbering of its computable elements. Then the following statements hold:_ 1. \(\eta\) _is complete with special element_ \(\bot\)_._ 2. \(\gamma\) _satisfies (A I)_ \(\Leftrightarrow\gamma\leqslant_{m}\eta\)_._ 3. \(\gamma\) _satisfies (A II)_ \(\Leftrightarrow\eta\leqslant_{m}\gamma\)_._ 4. \(\gamma\) _is admissible_ \(\Leftrightarrow\eta\equiv_{m}\gamma\Leftrightarrow\eta\equiv\gamma\)_._ Since \(\eta\) is complete, it follows with a result of Ershov [7, p. 332] that \(\eta\) is cylindrical. As follows from the definition of an effectively given domain, all basic elements are computable. Moreover, if \(\gamma\) is any numbering of \(D_{c}\) satisfying (A II) then \(\beta\leqslant_{m}\gamma\). **Lemma 8.5**.: _Let \(D\) be an effectively given domain with basis \(B\) and numbering \(\beta\) of the basis elements. Moreover, let \(\eta\) be a numbering of \(D_{c}\) that satisfies (A I). Then there is a function \(r\in\mathcal{R}^{(2)}\) such that for all \(i,j\in\omega\) with \(\beta_{i}\ll\eta_{j}\) the following statements hold:_ 1. \(\varphi_{r(i,j)}\in\mathcal{R}^{(1)}\)_,_ 2. \(\beta(i)\ll\beta(\varphi_{r(i,j)}(0))\)_,_ 3. \(\beta(\varphi_{r(i,j)}(a))\ll\beta(\varphi_{r(i,j)}(a+1))\) (\(a\in\omega\))_,_ 4. 
\(\eta(j)=\bigsqcup_{a}\beta(\varphi_{r(i,j)}(a))\)_._ Proof.: Let \(E_{ij}\,\stackrel{{\rm Def}}{{=}}\,\{\,a\mid\beta(i)\ll\beta(a)\ll\eta(j)\,\}\). Then \(E_{ij}\) is c.e. Thus, there is some function \(q\in\mathcal{R}^{(2)}\) so that \(E_{ij}=W_{q(i,j)}\). Since \(B_{\eta(j)}\) is directed with respect to \(\ll\), the same holds for \(\beta[E_{ij}]\). It follows that \(\bigsqcup\beta[E_{ij}]\) exists. Moreover, \(\bigsqcup\beta[E_{ij}]\sqsubseteq\eta(j)\). Let \(i,j\) be such that \(\beta(i)\ll\eta(j)\). Since \(B_{\eta(j)}\) is directed with respect to \(\ll\), we have that for any \(u\in B_{\eta(j)}\) there is some \(u^{\prime}\in\beta[E_{ij}]\) with \(u\ll u^{\prime}\). Thus \(E_{ij}\) is non-empty and in addition, \(\eta(j)\sqsubseteq\bigsqcup\beta[E_{ij}]\). Hence, \(\eta(j)=\bigsqcup\beta[E_{ij}]\). In the sequel let \(s\in\mathcal{R}^{(1)}\) such that \(\varphi_{s(a)}\) is a total function enumerating \(W_{a}\), if \(W_{a}\) is non-empty. Moreover, let \(k\in\mathcal{R}^{(2)}\) with \(W_{k(a,c)}=W_{a}\cap W_{c}\). Then define \(g\in\mathcal{R}^{(3)}\) by \[g(i,j,0)\stackrel{{\rm Def}}{{=}}\varphi_{s(q(i,j))}(0),\] \[g(i,j,a+1)\stackrel{{\rm Def}}{{=}}\varphi_{s(k(q(g(i,j,a),j),q(\varphi_{s(q(i,j))}(a+1),j)))}(0).\] Then \(g(i,j,a+1)\in E_{g(i,j,a)j}\cap E_{\bar{\imath}j}\), where \(\bar{\imath}\) is the \((a+1)\)-st element of \(E_{ij}\) in the enumeration \(\varphi_{s(q(i,j))}\). Because \(\beta[E_{ij}]\) is directed with respect to \(\ll\), we have that \(E_{g(i,j,a)j}\cap E_{\bar{\imath}j}\) is non-empty. Therefore, \(g(i,j,a+1)\) is defined. Furthermore, for all \(a\), \(\beta(i)\ll\beta(g(i,j,a+1))\), from which we obtain that \(\bigsqcup\beta[E_{ij}]\sqsubseteq\bigsqcup_{a}\beta(g(i,j,a))\). Conversely, since for all \(a\), \(g(i,j,a)\in E_{ij}\), we also have that \(\bigsqcup_{a}\beta(g(i,j,a))\sqsubseteq\bigsqcup\beta[E_{ij}]\). Thus, \(\bigsqcup_{a}\beta(g(i,j,a))=\bigsqcup\beta[E_{ij}]=\eta(j)\). Now, let \(r\in\mathcal{R}^{(2)}\) with \(\varphi_{r(i,j)}(a)=g(i,j,a)\). Then \(r\) is as required. For the next consequence choose \(i\) such that \(\beta(i)=\bot\). **Corollary 8.6**.: _For every \(x\in D_{c}\), there is a function \(p\in\mathcal{R}^{(1)}\) so that_ 1. \(\beta(p(a))\ll\beta(p(a+1))\quad(a\in\omega\)_),_ 2. \(x=\bigsqcup_{a}\beta(p(a))\)_._ After these more technical results, which we will need later, we now start investigating the relationship of the function classes \(\widehat{\mathcal{S}}^{(n)}_{A}\) with domains. Again we will restrict ourselves to considering only the classes \(\widehat{\mathcal{S}}^{(1)}_{A}\). **Theorem 8.7**.: _Let \(A\subseteq\omega\) be a c.e. infinite set and for \(f,g\in\widehat{\mathcal{S}}^{(1)}_{A}\) set_ \[f\sqsubseteq g\Leftrightarrow\operatorname{graph}(f)\subseteq\operatorname{graph}(g).\] _Then \((\widehat{\mathcal{S}}^{(1)}_{A},\sqsubseteq)\) is an effectively given algebraic domain such that:_ 1. _The nowhere defined function is the least element._ 2. _The initial segment functions in_ \(\textsc{Anf}^{(1)}_{A}\) _are exactly the compact elements._ 3. _The functions in_ \(\mathcal{S}^{(1)}_{A}\) _are the computable elements._ 4. _For numberings of_ \(\mathcal{S}^{(1)}_{A}\)_, Conditions (QGN I) and (A I) as well as (QGN II) and (A II) are equivalent. In particular, the quasi-Godel numberings are exactly the admissible numberings._ 5.
_The computability notions for operators_ \(G\colon\widehat{\mathcal{S}}^{(1)}_{A}\to\widehat{\mathcal{S}}^{(1)}_{A}\) _in Definition_ 6.3 _and in this section coincide._ Proof.: If \(L\subseteq\widehat{\mathcal{S}}^{(1)}_{A}\) is directed with respect to \(\sqsubseteq\), then \(L\) has to be a chain. Thus the union of the graphs of the functions in \(L\) is again the graph of a function. This function must be in \(\widehat{\mathcal{S}}^{(1)}_{A}\). It is the least upper bound of \(L\) with respect to \(\sqsubseteq\). Thus, \(\widehat{\mathcal{S}}^{(1)}_{A}\) is directed-complete. Obviously the nowhere defined function is the least element. Every function in \(\widehat{\mathcal{S}}^{(1)}_{A}\) is the least upper bound of its restrictions to initial segments of \(\omega\) with length in \(A\). So, the functions in \(\textsc{Anf}^{(1)}_{A}\) form a basis. (2) All functions in \(\textsc{Anf}^{(1)}_{A}\) are compact. To see this let \(p\in\textsc{Anf}^{(1)}_{A}\) and \(L\) be a directed subset of \(\widehat{\mathcal{S}}^{(1)}_{A}\) with \(p\sqsubseteq\bigsqcup L\). Then \(\operatorname{graph}(p)\subseteq\bigcup\{\operatorname{graph}(q)\mid q\in L\}\). Since \(\operatorname{graph}(p)\) is finite, it is covered by the union of the graphs of finitely many functions in \(L\). Because this union is again contained in the graph of some function \(r\in L\), as \(L\) is directed, we have that \(p\sqsubseteq r\). Conversely, if \(f\in\widehat{\mathcal{S}}^{(1)}_{A}\) is compact, then consider the directed set \(L\) of all functions \(s\in\textsc{Anf}^{(1)}_{A}\) with \(s\sqsubseteq f\). Then \(f\sqsubseteq\bigsqcup L\). By compactness there is some \(s\in L\) with \(f\sqsubseteq s\). Since \(s\in\textsc{Anf}^{(1)}_{A}\), the same must hold for \(f\). Thus, the domain \(\widehat{\mathcal{S}}^{(1)}_{A}\) is algebraic. With Lemma 6.1(4) and Statement (4) we obtain that \(\{\,\langle i,j\rangle\mid\alpha^{(1)}_{i}\sqsubseteq\alpha^{(1)}_{j}\,\}\) is c.e. Thus, the domain \(\widehat{\mathcal{S}}^{(1)}_{A}\) is effectively given with respect to the numbering \(\alpha^{(1)}\) of the basis. (3) Let \(f\in\mathcal{S}^{(1)}_{A}\). Then it follows with Lemma 6.1(5) that \(\{\,i\mid\alpha^{(1)}_{i}\sqsubseteq f\,\}\) is c.e. Hence, \(f\in(\widehat{\mathcal{S}}^{(1)}_{A})_{c}\). Conversely, if \(f\in(\widehat{\mathcal{S}}^{(1)}_{A})_{c}\) then \(\{\,i\mid\alpha^{(1)}_{i}\sqsubseteq f\,\}\). Since \(\operatorname{graph}(f)=\bigcup\{\operatorname{graph}(s)\mid s\in\textsc{ Anf}^{(1)}_{A}\wedge s\sqsubseteq f\,\}\) we have, \[\langle x,y\rangle\in\operatorname{graph}(f)\Leftrightarrow(\exists i)\, \alpha^{(1)}_{A}\sqsubseteq f\wedge\lg(i)>x\wedge\langle\langle i,x\rangle,y+1 \rangle\in\operatorname{graph}_{e}(\lambda a,z.\;\alpha^{(1)}_{a}(z)).\] Because the extended graph of the universal function of \(\alpha^{(1)}\) is computable, by Lemma 6.1(2), it follows that \(\operatorname{graph}(f)\) is c.e. With Lemma 5.2(2) we therefore have that \(f\in\mathcal{S}^{(1)}_{A}\). (4) As in the proof of Lemma 6.1(5) it is only required that \(\theta\) satisfies Condition (QGN I), it follows that every numbering of \(\mathcal{S}^{(1)}_{A}\) with Property (QGN I) satisfies Condition (A I). Conversely, assume that \(\theta\) is a numbering of \(\mathcal{S}^{(1)}_{A}\) satisfying Condition (A I). Then it follows as in the proof of Statement (3) that \(\operatorname{graph_{e}}(\lambda j,x.\ \theta_{j}(x))\) is c.e. Thus, \(\theta\) meets Requirement (QGN I). 
Next assume that \(\theta\) has Property (QGN II) and let \(\alpha^{(1)}[W_{i}]\) be directed. Then there exists some \(r\in\widehat{\mathcal{S}}^{(1)}_{A}\) with \(r=\bigsqcup\alpha^{(1)}[W_{i}]\). Hence, \[\operatorname{graph}(r)=\{\,\langle x,y\rangle\mid(\exists j)\,j\in W_{i} \wedge\langle\langle j,x\rangle,y+1\rangle\in\operatorname{graph_{e}}(\lambda a,z.\ \alpha^{(1)}_{a}(z))\,\},\] which implies that \(r\in\mathcal{S}^{(1)}_{A}\). As we moreover see, \(\operatorname{graph_{e}}(r)\) can be uniformly enumerated in \(i\). Therefore, by Condition (QGN II), there is a function \(v\in\mathcal{R}^{(1)}\) so that \(\theta_{v(i)}=r=\bigsqcup\alpha^{(1)}[W_{i}]\). Thus, \(\theta\) fulfils Requirement (A II). Since, for any \(k\in\mathcal{R}^{(2)}\), the set \(\{\,j\mid\operatorname{graph_{e}}(\alpha^{(1)})\subseteq\operatorname{range}( \lambda t.\ k(i,t))\,\}\) is c.e., uniformly in \(i\), it conversely follows that \(\theta\) has Property (QGN II), once it satisfies Requirement (A II). (5) Let \(C\subset\omega\) be a c.e. set that defines \(G\). Then \[\operatorname{graph}(\alpha^{(1)}_{i})\subseteq\operatorname{ graph}(G(\alpha^{(1)}_{j}))\] \[\Leftrightarrow(\forall x<\lg(i))(\exists a)\,\langle\langle \langle x\rangle,\alpha^{(1)}_{i}(x)\rangle,a\rangle\in C\wedge(\forall c \leq\operatorname{lth}(a))\,(a)_{c}\in\operatorname{graph}(\alpha^{(1)}_{j})\] \[\Leftrightarrow(\forall x<\lg(i))(\exists y)(\exists a)\,\langle \langle i,x\rangle,y+1\rangle\in\operatorname{graph_{e}}(\lambda b,z.\ \alpha^{(1)}_{b}(z))\wedge\langle\langle\langle x\rangle,y\rangle,a\rangle \in C\wedge\] \[\qquad(\forall c\leq\operatorname{lth}(a))\,\langle\langle j, \pi^{(2)}_{1}((a)_{c})\rangle,\pi^{(2)}_{2}((a)_{c})+1\rangle\in\operatorname {graph_{e}}(\lambda b,z.\ \alpha^{(1)}_{b}(z)).\] By Lemma 6.1(2) \(\operatorname{graph_{e}}(\lambda a,z.\ \alpha^{(1)}_{a}(z))\) is c.e. With the Tarski-Kuratowski algorithm [36] we can now bring this expression in a \(\Sigma_{1}\)-form from which we see that \(\{\,\langle i,j\rangle\mid\operatorname{graph}(\alpha^{(1)}_{i})\subseteq \operatorname{graph}(G(\alpha^{(1)}_{j}))\,\}\) is c.e. Conversely assume that \(V\stackrel{{\text{\tiny Def}}}{{=}}\{\,\langle i,j\rangle\mid \operatorname{graph}(\alpha^{(1)}_{i})\subseteq\operatorname{graph}(G(\alpha^ {(1)}_{j}))\,\}\) is c.e. Moreover, Let \(r\in\widehat{\mathcal{S}}^{(1)}_{A}\). 
By the continuity of \(G\) have that \[\langle\langle x\rangle,z\rangle \in\operatorname{graph}(G(r))\] \[\Leftrightarrow(\exists i)(\exists a)\,\alpha^{(1)}_{i}\sqsubseteq G (\alpha^{(1)}_{a})\wedge\langle x,z\rangle\in\operatorname{graph}(\alpha^{(1) }_{i})\wedge\alpha^{(1)}_{a}\sqsubseteq r\] \[\Leftrightarrow(\exists\bar{a})(\exists a)(\exists i)\langle \langle i,x\rangle,z+1\rangle\in\operatorname{graph_{e}}(\lambda b,u.\ \alpha^{(1)}_{b}(u))\wedge\langle i,a\rangle\in V\wedge\] \[\qquad\operatorname{lth}(\bar{a})=\lg(a)\wedge(\forall c\leq \lg(a))\,\pi^{(2)}_{1}((\bar{a})_{c})=c\wedge\] \[\qquad\langle\langle a,c\rangle,\pi^{(2)}_{2}((\bar{a})_{c})+1 \rangle\in\operatorname{graph_{e}}(\lambda b,u.\ \alpha^{(1)}_{b}(u))\wedge(\bar{a})_{c}\in \operatorname{graph}(r).\] Therefore by setting \[C\stackrel{{\text{\tiny Def}}}{{=}}\{\,\langle \langle\langle x\rangle,z\rangle,\bar{a}\rangle\mid(\exists a)(\exists i) \langle\langle\langle i,x\rangle,z\rangle\in\operatorname{graph_{e}}(\lambda b,u.\ \alpha^{(1)}_{b}(u))\wedge\langle i,a\rangle\in V\wedge\] \[\qquad\operatorname{lth}(\bar{a})=\lg(a)\wedge\pi^{(2)}_{1}((\bar {a})_{c})=c\wedge\langle\langle a,c\rangle,\pi^{(2)}_{2}((\bar{a})_{c})+1 \rangle\in\operatorname{graph_{e}}(\lambda b,u.\ \alpha^{(1)}_{b}(u))\,\},\] we obtain that \(C\) is c.e. and defines \(G\). The next result shows that each of the algebraic domains \(\widehat{\mathcal{S}}^{(1)}_{A}\) can be computably mapped onto any other effectively given domain. For the proof we need an extension of a result by Weihrauch and Schafer [46]. **Proposition 8.8**.: _Let \(D\) be an effectively given domain with basis \(B\) and numbering \(\beta\) of the base elements. Then there is a computable operator \(D\colon\widehat{\mathcal{S}}^{(1)}_{\omega}\to\widehat{\mathcal{S}}^{(1)}_{\omega}\) such that the following statements hold for \(f,g\in\widehat{\mathcal{S}}^{(1)}_{\omega}\),_ 1. _If_ \(f\in\mathcal{R}^{(1)}\) _then also_ \(G(f)\in\mathcal{R}^{(1)}\) 2. _If_ \(f\in\textsc{Anf}^{(1)}_{\omega}\) _then also_ \(G(f)\in\textsc{Anf}^{(1)}_{\omega}\)_._ 3. _For all_ \(a\in\mathrm{dom}(G(f))\)_,_ \(\beta(G(f)(a))\ll\beta(G(f)(a+1))\)_._ 4. _If_ \(\beta[\mathrm{range}(f)]\) _is directed then_ \(\bigsqcup_{a}\beta(G(f)(a))\subseteq\bigsqcup\beta[\mathrm{range}(f)]\)_._ 5. _If_ \(f\in\mathcal{R}^{(1)}\) _and_ \(\beta[\mathrm{range}(f)]\) _is directed then_ \(\bigsqcup_{a}\beta(G(f)(a))=\bigsqcup\beta[\mathrm{range}(f)]\)_._ 6. _If_ \(f\subseteq g\) _then_ \(\bigsqcup_{a}\beta(G(f)(a))\subseteq\bigsqcup_{a}\beta(G(g)(a))\)_._ 7. _For_ \(f\in\textsc{Anf}^{(1)}_{\omega}\)_,_ \(\bigsqcup_{a}\beta(G(f)(a))\in B\)_._ Proof.: Since \(\{\,\langle i,j\rangle\mid\beta_{i}\ll\beta_{j}\,\}\) is c.e., there are functions \(k\in\mathcal{R}^{(2)}\) and \(h\in\mathcal{R}^{(1)}\) so that \(\lambda t\). \(k(t,j)\) enumerates \(\{\,i\mid\beta_{i}\ll\beta_{j}\,\}\) and \(h\) enumerates \(\{\,\langle i,j\rangle\mid\beta_{i}\ll\beta_{j}\,\}\). Now, we define functions \(q,p,r\in\mathcal{R}^{(1)}\): If \(\mathrm{lth}(a)\leqslant 1\), we set \[p(a)\stackrel{{\mathrm{Def}}}{{=}}q(a)\stackrel{{ \mathrm{Def}}}{{=}}i_{\perp}\quad\text{and}\quad r(a)\stackrel{{ \mathrm{Def}}}{{=}}1.\] Here, \(i_{\perp}\) is a \(\beta\)-index of \(\perp\). 
In case that \(a=\llbracket a_{0},\ldots,a_{m+1}\rrbracket\) and \(q(a^{\prime})\) as well as \(p(a^{\prime})\) and \(r(a^{\prime})\) are already defined for \(a^{\prime}=\llbracket a_{0},\ldots,a_{m}\rrbracket\), say \(r(a^{\prime})=\langle t,c\rangle\), then we define \(q(a)\), \(p(a)\) and \(r(a)\) as follows: if there is \(\langle i,j\rangle\leqslant m+1\) with \[\{\langle k(c,a_{t}),k(j,a_{i})\rangle,\langle p(a^{\prime}),k(j,a_{i})\rangle\}\subset\{h(0),\ldots,h(m+1)\},\] then set \[q(a)\stackrel{{\mathrm{Def}}}{{=}}p(a^{\prime}),\] \[p(a)\stackrel{{\mathrm{Def}}}{{=}}k(j,a_{i}),\,\text{for the smallest number $\langle i,j\rangle$ with this property},\] \[r(a)\stackrel{{\mathrm{Def}}}{{=}}r(a^{\prime})+1.\] If there is no such number \(\langle i,j\rangle\), then set \[q(a)\stackrel{{\mathrm{Def}}}{{=}}\pi_{1}^{(2)}(\mu\langle b,n\rangle.\ \{\langle q(a^{\prime}),b\rangle,\langle b,p(a^{\prime})\rangle\}\subseteq\{h(0),\ldots,h(n)\}),\] \[p(a)\stackrel{{\mathrm{Def}}}{{=}}p(a^{\prime}),\] \[r(a)\stackrel{{\mathrm{Def}}}{{=}}r(a^{\prime}).\] As follows from the definition, \(\beta_{q(a^{\prime})}\ll\beta_{p(a^{\prime})}\). By Lemma 8.1(5) there is thus always a number \(\langle b,n\rangle\) with the above property. Hence, \(q(a)\) is defined also in this case. In addition to the above let \(v\in\mathcal{R}^{(1)}\) with \[v(\llbracket\text{empty sequence}\rrbracket)\stackrel{{\mathrm{Def}}}{{=}}\llbracket\text{empty sequence}\rrbracket,\] \[v(\llbracket a_{0},\ldots,a_{m}\rrbracket)\stackrel{{\mathrm{Def}}}{{=}}\llbracket\langle 0,a_{0}\rangle,\ldots,\langle m,a_{m}\rangle\rrbracket.\] Moreover, let \[C\stackrel{{\mathrm{Def}}}{{=}}\{\,\langle\langle\mathrm{lth}(a),q(a)\rangle,v(a)\rangle\mid a\in\omega\,\}.\] Then \(C\) is c.e. As is readily seen, \(C\) defines a computable operator \(G\) on \(\widehat{\mathcal{S}}^{(1)}_{\omega}\). For \(f\in\widehat{\mathcal{S}}^{(1)}_{\omega}\) with non-empty domain it follows that \(\mathrm{dom}(f)=\mathrm{dom}(G(f))\). \(G\) maps the nowhere defined function onto the function that for \(0\) has value \(i_{\perp}\), and is undefined, otherwise. This implies Statements (1) and (2). Let \(\overline{f}(m)\stackrel{{\mathrm{Def}}}{{=}}\llbracket f(0),\ldots,f(m)\rrbracket\). Then we obtain from the definition of \(G\) that for \(a^{\prime}=\overline{f}(m)\) and \(a=\overline{f}(m+1)\), \(G(f)(m)=q(a^{\prime})\) and \(G(f)(m+1)=q(a)\). Since \(\beta_{q(a^{\prime})}\ll\beta_{q(a)}\), this proves Statement (3). In particular, we have that \(\bigsqcup_{a}\beta(G(f)(a))\) exists. Moreover, Statements (6) and (7) follow, because \(G\) is monotone by Theorem 6.4 and by Statement (2), \(\mathrm{dom}(G(f))\) is finite, for functions \(f\in\textsc{Anf}_{\omega}^{(1)}\). For Statements (4) and (5), finally, let \(f\in\mathcal{R}^{(1)}\) so that \(\beta[\mathrm{range}(f)]\) is directed. First, we show that the sequence \((r(\overline{f}(m)))_{m\in\omega}\) is unbounded. Assume that \(r(\overline{f}(m))=\langle t,c\rangle\). Since the set \(\{\,u\in B\mid u\ll\bigsqcup\beta[\mathrm{range}(f)]\,\}\) is directed with respect to \(\ll\) by Lemma 8.1(5), there is some \(m^{\prime}\geqslant m\) and a number \(\langle i,j\rangle\leqslant m^{\prime}+1\) so that \[\{\langle k(c,f(t)),k(j,f(i))\rangle,\langle p(\overline{f}(m)),k(j,f(i))\rangle\}\subseteq\{h(0),\ldots,h(m^{\prime}+1)\}. \tag{6}\] Suppose that \(u\in B\) with \(u\ll\beta(f(t))\). Then there is some \(c\) with \(\beta(k(c,f(t)))=u\).
Moreover, because the sequence \((r(\overline{f}(m)))_{m\in\omega}\) is unbounded, there is some \(m^{\prime}\) with \(r(\overline{f}(m^{\prime}))=\langle t,c\rangle\) as well as some smallest number \(\langle i,j\rangle\leqslant m^{\prime}+1\) so that the inclusion in (6) holds. Thus, \(u\ll\beta(k(j,f(i)))\). Consequently, since \(k(j,f(i))=G(f)(\widehat{m})\), for some \(\widehat{m}\geqslant m^{\prime}+1\), it follows that \(\bigsqcup\beta[\mathrm{range}(f)]\subseteq\bigsqcup_{m}\beta(G(f)(m))\). As follows from the above definitions, we have for \(g\in\widehat{\mathcal{S}}_{\omega}^{(1)}\) that for every \(m\) there is some \(i\) with \(\beta(G(g)(m))\ll\beta(g(i))\). Therefore, \(\bigsqcup_{m}\beta(G(g)(m))\subseteq\bigsqcup\beta[\mathrm{range}(G)]\), if \(\beta[\mathrm{range}(G)]\) is directed. This shows Statement (4) and, with what we derived before, also Statement (5). **Theorem 8.9**.: _Let \(A\subseteq\omega\) be a c.e. infinite set and \(D\) be an effectively given domain with basis \(B\) and numbering \(\beta\) of the base elements. Then there is a computable onto map \(\Gamma\colon\widehat{\mathcal{S}}_{A}^{(1)}\to D\) so that for \(f,g\in\widehat{\mathcal{S}}_{A}^{(1)}\),_ 1. _If_ \(f\in\textsc{Anf}_{A}^{(1)}\) _then_ \(\Gamma(f)\in B\)_._ 2. _If_ \(\beta[\mathrm{range}(f)]\) _is directed then_ \(\Gamma(f)\subseteq\bigsqcup\beta[\mathrm{range}(f)]\)_._ 3. _If_ \(f\in\mathcal{R}^{(1)}\) _and_ \(\beta[\mathrm{range}(f)]\) _is directed then_ \(\Gamma(f)=\bigsqcup\beta[\mathrm{range}(f)]\)_._ Proof.: Let \(G\colon\widehat{\mathcal{S}}_{\omega}^{(1)}\to\widehat{\mathcal{S}}_{\omega}^{(1)}\) be the computable operator constructed in the preceding Lemma. Then we define for \(f\in\widehat{\mathcal{S}}_{A}^{(1)}\), \[\Gamma(f)\stackrel{{\mathrm{Def}}}{{=}}\bigsqcup_{a}\beta(G(f)(a )).\] Since the basis \(B\) is countable, every domain element is the least upper bound of a sequence that is monotonically increasing with respect to \(\ll\). Because of Property (5) of the preceding lemma it therefore follows that \(\Gamma\) is onto. We will show that \(\Gamma\) is continuous. Let to this end \(u\in B\). Since \(G\) is continuous by Theorem 6.4, we have \[u\ll\Gamma(f) \Leftrightarrow u\ll\bigsqcup_{a}\beta(G(f)(a))\] \[\Leftrightarrow(\exists a)\,u\ll\beta(G(f)(a))\] \[\Leftrightarrow(\exists a)\,u\ll\beta((\bigsqcup\{G(s)\mid s\in \textsc{Anf}_{A}^{(1)}\wedge s\subseteq f\,\})(a))\] \[\Leftrightarrow(\exists a)(\exists s\in\textsc{Anf}_{A}^{(1)})\,s \subseteq f\wedge a\in\mathrm{dom}(G(s))\wedge u\ll\beta(G(s)(a)).\] In the same way we obtain that \[u\ll\bigsqcup\{\Gamma(s)\mid s\in\textsc{Anf}_{A}^{(1)}\wedge s \subseteq f\,\}\Leftrightarrow\\ (\exists a)(\exists s\in\textsc{Anf}_{A}^{(1)})\,s\subseteq f \wedge a\in\mathrm{dom}(G(s))\wedge u\ll\beta(G(s)(a)).\] Since every domain element \(x\) is uniquely determined by the set \(B_{x}\), it follows that \(\Gamma\) is continuous. It remains to show that \(\{\,\langle i,j\rangle\mid\beta_{j}\ll\Gamma(\alpha_{i}^{(1)})\,\}\) is c.e. 
We have that \[\beta_{j}\ll\Gamma(\alpha_{i}^{(1)})\] \[\Leftrightarrow(\exists a)\,\beta_{j}\ll\beta(G(\alpha_{i}^{(1)})(a))\] \[\Leftrightarrow(\exists a)(\exists b)\,\beta_{j}\ll\beta_{b}\wedge b=G(\alpha^{(1)}_{i})(a)\] \[\Leftrightarrow(\exists a)(\exists b)(\exists c)(\exists m)\,\beta_{j}\ll\beta_{b}\wedge b=\alpha^{(1)}_{m}(c)\wedge\operatorname{graph}(\alpha^{(1)}_{m})\subseteq\operatorname{graph}(G(\alpha^{(1)}_{i}))\] \[\Leftrightarrow(\exists a)(\exists b)(\exists c)(\exists m)\,\beta_{j}\ll\beta_{b}\wedge\langle\langle m,c\rangle,b+1\rangle\in\operatorname{graph_{e}}(\lambda\hat{i},n.\ \alpha^{(1)}_{\hat{i}}(n))\ \wedge\] \[\operatorname{graph}(\alpha^{(1)}_{m})\subseteq\operatorname{graph}(G(\alpha^{(1)}_{i})).\] Since the set of all \(\langle i,m\rangle\) with \(\operatorname{graph}(\alpha^{(1)}_{m})\subseteq\operatorname{graph}(G(\alpha^{(1)}_{i}))\) is c.e. by Theorem 6.4 and \(\operatorname{graph_{e}}(\lambda\hat{i},n.\ \alpha^{(1)}_{\hat{i}}(n))\) is c.e. by Lemma 6.1, the set of all \(\langle i,j\rangle\) with \(\beta_{j}\ll\Gamma(\alpha^{(1)}_{i})\) is c.e. as well. Therefore, \(\Gamma\) is computable. Properties (1)-(3) are a consequence of the corresponding properties of \(G\). Assume that the domain \(D\) in the previous result is also algebraic. Then we have for \(u\in B\) and \(x\in D\) that \(u\ll x\) exactly if \(u\sqsubseteq x\). As is readily seen, in this case for every base element \(u\) of the domain one can find an initial segment function in \(\textsc{Anf}^{(1)}_{A}\) that is mapped onto \(u\) under \(\Gamma\). This initial segment map can be chosen in such a way that it has only \(\beta\)-indices of \(\bot\) and \(u\) as values. This makes precise the idea, mentioned at the beginning of this paper, of extending representations as they are used in computable analysis (cf. [48]) to maps from \(\widehat{\mathcal{S}}^{(1)}_{A}\) to domains: initial segment functions are used as names for approximating base elements, or the Scott-open sets they determine, and total functions are names for the limit elements they approximate. By the monotonicity of \(\Gamma\) the relation of approximation between the functions in \(\widehat{\mathcal{S}}^{(1)}_{A}\), that is the names, is transferred to the domain. Next, we will show that via the mapping \(\Gamma\) numberings of \(D_{c}\) can be generated from numberings of \(\mathcal{S}^{(1)}_{A}\). **Theorem 8.10**.: _Let \(A\subseteq\omega\) be a c.e. infinite set and \(D\) be an effectively given domain. Moreover, let \(\theta\) be a numbering of \(\mathcal{S}^{(1)}_{A}\). Define_ \[\eta_{i}\stackrel{{\operatorname{Def}}}{{=}}\Gamma(\theta_{i})\quad(i\in\omega).\] _Then \(\eta\) is a numbering of the computable elements of \(D\) such that:_ 1. _If_ \(\theta\) _satisfies (QGN I) then_ \(\eta\) _meets Condition (A I)._ 2. _If_ \(\theta\) _satisfies (QGN II) then_ \(\eta\) _meets Condition (A II)._ 3. _If_ \(\theta\) _is a quasi-Godel numbering then_ \(\eta\) _is admissible._ Proof.: By Corollary 8.6, \(\Gamma\) maps \(\mathcal{R}^{(1)}\) onto \(D_{c}\). Therefore, the mapping \(\eta\) defined above is a numbering of \(D_{c}\).
(1) Because of the continuity of \(\Gamma\) we have that \[\beta_{i}\ll\eta_{j} \Leftrightarrow\beta_{i}\ll\Gamma(\theta_{j})\] \[\Leftrightarrow\beta_{i}\ll\Gamma(\bigsqcup\{\alpha^{(1)}_{n}\mid\alpha^{(1)}_{n}\sqsubseteq\theta_{j}\,\})\] \[\Leftrightarrow\beta_{i}\ll\bigsqcup\{\Gamma(\alpha^{(1)}_{n})\mid\alpha^{(1)}_{n}\sqsubseteq\theta_{j}\,\}\] \[\Leftrightarrow(\exists n)\,\beta_{i}\ll\Gamma(\alpha^{(1)}_{n})\wedge\alpha^{(1)}_{n}\sqsubseteq\theta_{j}.\] Since \(\Gamma\) is computable, the set \(\{\,\langle i,n\rangle\mid\beta_{i}\ll\Gamma(\alpha^{(1)}_{n})\,\}\) is c.e. By Lemma 6.1 the same is true for \(\{\,\langle n,j\rangle\mid\alpha^{(1)}_{n}\sqsubseteq\theta_{j}\,\}\), if \(\theta\) satisfies (QGN I). Thus, also \(\{\,\langle i,j\rangle\mid\beta_{i}\ll\eta_{j}\,\}\) is c.e. in this case. That is, \(\eta\) meets the (A I) requirement. (3) Let \(\theta\) be a quasi-Godel numbering. We need to construct a function \(v\in\mathcal{R}^{(1)}\) so that \(\theta_{v(i)}\) enumerates \(W^{A}_{i}\) in such a way that \(\theta_{v(i)}\in\mathcal{R}^{(1)}\), if \(W^{A}_{i}\) is not empty. By (QGN I) there is a function \(h\in\mathcal{R}^{(1)}\) enumerating the universal function of \(\theta\). Define \[\widehat{Q}(i,a,z)\Leftrightarrow\pi^{(2)}_{1}(h(a))=\langle i,\pi^{(2)}_{1}((z)_{\operatorname{lth}(z)})\rangle\wedge\pi^{(2)}_{2}(h(a))>0,\] \[Q(i,t,z)\Leftrightarrow(\exists a\leq t)\,\widehat{Q}(i,a,z),\] \[f(i,t,z)\stackrel{{\mathrm{Def}}}{{=}}\pi_{2}^{(2)}(h(\mu a\leq t.\,\widehat{Q}(i,a,z))).\] Now, construct \(k\in\mathcal{R}^{(2)}\) according to Lemma 4.2 so that \(\lambda t.\)\(k(i,t)\) enumerates the extended graph of a function \(g\in\mathcal{R}^{(1)}\) that enumerates \(W_{i}^{A}\), if \(W_{i}^{A}\) is not empty, and the extended graph of the nowhere defined function, otherwise. By Condition (QGN II) there is then a function \(v\in\mathcal{R}^{(1)}\) that has the properties we were looking for. Assume that \(\beta[W_{i}]\) is directed. Then \(W_{i}\) is not empty. As we have seen, \(W\) and \(W^{A}\) are computably isomorphic. Thus, there is some \(s\in\mathcal{R}^{(1)}\) so that \(W_{i}=W_{s(i)}^{A}\). Set \(d\stackrel{{\mathrm{Def}}}{{=}}v\circ s\). Then it follows with Theorem 8.9(3) that \[\eta_{d(i)}=\Gamma(\theta_{d(i)})=\bigsqcup\beta[\mathrm{range}(\theta_{d(i)})]=\bigsqcup\beta[W_{s(i)}^{A}]=\bigsqcup\beta[W_{i}].\] Hence, \(\eta\) satisfies Requirement (A II). As seen in Step (1), \(\eta\) also satisfies (A I). Therefore, \(\eta\) is admissible. (2) Let \(\psi\) be a quasi-Godel numbering of \(\mathcal{S}_{A}^{(1)}\), and let \(\theta\) satisfy (QGN II). By Theorem 7.2 there is some \(g\in\mathcal{R}^{(1)}\) with \(\psi=\theta\circ g\). Let \(\gamma\) be the numbering of \(D_{c}\) defined by \(\gamma_{i}\stackrel{{\mathrm{Def}}}{{=}}\Gamma(\psi_{i})\). Then we have that \(\gamma\leq_{m}\eta\). Since \(\gamma\) is admissible by what was shown in the previous step, it follows with Proposition 8.4 that \(\eta\) satisfies (A II). As we see, via the mapping \(\Gamma\) numberings of \(D_{c}\) with Properties (A I) and/or (A II) can be obtained from numberings of \(\mathcal{S}_{A}^{(1)}\). A natural question now is whether all numberings of \(D_{c}\) with these properties can be obtained in this way. Set \[\widehat{\Gamma}(\theta)_{i}\stackrel{{\mathrm{Def}}}{{=}}\Gamma(\theta_{i}).\] Then \(\widehat{\Gamma}\) is a mapping from the set of all numberings of \(\mathcal{S}_{A}^{(1)}\) into the set of all numberings of \(D_{c}\).
As follows from the definition, \(\widehat{\Gamma}\) is monotone with respect to \(\leq_{m}\). **Theorem 8.11**.: _Let \(A\subseteq\omega\) be a c.e. infinite set and \(D\) be an effectively given domain. Then the following statements hold:_ 1. _For every admissible numbering_ \(\eta\) _of_ \(D_{c}\) _there is a quasi-Godel numbering_ \(\theta\) _of_ \(\mathcal{S}_{A}^{(1)}\) _with_ \(\eta=\widehat{\Gamma}(\theta)\)_._ 2. _For every numbering_ \(\eta\) _of_ \(D_{c}\) _satisfying (A I) there is a numbering_ \(\theta\) _of_ \(\mathcal{S}_{A}^{(1)}\) _satisfying (QGN I) with_ \(\eta=\widehat{\Gamma}(\theta)\)_._ Proof.: (1) Let \(\psi\) be a quasi-Godel numbering of \(\mathcal{S}_{A}^{(1)}\) and \(\eta\) an admissible numbering of \(D_{c}\). By the previous theorem, \(\widehat{\Gamma}(\psi)\) is an admissible numbering as well. Hence, \(\eta\) and \(\widehat{\Gamma}(\psi)\) are computably isomorphic, by Proposition 8.4. Thus, there is a computable permutation \(g\in\mathcal{R}^{(1)}\) with \(\eta=\widehat{\Gamma}(\psi)\circ g\), that is, \(\eta_{i}=\Gamma(\psi_{g(i)})\). Let \(\theta\stackrel{{\mathrm{Def}}}{{=}}\psi\circ g\). Then also \(\theta\) is a quasi-Godel numbering. Moreover, \(\eta=\widehat{\Gamma}(\theta)\). (2) As has already been mentioned, \(D_{c}\) has an admissible numbering, say \(\gamma\). Let \(\psi\) be the quasi-Godel numbering with \(\gamma=\widehat{\Gamma}(\psi)\), existing by Statement (1). If \(\eta\) is a numbering of \(D_{c}\) satisfying (A I), then \(\eta\leq_{m}\gamma\), by Proposition 8.4. Therefore, there is some \(g\in\mathcal{R}^{(1)}\) with \(\eta=\gamma\circ g\). Define \(\theta\stackrel{{\mathrm{Def}}}{{=}}\psi\circ g\). Then it follows with Proposition 7.1 that \(\theta\) satisfies Condition (QGN I). Moreover, \(\widehat{\Gamma}(\theta)=\eta\). The question whether an analogue of Statement (2) is true for numberings of \(D_{c}\) that satisfy Requirement (A II) remains open. We call numberings of \(D_{c}\) that satisfy (A I) _computable_. As in the case of the numberings of \(\mathcal{S}_{A}^{(1)}\) the \(m\)-equivalence classes of the numberings of \(D_{c}\) form a Rogers semi-lattice in which the \(m\)-equivalence classes of the computable numberings form a Rogers sub-semi-lattice with the class of admissible numberings as greatest element. Let again \(\eta\oplus\gamma\) be the direct sum of the numberings \(\eta\) and \(\gamma\) of \(D_{c}\). Then it follows for numberings \(\psi\) and \(\theta\) of \(\mathcal{S}_{A}^{(1)}\) that \[\widehat{\Gamma}(\psi\oplus\theta)=\widehat{\Gamma}(\psi)\oplus\widehat{\Gamma}(\theta).\] Moreover, let \(\widehat{\Gamma}_{=}\) be the quotient map, that is, \(\widehat{\Gamma}_{=}([\psi])=[\widehat{\Gamma}(\psi)]\). **Corollary 8.12**.: \(\widehat{\Gamma}_{=}\) _is a homomorphism of the Rogers semi-lattice of the numberings of \(\mathcal{S}_{A}^{(1)}\) in the Rogers semi-lattice of the numberings of \(D_{c}\) which maps the Rogers sub-semi-lattice of computable numberings of \(\mathcal{S}_{A}^{(1)}\) onto the Rogers sub-semi-lattice of computable numberings of \(D_{c}\)._ ## 9 Conclusion Non-termination is a typical phenomenon of algorithms. It cannot be read off the program text whether and in which cases it will happen. Approaches to avoid it have been studied and require advanced methods.
The question we dealt with in this paper was whether there is a class of algorithms that compute the total (computable) functions, in which one is actually only interested, and such that, if non-termination occurs, the set of such inputs has a well-defined structure. We presented such a class. The typical algorithm we had in mind when starting this research was list searching or the computation of approximations of infinite objects like the real numbers. The algorithms in this class are such that their domain of definition is either the set of all natural numbers, or a finite initial segment of this set of a length in a given set of possible lengths. It is shown that even though besides the total functions there are now only finite partial functions, a rich computability theory can be developed in which the same important results hold as in the classical theory dealing with all possible algorithms. In particular, the theory of computably enumerable sets remains unchanged, except that the domain characterisation of these sets is no longer useful. What is presented is a development of computability theory based on the notion of enumeration. The main ingredient in the new approach is the notion of a quasi-Godel numbering, which takes on the role of Godel numberings in the classical approach. Every Godel numbering is also a quasi-Godel numbering. Besides developing computability theory on the basis of quasi-Godel numberings, meta-investigations have been carried out: the new quasi-Godel numbered function classes are a retract of the Godel numbered class of all partial computable functions. Moreover, each of these classes is the class of computable elements of an effectively given algebraic domain (in the sense of Scott-Ershov) that can be computably mapped onto any other effectively given domain so that the finite functions in the function classes considered here are mapped to base elements. This extends the use of representations in computable analysis in such a way that now also the basic open sets used for approximations obtain a finite function as name. Via the mapping every quasi-Godel numbering induces an admissible numbering of the computable domain elements. It was shown that every admissible numbering can be obtained in this way.
2310.13687
A new purpose for the $W$-boson mass measurement: searching for New Physics in lepton+$MET$
We show that the $m_W$ measurement is a direct probe of New Physics (NP) contributing to lepton and missing transverse momentum ($\ell+MET$), independently from indirect tests via the electroweak fit. Such NP modifies the kinematic distributions used to extract $m_W$, necessitating a simultaneous fit to $m_W$ and NP. This effect can in principle bias the $m_W$ measurement, but only to a limited extent for our considered models. Given that, we demonstrate that the agreement at high-precision with SM-predicted shapes results in bounds competitive to, if not exceeding, existing ones for two examples: anomalous $W$ decay involving a $L_{\mu} - L_{\tau}$ gauge boson and $\tilde{\nu}_{l} \tilde{l}$ production in the MSSM.
Kaustubh Agashe, Sagar Airen, Roberto Franceschini, Doojin Kim, Ashutosh V. Kotwal, Lorenzo Ricci, Deepak Sathyan
2023-10-20T17:46:24Z
http://arxiv.org/abs/2310.13687v1
# A new purpose for the \(W\)-boson mass measurement: searching for New Physics in lepton+\(\textit{MET}\) ###### Abstract We show that the \(m_{W}\) measurement is a _direct_ probe of New Physics (NP) contributing to \(\ell+\textit{MET}\), independently from _indirect_ tests via the electroweak fit. Such NP modifies the kinematic distributions used to extract \(m_{W}\), necessitating a simultaneous fit to \(m_{W}\) and NP. This effect can in principle bias the \(m_{W}\) measurement, but only to a limited extent for our considered models. Given that, we demonstrate that the agreement at high precision with SM-predicted shapes results in bounds competitive to, if not exceeding, existing ones for two examples: anomalous \(W\) decay involving a \(L_{\mu}-L_{\tau}\) gauge boson and \(\tilde{\nu}_{l}\tilde{l}\) production in the MSSM. + Footnote †: preprint: MI-HET-817 ## I Introduction The mass of the \(W\) boson plays a crucial role in our understanding of nature. The discrepancy between the recent and most precise measurement by CDF [1] and the SM prediction might already be a hint of new physics (NP) beyond the Standard Model (BSM). Theoretical explanations commonly invoke new contributions to the electroweak (EW) fit [2] in order to shift the value of the SM prediction (see for instance [3; 4]) and explain the anomaly. Yet, the more recent re-measurement by ATLAS [5; 6] adds to the puzzle, confirming the SM-predicted value and the previous measurements by LHCb, D\(\emptyset\) and LEP [7; 8; 9]. Whether in the future the CDF anomaly will be confirmed cannot be foreseen. The only fact that we have today is the striking precision of \(10^{-4}\) of these measurements and of the corresponding theory SM predictions. This precision might even improve in the near future due to an ongoing intense experimental [5; 10] and theoretical effort (see e.g. Refs. [11; 12; 13; 14; 15; 16; 17] for recent works). The \(m_{W}\) experimental value is extracted from the simultaneous fit of different measured kinematic distributions (see below) in leptonic decays of singly-produced \(W\)-bosons to the SM predictions. Both ATLAS and CDF find perfect agreement with their best-fit SM distributions. We show in this _letter_ that the data used for the \(m_{W}\) measurement can simultaneously be a powerful direct probe for any NP that contributes to the same final state. The key observation is that NP produces kinematic distributions that are sufficiently different with respect to those in the SM. Hence, the same analysis can be used for the extraction of both \(m_{W}\) and NP parameters. The correct procedure thus requires a global fit, which might in principle shift the measurement of \(m_{W}\), with NP providing new nuisance parameters. This paradigm is general, having already been attempted in [18; 19; 20; 21; 22; 23; 24] for the top quark, in the context of NP copiously produced via strong interactions. Fainter signals of NP charged only under the electroweak interaction are more challenging. Yet we will show how the extraordinary precision of the \(m_{W}\) measurement can put competitive bounds on motivated new physics scenarios, and in some cases even _exceed_ present bounds, e.g. those for long-sought SUSY sleptons. This strategy is in addition to the classic test based on the EW fit of the SM to which we are accustomed since LEP [25]. In this _letter_, we focus solely on the \(m_{W}\) measurement.
We classify the possible NP that can contaminate the measured sample and quantify the sensitivity to two concrete, well-known BSM scenarios (see Fig. 1). Figure 1: NP contributions to the \(W\)-boson mass sample in the \(\ell+\textit{MET}\) channel. Left: invisibly-decaying \(L_{\mu}-L_{\tau}\) \(Z^{\prime}\)-boson. Right: slepton-sneutrino production in the MSSM. ## II Invisible new physics behind the semi-invisible W-boson The \(W\)-boson mass measurement is special. The remarkable precision, reached by hadron colliders, relies only on the partially visible leptonic decays. The masses of other heavy SM bosons are instead extracted from fully visible and clean final states (e.g., \(h\to\gamma\gamma\), \(Z\to\ell^{+}\ell^{-}\)), hence resonance reconstruction is possible in a narrow region. For hadronic \(W\)-boson decays, resonance reconstruction is plagued by the challenges of QCD observables. The semi-invisible final state of leptonic \(W\)-decays, namely \(\ell+\textit{MET}\), is cleaner, but it presents a good hideout for invisible NP. Given that the \(W\)-boson decay cannot be fully reconstructed, the measurement of \(m_{W}\) is the result of the fit to the lepton \(p_{T}^{\ell}\) and the transverse mass \(m_{T}\) distributions.1 Hence, any BSM model that contributes to the same final state, modifying these kinematic distributions, can affect the \(m_{W}\) measurement. Such NP can be classified in three possibilities: Footnote 1: CDF also fits the missing transverse momentum \(p_{T}^{\rm miss}\) distribution. (A) anomalous \(W\)-boson decay, (B) anomalous \(W\)-boson production, (C) \(\ell+\textit{MET}\) not from an on-shell \(W\)-boson, \(\ell=(e,\mu)\). The first (second) possibility includes all BSM models that modify the \(W\)-boson decay (production), yet resulting in \(\ell+\textit{MET}\). Option (C) collects all BSM models that can produce an \(\ell+\textit{MET}\) final state, without the involvement of any on-shell \(W\)-boson. This category includes the production of new particles, decaying into \(\ell+\textit{MET}\), and new interactions among quarks/gluons and leptons.2 Footnote 2: Examples of this are dim-6 quark-lepton four fermion operators that mediate \(qq\to\ell\,\nu_{\ell}\) processes. The latter are usually very well constrained by high-energy measurements [28; 29; 30]. Here we explore two simple, yet relevant, case studies that cover options (A) and (C). In Sec. III, we focus on anomalous \(W\)-boson decay in the invisibly-decaying \(L_{\mu}-L_{\tau}\) gauge boson scenario (Fig. 1 left). This represents a proof-of-principle of our idea, highlighting the relevant points with rather simple phenomenology. Nevertheless, we find that the \(m_{W}\) measurement represents a competitive probe for this model (see Fig. 2a). In Sec. IV we focus on category (C), using \(\tilde{\nu}\tilde{\ell}\) production in SUSY as an example. This production mechanism is not currently investigated at the LHC. Remarkably, our results in Fig. 2b show that the \(m_{W}\) measurement can cover an unexplored parameter space of slepton searches. In a follow-up paper [31], we will study additional examples of category (A) and an illustration of category (B): a \(Z^{\prime}\)-boson gauging baryon number (see [32] and references therein). Overall, our two papers thus represent a _comprehensive_ study of probing NP giving \(\ell+\textit{MET}\) using \(m_{W}\) analysis. Figure 2: LHC 95% CL projected sensitivity to (a) \(L_{\mu}-L_{\tau}\) and (b) MSSM slepton-sneutrino production. All the lines include detector simulations.
Pileup (\(\langle\mu\rangle=50\)), simulated through the dedicated Delphes ATLAS card, is included unless indicated otherwise. In the SUSY projections, we include the no pileup (\(\langle\mu\rangle=0\)) lines only for the competitive run-2 projections. Present bounds are obtained from [26] and [27] respectively for the left and right figure. Ref. [33] studied a specific example of category (B) only. Moreover, in the following, we describe a more general approach than Ref. [33] for the associated analyses. ## III A proof-of-principle: \(L_{\mu}-L_{\tau}\) gauge boson The first model that we consider is the \(L_{\mu}-L_{\tau}\) \(Z^{\prime}\)[34]: \[\mathcal{L}_{\rm int}=g_{Z^{\prime}}Z^{\prime}_{\rho}J^{\rho}_{\mu-\tau}+g_{D}Z^{\prime}_{\rho}J^{\rho}_{D}\,, \tag{1}\] where \(g_{Z^{\prime}}\) and \(g_{D}\) are the couplings of the \(Z^{\prime}\)-boson to SM and dark-sector states, respectively. The \(U(1)_{L_{\mu}-L_{\tau}}\) current reads \[J^{\rho}_{\mu-\tau}\ =\ (\bar{\nu}_{\mu}\gamma^{\rho}\nu_{\mu}+\bar{\mu}\gamma^{\rho}\mu-\bar{\nu}_{\tau}\gamma^{\rho}\nu_{\tau}-\bar{\tau}\gamma^{\rho}\tau). \tag{2}\] The term \(Z^{\prime}_{\rho}J^{\rho}_{D}\) describes the interaction of the \(Z^{\prime}\)-boson with some invisible, unspecified dark-sector states. The key assumptions, that \(g_{D}\gg g_{Z^{\prime}}\) and the dark sector contains states sufficiently lighter than \(m_{Z^{\prime}}\), guarantee that the \(Z^{\prime}\)-boson decays predominantly invisibly. This model has been extensively studied as a possible portal to dark matter or as an extension of the SM. The 2-dimensional parameter space \((g_{Z^{\prime}},m_{Z^{\prime}})\) is tested by a variety of searches, from K-/B-factories, \(g-2\), to neutrino beam-dump experiments [26; 35].3 In this model belonging to category (A), the \(W\)-boson has a 3-body decay into \(\mu\,\nu_{\mu}\,Z^{\prime}\) (Fig. 1 left), modifying the kinematic distributions of the \(\ell+\textit{MET}\) final state.4 Footnote 3: Additional constraints arise when \(m_{Z^{\prime}}\) is of Stuckelberg origin [36]. Footnote 4: Additional signal events come from \(\tau\to Z^{\prime}\mu\,\nu_{\mu}\,\nu_{\tau}\). For simplicity we don’t include them in our analysis. We obtain the kinematic distributions through a Monte Carlo (MC) simulation via MadGraph5_aMC@NLO [37] + PYTHIA8.212 [38] + Delphesv3.4 [39] (ATLAS card). We employed LHAPDF [40], PDF ID:244800 [41]. The 3-body decay (versus 2-body) softens the \(p_{T}\) and \(m_{T}\) distributions, as seen in Fig. 3 for a benchmark value of \((m_{Z^{\prime}},g_{Z^{\prime}})=(10\,\,{\rm GeV},0.12)\).5 Footnote 5: NP also modifies the \(W\)-boson total decay width. This effect is expected to be negligible given the projected bound on the NP parameters. Therefore we fix the width to its SM value. The effect of the width on the \(m_{W}\) determination within the SM is only a few MeV [5; 15]. As shown in Fig. 3, for \(g_{Z^{\prime}}\sim\mathcal{O}(0.1)\), the expected \(S/B\) ratio is \(\mathcal{O}(10^{-3})\). Sensitivity to these effects strongly relies on the various sources of uncertainty, whose control is exactly the main target of the experimental collaborations, which reached percent [1] and even sub-percent [5; 6] uncertainties in measuring \(m_{W}\). Also backgrounds are extensively studied, and they are only a few % in the region of interest. In this _letter_ we will not attempt a complete study of the various sources of uncertainties.
We just comment on the possible effect of our NP hypothesis on the sample of \(Z\to\ell\ell\) events which are heavily used for detector calibration [1; 6] and for tuning the boson production model on data [15]. Thus a contamination of NP in the \(Z\to\ell\ell\) sample might affect the calibration of the MCs, "calibrating away" signs of NP [42]. However, by isolating pure \(Z\)-boson events with appropriate kinematic cuts, such as those imposed by ATLAS [6]: \(80<m_{\ell\ell}/{\rm GeV}<100\), the possible contamination of NP in the calibration sample is limited to \(\mathcal{O}(10^{-4})\), still for \(g_{Z^{\prime}}\sim\mathcal{O}(0.1)\). We estimate the sensitivity and the impact of our NP hypothesis on the \(m_{W}\) measurement through a binned \(\chi^{2}\) analysis for the \(p_{T}^{\ell}\) and \(m_{T}\) distributions. Our analysis is aligned as much as possible with the ATLAS measurement [5; 6], only slightly extending the fit range aiming at maximal sensitivity (see Tab. 1). We then construct the following \(\chi^{2}\): \[\chi^{2}(\Delta_{m_{W}},\Delta_{\rm NP})=\sum_{i=1}^{N_{\rm bins}} \frac{\left(N^{i}_{ev}(\Delta_{m_{W}},\Delta_{\rm NP})-\overline{N}^{i}_{ev} \right)^{2}}{\sigma_{stat}^{2}+\sigma_{sys}^{2}}\,, \tag{3}\] where \(N^{i}_{ev}(\Delta_{m_{W}},\Delta_{\rm NP})\) is the expected number of events in the the bin \(i\) as function of \(m_{W}\) (\(\Delta_{m_{W}}=m_{W}-\overline{m}_{W}\)) and the NP parameters. We centered our \(\chi^{2}\) at \(\Delta_{\rm NP}=0\) and \(\Delta_{m_{W}}=0\) because we are assuming data to realize the SM expectation for the W-boson mass \(\overline{m}_{W}\). We stress that we are testing the New Physics hypothesis with no prior on \(\overline{m}_{W}\), as both \(\Delta_{\rm NP}\) and \(m_{W}\) are floated. On the contrary, the authors of [33]_fixed_\(m_{W}\) in the hypothesis to the EW fit prediction. The simultaneous fit to \(m_{W}\) and NP that we perform here is thus a more general test of NP and has the added value to be independent of the EW fit results and the assumptions therein. The qualitatively new aspect of \(\Delta_{m_{W}}\,\) being a floated parameter in Eq. (3) implies that with the same analysis we extract \(m_{W}\) and test NP. The 2-dimensional fit in the \((\Delta_{m_{W}},\Delta_{\rm NP})\) is reported in Fig. 4 for \(m_{Z^{\prime}}=10\) GeV. By assuming 0.5% per-bin uncorrelated systematics and including the effect of pileup through Delphes, the ATLAS measured uncertainty is roughly reproduced.6 Pileup has an impact on the \(m_{T}\) distribution and on the resulting \(m_{W}\) sensitivity. The \(p_{T}^{\ell}\) distribution, on the contrary, is largely insensitive to pileup, hence we use it to draw more firm conclusions on features of our 2D-fit. Footnote 6: The average number of pileup events per bunch crossing is \(\langle\mu\rangle=50\). The systematics on the kinematic distributions shown in [5] are below 0.5%. Therefore, we also consider per-bin systematics of 0.1%. The expected sensitivity to \(m_{W}\) (at zero \(g_{Z^{\prime}}\)) is slightly stronger than the current ATLAS 7 TeV \(\mathcal{L}=4.6\,\,\mathrm{fb}^{-1}\) measurement [5]. This is mainly because we are not including any source of correlated systematics, and we are assuming much larger statistics from a 13 TeV run with \(\mathcal{L}=300\,\,\mathrm{fb}^{-1}\). The distortion of the \(p_{T}^{\ell}\) exclusion line (blue) at large values of \(g_{Z^{\prime}}\) implies a preference towards positive \(\Delta_{m_{W}}\,\). 
This suggests that NP might in principle impact the sensitivity to \(m_{W}\), possibly producing a shift in the extracted value and/or affecting the estimate of the associated uncertainty on \(m_{W}\). Yet, the effect shown in Fig. 4 is limited to only \(\sim 10\,\,\mathrm{MeV}\). However, a quantitative assessment of this effect requires the inclusion of the proper experimental setup and is beyond the scope of this _letter_. The sensitivity to \(g_{Z^{\prime}}\) at \(\Delta_{m_{W}}=0\) is only marginally affected by pileup, showing the robustness of the sensitivity to NP. For completeness, we report in Fig. 5a in the supplemental material an analogous study for CDF [1]. In this case, the effect of NP on the \(m_{W}\) determination is less pronounced, due to a sharper Jacobian peak related to the better control of the hadronic activity at CDF, which anchors the \(m_{W}\) fit more robustly. We now turn to the test of the NP hypothesis. Assuming no prior knowledge on \(\overline{m}_{W}\), the correct procedure to put bounds on NP is to marginalize on \(\Delta m_{W}\) for each value of the NP parameters. This is shown in Fig. 2a for the LHC (\(\mathcal{L}=300\,\,\mathrm{fb}^{-1}\)) sensitivity projection. Prior knowledge on \(\overline{m}_{W}\) (either from other measurements or from theory predictions) might impact the sensitivity to NP, as shown in Fig. 4. For this analysis, positively and negatively charged-muon events are added together, and the \(\chi^{2}\) values for \(p_{T}^{\ell}\) and \(m_{T}\) are combined without correlation. Here, the sensitivity projections for CDF are also reported. The reach for \(m_{Z^{\prime}}\simeq 10\) GeV is competitive with the best probe for this model from a dedicated experiment (CCFR) [26; 43]. Yet, it is remarkable that for a 10 GeV \(Z^{\prime}\)-boson, the \(m_{W}\) measurement has the power to probe couplings \(\sim few\times 0.01\), provided sufficient control of the systematics. Interestingly, less constrained models such as the "neutrinophilic scalar" of [44] or the "Dirac neutrino portal" [45] fall in category (A). For the neutrinophilic scalar, we expect the \(m_{W}\) measurement to be the best probe [31]. ## IV MSSM: Slepton-sneutrino production We now turn to the minimal supersymmetric standard model (MSSM) [46], which offers a simple irreducible "background" for the \(m_{W}\) measurement: "left-handed" \(SU(2)_{L}\) doublet slepton-sneutrino production, with subsequent decay into a lepton plus only invisible particles (see Fig. 1 right), \[pp\to\tilde{\ell}(\to\ell\,\tilde{\chi}_{1}^{0})\,\tilde{\nu}_{\ell}\,. \tag{4}\] In this scenario, both the sneutrino and neutralino are invisible, and either one could be the lightest stable particle (LSP).7 For simplicity, we assume that the other superpartners, including \(SU(2)_{L}\) singlet - or right-handed - sleptons, are heavy, thus having negligible cross-sections at the LHC. Footnote 7: When the lightest neutralino \(\tilde{\chi}_{1}^{0}\) is the LSP, \(\tilde{\ell}\to\ell\,\tilde{\chi}_{1}^{0}\), and \(\tilde{\nu}\to\nu\,\tilde{\chi}_{1}^{0}\), as illustrated in Fig. 1, produces the \(\ell+\mathit{MET}\) final state. If the sneutrino is the LSP (not shown), then \(\tilde{\chi}_{1}^{0}\to\tilde{\nu}\,\nu\) also maintains the \(\ell+\mathit{MET}\) final state. Sleptons lighter than 100 GeV are excluded by LEP [47; 48; 49; 50; 51]. Sleptons heavier than the LEP bound have negligible cross-section at the Tevatron so we do not consider
Sleptons heavier than the LEP bound have negligible cross-section at the Tevatron so we do not consider Figure 4: 68% CL projected sensitivity to \(L_{\mu}-L_{\tau}\) at LHC (ATLAS) (\(m_{Z^{\prime}}=10\,\mathrm{GeV}\)). CDF in this section. LHC searches for di-sleptons [27; 52] are sensitive to sleptons above the LEP bounds but suffer when the sleptons and \(\tilde{\chi}^{0}\) are close by in mass. In particular, when the mass gap \(m_{\tilde{\ell}}-m_{\tilde{\chi}_{0}}\sim m_{W}\), the lepton \(p_{T}\) resembles that of the lepton from SM \(W\)-boson decay. This compressed region of parameter space is dominated by SM events and requires a dedicated analysis. In [53; 54] it has been proposed to use precision measurements to disentangle \(WW\) events from di-slepton production. Yet, there is still some uncovered gap in the parameter space in the experimental results (see our summary of present constraints in Fig. 2b). Addressing this shortcoming of the present searches by filling this gap is a main result of this _letter_. The phenomenology of the process in eq. (4) belongs to category (C), since no on-shell \(W\)-boson is produced (see Fig. 1). As shown in Fig. 3, NP produces a rather flat and extended \(m_{T}\) distribution with a rising S/B ratio at "high-\(m_{T}\)", since the process is not initiated by the decay of a resonance. The contamination in the \(Z\)-boson sample due to \(pp\to\bar{\ell}\bar{\ell}\to\bar{\ell}\ell\tilde{\chi}_{0}\tilde{\chi}_{0}\) is limited to \(\mathcal{O}(10^{-5})\). For this model, we follow the same procedure as in Sec. III of marginalizing on \(\Delta_{m_{W}}\) for varying NP parameters. For each point on the \(m_{\tilde{\ell}}-m_{\tilde{\chi}_{0}^{0}}\) plane, \(m_{W}\) is varied as an input in the template, and the minimum \(\chi^{2}\) is obtained from the fit. The \(m_{W}\) determination is largely governed by the peak positions of \(p_{T}^{\ell}\) and \(m_{T}\) spectra. Therefore, the rather flat kinematic distributions of NP contributions make a milder impact on the \(m_{W}\) measurement than what is shown in Fig. 4. Sensitivity projections are reported in Fig. 2b as functions of \((m_{\tilde{\ell}},m_{\tilde{\chi}_{0}})\). The sneutrino mass is fixed at the lowest allowed value in the MSSM, assuming the large \(\tan\beta\) limit [46]. Two sets of expected sensitivities are reported in Fig. 2b, corresponding to the inclusion or not of pileup. In both cases, the fitting range (see Tab. 1) is chosen to cover part of the unexplored parameter space. Extending the range to "high-\(m_{T}\)", still keeping sufficient control of the systematics, might improve the sensitivity, as shown in Fig. 3. However, far from the "\(m_{W}\)" region, systematics becomes more challenging. This is caused, for instance, by the limited \(Z\)-boson sample available for calibrations, or by the increasing backgrounds. The study of systematics outside of the range presently used for each kinematic distribution employed in the \(m_{W}\) measurement can only be carried out by the experimental collaborations. Here we are pointing out the huge gain in sensitivity to NP that can be obtained by enlarging the fitting range. Ideally ATLAS and CMS experiments will find the best range of each kinematic variable for which the experiment can keep systematics under control so as to maximize the sensitivity to NP. 
A major result of ours is that the same analysis used for the \(m_{W}\) measurement, with only a slightly extended fitting range, can put new bounds and potentially discover new physics in an unexplored parameter space of MSSM. ## V Conclusion New physics resulting in \(\ell+\mathit{MET}\) is an irreducible "background" for the \(m_{W}\) measurement. The kinematic distributions arising from NP do not match those of the SM \(W\)-boson. Consequently, a simultaneous fit to NP parameters and \(m_{W}\) is required to capture this contamination of NP. This more general procedure also tests the robustness of the extraction of \(m_{W}\). Concerning the sensitivity to NP, the inclusion of possible NP worsens the goodness of the fit of the data to (pure) SM template. This results in strong bounds on the NP hypothesis. Yet, given the underlying uncertainties, the distributions contaminated by NP can also modify the extracted value of \(m_{W}\) (Fig. 4). In this _letter_, we followed this path through two examples: anomalous \(W\)-boson decay via an invisible \(L_{\mu}-L_{\tau}\)\(Z^{\prime}\)-boson and slepton-sneutrino production in the MSSM. We find that the LHC, provided sufficient control of the systematics, is potentially sensitive to an uncovered parameter space of the MSSM and provides a competitive probe for the \(L_{\mu}-L_{\tau}\)\(Z^{\prime}\)-boson, as shown in Fig. 2. A faithful assessment of this effect requires precise simulations of the experimental environment. The paradigm that we follow in this _letter_ is general and applies to all NP scenarios producing \(\ell+\mathit{MET}\), pinpointed in Sec.II. This is postponed to a future publication [31]. ###### Acknowledgements. The authors would like to thank Alberto Belloni, Boditha Jayatilaka, Rafael Lopes de Sa, Sarah Eno, Tao Han, Philip Harris, Jakub Kremer, Patrick Meade, Federico Meloni, Javier Montejo Berlingen, Pier Francesco Monni, Felix Yu, Gustavo Marques-Tavares for discussions. The work of K. A., S. A., L. R. and D. S. is supported by NSF Grant No. PHY-2210361 and by the Maryland Center for Fundamental Physics. The work of D. K. is supported by the DOE Grant No. DE-SC0010813. The work of A. V. K. is supported by the \begin{table} \begin{tabular}{c|c|c|c} & ATLAS [5; 6] (\(\mu\)) & \(\tilde{\ell}_{\mu}\tilde{\nu}_{\mu}\) & \(L_{\mu}-L_{\tau}\) \\ \hline \(p_{T}^{\ell}\) (GeV) & \(>30\) (analysis) & \(>30\) & \(>20\) \\ & \(>18\) (trigger) & & \\ \(p_{T}^{\rm miss}\) & \(>30\) & \(>30\) & \(>20\) \\ \(m_{T}\) (GeV) & \(>60\) & \(>60\) & \(>40\) \\ \(|\bar{u}_{T}|\) (GeV) & \(<30\) & \(<30\) & \(<30\) \\ \(m_{T}\) range (GeV) & \([60,100]\) & \([60,120]^{*}\) & \([40,100]\) \\ & & \([60,140]\) & \\ \(p_{T}^{\ell}\) range (GeV) & \([30,50]\) & \([30,60]^{*}\) & \([20,50]\) \\ & & \([30,70]\) & \\ \end{tabular} \end{table} Table 1: Kinematic range considered for our fit. \(\vec{u}_{T}\) is the hadronic recoil vector. The range with \(*\) is considered when we include no pileup effects. We construct bins of 2 GeV for \(m_{T}\) and 1 GeV for \(p_{T}^{\ell}\)[5]. DOE Grant No. DE-SC0010007. The work of R. F. is partially supported by Ministero dell'Universita e della Ricerca MUR under the grant PRIN 202289JEW4.
2307.02345
LLQL: Logistic Likelihood Q-Learning for Reinforcement Learning
Modern reinforcement learning (RL) can be categorized into online and offline variants. As a pivotal aspect of both online and offline RL, current research on the Bellman equation revolves primarily around optimization techniques and performance enhancement rather than exploring the inherent structural properties of the Bellman error, such as its distribution characteristics. This study investigates the distribution of the Bellman approximation error through iterative exploration of the Bellman equation with the observation that the Bellman error approximately follows the Logistic distribution. Based on this, we proposed the utilization of the Logistic maximum likelihood function (LLoss) as an alternative to the commonly used mean squared error (MSELoss) that assumes a Normal distribution for Bellman errors. We validated the hypotheses through extensive numerical experiments across diverse online and offline environments. In particular, we applied the Logistic correction to loss functions in various RL baseline methods and observed that the results with LLoss consistently outperformed the MSE counterparts. We also conducted the Kolmogorov-Smirnov tests to confirm the reliability of the Logistic distribution. Moreover, our theory connects the Bellman error to the proportional reward scaling phenomenon by providing a distribution-based analysis. Furthermore, we applied the bias-variance decomposition for sampling from the Logistic distribution. The theoretical and empirical insights of this study lay a valuable foundation for future investigations and enhancements centered on the distribution of Bellman error.
Outongyi Lv, Bingxin Zhou
2023-07-05T15:00:29Z
http://arxiv.org/abs/2307.02345v4
# LLQL: Logistic Likelihood Q-Learning for Reinforcement Learning ###### Abstract Modern Reinforcement learning (RL) can be categorized into online and offline variants. As a pivotal aspect of both online and offline RL, current research on the Bellman equation revolves primarily around optimization techniques and performance enhancement rather than exploring the inherent structural properties of the Bellman error, such as its distribution characteristics. This study investigates the distribution of the Bellman approximation error in both online and offline settings through iterative exploration of the Bellman equation. We observed that both in online RL and offline RL, the Bellman error conforms to a _Logistic distribution_. Building upon this discovery, this study employed the Logistic maximum likelihood function (\(\mathrm{LLoss}\)) as an alternative to the commonly used MSE Loss, assuming that Bellman errors adhere to a normal distribution. We validated our hypotheses through extensive numerical experiments across diverse online and offline environments. In particular, we applied corrections to the loss function across various baseline algorithms and consistently observed that the loss function with Logistic corrections outperformed the MSE counterpart significantly. Additionally, we conducted Kolmogorov-Smirnov tests to confirm the reliability of the Logistic distribution. This study's theoretical and empirical insights provide valuable groundwork for future investigations and enhancements centered on the distribution of Bellman errors. ## 1 Introduction Modern Deep Reinforcement Learning (RL) has witnessed remarkable advancements in diverse applications, encompassing strategy games (Mnih et al., 2013; Kaiser et al., 2019) to Capacitated Vehicle Routing Problem (CVRP) problems (Kwon et al., 2020; Hottung et al., 2021; Bi et al., 2022). RL operates by guiding an agent to actively interact with an environment through a series of actions, aiming to maximize the expectation of rewards over time. The cumulative reward concerning the current state is captured by the _Bellman equation_(Bellman, 1954). Although the Bellman equation's recursive nature theoretically guides conventional RL towards optimal or near-optimal solutions, its computational demands raise concerns, especially when dealing with extensive state and action spaces (Patterson et al., 2022). In the realm of online RL, _Soft Actor Critic_ (SAC), introduced by Haarnoja et al. (2018, 2018), incorporates the soft Bellman operator to enhance the overall reward and improve model performance and stability, catalyzing significant advancements in RL techniques (Christodoulou, 2019; Ward et al., 2019). On a parallel front, Kumar et al. (2020) found that offline RL has underscored concerns regarding substantial overestimations in action (Q-value) estimations. This insight prompted subsequent developments of the Conservative Q-Learning (CQL) framework, sparking renewed interest in refining offline RL methodologies (Bayramoglu et al., 2021; Lyu et al., 2022; Kostrikov et al., 2021; Garg et al., 2023). The conventional practice of employing Bellman equations for Q-iterations has gradually waned in modern RL discourse. Instead, a shift of preference has been observed in updating the iterative Q-function with the maximum-entropy policy to ensure robust modeling and mitigate estimation errors using the Bellman operator (Ziebart, 2010). SAC deploys an auxiliary policy network to circumvent intractable estimations over log-partitioned Q-values. 
More recently, Extreme Q-Learning (XQL) (Garg et al., 2023) defines a novel sample-free objective towards optimal soft-value functions in the maximum entropy RL setting, thereby obviating the necessity for conventional network itera tions. These frameworks mark a significant departure from established practices and offer exciting prospects for advancements in RL optimization techniques (Hansen-Estruch et al., 2023; Hejna and Sadigh, 2023). In parallel with the evolution of optimization techniques, researchers have exhibited a substantial interest in minimizing the _Bellman error_(Baird, 1995), a metric denoting the disparity between the current state-action value estimation and the value outlined by the Bellman equation. The objective is to precisely represent the value function of state-action pairs under the current policy (Geist et al., 2017). Following these efforts, researchers have strived to modify the objective function (Feng et al., 2019; Dai et al., 2018; Fujimoto et al., 2022) or optimize the update rules (Bas-Serrano et al., 2021; Gong et al., 2020) to minimize the Bellman error. However, despite various attempts to achieve an adequate policy by indirectly addressing the distribution of the Bellman error, there lacks a straightforward analysis of the main properties of the Bellman error, particularly in terms of exploring more suitable error distributions beyond the normal distribution. To the best of our knowledge, this study presents the first comprehensive exploration rooted in the Logistic distribution of the Bellman error. Drawing inspiration from Garg et al. (2023), we define the _Gumbel error_ to depict the gap between the estimation and true values of the \(Q\) function, which yields a Logistic distribution character for the Bellman error within the realm of online RL. We rigorously derived that under certain conditions, the Bellman error is inherently a biased estimate, additionally, there exists an irreducible, artificially uncontrollable error. In such circumstances, the selection of an appropriate distribution type becomes crucial, rendering the traditional assumption of mean square error no longer suitable. This proposition is in line with Bas-Serrano et al. (2021). We further extend our examination to the loss of the \(V\) function under the online condition and found that it also conforms to a Logistic distribution. Empirical validation of our theoretical propositions spans \(8\) distinct online environments and \(9\) offline environments, encompassing both the empirical distribution of the Bellman error and the performance of trained networks. The results unveil a robust preference for the Logistic distribution of Bellman errors within online RL, surpassing both Gumbel and Gaussian distributions. ## 2 Preliminaries RL explores the expected cumulative reward by a Markov decision process defined by a tuple \((\mathcal{S},\mathcal{A},\mathbb{P},r,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) respectively denote the state and action spaces, \(\mathbb{P}(\boldsymbol{s}^{\prime}|\boldsymbol{s},\boldsymbol{a})\) is the state transition probability from state \(\boldsymbol{s}\) that drives toward the next state \(\boldsymbol{s}^{\prime}\), \(r\) defines the reward of taking an action \(\boldsymbol{a}\) at the current state \(\boldsymbol{s}\), in state \(\boldsymbol{s}\), the reward obtained by performing action \(\boldsymbol{a}\) is defined as \(r(\boldsymbol{s},\boldsymbol{a})\), \(\gamma\in(0,1)\) is the discount factor on future rewards. 
In online RL, an agent constantly interacts with the environment to enrich the state-action pairs \((\boldsymbol{s},\boldsymbol{a},\boldsymbol{s}^{\prime},r)\) under a behavior policy \(\pi(\boldsymbol{a}|\boldsymbol{s})\) with respect to its Reply-Buffer, which is a container for the state-action pairs and facilitates network training. In contrast, in offline RL, the agent cannot interact with the environment. Figure 1: The evolving distributions of Bellman error and \(V\) loss computed by equations 8 and 13 using all data from the Reply-Buffer at different epochs (100, 200, 400, 700 and final) of online RL training on **LunarLanderContinuous-v2**. As a result, the agent relies solely on an elaborate dataset (comprising a multitude of \((\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},r)\) tuples) for learning, and it is not able to add any new \((\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},r)\) pairs to the Reply-Buffer. ### Objectives in Reinforcement Learning The target of RL, as defined by the _Actor-Critic_ (AC) algorithm (Konda and Tsitsiklis, 1999), is to find the optimal policy \(\pi(\mathbf{a}|\mathbf{s})\) that maximizes the cumulative discounted reward over a fixed time horizon \(T\), _i.e._, \[\mathbb{E}_{\mathbf{a}_{t}\sim\pi(\mathbf{a}_{t}|\mathbf{s}_{t})}\left[\sum_{t=0}^{T}\gamma^{t}r(\mathbf{s}_{t},\mathbf{a}_{t})\right].\] Alternatively, _Soft AC_ (SAC) (Haarnoja et al., 2018a,b) adds an entropy regularization term with strength \(\beta\) to the future rewards to facilitate the learning of the policy \(\pi\), _i.e._, \[\mathbb{E}_{\mathbf{a}_{t}\sim\pi(\mathbf{a}_{t}|\mathbf{s}_{t})}\left[\sum_{t=0}^{T}\gamma^{t}\Big(r(\mathbf{s}_{t},\mathbf{a}_{t})-\beta\log\pi(\mathbf{a}_{t}|\mathbf{s}_{t})\Big)\right].\] Later on, Garg et al. (2023) introduce the KL divergence between the policy \(\pi\) and a reference (prior) distribution \(\mu\) to augment the reward function in the objective: \[\mathbb{E}_{\mathbf{a}_{t}\sim\pi(\mathbf{a}_{t}|\mathbf{s}_{t})}\left[\sum_{t=0}^{T}\gamma^{t}\Big(r(\mathbf{s}_{t},\mathbf{a}_{t})-\beta\log\frac{\pi(\mathbf{a}_{t}|\mathbf{s}_{t})}{\mu(\mathbf{a}_{t}|\mathbf{s}_{t})}\Big)\right]. \tag{1}\] In online RL, \(\mu(\mathbf{a}|\mathbf{s})\) is typically sampled from a uniform distribution, while in offline RL, \(\mu(\mathbf{a}|\mathbf{s})\) is usually sampled from the empirical distribution of the offline data. This approach aims to better approximate the behavioral policy (Neu et al., 2017). Equation 1 generalizes the two objectives above, which will be elaborated in Section 2.2. ### (Soft) Bellman Equation in RL In Section 2.1, we noted that equation 1 is the most general one. Therefore, we will consider it as the baseline for further analysis. In fact, when the value of \(\mu\) is 1, equation 1 will degenerate back to SAC, and when the value of \(\beta\) is 0, equation 1 will degenerate back to AC. We thus focus on studying the general form in equation 1. The goal is to obtain the optimal Bellman iterative equation, which has been widely used in \(Q\)-learning (Watkins and Dayan, 1992): \[Q^{t+1}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})+\gamma\max_{\mathbf{a}^{\prime}}(Q^{t}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})).
\tag{2}\] This objective is obtained through the Bellman iterative operator, specifically the soft Bellman iteration has a general version corresponding to equation 1 is: \[Q^{k+1}(\mathbf{s},\mathbf{a})\leftarrow\operatorname*{arg\,min}_{Q}\biggl{[}r(\mathbf{s},\mathbf{a})+\mathbb{E}_{\mathbf{s}^{\prime},\mathbf{a}^{\prime}\sim\pi}\left[Q(\mathbf{s}^{ \prime},\mathbf{a}^{\prime})-\beta\log\frac{\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}{ \mu(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}\right]-Q^{k}(\mathbf{s},\mathbf{a})\biggr{]}^{2},\] The soft Bellman iteration can be solved from equation 3: \[Q^{k+1}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})+\mathbb{E}_{\mathbf{s}^{\prime},\mathbf{a}^{ \prime}\sim\pi}\biggl{[}Q^{k}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})-\beta\log \frac{\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}{\mu(\mathbf{a}^{\prime}|\mathbf{s}^{\prime} )}\biggr{]}. \tag{3}\] If we want to take the optimal strategy (corresponding to the \(\max(\cdot)\) operator in equation 2, we need to find an optimal \(\pi^{*}\) which satisfies: \[\pi^{*}(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})=\operatorname*{arg\,max}_{\pi}( \mathbb{E}_{\mathbf{s}^{\prime},\mathbf{a}^{\prime}\sim\pi}\left[Q(\mathbf{s}^{\prime},\bm {a}^{\prime})-\beta\log\frac{\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}{\mu(\mathbf{a}^ {\prime}|\mathbf{s}^{\prime})}\right]), \tag{4}\] where \(\sum_{\mathbf{a}^{\prime}}\pi^{*}(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})=1\). Using the Lagrange multiplier method, we can easily get the following: \[\pi^{*}(\mathbf{a}|\mathbf{s})=\frac{\mu(\mathbf{a}|\mathbf{s})e^{Q(\mathbf{s},\mathbf{a})/\beta}}{ \sum_{\mathbf{a}}\mu(\mathbf{a}|\mathbf{s})e^{Q(\mathbf{s},\mathbf{a})/\beta}}. \tag{5}\] The details for deriving can be found in Appendix A.5, it is a general conclusion. If we take equation 5 to equation 4, then equation 4 can be further simplified as: \[\mathbb{E}_{\mathbf{s}^{\prime},\mathbf{a}^{\prime}\sim\pi}\left[Q(\mathbf{s}^{\prime},\mathbf{a }^{\prime})-\beta\log\frac{\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}{\mu(\mathbf{a}^{ \prime}|\mathbf{s}^{\prime})}\right]\rightarrow\mathbb{E}_{\mathbf{s}^{\prime}}\bigg{[} \beta\log\sum_{\mathbf{a}^{\prime}}\mu(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})e^{Q(\mathbf{s }^{\prime},\mathbf{a}^{\prime})/\beta}\bigg{]}. \tag{6}\] Hence, the max component: \(\max_{\mathbf{a}^{\prime}}Q(\mathbf{s}^{\prime},\mathbf{a}^{\prime})\) of equation 2 is reflected in equation 6, this means if we take the best policy \(\pi^{*}\), then we will have: \[\max_{\mathbf{a}^{\prime}}Q(\mathbf{s}^{\prime},\mathbf{a}^{\prime})=\beta\log\sum_{\mathbf{a }^{\prime}}\mu(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})e^{Q(\mathbf{s}^{\prime},\mathbf{a}^{ \prime})/\beta}. \tag{7}\] Garg et al. (2023) found that estimating the log sum part in equation 7 through sampling is challenging. Thus, they employed a regression-based approach to circumvent the need for sampling estimation. 
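As a concrete illustration of equations 5 and 7, the snippet below is a minimal numpy sketch (not the authors' code) of the \(\mu\)-weighted softmax policy and the soft maximum evaluated with a numerically stable log-sum-exp; the Q-values and the uniform reference distribution are made-up placeholders.

```python
import numpy as np

def soft_policy_and_max(q_values, mu, beta):
    """Equation 5: pi*(a|s) proportional to mu(a|s) * exp(Q(s,a)/beta), and
    equation 7: max_a Q = beta * log sum_a mu(a|s) * exp(Q(s,a)/beta),
    both computed with a stable log-sum-exp."""
    z = q_values / beta + np.log(mu)
    z_max = z.max()
    log_norm = z_max + np.log(np.sum(np.exp(z - z_max)))  # log sum_a mu * e^{Q/beta}
    pi_star = np.exp(z - log_norm)                        # normalized policy
    soft_max = beta * log_norm                            # soft maximum of Q
    return pi_star, soft_max

# Hypothetical Q-values for one state and a uniform reference distribution mu.
q = np.array([1.0, 2.5, 0.3, 2.4])
mu = np.full_like(q, 1.0 / len(q))
for beta in [0.1, 1.0, 10.0]:
    pi, soft_max = soft_policy_and_max(q, mu, beta)
    print(f"beta={beta:5.1f}  soft max={soft_max:6.3f}  hard max={q.max():.3f}  pi={np.round(pi, 3)}")
```

As \(\beta\) decreases, the soft maximum approaches the hard maximum and the policy concentrates on the best action, which is exactly the limit in which equation 7 recovers the max operator of equation 2.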
## 3 Characterization of Bellman Error with Logistic Distribution ### Bellman Error Analysis Firstly, we establish the probability density functions (PDF) of the Gumbel distribution and the Logistic distribution, where \(\mathrm{Gumbel}(x;\mu,\beta)=\frac{1}{\beta}\exp[-[\frac{x-\mu}{\beta}+\exp[-\frac{x-\mu}{\beta}]]]\), and \(\mathrm{Logistic}(x;\mu,\beta)=\frac{1}{\beta}\frac{\exp[-(x-\mu)/\beta]}{(1+\exp[-(x-\mu)/\beta])^{2}}\), and \(X\sim\mathrm{Gumbel}(\mu,\beta)\) (or \(\mathrm{Logistic}(\mu,\beta)\)) indicates that \(X\) follows a Gumbel (or Logistic) distribution with parameters \(\mu\) and \(\beta\). In this paper, we define the online Bellman error as \(\varepsilon^{\theta}\) under the \(Q\) network parameter \(\theta\) and policy \(\pi\): \[\varepsilon^{\theta}(\mathbf{s},\mathbf{a})=\left[r(\mathbf{s},\mathbf{a})+\gamma\,\mathbb{E}_{\mathbf{a}^{\prime}\sim\pi}\hat{Q}_{\theta}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})\right]-\hat{Q}_{\theta}(\mathbf{s},\mathbf{a}), \tag{8}\] where \(\hat{Q}_{\theta}\) denotes the estimated \(Q\) network with parameter \(\theta\). For the purpose of theoretical analysis, we consider a deterministic environment, meaning that each \((\mathbf{s},\mathbf{a})\) pair uniquely determines the subsequent state \(\mathbf{s}^{\prime}\), while disregarding the influence of state transition probabilities. This simplification eases the analysis of error propagation and is closer to many real-world applications. First, let us consider the Bellman error up to the \(t\)-th iteration. Similar to Garg et al. (2023), we attribute the Bellman error to the deviation between the estimated \(Q\) value and the real \(Q\) value at each iteration: \[\hat{Q}^{t}(\mathbf{s},\mathbf{a})=\overline{Q}(\mathbf{s},\mathbf{a})+\epsilon^{t}(\mathbf{s},\mathbf{a}). \tag{9}\] \(\overline{Q}\) represents the value without any gap. We assume the real \(Q\) value does not incur any error during the max iterative process, which means: \[\overline{Q}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})+\gamma\max_{\mathbf{a}^{\prime}}(\overline{Q}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})).\] While it is generally impossible to access the real \(\overline{Q}\), we only have the estimated \(\hat{Q}\). In general, we initialize a \(\hat{Q}^{0}(\mathbf{s},\mathbf{a})\) for iterating, whether through tabular methods or neural networks, with a random initialization. The approximated \(\hat{Q}^{t}(\mathbf{s},\mathbf{a})\) value can be approached by this iteration: \[\hat{Q}^{t+1}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})+\gamma\max_{\mathbf{a}^{\prime}}(\hat{Q}^{t}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})).\] In order to further explore their relationship with the Bellman error, we derive the following Lemmas for Theorem 1. **Lemma 1**.: _(Fisher & Tippett, 1928) For i.i.d. random variables \(X_{1},\ldots,X_{n}\sim f(X)\), if \(f(X)\) has exponential tails, then \(\lim_{n\rightarrow\infty}\max_{i}(X_{i})\) follows the Gumbel distribution._ **Lemma 2**.: _Let \(X\) follow the Gumbel distribution \(\mathrm{Gumbel}(A,B)\), and let \(C,D\) be constants with \(D>0\). Then \(Y=X+C\) follows \(\mathrm{Gumbel}(C+A,B)\) and \(Z=DX\) follows \(\mathrm{Gumbel}(DA,DB)\)._ **Lemma 3**.: _For mutually independent random variables \(X_{1},\ldots,X_{n}\), which satisfy \(X_{i}\sim\mathrm{Gumbel}(C_{i},\beta)\)._
Then \(\max_{i}(X_{i})\sim\mathrm{Gumbel}(\beta\ln\sum_{i}e^{C_{i}/\beta},\beta)\)._ **Lemma 1**, **Lemma 2** and **Lemma 3** describe the basic properties of the Gumbel distribution, which are important and necessary. **Lemma 4** gives the distribution analysis for \(\epsilon(\boldsymbol{s},\boldsymbol{a})\), which has been defined in equation 8. The proofs of these facts can be found in Appendix A.1, A.2. Next, we propose Lemma 4, which presents the relationship between \(\overline{Q}\) and \(\hat{Q}\). **Lemma 4**.: _For \(\epsilon^{t}(\boldsymbol{s},\boldsymbol{a})\), which has been defined in equation 9, if the following three assumptions are satisfied:_ _1. The action space is infinite and countable._ _2. There is an injective mapping from the pair \((\boldsymbol{s},\boldsymbol{a})\) to the next state \(\boldsymbol{s}^{\prime}\), which means the next state is determined only by the current state \(\boldsymbol{s}\) and action \(\boldsymbol{a}\)._ _3. For any \((\boldsymbol{s},\boldsymbol{a})\) pair, \(\hat{Q}^{0}(\boldsymbol{s},\boldsymbol{a})\) comes from the same exponential-tail distribution \(P\) (for example, \(N(0,1)\) is feasible)._ _Under these hypotheses, \(\gamma\max_{\boldsymbol{a}^{\prime}}(\hat{Q}^{0}(\boldsymbol{s}^{\prime},\boldsymbol{a}^{\prime}))\) will follow the Gumbel distribution; if we suppose:_ \[\gamma\max_{\boldsymbol{a}^{\prime}}(\hat{Q}^{0}(\boldsymbol{s}^{\prime},\boldsymbol{a}^{\prime}))\sim Gumbel(C_{1},\beta_{1}),\] _then for each \(t\), \(\epsilon^{t}(\boldsymbol{s},\boldsymbol{a})\) will follow the \(\mathrm{Gumbel}\) distribution; in fact, we will have:_ \[\epsilon^{t}(\boldsymbol{s},\boldsymbol{a})\sim Gumbel(C_{t}(\boldsymbol{s},\boldsymbol{a})-\gamma\max_{\boldsymbol{a}^{\prime}}(\overline{Q}(\boldsymbol{s}^{\prime},\boldsymbol{a}^{\prime})),\beta_{t}),\] _where:_ \[C_{1}(\boldsymbol{s},\boldsymbol{a})=C_{1},\] \[C_{2}(\boldsymbol{s},\boldsymbol{a})=\gamma(C_{1}(\boldsymbol{s},\boldsymbol{a})+\beta_{1}\ln\sum_{i}e^{\frac{r(\boldsymbol{s}^{\prime},\boldsymbol{a}_{i})}{\beta_{1}}}),\] \[C_{t}(\boldsymbol{s},\boldsymbol{a})=\gamma(\beta_{t-1}\ln\sum_{i}e^{\frac{r(\boldsymbol{s}^{\prime},\boldsymbol{a}_{i})+C_{t-1}(\boldsymbol{s}^{\prime},\boldsymbol{a}_{i})}{\beta_{t-1}}})\quad(t\geq 3),\] \[\beta_{t}=\gamma^{t-1}\beta_{1}.\] _Additionally, as a special situation: if for all \(\boldsymbol{s}_{1},\boldsymbol{s}_{2}\) the two sets_ \[S_{1}=[r(\boldsymbol{s}_{1},\boldsymbol{a}_{1}),r(\boldsymbol{s}_{1},\boldsymbol{a}_{2}),...,r(\boldsymbol{s}_{1},\boldsymbol{a}_{n}),...]\] \[S_{2}=[r(\boldsymbol{s}_{2},\boldsymbol{a}_{1}),r(\boldsymbol{s}_{2},\boldsymbol{a}_{2}),...,r(\boldsymbol{s}_{2},\boldsymbol{a}_{n}),...]\] _satisfy_ \[S_{1}\backslash S_{2}=\emptyset,\] _then in this case we have:_ \[\epsilon^{t}(\boldsymbol{s},\boldsymbol{a})\sim Gumbel(C_{t}-\gamma\max_{\boldsymbol{a}^{\prime}}(\overline{Q}(\boldsymbol{s}^{\prime},\boldsymbol{a}^{\prime})),\beta_{t}),\] _where_ \[C_{t}=\gamma(C_{t-1}+\beta_{t-1}\ln\sum_{i}e^{\frac{r(\boldsymbol{s}^{\prime},\boldsymbol{a}_{i})}{\beta_{t-1}}})\quad(t\geq 2).\] This Lemma reveals that the error between the true \(Q\) and the estimated \(Q\) is complex, and it provides a good foundation for proving Theorem 1. The proof can be found in Appendix A.3. **Lemma 5**.: _For i.i.d.
random variables \(X\sim\mathrm{Gumbel}(C_{X},\beta)\), \(Y\sim\mathrm{Gumbel}(C_{Y},\beta)\), then \((X-Y)\sim\mathrm{Logistic}(C_{X}-C_{Y},\beta)\)._ The following **Theorem** 1 gives a discussion about the Bellman error distribution, it may come as a surprise that this error is not unbiased, but a biased estimation: **Theorem 1**.: _(Logistic distribution for Bellman error): During network training, \(\varepsilon^{\theta}(\mathbf{s},\mathbf{a})\), which defined by equation 8, assume that it also represents that the network has been updated \((\theta)\) times. If the assumptions of Lemma 4 are all satisfied, then \(\varepsilon^{\theta}(\mathbf{s},\mathbf{a})\) will follow the Logistic distribution, actually:_ \[\varepsilon^{\theta}(\mathbf{s},\mathbf{a})\sim Logistic(C_{\theta}(\mathbf{s},\mathbf{a})- \beta_{\theta}ln\sum_{i}e^{\frac{r(\mathbf{s}^{\prime},\mathbf{a}_{i})+C_{\theta}(\bm {s}^{\prime},\mathbf{a}_{i})}{\beta_{\theta}}},\beta_{\theta}).\] _Besides, if we consider the **Special Situation** in Lemma 2, then:_ \[\varepsilon^{\theta}(\mathbf{s},\mathbf{a})\sim Logistic(-\beta_{\theta}ln\sum_{i}e^{ \frac{r(\mathbf{s}^{\prime},\mathbf{a}_{i})}{\beta_{\theta}}},\beta_{\theta}).\] The detailed proof of Theorem 1 is in Appendix A.4. According to Theorem 1, when we want to optimize this Bellman error, we certainly hope that the expectation of Bellman error is as close to zero as possible, _i.e._, \[\mathbb{E}\bigg{[}\varepsilon^{\theta}(\mathbf{s},\mathbf{a})\bigg{]}\to 0,\] Theorem 1 told us that this distribution will continue to change from "flattened" to "elongated" because the variance will continue shrinking during updating, and the center of the distribution will be offset to \(0\), but this does not mean the error is unbiased. From Theorem 1, we can see that the center of the initial error distribution will deviate greatly from 0, and the variance will be vary large, as the iteration proceeds, it will continue to shift towards \(0\) and become taller, This is consistent with what is shown in Figure 1. But during network updating, we always use the \(\mathrm{MSELoss}\), which is used based on the assumption of a Normal distribution \(N(0,\sigma)\). This would be very unreasonable and theoretically unfounded. In fact, in most deep RL networks, such a \(\mathrm{MSELoss}\) function is commonly used when updating a batch of tuple \((\mathbf{s},\mathbf{a},r,\mathbf{s}^{\prime})\) from the Reply-Buffer. \(\mathrm{MSELoss}\) is derived from the likelihood function: \[\log[\prod_{i=1}^{n}p(\varepsilon_{i})]=-n\log(\sqrt{2\pi}\sigma)-\sum_{i=1}^ {n}\frac{\varepsilon_{i}^{2}}{2\sigma^{2}}\propto\sum_{i=1}^{n}-\frac{1}{2}( \varepsilon_{i})^{2}. \tag{10}\] Using a similar approach, we will discuss the replacement of \(\mathrm{MSELoss}\) with regard to likelihood functions in Section 3.3. ### Distribution of the V Loss in Online RL As mentioned in Section 2.2, we start the analysis from the perspective of the policy \(\pi\) by employing the optimal soft Bellman operator in SAC (Haarnoja et al., 2018), the parameter \(\mu\) is chosen to follow a uniform distribution, and \(\beta\) is set to \(1\). It can be observed from equation 5 that: \[\pi^{*}(\mathbf{a}|\mathbf{s})=\frac{e^{Q(\mathbf{s},\mathbf{a})}}{\sum_{\mathbf{a}}e^{Q(\mathbf{s}, \mathbf{a})}}. 
\tag{11}\] When training the policy network \(\pi^{\delta}\) with parameter \(\delta\), our objective is to allow the policy to learn this distribution as closely as possible, hence we employ the Kullback-Leibler (KL) divergence as the policy loss: \[Loss(\pi^{\delta})=\mathrm{KL}\left(\pi^{\delta}(a|s)||\frac{e^{Q(\mathbf{s},\mathbf{a})}}{\sum_{\mathbf{a}}e^{Q(\mathbf{s},\mathbf{a})}}\right).\] **Theorem 2**.: _(The relationship between the estimated policy and the true policy): There exists a gap between the true policy and the estimated policy under the parameter \(\theta\), which depends on how accurately equation 9 is estimated; in fact:_ \[\hat{\pi}(a|s)=\frac{\sum_{\mathbf{a}}e^{\overline{Q}(\mathbf{s},\mathbf{a})}}{\sum_{\mathbf{a}}e^{\overline{Q}(\mathbf{s},\mathbf{a})+\epsilon^{\theta}(\mathbf{s},\mathbf{a})}}e^{\epsilon^{\theta}(\mathbf{s},\mathbf{a})}\overline{\pi}(a|s). \tag{12}\] From this, we can derive the distribution for the \(V\) function in SAC. According to the theory proposed by Haarnoja et al. (2018), we often approximate the loss function for calculating the value function \(V\) in the network as follows: \[\mathrm{Loss}^{V}(\mathbf{s})=\left[\hat{V}(\mathbf{s})-\left[\hat{Q}_{\theta}(\mathbf{s},\mathbf{a})-\log(\hat{\pi}(\mathbf{a}|\mathbf{s}))\right]\right]. \tag{13}\] This implies employing the \(V\) network to approximate \(Q(s,a)-\log(\pi(a|s))\). However, since \(Q(s,a)\) and \(\pi\) are not the ground truth, this introduces an error term into the estimation. Let us denote this error term by \(\epsilon^{V}(s)\) and call it the \(V\) Loss. **Theorem 3**.: _(**Logistic distribution for the V Loss function):** The \(V\) Loss will asymptotically follow a Logistic distribution related to \(\epsilon^{\theta}(\mathbf{s},\mathbf{a})\); in fact:_ \[\epsilon^{V}(s)\sim\mathrm{Logistic}\left(-\log\left(\frac{\sum_{\mathbf{a}}e^{\overline{Q}(\mathbf{s},\mathbf{a})}}{\sum_{\mathbf{a}}e^{\overline{Q}(\mathbf{s},\mathbf{a})+\epsilon^{\theta}(\mathbf{s},\mathbf{a})}}\right),\beta_{1}\right). \tag{14}\] _and we will have:_ \[\epsilon^{V}(s)\overset{\epsilon^{\theta}(\mathbf{s},\mathbf{a})\to 0}{\longrightarrow}\mathrm{Logistic}\left(0,\beta_{1}\right). \tag{15}\] Therefore, for the \(V\) network, Logistic regression should still be used instead of Normal regression in the iteration process. In Section 3.3, we will use the likelihood function to provide an alternative to the \(\mathrm{MSELoss}\) function for the \(Q\) and \(V\) iterations, which is called \(\mathrm{LLoss}\). In Figure 1, we depict the evolving trajectories and fitted curves of the Bellman error and \(V\) Loss distributions over online training time in LunarLanderContinuous-v2 (at 100, 200, 400, 700 time steps, and at the end of training). From this, it is evident that the distributions gradually become sharper and exhibit a strong fit to the Logistic distribution. The graphical representation of the evolving offline Bellman error distribution over training time can be found in Appendix C.2. In Figure 2, we present the optimal parameter values of fitted Normal, Gumbel, and Logistic distributions under goodness-of-fit tests, in terms of the Bellman error and \(V\) Loss for online RL and the Bellman error only for offline RL. Our experimental methodology is identical to that of XQL (Garg et al., 2023). Additionally, we provide the corresponding Kolmogorov-Smirnov (KS) statistic magnitudes (An, 1933) in Tables 3-4 to support our theory.
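The goodness-of-fit comparison described above can be reproduced in spirit with scipy: fit each candidate family to a sample of Bellman errors and compare the Kolmogorov-Smirnov statistics. This is only an illustrative sketch with synthetic errors standing in for the Reply-Buffer data; it is not the authors' evaluation script.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for Bellman errors collected from the Reply-Buffer.
errors = rng.logistic(loc=0.05, scale=0.3, size=20000)

candidates = {
    "Logistic": stats.logistic,
    "Gumbel":   stats.gumbel_r,
    "Normal":   stats.norm,
}
for name, dist in candidates.items():
    params = dist.fit(errors)                        # maximum-likelihood fit
    ks = stats.kstest(errors, dist.cdf, args=params)  # smaller statistic = better fit
    print(f"{name:8s}  params={np.round(params, 3)}  KS statistic={ks.statistic:.4f}")
```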
We conducted testing using 8 distinct online environments and 9 distinct offline environments (the names of which are specified in Section 4). Further details can be found in Section 4 and Appendix C.3. ### Logistic Likelihood Q-Learning In order to keep the comparison fair, we use the unbiased Logistic distribution instead of the Normal distribution, because it is simple and feasible, although the theory tells us that the Bellman error is not unbiased. Figure 2: The distribution of the Bellman error for two online and two offline environments, and of the \(V\) Loss for the online parts only, halfway through the training epochs. However, the bias is not easy to estimate with additional corrections; this may become a possibility for future research. Here, we therefore pretend that the error is distributed according to \(\operatorname{Logistic}(0,\sigma)\) rather than \(N(0,\sigma)\). Next, we will provide specific likelihood functions to seek corresponding loss functions as replacements. We know that the PDF of the Logistic distribution is: \[p(\varepsilon_{i})=\frac{1}{\sigma}\frac{e^{\frac{-\varepsilon_{i}}{\sigma}}}{(1+e^{\frac{-\varepsilon_{i}}{\sigma}})^{2}}. \tag{16}\] With the log-likelihood function, we can get: \[\log\left[\prod_{i=1}^{n}p(\varepsilon_{i})\right]=-n\log(\sigma)+\sum_{i=1}^{n}\left[-\frac{\varepsilon_{i}}{\sigma}-2\log\left(1+e^{\frac{-\varepsilon_{i}}{\sigma}}\right)\right]. \tag{17}\] Finally, we can derive the Logistic Loss function (\(\operatorname{LLoss}\)) as: \[\operatorname{LLoss}=\frac{1}{N}\sum_{i=1}^{N}\left[\frac{\varepsilon_{i}}{\sigma}+2\log\left(1+e^{\frac{-\varepsilon_{i}}{\sigma}}\right)\right]. \tag{18}\] Theorem 4 reveals the relationship between \(\operatorname{MSELoss}\) and \(\operatorname{LLoss}\). **Theorem 4**.: _(**Relationship between \(\operatorname{LLoss}\) and \(\operatorname{MSELoss}\)**) The \(\operatorname{MSELoss}\) can be utilized as an approximate estimation of \(\operatorname{LLoss}\):_ \[\operatorname{LLoss}=\ln 4+\frac{1}{2}\operatorname{MSELoss}+\operatorname{o}(\varepsilon^{3}).\] Theorem 4 demonstrates that the \(\operatorname{MSELoss}\) can be regarded as an approximate form of \(\operatorname{LLoss}\) when higher-order terms are neglected. ## 4 Experiment Our experiments are all carried out using the control-variable method in both online and offline RL, which means we only change the loss function from \(\operatorname{MSELoss}\) to \(\operatorname{LLoss}\) to prove the effectiveness of \(\operatorname{LLoss}\). For online RL, we employ SAC and CQL as the baseline algorithms for improvement. Conversely, for offline RL, because of the inferior performance of SAC in the offline environment, we opted not to utilize it and instead turned to IQL (Kostrikov et al., 2021), which demonstrated comparatively superior performance. We also conducted a stability analysis of the parameter \(\sigma\) and conducted respective KS tests on the Bellman error distribution and \(V\) Loss distribution for each environment. We provide detailed information about the hyperparameter settings used for both online and offline training in Appendix B. ### Experiment Protocol Our online model was trained for 160,000 iterations in 8 gym environments while our offline model was trained for 500 iterations in 9 D4RL environments (Fu et al., 2020). Both use a stopping criterion requiring the variance of the reward to remain below a threshold of 5, over 1000 epochs in the online case and over 50 epochs in the offline case.
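Since the experiments only swap \(\operatorname{MSELoss}\) for \(\operatorname{LLoss}\), the substitution of equation 18 is easy to reproduce. The following is a small PyTorch sketch of that substitution; the function names and the fixed \(\sigma\) are our own illustrative choices rather than the authors' released code, and the final print mirrors the second-order relation of Theorem 4.

```python
import torch
import torch.nn.functional as F

def lloss(bellman_error: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Logistic negative log-likelihood of eq. 18:
    mean_i [ eps_i/sigma + 2 * log(1 + exp(-eps_i/sigma)) ]."""
    z = bellman_error / sigma
    # F.softplus(-z) = log(1 + exp(-z)), computed in a numerically stable way
    return (z + 2.0 * F.softplus(-z)).mean()

def mseloss(bellman_error: torch.Tensor) -> torch.Tensor:
    """Gaussian counterpart implied by eq. 10 (up to constants): mean_i eps_i^2 / 2."""
    return 0.5 * bellman_error.pow(2).mean()

# Hypothetical batch of Bellman errors, e.g. r + gamma * Q_target - Q_estimate.
eps = 0.05 * torch.randn(4096)
# Theorem 4 (for sigma = 1): LLoss is approximately ln 4 + 0.5 * MSELoss for small errors.
print(float(lloss(eps)), float(torch.log(torch.tensor(4.0)) + 0.5 * mseloss(eps)))
```

In a baseline such as SAC or IQL, this amounts to replacing the squared temporal-difference term in the critic update with `lloss` while leaving every other component untouched, which is the control-variable setup used above.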
Our program was executed under **gym** (ver.0.23.1), **mujoco** (ver.2.3.7), and **D4RL** (ver.1.1). ### Results Analysis Online RLWe made improvements based on the SAC and CQL algorithms code from their Git Hub. Our methods are respectively referred to as LSAC and LCQL. We only replaced \(\operatorname{MSELoss}\) with \(\operatorname{LLoss}\) to observe the improvement in performance. Besides any other settings in the program are the same. This means that the difference in performance is entirely attributed to the variation in loss functions. In the comparison with XQL, we fine-tuned XQL based on the \(\beta\) range proposed by its authors. The details regarding the fine-tuning process and model settings are provided in Appendix B. In our setting, Most of the environments were run under the condition of 200 max steps. We trained each environment for 160,000 iterations, we also incorporated a variance threshold of 5 to determine convergence for 1000 epochs. Based on the rolling epoch timeline, we plotted and stored the Average Reward for each 2000 epoch. It can be observed from Figure 3 that compared to \(\mathrm{MSELoss}\), \(\mathrm{LLoss}\) demonstrates more prominent performance in both SAC and CQL. More details for the result and enhancement can be seen in Table 1 and Appendix C.4. \begin{table} \begin{tabular}{l r r r r r} \hline & **SAC** & **CQL** & **XQL** & **LSAC** & **LCQL** \\ & & & & (Ours) & (Ours) \\ \hline LunarLander-Continuous (v2) & 19.99 & 104.15 & -489.19 & 133.95 & 112.57 \\ & & & & 570.09 & 8.08 \\ \hline HalfCheetah (v2) & 696.96 & 653.62 & 684.96 & 714.54 & 675.33 \\ & & & & 2.52 & 3.32 \\ \hline Hopper (v4) & 509.47 & 495.34 & 487.08 & 544.72 & 515.30 \\ & & & & 6.92 & 4.03 \\ \hline Walker2d (v2) & 221.46 & 194.59 & 3.42 & 251.63 & 219.27 \\ & & & & 13.62 & 12.68 \\ \hline HumanoidStandup (v4) & 14,157.95 & 14,166.01 & 8,030.26 & 16,781.59 & 14,258.06 \\ & & & & 18.53 & 0.65 \\ \hline InvertedPendulum (v4) & 1001 & 1001 & 1001 & 1001 \\ & & & & 0.00 & 0.00 \\ \hline InvertedDouble-Pendulum (v2) & 8466.48 & 4295.46 & 3290.36 & 8941.93 & 4647.64 \\ & & & & 5.62 & 8.20 \\ \hline BipedalWalker (v3) & 68.69 & 43.91 & 64.56 & 77.59 & 71.96 \\ & & & & 12.96 & 63.88 \\ \hline avg. improvement & & & & 78.78 & 12.61 \\ \hline \end{tabular} \end{table} Table 1: The average reward and enhanced ratio after the online training, the second line indicates the enhancement ratios relative to the baseline algorithms (LSAC compared to SAC, LCQL compared to CQL). Figure 4: The average reward variation of IQL, LIQL across four environments in the context of offline training. Figure 3: The average reward variation of SAC, LSAC, and XQL across four environments in the context of online training. Offline RLWe conducted improved experiments based on the IQL components integrated from rlkit's Git Hub. We set the maximum step length as 100, and the maximum iteration count as 500, and incorporated a variance threshold of 5 to determine convergence for 50 epochs. The method of controlling variables is the same as in the online setting. Our algorithm is referred to as LIQL. Due to some dimensional discrepancies between the IQL algorithm provided by rlkit and the IQL algorithm, we use the improvement ratio relative to the IQL baseline as the measure of algorithm performance. The change in the average reward during training is depicted in Figure 4, and relevant details are presented in Table 2 and Appendix C.4. The results also indicate that our model exhibits the highest enhancement ratio. 
KS TestsWe conducted KS tests on the Bellman error for each environment, and for the online section, we included KS tests for \(V\) Loss. The test results are presented in Table 3 and Table 4. If you want the complete version and additional details of the validation tables, we provided them and tests for \(V\) Loss in Appendix C.5. The KS tests results indicate that our assumption of the Logistic distribution is more accurate than the other two distributions. Sensitivity AnalysisWe conducted some sensitivity analysis on the variation of \(\sigma\) across different environments in both online and offline settings. We conducted proportional \(\sigma\) variations for each environment to observe the changes in the final average reward and maximum average reward. More details can be found in Appendix C.6. The results indicate that the performance of our approach within a certain range of \(\sigma\) variations outperforms the MSELoss and exhibits a certain level of robustness. \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline & & \multicolumn{6}{c}{ \begin{tabular}{c} R\({}^{2}\) \(\uparrow\) \\ \end{tabular} } & \multicolumn{6}{c}{KS statistic \(\downarrow\)} \\ \cline{2-10} & **Logistic** & **Gumbel** & **Normal** & **Logistic** & **Gumbel** & **Normal** \\ \hline \hline \multicolumn{1}{l}{LunarLanderContinuous-v2} & **0.985** & \(0.971\) & \(0.975\) & **0.052** & \(0.070\) & \(0.071\) \\ HalfCheetah-v2 & **0.991** & \(0.990\) & \(0.989\) & **0.026** & \(0.047\) & \(0.033\) \\ Hopper-v4 & **0.989** & \(0.985\) & \(0.981\) & **0.067** & \(0.073\) & \(0.085\) \\ Walker2d-v2 & **0.988** & \(0.967\) & \(0.975\) & **0.054** & \(0.084\) & \(0.072\) \\ HumanoidStandup-v4 & **0.667** & \(0.641\) & \(0.628\) & **0.269** & \(0.322\) & \(0.291\) \\ InvertedPendulum-v4 & **0.983** & \(0.963\) & \(0.971\) & **0.115** & \(0.175\) & \(0.117\) \\ InvertedDoublePendulum-v4 & **0.999** & \(0.981\) & \(0.998\) & **0.021** & \(0.079\) & \(0.023\) \\ BipedalWalker-v3 & **0.997** & \(0.979\) & \(0.990\) & **0.039** & \(0.101\) & \(0.057\) \\ \hline \hline \end{tabular} \end{table} Table 3: The fitness and KS tests for Bellman error (online). \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline & & \multicolumn{6}{c}{\begin{tabular}{c} enhancement over IQL (\%) \\ \end{tabular} } \\ \cline{3-10} & & **IQL** & **LIQL** & **XQL** & **CQL** & **TD3+BC** & **one-step RL** \\ \hline \multirow{3}{*}{\begin{tabular}{l} **online** \\ \end{tabular} } & hopper-v2 & 228.19 & 240.71 & 5.49 & 7.24 & -11.77 & -10.56 & 2.11 \\ & walker2d-v2 & 138.42 & 161.26 & 16.50 & 4.09 & -7.41 & 6.89 & -10.11 \\ & halfcheetah-v2 & 319.41 & 335.44 & 5.02 & 0.63 & -7.18 & 1.89 & 4.47 \\ \hline \multirow{3}{*}{\begin{tabular}{l} **online** \\ \end{tabular} } & hopper-v2 & 243.92 & 264.30 & 8.35 & 1.36 & 2.94 & 0.90 & -13.80 \\ & walker2d-v2 & 153.09 & 176.44 & 15.25 & 2.71 & 4.46 & 10.69 & 2.96 \\ & halfcheetah-v2 & 215.28 & 221.31 & 2.80 & 2.75 & 0.31 & -35.70 & -33.02 \\ \hline \multirow{3}{*}{ \begin{tabular}{l} **online** \\ \end{tabular} } & hopper-v2 & 224.49 & 237.38 & 5.74 & 17.05 & 15.19 & 7.10 & 7.73 \\ & walker2d-v2 & 261.75 & 270.15 & 3.21 & 0.46 & -0.73 & 0.45 & 4.61 \\ & halfcheetah-v2 & 305.14 & 347.30 & 13.82 & 3.58 & 5.65 & 4.61 & 12.90 \\ \hline \multicolumn{3}{l}{avg. 
improvement} & & 8.46 & 4.43 & 0.16 & -1.53 & 3.10 \\ \hline \hline \end{tabular} \end{table} Table 2: The average reward and enhanced ratio after the offline training, all algorithms have calculated the enhancement ratios relative to the IQL, and it can be observed that LIQL demonstrates the most significant improvement. ## 5 Conclusion Our research findings indicate that modifying \(\mathrm{MSELoss}\) to \(\mathrm{LLoss}\) is significantly more effective and straightforward in both online and offline scenarios, resulting in a noteworthy enhancement of algorithmic performance. One of our strengths lies in the ease of implementing loss function modifications within the code. The validation of correctness is accomplished through the amalgamation of KS tests, clearly underscoring the substantial potential for future advancements and optimizations by exploring the fundamental properties of loss functions from a distributional perspective. However, despite our conducted sensitivity analysis, we maintained the experimental \(\sigma\) value as a constant in our study. We believe that in the future, similar to the approach of SAC, a more in-depth investigation into the automatic update of \(\sigma\) could greatly enhance the model's performance. In addition, we hope to conduct more research on bias estimation to explore more mysterious properties. A dynamic biased estimator may be a future direction.
2310.09807
Measurement of the transverse single-spin asymmetry for forward neutron production in a wide $p_T$ range in polarized $p+p$ collisions at $\sqrt{s} = 510$ GeV
Transverse single-spin asymmetries $A_{\textrm{N}}$ of forward neutrons at pseudorapidities larger than 6 had only been studied in the transverse momentum range of $p_{\textrm{T}} < 0.4$ GeV/$c$. The RHICf Collaboration has extended the previous measurements up to 1.0 GeV/$c$ in polarized $p+p$ collisions at $\sqrt{s}~=~510$GeV, using an electromagnetic calorimeter installed in the zero-degree area of the STAR detector at the Relativistic Heavy Ion Collider. The resulting $A_{\textrm{N}}$s increase in magnitude with $p_{\textrm{T}}$ in the high longitudinal momentum fraction $x_{\textrm{F}}$ range, but reach a plateau at lower $p_{\textrm{T}}$ for lower $x_{\textrm{F}}$. For low transverse momenta the $A_{\textrm{N}}$s show little $x_{\textrm{F}}$ dependence and level off from intermediate values. For higher transverse momenta the $A_{\textrm{N}}$s show also an indication to reach a plateau at increased magnitudes. The results are consistent with previous measurements at lower collision energies, suggesting no $\sqrt{s}$ dependence of the neutron asymmetries. A theoretical model based on the interference of $\pi$ and $a_1$ exchange between two protons could partially reproduce the current results, however an additional mechanism is necessary to describe the neutron $A_{\textrm{N}}$s over the whole kinematic region measured.
M. H. Kim, O. Adriani, E. Berti, L. Bonechi, R. D'Alessandro, Y. Goto, B. Hong, Y. Itow, K. Kasahara, Y. Kim, J. H. Lee, S. H. Lee, T. Ljubicic, H. Menjo, G. Mitsuka, I. Nakagawa, A. Ogawa, S. Oh, T. Sako, N. Sakurai, K. Sato, R. Seidl, K. Tanida, S. Torii, A. Tricomi
2023-10-15T11:54:40Z
http://arxiv.org/abs/2310.09807v1
Measurement of the transverse single-spin asymmetry for forward neutron production in a wide \(p_{\rm T}\) range in polarized \(p+p\) collisions at \(\sqrt{s}=510\) GeV ###### Abstract Transverse single-spin asymmetries \(A_{\rm N}\) of forward neutrons at pseudorapidities larger than 6 had only been studied in the transverse momentum range of \(p_{\rm T}<0.4\) GeV/\(c\). The RHIC Collaboration has extended the previous measurements up to 1.0 GeV/\(c\) in polarized \(p+p\) collisions at \(\sqrt{s}~{}=~{}510\) GeV, using an electromagnetic calorimeter installed in the zero-degree area of the STAR detector at the Relativistic Heavy Ion Collider. The resulting \(A_{\rm N}\)s increase in magnitude with \(p_{\rm T}\) in the high longitudinal momentum fraction \(x_{\rm F}\) range, but reach a plateau at lower \(p_{\rm T}\) for lower \(x_{\rm F}\). For low transverse momenta the \(A_{\rm N}\)s show little \(x_{\rm F}\) dependence and level off from intermediate values. For higher transverse momenta the \(A_{\rm N}\)s show also an indication to reach a plateau at increased magnitudes. The results are consistent with previous measurements at lower collision energies, suggesting no \(\sqrt{s}\) dependence of the neutron asymmetries. A theoretical model based on the interference of \(\pi\) and \(a_{1}\) exchange between two protons could partially reproduce the current results, however an additional mechanism is necessary to describe the neutron \(A_{\rm N}\)s over the whole kinematic region measured. ## I Introduction With discovery of a large transverse single-spin asymmetries (\(A_{\rm N}\)) for forward neutron production [1] from the first polarized \(p+p\) collisions at a center of mass energy (\(\sqrt{s}\)) of 200 GeV at the Relativistic Heavy Ion Collider (RHIC), the spin-dependent production mechanism of the forward neutron has attracted great interest over ten years. The discovery also inspired the PHENIX experiment to measure the neutron \(A_{\rm N}\)s at \(\sqrt{s}=62\) GeV, 200 GeV, and 500 GeV [2] at transverse momenta (\(p_{\rm T}\)) less than 0.4 GeV/\(c\) and indicated a possible \(p_{\rm T}\) dependence of the neutron \(A_{\rm N}\). The one-pion-exchange (OPE) model [3; 4; 5], that successfully described the unpolarized forward neutron production [6], introduced an interference between spin flip \(\pi\) and spin nonflip \(a_{1}\) exchange between the two protons. This theoretical framework reproduced the PHENIX data reasonably well showing that the neutron \(A_{\rm N}\)s increased with increasing \(p_{\rm T}\) with little \(\sqrt{s}\) dependence [7]. Recently, the \(A_{\rm N}\)s at \(\sqrt{s}=200\) GeV in Ref. [2] were extracted as function of longitudinal momentum fraction (\(x_{\rm F}\)) and \(p_{\rm T}\)[8]. The results were consistent with the model calculations, but only relatively low transverse momenta were accessed. The \(A_{\rm N}\) is defined by a left-right cross section asymmetry as \[A_{\rm N}=\frac{d\sigma_{\rm left}-d\sigma_{\rm right}}{d\sigma_{\rm left}+d\sigma_{ \rm right}}, \tag{1}\] where \(d\sigma_{\rm left(right)}\) is the particle production cross section in the left (right) side of the beam polarization. \(A_{\rm N}\)s of forward particle production at pseudorapidities (\(\eta\)) larger than 6 at RHIC are especially important to study the production mechanism of the particles in a region where perturbative quantum chromodynamics is not applicable. 
Thus far the neutron \(A_{\rm N}\) has been studied only in a narrow kinematic range in \(p_{\rm T}<0.4\) GeV/\(c\), measurements at higher \(p_{\rm T}>0.4\) GeV/\(c\) have been awaited to study the production mechanism of forward neutrons in more detail. Here the RHIC forward (RHICf) Collaboration has extended the kinematic range of the previous measurements up to 1.0 GeV/\(c\) with one order of magnitude better position and \(p_{\rm T}\) resolutions not only to explicitly explore the kinematic dependence of the neutron \(A_{\rm N}\) in a wide \(p_{\rm T}\) and \(x_{\rm F}\) ranges but also to study the \(\sqrt{s}\) dependence by comparing the results with those of PHENIX. This paper is organized as follows. The experimental setup and data taking of the RHICf experiment are presented in section II. The selection criteria for good events and neutron candidates are explained in section III. Section IV describes the procedures of the background subtraction, unfolding, and asymmetry calculation. The results are discussed in section V and the paper is summarized in section VI. ## II The RHIC Experiment In June 2017, the RHICf experiment measured forward neutral particles produced in \(\eta>6\) from transversely polarized \(p+p\) collisions at \(\sqrt{s}=510\) GeV in the zero-degree area of the STAR detector system at RHIC. Figure 1 shows the experimental setup of the RHICf experiment. STAR employs two Zero-Degree Calorimeters (ZDCs) [9] located 18 m east and west, from the nominal beam collision point. The former LHCf Arm1 detector [14], which will be called RHICf detector [10] hereafter, was installed in front of the west ZDC. A thin scintillator front counter (FC) was also positioned in front of the RHICf detector to suppress charged hadron background. The RHICf detector consists of two sampling calorimeters that have 20 mm \(\times\) 20 mm (small tower, TS) and 40 mm \(\times\) 40 mm (large tower, TL) effective areas, respectively. Both are composed of 17 layers of tungsten absorbers with 1.6 nuclear interaction lengths in total, 16 layers of GSO scintillator plates, and 4 XY hodoscope layers covered by 1-mm-wide GSO bars. RHICf used 90\({}^{\circ}\)-rotated transversely polarized beams (radially to the RHIC rings) instead of the usual vertically polarized beams. Neutrons with a wide \(p_{\rm T}\) range of \(0.0<p_{\rm T}<1.0\) GeV/\(c\) were measured by moving the detector vertically. We also requested large \(\beta^{*}\) of 8 m for smaller angular beam divergence. Under these conditions, the luminosity was level of at \(10^{31}\) cm\({}^{-2}\)s\({}^{-1}\). See Ref. [11] for more details on the experimental conditions. ## III Event Reconstruction and Selection Before presenting the analysis selection criteria, the neutron and photon events are defined as follows. A neutron event is defined as an event in which a neutron is produced by a collision and is directed toward the detector. When there is no neutron, a photon event is defined as an event in which at least one photon hits the detector. The neutron events were mainly measured by the shower trigger that is activated when the energy deposits of any three consecutive GSO plates are larger than 45 MeV. 
Since the shower trigger is sensitive not only to the neutron events but also to the photon events, the neutron candidates were identified by using the variable \(L_{2D}\) defined by \[L_{2D}=L_{90\%}-0.15L_{20\%}, \tag{2}\] where \(L_{x\%}\) is defined by the longitudinal depth for the measured integrated energy deposition in the GSO plates to reach \(x\%\) of the total in units of the radiation length (\(X_{0}\)). While neutrons mainly generate the hadronic showers in deeper layers of the RHICf detector and do not necessarily deposit all their energy in the detector, photons generate the electromagnetic shower in shallow layers and deposit all their energy. Figures 2 (a) and (b) show the \(L_{90\%}\) versus \(L_{20\%}\) and \(L_{2D}\) distributions of the neutron and photon events, respectively, in a Monte Carlo (MC) sample where the \(p+p\) collisions were generated by qgsjet ii-04 [12]. An event was identified as a neutron if the \(L_{2D}\) was larger than 21 \(X_{0}\). This threshold was optimized taking into account the neutron purity and efficiency which were estimated by geant4 [13] simulation with the qgsp_bert 4.0 model. Hit positions of the neutrons were calculated by fitting the energy deposit distribution in the GSO bars using a Lorentzian-based function. One of the four hodoscope layers with the maximum energy deposition was used for the position determination. Energies of the neutrons were reconstructed using a relation between the energy deposit sum of the GSO plates and the incident energy of neutrons obtained by geant4 simulations. The position-dependent light collection efficiency and shower lateral leakage effect were also corrected in the simulation. Although the energy range was different, the above reconstructions were also applied for the previous analyses [14; 15] that used the RHICf detector. See Figure 1: Setup of the RHICf experiment. The data were taken by moving the RHICf detector to cover a wide \(p_{\rm T}\) range of \(0.0<p_{\rm T}<1.0\) GeV/\(c\). Refs. [10; 16; 17] for more details on the reconstruction and correction procedures. In order to study the detector performance for neutron reconstruction, \(10^{5}\) neutrons were generated to the center of the detector in the geant4 simulation and their positions and energies were reconstructed in the same way as for the data. For 200 GeV neutrons, energy and position resolutions of the RHICf detector were 1.1 mm and 37%, respectively. To improve the energy resolution, hadronic showers that developed deeper into the RHICf detector were excluded by requiring \(L_{90\%}<37\)\(X_{0}\). The condition improved the energy resolution of neutrons at, e.g., 200 GeV, from 37% to 30%. The RHICf detector was located downstream of a RHIC dipole magnet, DX. Neutron candidate hits were rejected if they overlapped with the shadow of the DX magnet, or their distance to the detector edge was smaller than 2 mm because of the poor performance in these regions. In principle, only neutral particles can reach the detector from the collision point because the DX magnet sweeps away charged particles. However, the detector can detect charged particles when neutral hadrons hit the DX magnet and create a hadronic shower. Events with ADC values of the FC larger than 25% of the minimum ionizing particle (MIP) peak position were excluded to suppress charged hadron background. 
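To illustrate the particle-identification variable of equation 2, the sketch below shows how \(L_{2D}\) could be computed from a longitudinal energy-deposit profile and used to flag neutron candidates with the 21 \(X_{0}\) threshold. The deposit profiles and sampling depths are invented placeholders, not RHICf data or the collaboration's software.

```python
import numpy as np

def l_fraction(depths_x0, deposits, fraction):
    """Depth (in radiation lengths) at which the cumulative energy deposit
    first reaches the given fraction of the total."""
    cumulative = np.cumsum(deposits) / np.sum(deposits)
    return depths_x0[np.searchsorted(cumulative, fraction)]

def l2d(depths_x0, deposits):
    """Equation 2: L_2D = L_90% - 0.15 * L_20%."""
    return l_fraction(depths_x0, deposits, 0.90) - 0.15 * l_fraction(depths_x0, deposits, 0.20)

# Hypothetical longitudinal profiles (GSO-plate deposits vs. depth in X0).
depths = np.linspace(1.0, 44.0, 16)                        # made-up sampling depths
photon_like  = np.exp(-0.5 * ((depths - 7.0) / 3.0) ** 2)   # shallow electromagnetic shower
neutron_like = np.exp(-0.5 * ((depths - 30.0) / 8.0) ** 2)  # deep hadronic shower

for name, profile in [("photon-like", photon_like), ("neutron-like", neutron_like)]:
    value = l2d(depths, profile)
    tag = "neutron candidate" if value > 21.0 else "rejected"
    print(f"{name:12s}  L2D = {value:5.1f} X0  ->  {tag}")
```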
## IV Background subtraction and unfolding In the RHIC ring, the beam circulating clockwise is called "blue beam" and the one circulating counterclockwise "yellow beam". Since the RHICf detector was installed in the direction where the blue beam heads, only the blue beam polarization was considered for the forward \(A_{\rm N}\) measurements. On the other hand, when the backward \(A_{\rm N}\) was measured, only the yellow beam polarization was taken into account. Since RHICf used the beam polarization, which was normal to the direction that the detector moved in Fig. 1, the tower that was off-center of the beam measured only a narrow azimuthal range of \(\sigma_{\rm left\ (right)}\) when the beam polarization was up (down). In such case, the \(A_{\rm N}\) was defined by \[A_{\rm N}=\frac{1}{PD_{\phi}}\Big{(}\frac{N^{\uparrow}-RN^{\downarrow}}{N^{\uparrow}+RN^{\downarrow}}\Big{)}, \tag{3}\] where \(P\) is the beam polarization, ranging from 0.54 to 0.61 for the blue beam and from 0.53 to 0.61 for the yellow beam, and \(N^{\uparrow(\downarrow)}\) is the number of neutrons detected when the beam polarization is up (down). The beam polarization was measured by carbon target polarimeters [18] and normalized by the absolute polarization measured by a hydrogen jet polarimeter [19]. Systematic uncertainties of the blue and yellow beam polarizations were 3.7% and 3.4%, respectively. \(R\), estimated by the charged particle rates from the STAR's beam beam counter [20] and vertex position detector [21], is the ratio of luminosities with the polarization of the blue beams up and down, and ranged from 0.958 to 0.995. \(D_{\phi}\) is a dilution factor estimated by \[D_{\phi}=\frac{1}{N}\sum_{i}\sin\phi_{i}, \tag{4}\] where \(\phi_{i}\) is the azimuthal angle of a neutron with respect to the beam polarization in the \(i\)th event and \(N\) is the number of total detected neutrons. \(D_{\phi}\) was used to compensate the dilution of \(A_{\rm N}\) originated from a finite \(\phi\) distribution of neutrons. The average value of \(D_{\phi}\) was 0.977. Figure 2: (a) \(L_{90\%}\) versus \(L_{20\%}\) and (b) \(L_{2D}\) distributions of neutron and photon events in the qgsjet ii-04 sample. The black lines correspond to the threshold to select neutron candidate, which is \(L_{2D}=21X_{0}\). If the neutron was measured by the tower on the beam center, the azimuthal angles were divided into 8 equidistant bins and the azimuthal modulation of the \(A_{\rm N}\) was measured by \[A_{\rm N}(\phi)=\frac{1}{P}\Bigg{(}\frac{\sqrt{N_{\phi}^{\uparrow}N_{\phi+\pi}^{\downarrow}}-\sqrt{N_{\phi+\pi}^{\uparrow}N_{\phi}^{\downarrow}}}{\sqrt{N_{\phi}^{\uparrow}N_{\phi+\pi}^{\downarrow}}+\sqrt{N_{\phi+\pi}^{\uparrow}N_{\phi}^{\downarrow}}}\Bigg{)}, \tag{5}\] where \(N_{\phi(\phi+\pi)}^{\uparrow(\downarrow)}\) is the number of neutrons detected in azimuthal angular bin \(\phi(\phi+\pi)\) when the blue beam polarization is up (down). The \(A_{\rm N}\) was then calculated by fitting the azimuthal modulation with a sine function where magnitude and phase were left as free parameters. In order to study the kinematic dependence of the neutron \(A_{\rm N}\), \(x_{\rm F}\) and \(p_{\rm T}\) values were divided into equidistant intervals of 0.1 and 0.05 GeV/\(c\), respectively. Due to the finite position and energy resolutions of the detector, kinematic values of the neutrons were unfolded, but the background contaminations in the neutron candidates were subtracted first before unfolding.
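As a numerical illustration of equations 3 and 4, the snippet below computes the dilution factor and the asymmetry from spin-sorted neutron counts. All numbers are invented for the example and do not correspond to the measured yields or beam conditions.

```python
import numpy as np

def dilution_factor(phi):
    """Equation 4: D_phi = (1/N) * sum_i sin(phi_i), with phi measured
    with respect to the beam-polarization axis."""
    return np.mean(np.sin(phi))

def a_n(n_up, n_down, pol, lumi_ratio, d_phi):
    """Equation 3: A_N = 1/(P*D_phi) * (N_up - R*N_down)/(N_up + R*N_down)."""
    return (n_up - lumi_ratio * n_down) / (n_up + lumi_ratio * n_down) / (pol * d_phi)

rng = np.random.default_rng(1)
# Hypothetical azimuthal angles of detected neutrons, clustered near phi = pi/2.
phi = rng.normal(loc=np.pi / 2, scale=0.2, size=5000)
d_phi = dilution_factor(phi)   # close to 1 for a tower covering a narrow phi range

# Invented spin-sorted yields; P and R are chosen inside the ranges quoted above.
print("D_phi =", round(d_phi, 3))
print("A_N   =", round(a_n(n_up=10200, n_down=10800, pol=0.58, lumi_ratio=0.98, d_phi=d_phi), 4))
```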
Two background sources, photon and charged hadron events, were considered. The contaminations from the two background event samples were subtracted for the up and down polarization cases separately: \[N_{\rm neu}^{\uparrow}=N_{\rm trig}^{\uparrow}-N_{\rm pho}^{ \uparrow}-N_{\rm cha}^{\uparrow} \tag{6}\] \[N_{\rm neu}^{\downarrow}=N_{\rm trig}^{\downarrow}-N_{\rm pho}^{ \downarrow}-N_{\rm cha}^{\downarrow}, \tag{7}\] where \(N_{\rm trig}^{\uparrow(\downarrow)}\), \(N_{\rm neu}^{\uparrow(\downarrow)}\), \(N_{\rm pho}^{\uparrow(\downarrow)}\), and \(N_{\rm cha}^{\uparrow(\downarrow)}\) are the numbers of triggered, neutron, photon, and charged hadron events, respectively, when the blue beam polarization is up (down). Charged hadron events are defined as events in which at least one charged hadron hits the detector while no neutron produced by the collision heads toward the detector. In order to estimate \(N_{\rm pho}^{\uparrow}\) and \(N_{\rm pho}^{\downarrow}\), we performed a template fit of the \(L_{2D}\) distribution by separately scaling the neutron and photon events of the same kinematic bin in the qgsjet ii-04 sample. Figure 3 shows an example of the template fit for one kinematic bin. The down-to-up ratios of the neutron and photon events, \(N_{\rm neu}^{\downarrow}/N_{\rm neu}^{\uparrow}\) and \(N_{\rm pho}^{\downarrow}/N_{\rm pho}^{\uparrow}\), in Fig. 3 estimated by the scaled templates are \(1.077\pm 0.014\) and \(0.920\pm 0.012\), which are consistent with the signs of the previously measured neutron [1; 2] and \(\pi^{0}\)[11] asymmetries. Figure 4 shows the \(A_{\rm N}\)s of the neutron and photon events calculated using the template fits and the enhanced samples before unfolding. The neutron and photon enhanced samples were selected by applying \(L_{2D}>21X_{0}\) and \(L_{90\%}<18X_{0}\)[11], respectively. The consistency between the \(A_{\rm N}\)s calculated by the two methods confirms that the numbers of neutrons and photons were correctly estimated by the template fit. The photon contamination estimated by the template fit, which was less than 0.7% of the total neutron candidate sample, was subtracted. In Fig. 3, the larger \(L_{2D}\) values of the data in \(15<L_{2D}<21X_{0}\) indicate that the photon energy distribution of the data is higher than that of the MC, because photons with higher energy generally deposit energy over a larger longitudinal region, making the \(L_{2D}\) value larger than for lower-energy photons. To study the effect of the discrepancies, the photon event template of the \(i\)th \(x_{\rm F}\) bin was replaced by that of the \((i+1)\)th \(x_{\rm F}\) bin. The template fit was improved, but the \(A_{\rm N}\) difference between the two template fits after unfolding was negligible, less than 0.0007. We concluded that the effect of the discrepancies was negligible and therefore did not assign a systematic uncertainty to the template fit.

Figure 3: Template fit of the \(L_{2D}\) distribution for the events where the blue beam spin orientation is (a) up and (b) down. The arrows show the threshold for selecting the neutron candidates, which is \(L_{2D}=21X_{0}\). The kinematic range of the \(L_{2D}\) distribution is \(0.50<x_{\rm F}<0.60\) and \(0.30<p_{\rm T}<0.35\) GeV/\(c\).
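Such a two-component template fit amounts to finding the scale factors for the neutron and photon templates that best reproduce the data histogram. The following is a minimal sketch using hypothetical histograms, not the analysis code.

```python
import numpy as np

def fit_templates(data, template_neutron, template_photon):
    """Scale factors (s_neu, s_pho) minimizing ||data - s_neu*T_neu - s_pho*T_pho||^2."""
    templates = np.column_stack([template_neutron, template_photon])
    scales, *_ = np.linalg.lstsq(templates, data, rcond=None)
    return scales

# Toy L_2D histograms in units of X0: hadronic showers peak deep, photon showers shallow.
bins = np.arange(44.0)
t_neu = np.exp(-0.5 * ((bins - 28.0) / 6.0) ** 2)
t_pho = np.exp(-0.5 * ((bins - 10.0) / 3.0) ** 2)
rng = np.random.default_rng(2)
data = 5000 * t_neu + 800 * t_pho + rng.normal(0.0, 5.0, size=bins.size)

s_neu, s_pho = fit_templates(data, t_neu, t_pho)
photon_contamination = s_pho * t_pho[bins > 21.0].sum()  # photons surviving the L_2D cut
```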
Another template fit was performed on the ADC distribution of the FC to estimate \(N_{\rm cha}^{\uparrow}\) and \(N_{\rm cha}^{\downarrow}\), by separately scaling the neutron and charged hadron event templates of the same kinematic bin in the qgsjet ii-04 sample. Fig. 5 shows an example of this template fit. The average contamination of charged hadron events in the neutron candidate sample, which was selected by applying \(L_{2D}>21X_{0}\), was 0.2%; it was subtracted from the up and down polarization events separately. Since the template fit of the ADC distribution was a process independent of the one performed on the \(L_{2D}\) distribution, the following two cases were considered to study the systematic uncertainty of the charged hadron subtraction: every charged hadron event (1) had at least one photon, or (2) did not have any photon. In case (1), only the photon contamination was subtracted because the charged hadron contamination was smaller than the photon contamination. In case (2), the two contaminations were subtracted separately. The difference in the \(A_{\rm N}\)s between the two cases was negligible, less than 0.0004. Therefore, we also did not assign a systematic uncertainty to the charged hadron subtraction. According to qgsjet ii-04, the neutron candidate sample was composed of 95.0% neutrons, 3.5% \(\Lambda\)s, and 1.5% neutral kaons after background subtraction.

The kinematic values of the neutrons, \(x_{\rm F}\), \(p_{\rm T}\), and \(\phi\), were unfolded using the Bayesian unfolding method [22] as implemented in the RooUnfold [23] package of root [24]. As a prior, an MC sample in which neutrons from 0 to 255 GeV were generated uniformly over the detector was used to avoid any bias from a particular particle-production model. The iterative procedure was stopped when the \(\chi^{2}\) change between the outputs of two consecutive iterations became smaller than 1. The variation of \(A_{\rm N}\) due to the uncertainties of the unfolded data points was considered as one of the systematic uncertainties; it is the dominant systematic uncertainty. We generated finite asymmetries by assigning up and down spin patterns in the qgsjet ii-04 sample and confirmed that the unfolded spectra reproduced the input \(\langle x_{\rm F}\rangle\), \(\langle p_{\rm T}\rangle\), and \(A_{\rm N}\) well within the total uncertainty that included the statistical and systematic uncertainties. The differences between the reconstructed and input \(\langle x_{\rm F}\rangle\) and \(\langle p_{\rm T}\rangle\) were less than 0.04 and 0.02 GeV/\(c\), respectively. Besides the systematic uncertainty of the unfolding process, the uncertainty of the beam center calculation was also considered. The beam center was measured by two methods [11], and half of the \(A_{\rm N}\) difference between the two methods was assigned as a systematic uncertainty.

## V Results

Figure 6, Table 1, and Table 2 summarize the \(A_{\rm N}\)s for forward neutron production as functions of \(\langle x_{\rm F}\rangle\) and \(\langle p_{\rm T}\rangle\) measured by the RHICf experiment. Figure 6 (a) shows the neutron \(A_{\rm N}\)s as a function of \(p_{\rm T}\) in three different \(x_{\rm F}\) ranges. In the low \(x_{\rm F}\) range, the neutron \(A_{\rm N}\) reaches a plateau at low \(p_{\rm T}\). In the high \(x_{\rm F}\) range, the plateau does not seem to be reached yet, while the \(A_{\rm N}\) clearly increases in magnitude with \(p_{\rm T}\). Figure 6 (b) shows the \(A_{\rm N}\)s as a function of \(x_{\rm F}\) in five different \(p_{\rm T}\) ranges. The backward \(A_{\rm N}\)s are all consistent with zero.
In the low \(p_{\rm T}\) range \(<0.20\) GeV/\(c\), the forward \(A_{\rm N}\) reaches a plateau of low \(A_{\rm N}\) already at low \(x_{\rm F}\) (about 0.5), with little \(x_{\rm F}\) dependence. In the high \(p_{\rm T}\) range \(>0.20\) GeV/\(c\), the asymmetries appear to level off at higher \(x_{\rm F}\) (about 0.7), showing a clear \(x_{\rm F}\) dependence. The \(x_{\rm F}\) dependence in the high \(p_{\rm T}\) range was observed for the first time by the RHICf experiment. Figure 7 (a) shows the comparison between the RHICf and PHENIX data as a function of \(p_{\rm T}\). In the range of low \(p_{\rm T}<0.2\) GeV/\(c\) and \(x_{\rm F}>0.4\) that overlaps with the PHENIX data at \(\sqrt{s}=200\) GeV, the asymmetries are consistent with those measured by RHICf at \(\sqrt{s}=510\) GeV.

Figure 5: Template fit of the ADC distribution of the FC when the blue beam is polarized up. The arrow shows the threshold to suppress the charged hadron background, which is ADC \(>\) 0.25 MIP. The kinematic range of the ADC distribution is \(0.10<x_{\rm F}<0.20\) and \(0.05<p_{\rm T}<0.10\) GeV/\(c\).

Figure 4: The neutron and photon \(A_{\rm N}\)s calculated using the template fits and the enhanced samples. Note that \(x_{\rm F}\) here is a reconstructed value that is not unfolded and that different \(p_{\rm T}\) bins were integrated. The central \(x_{\rm F}\) values for the points from the template fit were shifted for better visibility.

Figure 7 (b) shows the comparison between the two experiments as a function of \(x_{\rm F}\). In the low \(p_{\rm T}\) range that PHENIX covers at \(\sqrt{s}=200\) GeV, the asymmetries are again consistent at both energies and show a flat \(x_{\rm F}\) dependence. Figures 7 (a) and (b) suggest that there is no or only a weak \(\sqrt{s}\) dependence. The RHICf data are also compared to model calculations [7] based on the \(\pi\) and \(a_{1}\) exchange, as shown in Fig. 8. The model did not predict the \(x_{\rm F}\) dependence of the neutron \(A_{\rm N}\). In the high \(x_{\rm F}\) range, the \(A_{\rm N}\)s are mostly consistent with the model calculations. However, the model does not reproduce the \(A_{\rm N}\)s in the low \(x_{\rm F}\) range, where the asymmetries are significantly smaller. This may be because fragmentation is expected to dominate neutron production at low \(x_{\rm F}\) over Reggeon exchange. The \(\pi\) and \(a_{1}\) exchange model partially reproduces the current results, but does not explain the \(x_{\rm F}\) dependence. In Fig. 6 (a), the \(A_{\rm N}\)s in \(0.40<x_{\rm F}<0.60\) and \(0.60<x_{\rm F}<1.00\) are consistent for \(p_{\rm T}<0.3\) GeV/\(c\), but an \(x_{\rm F}\) dependence is observed for higher \(p_{\rm T}\). In Ref. [7], spin effects from the absorptive corrections, which are initial/final state interactions, start to increase from \(p_{\rm T}\sim 0.2\) GeV/\(c\). However, that calculation also expects the absolute value of the neutron \(A_{\rm N}\) to be larger in \(0.40<x_{\rm F}<0.60\) than in \(0.60<x_{\rm F}<1.00\), which is opposite to the measurements. Other Regge poles like \(\rho\) and \(a_{2}\) may enhance the asymmetry in \(0.60<x_{\rm F}<1.00\), because the spin effect of the \(\rho\) and \(a_{2}\) exchange can also have a finite contribution compared to the \(\pi\) and \(a_{1}\) exchange in the higher \(x_{\rm F}\) region [25]. More comprehensive theoretical considerations are necessary to understand the \(x_{\rm F}\) dependence in \(p_{\rm T}>0.3\) GeV/\(c\).
Thus far, no Reggeon exchange model or absorptive correction can explain the \(x_{\rm F}\) dependence in \(p_{\rm T}<0.3\) GeV/\(c\); therefore, more precise theoretical calculations, or the inclusion of new processes other than the above production mechanism, may be necessary to explain the present results.

Figure 6: \(A_{\rm N}\) for forward neutron production as functions of (a) \(p_{\rm T}\) and (b) \(x_{\rm F}\). Error bars correspond to the statistical uncertainties, and the boxes represent the total systematic uncertainties.

Figure 7: Comparison of the RHICf results with those of PHENIX as functions of (a) \(p_{\rm T}\) and (b) \(x_{\rm F}\).

## VI Summary

The RHICf Collaboration installed the RHICf detector at the zero-degree area of the STAR detector and measured the \(A_{\rm N}\) for forward neutron production in polarized \(p+p\) collisions at \(\sqrt{s}=510\) GeV. This measurement covered a wide \(p_{\rm T}\) range with high resolution to better understand the production mechanism of forward neutrons. The resulting \(A_{\rm N}\) increases in magnitude with \(p_{\rm T}\) in the high \(x_{\rm F}\) range, but reaches a plateau in the low \(x_{\rm F}\) range. There are indications that the asymmetries also level off at high \(x_{\rm F}\), with a magnitude that increases with increasing \(p_{\rm T}\). No \(\sqrt{s}\) dependence was observed when the RHICf data were compared with the PHENIX data. The existing theoretical calculation based on the \(\pi\) and \(a_{1}\) exchange between two protons reproduced only part of the data. To understand the present results, additional spin effects beyond the \(\pi\) and \(a_{1}\) exchange scenario will be necessary.

###### Acknowledgements.
We thank the staff of the Collider-Accelerator Department at Brookhaven National Laboratory, the STAR Collaboration, and the PHENIX Collaboration for supporting the experiment. We especially acknowledge the essential support from the STAR members for the design and construction of the detector manipulator, installation/uninstallation, integration of the data acquisition system, and the operation and management of all these collaborative activities. This work was supported by the Japan-US Science and Technology Cooperation Program in High Energy Physics, JSPS KAKENHI (Nos. JP26247037, JP18H01227, and JP21H04484), the joint research program of the Institute for Cosmic Ray Research (ICRR), University of Tokyo, the NRF grants for the Center for Extreme Nuclear Matters (CENuM) funded by MSIT of Korea (No. 2018R1A5A1025563), and the "UNICT" program, University of Catania.

Figure 8: Comparison of the RHICf results with the theoretical calculations.
2302.06925
The Missing Margin: How Sample Corruption Affects Distance to the Boundary in ANNs
Classification margins are commonly used to estimate the generalization ability of machine learning models. We present an empirical study of these margins in artificial neural networks. A global estimate of margin size is usually used in the literature. In this work, we point out seldom considered nuances regarding classification margins. Notably, we demonstrate that some types of training samples are modelled with consistently small margins while affecting generalization in different ways. By showing a link with the minimum distance to a different-target sample and the remoteness of samples from one another, we provide a plausible explanation for this observation. We support our findings with an analysis of fully-connected networks trained on noise-corrupted MNIST data, as well as convolutional networks trained on noise-corrupted CIFAR10 data.
Marthinus W. Theunissen, Coenraad Mouton, Marelie H. Davel
2023-02-14T09:25:50Z
http://arxiv.org/abs/2302.06925v1
# The Missing Margin: How Sample Corruption Affects Distance to the Boundary in ANNs

###### Abstract

Classification margins are commonly used to estimate the generalization ability of machine learning models. We present an empirical study of these margins in artificial neural networks. A global estimate of margin size is usually used in the literature. In this work, we point out seldom considered nuances regarding classification margins. Notably, we demonstrate that some types of training samples are modelled with consistently small margins while affecting generalization in different ways. By showing a link with the minimum distance to a different-target sample and the remoteness of samples from one another, we provide a plausible explanation for this observation. We support our findings with an analysis of fully-connected networks trained on noise-corrupted MNIST data, as well as convolutional networks trained on noise-corrupted CIFAR10 data.

## 1 Introduction

The study of artificial neural networks (ANNs) and their performance on unseen test data embodies many different overlapping themes, such as capacity control [2], loss landscape geometry [3], information theoretical approaches [4], and algorithmic stability [5]. Optimal _classification margins_ remain a popular concept, with both theoretical support and empirical evidence of being related to generalization [6, 7]. Simply put, the classification margin of a sample with regard to a specific model is the shortest distance the sample will need to move, in a given feature space, in order to change the predicted output value. Hereafter, we refer to this concept as a _margin_. In the literature it is also called 'minimum distance to the boundary' or 'minimum adversarial perturbation' [8], depending on context. Large margins have long been used to indicate good generalization ability in classification models [9, 10]. The supporting intuition is simple: with a larger margin, a sample can have more varied feature values (potentially due to noise) while still being correctly classified. It is argued that overparameterized ANNs tend to find large margin solutions [11, 12]. Recent studies of the relationship between margin and generalization typically measure the average margins over the training set or some sampling thereof. In this work, we ask whether the margins of individual samples tend to reflect this average behaviour. Specifically, we focus on ANN-based classification models and introduce controlled noise into the training process. Using different types of training sample corruption, we demonstrate a number of intricacies related to input margins and their relation to generalization. Specifically, we contribute the following:

1. By using target and input noise, we point out local margin behaviour that is inconsistent with the global average. We find that, while all margins have a strong tendency to increase, label- and input-corrupted samples maintain significantly smaller margins than uncorrupted samples. We also find that only on-manifold corrupted samples (i.e. label corruption) noticeably affect the margins of clean samples.
2. We discuss the implications of these inconsistencies. Our findings suggest that using the average margins as a metric is only fitting if the set of contributing samples has an equal level of diversity for all models being compared.
3. We probe the underlying mechanisms that lead to these inconsistencies.
We hypothesize that label-corrupted samples have reduced margins because of their proximity to clean samples, and that input-corrupted samples have smaller margins due to a lack of incentive to increase them. Our choice of using noise is not arbitrary. Adding artificial noise to a training set and investigating the ability of ANNs to generalize in spite of this corruption is a popular technique in empirical investigations of generalization. A good example of success with such methods is the seminal paper by Zhang et al. [13], where it was shown that overparameterized models can generalize in spite of having enough capacity to fit per-sample random noise such as label corruption or randomized input features. Similar noise has been used extensively to experimentally probe ANNs [14, 15, 16]. The rest of the paper is structured as follows: Section 2 describes related work on classification margins and generalization. In Section 3 we define a margin and describe our exact method of measuring it. Following this, Section 4 presents details on our experimental setup. The resulting margins, along with notable local inconsistencies, are presented and discussed in Section 5. In the final section, we investigate the nature of these local inconsistencies, describing plausible underlying mechanisms.

## 2 Related Work

Research on margins extends back much earlier than the advent of powerful ANNs. For example, the effect of margins on generalization performance in linear models such as Support Vector Machines (SVMs) is well studied [2]. An inherent issue with extending this work to modern ANNs is that their decision boundaries are highly non-linear, high dimensional, and contain high curvature. Finding the closest boundary point to a given sample is often considered intractable; as such, some works opt to estimate the margin instead [17, 6] or define bounds on these margins [18]. Elsayed et al. [19] derive a linear approximation of the shortest distance to the decision boundary. This is then formulated as a penalty term which is used during training to ensure that each sample has at least a specific (chosen hyperparameter) distance to the decision boundary, for both the input space and hidden layers. Networks trained using this loss function exhibit greater robustness to adversarial samples, and better generalization when trained on data with noisy labels, than networks trained using conventional loss functions. Similarly, Jiang et al. [6] further utilize the same approximation to predict the generalization of a large number of Convolutional Neural Networks (CNNs), by training a linear regression model on the margin distributions of the existing models. However, it is shown in [20] that this approximation likely considerably underestimates the true distance to the decision boundary. Several authors [21, 20, 22] use a simple linear interpolation between two samples of different classes to find a point on the decision boundary which separates them. This is however unlikely to result in a distance that is near the true minimum. In a similar approximate fashion, Somepalli et al. [23] take the average of the distance in five random directions around an input sample, where each directional distance is calculated separately using a simple bisection method. Karimi et al. [21] introduce 'DeepDIG', an approach based on an auto-encoder that finds samples near the decision boundary that are visually similar to the original training samples.
However, this method does not specifically attempt to find the nearest point on the decision boundary. Finally, Yousefzadeh and O'Leary [20] formulate finding the shortest distance to the decision boundary of a given sample as a constrained optimization problem. While this method is highly accurate, it is computationally very expensive, especially for high-dimensional data, e.g. natural images such as MNIST [24] and CIFAR10 [25]. Similar to Yousefzadeh and O'Leary [20], in this work we find actual points in feature space. We have two reasons for this: (a) simplified approximations might lead to misconceptions of the role margins play in a model's ability to generalize, and (b) finding actual points inherently considers the non-linear intricacies in the function mapping. In Section 3 we describe our selected method in detail.

## 3 Formulating the Classification Margin

In the previous section, we have given an overview of different methods of calculating the classification margin. We opt to use the most precise method, which formulates the margin calculation as a non-linear constrained minimization problem. Let \(f:X\rightarrow\mathbb{R}^{|N|}\) denote a classification model with a set \(N=\{0,1,...,n\}\) of output classes. For an input sample \(\mathbf{x}\) we search for a point \(\mathbf{\hat{x}}\) on the decision boundary between classes \(i\) and \(j\), where \(i\) is \(\arg\max(f(\mathbf{x}))\), with \(i,j\in N,j\neq i\), and \(\mathbf{\hat{x}}\) is the nearest point to \(\mathbf{x}\) on the decision boundary. Formally, for some distance function \(dist\), we find \(\mathbf{\hat{x}}\) by solving the following constrained minimization problem (CMP): \[\min_{\mathbf{\hat{x}}}dist(\mathbf{x},\mathbf{\hat{x}}) \tag{1}\] such that \[f(\mathbf{\hat{x}})[i]=f(\mathbf{\hat{x}})[j] \tag{2}\] for the \(i^{th}\) and \(j^{th}\) output node, respectively. Finding a point that meets the condition defined in Eq. 2 exactly is virtually impossible. In practice, a threshold is used, so that a point is considered valid (on the decision boundary) if \(|f(\mathbf{\hat{x}})[i]-f(\mathbf{\hat{x}})[j]|\leq 10^{-3}\). In order to find the nearest point on the decision boundary over all \(j\), we search over each class \(j\neq i\) separately for each sample and choose the one with the smallest distance. As is convention for margin measurements [20, 23], we use Euclidean distance1 as metric, meaning the margin is given by: \[dist(\mathbf{x},\mathbf{\hat{x}})=|\mathbf{x}-\mathbf{\hat{x}}|_{2} \tag{3}\]

Footnote 1: In practice, we optimize for the squared Euclidean distance in order to reduce the computational cost of gradient calculations, but report on the unsquared distance in all cases.

In order to solve each CMP, we make use of the augmented Lagrangian method [26], using the conservative convex separable approximation quadratic (CCSAQ) optimizer [27] for each unconstrained optimization step, implemented with the NLOpt (non-linear optimization) library [28] in Python. While it is possible to extend the nearest boundary optimization to the hidden layers, it is difficult to compare models with layers of varying dimensionality. Additionally, hidden layer margins can be manipulated and require some form of normalization [6]. As such, we limit our analysis to input-space margins.
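To make the procedure concrete, the following is a minimal sketch of how such a CMP can be set up with the NLOpt Python bindings. The PyTorch classifier `net`, the flattened input `x`, and the tolerances below are illustrative assumptions rather than our exact implementation; the margin of a sample is the minimum of the returned distance over all classes \(j\neq i\).

```python
import nlopt
import numpy as np
import torch

def margin_to_boundary(net, x, i, j, tol=1e-3):
    """Distance from sample x to the decision boundary between classes i and j of `net`."""
    x = np.asarray(x, dtype=np.float64)
    d = x.size

    def objective(x_hat, grad):
        # Squared Euclidean distance to the original sample (see footnote 1).
        if grad.size > 0:
            grad[:] = 2.0 * (x_hat - x)
        return float(np.sum((x_hat - x) ** 2))

    def boundary(x_hat, grad):
        # Equality constraint of Eq. (2): f(x_hat)[i] - f(x_hat)[j] = 0.
        xt = torch.tensor(x_hat, dtype=torch.float32, requires_grad=True)
        out = net(xt.unsqueeze(0)).squeeze(0)
        diff = out[i] - out[j]
        if grad.size > 0:
            diff.backward()
            grad[:] = xt.grad.detach().numpy().astype(np.float64)
        return float(diff.detach())

    outer = nlopt.opt(nlopt.AUGLAG, d)      # augmented Lagrangian wrapper
    inner = nlopt.opt(nlopt.LD_CCSAQ, d)    # CCSAQ for the unconstrained sub-problems
    inner.set_xtol_rel(1e-6)
    outer.set_local_optimizer(inner)
    outer.set_min_objective(objective)
    outer.add_equality_constraint(boundary, tol)
    outer.set_xtol_rel(1e-6)
    outer.set_maxeval(5000)

    x_hat = outer.optimize(x.copy())
    return np.linalg.norm(x_hat - x)        # report the unsquared distance
```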
## 4 Experimental Setup

In order to investigate how sample corruption affects margin measurements, we train several networks of increasing capacity to the point of interpolation (close to zero train error) on the widely used classification datasets MNIST [24] and CIFAR10 [25]. 'Toy problems' such as these are used extensively to probe generalization [15, 23, 13]. We corrupt the training data of some models using two types of noise, defined in Section 4.1, separately. Capacity and generalization are strongly linked. In the overparameterized regime we expect generalization to improve systematically with an increase in capacity and a corresponding increase in average margins. This setup allows us to determine whether the expected behaviour is consistent across all samples.

### Controlled Noise

We use two specific types of noise, inspired by Zhang et al. [13]: label corruption and Gaussian input corruption. These have been designed to represent two complications that are often found in real-world data and could affect the generalization of a model fitted to them. Label corruption represents noise that comes from mislabeled training data (mislabeling is common in large real-world datasets and even in benchmark datasets [29]), inter-class overlap, and general low separability of the underlying class manifolds. Gaussian input corruption, on the other hand, represents extreme examples of out-of-distribution samples. These are 'off-manifold' samples displaying a high level of randomness. Such samples do not necessarily obscure the true underlying data distribution, but still require a significant amount of capacity to fit the excessive complexity that needs to be approximated when fitting samples with few common patterns. Given a training sample \((\mathbf{x},c)\) where \(\mathbf{x}\in\mathbb{R}^{d}\) and \(c\in N\) for a set of classes \(N\), the corruption of a sample can be defined as follows:

* _Label corruption_: \((\mathbf{x},c)\rightarrow(\mathbf{x},\hat{c})\) where \(\hat{c}\neq c,\hat{c}\in N\).
* _Gaussian input corruption_: \((\mathbf{x},c)\rightarrow(\mathbf{g},c)\) where \(\mathbf{g}\in\mathbb{R}^{d}\) and each value in \(\mathbf{g}\) is sampled from \(\mathcal{N}(\mu_{\mathbf{x}},\sigma_{\mathbf{x}})\).

Alternative labels are selected at random, and \(\mathcal{N}(\mu_{\mathbf{x}},\sigma_{\mathbf{x}})\) is a normal distribution, with \(\mu_{\mathbf{x}}\) and \(\sigma_{\mathbf{x}}\) the mean and standard deviation of all the features in the original sample \(\mathbf{x}\). Henceforth, we will drop the 'Gaussian' when referring to 'Gaussian input corruption'.
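As a concrete illustration, both corruption types can be expressed as small transformations of a flattened sample. The numpy sketch below is an assumed implementation, not the exact code used to build our corrupted training sets.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_corrupt(x, c, num_classes):
    """(x, c) -> (x, c_hat) with c_hat != c chosen uniformly at random."""
    c_hat = rng.integers(num_classes - 1)
    if c_hat >= c:
        c_hat += 1            # skip the original class so that c_hat != c
    return x, int(c_hat)

def input_corrupt(x, c):
    """(x, c) -> (g, c) with g drawn from N(mean(x), std(x)) per feature."""
    g = rng.normal(loc=x.mean(), scale=x.std(), size=x.shape)
    return g, c
```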
### MNIST Models

For the MNIST dataset, we train three distinct sets of Multilayer Perceptrons (MLPs), with each set containing models of identical depth and identically varied width:

* **MNIST**: A set of clean MNIST models. These serve as baselines, showing the level of generalization and margin sizes to be expected should the models not have been trained on any corrupted data.
* **MNISTlc** (MNIST-label-corrupted): Models with the same capacities as the previous set, but where \(20\%\) of the training set is label corrupted.
* **MNISTgic** (MNIST-Gaussian-input-corrupted): Models with the same capacities as the clean models, however, \(20\%\) of the training set is input corrupted.

All models for these tasks have the following hyperparameters in common. They use a \(55\:000/5\:000\) train-validation split of the training data. They are all single hidden layer ReLU-activated MLPs with widths ranging from \(100\) to \(10\:000\) hidden layer nodes, and a single bias node. Stochastic gradient descent (including momentum terms) is used to minimize the cross-entropy loss on mini-batches of size \(64\) selected at random. The initial learning rate is set to \(0.01\) and then multiplied by \(0.99\) every \(5\) epochs. For each set, we train three random initializations. Note that we train the **MNISTlc** models for \(1\:000\) epochs and models from the other two sets for \(100\) epochs. This is because the label-corrupted dataset required more epochs to interpolate. The resulting generalization ability of all three sets is depicted in Fig. 1 (left). Note that, as expected for all three tasks, with more capacity we see an improvement in validation set performance. Also note that only the label corruption results in any significant reduction in validation performance, as also previously reported in [16].

### CIFAR10 Models

We also replicate the MNIST experiments using CNNs trained on the CIFAR10 dataset, with \(10\%\) corruption (where applicable). We use a similar architecture to the 'standard' CNN used by Nakkiran et al. [15]. Each CNN consists of four ReLU-activated convolutional layers, with \([k,2k,4k,8k]\) output channels respectively, where we choose various values of \(k\) between \(10\) and \(64\) to create a group of models with varying capacity. Each model also includes max and average pooling layers, and a final fully connected layer of \(400\) nodes. The three sets of CIFAR10 models, **CIFAR10**, **CIFAR10lc**, and **CIFAR10gic**, refer to clean, label-corrupted, and input-corrupted models, respectively. All models are trained on a \(45\ 000/5\ 000\) train-validation split of the CIFAR10 dataset. These models are trained for \(500\) epochs in order to minimize a cross-entropy loss function on mini-batches of \(256\) samples using the Adam optimizer. The initial learning rate of \(0.001\) is multiplied by \(0.99\) every \(10\) epochs. Three initialization seeds are used. From the relevant validation errors in Fig. 1 (right), we again observe that more capacity is accompanied by better generalization performance.

Figure 1: Validation error for MNIST models (left) and CIFAR10 models (right). Values are averaged over three random seeds and shaded areas indicate standard deviation. Note that all models interpolated except for the smallest two capacities for **MNISTlc** and **CIFAR10lc**. The maximum train error (across all models) was \(0.0573\).

### Terminology

When describing margin behaviour, we refer to different subselections of margins based on the type of sample (clean or corrupted) as well as the type of model. To prevent confusion, we always refer to these subselections using a name constructed from the type of sample and then the type of model, separated by a colon, as shown in Table 1. For example, if we are referring to the margins of the uncorrupted samples with regard to a label-corrupted model, we refer to them as the _clean:label-corrupted_ margins.

## 5 Results

We calculate the margins for \(10\ 000\) randomly selected training samples, for all of the models defined in Section 4, using the method described in Section 3. Only correctly classified samples are considered. This amounts to solving \(11\) capacities \(\times\)\(3\) random seeds \(\times\)\(3\) datasets \(\times\)\(9\) class pairs \(\times\)\(10k\) samples \(=8\,910k\) individual CMPs for both the MNIST and CIFAR10 models.
In order to solve such a large number of CMPs, we utilize \(240\) CPU cores split over \(10\) servers, making use of GNU-Parallel [30]. Next, we investigate the central tendencies of the clean and corrupt samples separately, as capacity increases (Section 5.1), and discuss possible implications of our observations (Section 5.2).

### Local Inconsistencies

Fig. 2 shows the mean margins as a function of model capacity. We note that expected margins tend to increase along with capacity, in all cases for all types of samples. This is consistent with what we expect when using margins as an indicator of generalization. However, noise-corrupted and clean data demonstrate different tendencies: margins for both _corrupt:label-corrupted_ and _corrupt:input-corrupted_ samples tend to be significantly smaller than for the clean samples in the same models at the same capacity. Furthermore, we observe that the _clean:clean_ and _clean:input-corrupted_ margins are similar, especially at higher capacities, while _clean:label-corrupted_ margins are seen to be significantly smaller. It is interesting that the average margins measured on CIFAR10 are smaller than the MNIST ones. The CIFAR10 features contain \(3\ 072\) dimensions and MNIST only \(784\), and we expect Euclidean distances in higher dimensions to be larger. We suspect that the reason for this contradiction is a large degree of inherent inter-class overlap in CIFAR10 that results in the observed small margins. Another discrepancy between the MNIST and CIFAR10 results is that the _corrupt:input-corrupted_ margins for CIFAR10 models are comparable to the _corrupt:label-corrupted_ margins. For MNIST the _corrupt:input-corrupted_ margins are significantly larger than the _corrupt:label-corrupted_ margins. This observation is important to note when we take a deeper look at the underlying mechanisms (in Section 6).

| | **clean samples** | **corrupt samples** | **overall samples** |
|---|---|---|---|
| **clean model** | _clean:clean_ | N/a | _overall:clean_ |
| **label-corrupted model** | _clean:label-corrupted_ | _corrupt:label-corrupted_ | _overall:label-corrupted_ |
| **input-corrupted model** | _clean:input-corrupted_ | _corrupt:input-corrupted_ | _overall:input-corrupted_ |

Table 1: Sample corruption terminology.

Next, we look at the distributions of margins underlying the means in Fig. 2. These results are shown in Fig. 3. We construct histograms of the margins measured at each capacity. These histograms share a common set of bins on the horizontal axes. We see that most of these distributions are right-skewed, with a long tail containing relatively large margins. This indicates that the mean might be a slightly overestimated measure of the central tendency of margins. The margins for the **CIFAR10lc** (right-middle) set show similar trends to those of the **MNISTlc** set (left-middle), but the distributions are even more right-skewed, indicating that their central tendencies are even smaller than the means in Fig. 2 (right) suggest. We also see that the _corrupt:input-corrupted_ margin distributions of the **MNISTgic** set (left-bottom) are normally distributed with a relatively low variance compared to the other distributions. This suggests that all models are constructing similar decision boundaries around _corrupt:input-corrupted_ samples.
There is not much diversity in how far samples tend to be from their nearest boundary. The _corrupt:label-corrupted_ margins for the **MNISTlc** set (left-middle), on the other hand, show much higher diversity. The shape of the distribution changes drastically as capacity increases. At the critically small capacities we see a distribution resembling the _corrupt:input-corrupted_ margin distributions, and at higher capacities some _corrupt:label-corrupted_ samples obtain almost outlying small margins.

### Discussion

We notice a few key inconsistencies regarding the use of a global average margin as an indicator of the extent to which a model separates samples by means of a decision boundary:

1. Samples that are on-manifold, but problematic in terms of their class separability (of which label-corrupted samples are extreme examples), have much lower margins than the global mean would suggest.
2. Samples that are off-manifold and remote (of which the input-corrupted samples are extreme examples) also have much smaller margins, even though these samples do not affect the generalization ability of the model significantly.
3. In general, the margin distributions are not normally distributed. A mean might be skewed by the fact that the margin distributions have long tails containing extremely large margin values.

Given these inconsistencies, we can ask whether the global average margin metrics, which are often used to predict or promote generalization, are sound. It seems that the corrupted margins tend to increase in proportion with the clean margins (see Fig. 2). Assuming the models to be compared fitted exactly the same training samples, this implies that an average margin will only work if the two models for which margins are being compared contain an approximately equal number of samples with these distinct locally inconsistent margin behaviours. If a small set of samples is averaged over, this could become a problem. If the models to be compared have varying training set performance, or were not trained on exactly the same training set, this could become an even more significant problem. It can also be argued that margin-based generalization predictors are more sensitive to off-manifold noise than to on-manifold noise. That is because the small _corrupt:label-corrupted_ margins rightly indicate the poor generalization of label-corrupted models. However, the small _corrupt:input-corrupted_ margins erroneously also indicate poor generalization.

Figure 2: Mean margins for MNIST models (left) and CIFAR10 models (right).

Figure 3: Margin distributions for MNIST models (left) and CIFAR10 models (right) trained on clean (top), label-corrupted (middle) and input-corrupted (bottom) train sets. Within each plot, from top to bottom, distributions are ordered by ascending model size. The relevant capacity metric is shown on the right. Green and red distributions are constructed from clean and corrupted samples, respectively.

## 6 A Deeper Look

Now that we have discussed the observed inconsistencies and their possible implications, we explore the origin of these inconsistencies further. We do this by posing three concrete questions based on the results in the previous section. After each question, we propose possible answers, producing additional measurements where these can shed light on the underlying phenomena.
**(1) Why are the _overall:label-corrupted_ margins so small?**

We note that label corruption is expected to result in many samples that have different targets while being close to each other in the input space. One can think of a sample's minimum distance to another sample with a different target as its absolute maximum possible margin, since a boundary needs to be drawn between them, assuming both have been correctly classified during training. We propose that this is the main factor contributing to the small _overall:label-corrupted_ margins. To test this hypothesis, we randomly select \(10\ 000\) training samples from the data that the models in the **MNISTlc** and **CIFAR10lc** sets are trained on. We then measure the 'maximum margin' as the minimum Euclidean distance between each sample \((\mathbf{x}_{1},c_{1})\) and its nearest neighbour \((\mathbf{x}_{2},c_{2})\) (selected from the entire train set) so that: \[\min_{\mathbf{x}_{2}}|\mathbf{x}_{1}-\mathbf{x}_{2}|_{2},\ \ c_{1}\neq c_{2} \tag{4}\] We do this before and after the data is label corrupted. We then construct a scatter plot of these \(10\ 000\) training samples with the distance as measured with the original targets on the horizontal axis and the potentially corrupted targets on the vertical axis. The resulting scatter plots are shown in Fig. 4. All samples below the identity line (\(y=x\)) had their maximum margin reduced due to label corruption. Note that, as expected, the presence of label corruption causes many samples, corrupted and clean, to have drastically reduced upper bounds on their margins. If our hypothesis is true, then the smallest margins should correspond to the _clean:label-corrupted_ and _corrupt:label-corrupted_ samples that are the closest to each other in the input space. We confirm this by constructing a Euclidean distance matrix comparing the \(1\ 000\)_clean:label-corrupted_ samples with the smallest margins and the \(1\ 000\)_corrupt:label-corrupted_ samples with the smallest margins for one of the biggest models from the **MNISTlc** set. This is presented in Fig. 5. Note that it is the samples that are relatively close to each other that tend to have the smallest margins. From these results we conclude that a significant factor leading to the small margins observed in label-corrupted models is the proximity (in the input space) of samples with different targets. This also accounts for the observation that _corrupt:label-corrupted_ margins tend to be smaller than _clean:label-corrupted_ margins. There are more _clean:label-corrupted_ samples than _corrupt:label-corrupted_ samples. Therefore, fewer clean samples are moved closer to a different-target sample. The result is that fewer _clean:label-corrupted_ margins are reduced. We performed the same analysis on one of the largest **CIFAR10lc** models in Fig. 6. Note that the phenomenon is less clear here, but this is to be expected when considering the relatively few samples that have had their maximum margins reduced significantly in the CIFAR10 case, see Fig. 4 (right).

Figure 4: Maximum margins before vs. after label corruption for **MNISTlc** (left) and **CIFAR10lc** (right). Green points represent clean samples and red points represent corrupt samples. The dashed line indicates \(y=x\).
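The 'maximum margin' of Eq. (4) is simply the distance to the nearest different-target training sample. A minimal sketch follows, assuming numpy arrays `X` of flattened inputs and `y` of targets; it is not our exact implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def max_margins(X, y):
    """Euclidean distance from each sample to its nearest different-target sample, Eq. (4)."""
    out = np.empty(len(y))
    for c in np.unique(y):
        same = (y == c)
        nn = NearestNeighbors(n_neighbors=1).fit(X[~same])  # all samples with a different target
        dist, _ = nn.kneighbors(X[same])
        out[same] = dist[:, 0]
    return out

# Running this once with the original targets and once with the corrupted targets
# yields the two axes of the scatter plots in Fig. 4.
```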
**(2) Why are the _corrupt:input-corrupted_ margins smaller than the _clean:input-corrupted_ margins?**

To determine whether a similar phenomenon is reducing the _corrupt:input-corrupted_ margins, we generate a scatter plot similar to Fig. 4, but for the **MNISTgic** and **CIFAR10gic** sets. This is seen in Fig. 7. In contrast to the _overall:label-corrupted_ samples, we see that virtually all _overall:input-corrupted_ samples have either increased or unchanged maximum margins. Strikingly, the _corrupt:input-corrupted_ samples, for the **MNISTgic** set, have extremely high maximum margins. This is an apparent contradiction. If the proximity of different-target samples reduces the average margin, and _corrupt:input-corrupted_ samples are extremely distant, at least in the **MNISTgic** case, from any other sample (minimum of \(10\) in Fig. 7), why are _corrupt:input-corrupted_ margins small? And why are _clean:input-corrupted_ margins slightly smaller than _clean:clean_ margins, when they have slightly larger maximum margins? Regrettably, we do not have as strong an argument for the inconsistency regarding input-corrupted samples as we do for label corruption. We speculate that the relatively small _corrupt:input-corrupted_ margins are a result of the remoteness of these samples. They are so far off manifold and far from each other that there is little incentive to increase their respective margins beyond a certain model-specific maximum. The previously mentioned lack of variance in the _corrupt:input-corrupted_ margins in Fig. 3 supports this notion. With regard to the **CIFAR10gic** maximum margins, Fig. 7 (right), we observe that the _corrupt:input-corrupted_ samples do not undergo such a drastic increase in maximum margin after input corruption. Consequently, we deduce that they are not as remote (and off manifold) as their MNIST counterparts. This deduction is consistent with two observations we made earlier:

* (See Fig. 1): **CIFAR10gic** models achieve a noticeably worse generalization ability than **CIFAR10** models. This is in contrast to the effect of input corruption in the MNIST models.
* (See Fig. 2): For the CIFAR10 models, _corrupt:input-corrupted_ margins are similar to _corrupt:label-corrupted_ margins, again in contrast to corresponding MNIST models.

Figure 5: Euclidean distance between \(1\ 000\)_clean:label-corrupted_ and \(1\ 000\)_corrupt:label-corrupted_ samples with the smallest margins for a \(1\times 10\ 000\) **MNISTlc** model. Entries in the dissimilarity matrix are ordered by corruption status first, then by margin size. See the corresponding margins in the axis plots for the exact margins. Red curves refer to corrupted margins and green curves refer to clean margins.

**(3) Why are _clean:input-corrupted_ margins smaller than _clean:clean_ margins?**

Without the _corrupt:input-corrupted_ samples in close proximity to the _clean:input-corrupted_ samples, we might expect them to have similar margins to _clean:clean_, across all capacities. However, we see that they tend to be slightly smaller. We hypothesize that this is a result of the capacity it requires to fit _corrupt:input-corrupted_ samples. It is known that these samples are fitted later and require more capacity than clean samples [31, 16], and that margins tend to increase with capacity. It is then reasonable to conclude that the smaller _clean:input-corrupted_ margins are a result of the lack of available capacity. This idea is supported by Fig. 2 (left), where we observe that the difference between average _clean:clean_ margins and average _clean:input-corrupted_ margins decreases with added capacity, and disappears completely when models become large enough.

Figure 6: Euclidean distance between \(1\ 000\)_clean:label-corrupted_ and \(1\ 000\)_corrupt:label-corrupted_ samples with the smallest margins for a \(k=64\) **CIFAR10lc** model. Entries in the dissimilarity matrix are ordered by corruption status first, then by margin size. See the corresponding margins in the axis plots for the exact margins. Red curves refer to corrupted margins and green curves refer to clean margins.

**What is the takeaway?**

To summarize, there seem to be two main mechanisms contributing to the observed local inconsistencies in margin behaviour.
**What is the takeaway?** To summarize, there seems to be two main mechanisms contributing to the observed local inconsistencies in margin behaviour. Figure 6: Euclidean distance between \(1\ 000\)_clean:label-corrupted_ and \(1\ 000\)_corrupt:label-corrupted_ samples with the smallest margins for a \(k=64\) model **CIFAR10lc**. Entries in the dissimilarity matrix are ordered by corruption status first, then by margin size. See the corresponding margins in the axis plots for the exact margins. Red curves refer to corrupted margins and green curves refer to clean margins. 1. Samples that are very close to different-target samples will inevitably have reduced margins. This kind of reduced margin is indicative of poor generalization because it is very likely to pertain to in-distribution and on-manifold regions of feature space that are difficult to model. 2. Samples that are extremely remote, being very distant from any other sample, will also obtain reduced margins, due to a lack of incentive to increase them. This kind of reduced margin is not indicative of poor generalization because it is likely to pertain to out-of-distribution and off-manifold regions of feature space. ## 7 Conclusion In this work we show that some training samples are consistently modeled with small margins while affecting generalization in different ways. Specifically, we find that samples representing on-manifold corruption (e.g. label corruption) and off-manifold corruption (e.g. Gaussian input corruption) both have margins that are smaller than those of uncorrupted training samples, though only the presence of the former significantly affects generalization. This is a novel observation of a phenomenon that will require consideration if margins are used in methods to predict generalization of ANNs. We use label and Gaussian input corruption as tools but hypothesize that similar behaviour is possible in natural datasets that contain significant inter-class overlap in the input space, or a large portion of off-manifold samples, respectively. We conclude that a global average margin will be more useful in predicting generalization if it considers these local inconsistencies, or contains an equal proportion of these kinds of samples for all models being compared. In addition to providing a precise comparison of the way in which different types of margins change with increased capacity, we explore some of the possible reasons for the behaviour observed. Specifically, we find that the reduced margins accompanying label corruption (i.e. on-manifold corruption) are a result of the maximum margin (closest sample of a different class) being reduced by label corruption, for a large portion of the train set. For input corruption (i.e. off-manifold corruption), we speculate that the reduced margins come from a lack of incentive to increase these margins, since the input-corrupted samples are remote from other training samples. These are hypotheses that we will be investigating further in future work. We also plan to extend this work to hidden layers, as hidden layer margins have been shown to similarly relate to generalization behaviour [7, 6]. Additionally, a more concrete understanding of the influence (or lack thereof) off-manifold samples have on the margins of on-manifold samples will be useful. ## Acknowledgements We thank and acknowledge the Centre for High Performance Computing (CHPC), South Africa, for providing computational resources to this research project. Figure 7: Maximum margins before vs. 
after corruption for **MNISTgic** (left) and **CIFAR10gic** (right). Green points represent clean samples and red points represent corrupt samples. The dashed line indicates \(y=x\).